Client-Side Caching of Static Objects
Hi,
We just installed SPS12 for NW. I learned of this new client-side caching of static objects while reading the release notes, and I went into our portal to check it out. I navigated to the location that the <a href="http://help.sap.com/saphelp_nw04s/helpdata/en/45/7c6336b6e5694ee10000000a155369/content.htm">release notes</a> (and an <a href="https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/6622">article in the blog</a>) specified: System Administration > System Configuration > Knowledge Management > Content Management > Global Services.
The problem is that I do not see Client Cache Service and Client Cache Patterns. I enabled "Show Advanced Options," but I still do not see those two configuration items.
Does anyone know why I am not seeing those two configuration items and how I can get to them?
Thank you.
Hi,
We are using SPS 12.
KMC-CM 7.00 SP12 (1000.7.00.12.0.20070509052145)
KMC-BC 7.00 SP12 (1000.7.00.12.0.20070509052008)
CAF-KM 7.00 SP12 (1000.7.00.12.0.20070510091043)
KMC-COLL 7.00 SP12 (1000.7.00.12.0.20070509052223)
KM-KW_JIKS 7.00 SP12 (1000.7.00.12.0.20070507080500)
Similar Messages
-
Hello Everyone,
We would like to implement a script to delete the offline files in the Client Side Cache (CSC) for a nominated user (on Windows 7 x64 enterprise).
I am aware that;
1. We can use a registry value to flush the entire CSC cache (for all users) the next time the machine reboots.
2. If we delete the user's local profile it appears that Windows 7 also removes their content from the local CSC.
However, we would like to just delete the CSC content for a particular nominated user without having to delete their local user profile.
In our environment we have many users who share workstations but only use them occasionally. We don't use roaming profiles, so we would like to retain all the users' local profiles but still delete the CSC content for any users who haven't logged on in a week.
Any ideas or info would be appreciated !
Thanks, Makes
Hi,
I don't think this is possible.
If you want to achieve it via script, I suggest you post in the official scripting forum for more professional help:
http://social.technet.microsoft.com/Forums/scriptcenter/en-US/home?category=scripting
The reason why we recommend posting appropriately is you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
Karen Hu
TechNet Community Support -
Hi all,
I wonder how client side caching (in memory) would work with BlazeDS.
There was a project called FAST that would support client side caching for Remoting:
http://www.adobe.com/devnet/flex/articles/fast_userguide.html, but this doesn't seem to work for Flex 3.
Do I have to invent my own caching mechanism?
Without client-side caching support, I don't see how an application built on BlazeDS remoting would ever scale to many users.
Regards,
Markus
Hi Markus. I took a look at that FAST project. There isn't anything like that in BlazeDS currently. It probably wouldn't be too hard to port that code to make it work with Flex 3, but its usefulness looks somewhat limited. There doesn't appear to be any way to time out a request that has been cached, for example, which I think most people would want. If you think this is something that would be useful, you can log an enhancement request in Jira.
http://bugs.adobe.com/jira/browse/BLZ
As to your point about not seeing how an application built on BlazeDS remoting would scale, I think it's a fairly common practice to not expose functions to the end user that directly call backend services. It's pretty straightforward, for example, to request some data when the application loads and then refresh the data at some interval. You don't need a caching API to do this.
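The pattern described here — fetch once at startup and refresh on a fixed interval instead of caching per remote call — can be sketched generically. This Java sketch is illustrative only (the loader function stands in for the remote call; nothing here comes from BlazeDS itself):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

/** Holds one remoted result and reloads it only when a fixed interval has elapsed. */
public class RefreshingCache<T> {
    private final AtomicReference<T> value = new AtomicReference<>();
    private final Supplier<T> loader;       // stands in for the backend/remote call
    private final long intervalMillis;
    private volatile long lastLoaded = 0L;

    public RefreshingCache(Supplier<T> loader, long intervalMillis) {
        this.loader = loader;
        this.intervalMillis = intervalMillis;
    }

    /** Returns the cached value, hitting the loader only when the data is stale. */
    public T get() {
        long now = System.currentTimeMillis();
        if (value.get() == null || now - lastLoaded >= intervalMillis) {
            value.set(loader.get());        // single backend round-trip per interval
            lastLoaded = now;
        }
        return value.get();
    }
}
```

Repeated calls within the interval never touch the backend, which is the scaling behavior being described.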
-Alex -
Hi,
We have a Java Web Start application which caches around 500MB of heavyweight objects in memory. These correspond to different reports, and the cache grows over time. I am looking for some kind of architectural framework or workaround to maintain this cache on the client side. It is a bit late to change the architecture, and it is important to cache all the objects.
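One common sketch for a heavyweight client-side cache like this (illustrative, not from this thread) is an LRU map whose values are SoftReferences, so the JVM can reclaim entries under memory pressure instead of running out of heap:

```java
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map;

/** LRU cache whose values may additionally be reclaimed by the GC under memory pressure. */
public class ReportCache<K, V> {
    private final Map<K, SoftReference<V>> map;

    public ReportCache(final int maxEntries) {
        // Access-ordered LinkedHashMap evicts the least-recently-used entry past maxEntries.
        this.map = new LinkedHashMap<K, SoftReference<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, SoftReference<V>> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    /** Returns the cached value, or null if it was evicted or GC-reclaimed. */
    public synchronized V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get();
    }

    public synchronized int size() {
        return map.size();
    }
}
```

Because of the SoftReference layer, a get() can miss even for a recently put key, so callers must be prepared to rebuild a report on a miss.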
Could somebody suggest your views on this?
Thank You,
Vikram
Cross post:
http://forum.java.sun.com/thread.jspa?threadID=649133
http://forum.java.sun.com/thread.jspa?threadID=649134
http://forum.java.sun.com/thread.jspa?threadID=649135
http://forum.java.sun.com/thread.jspa?threadID=649136
http://forum.java.sun.com/thread.jspa?threadID=649137
http://forum.java.sun.com/thread.jspa?threadID=649138 -
Client-side caching of Javascript (for Google Web Toolkit *.cache.* files)
Hi all,
I'm trying out Google Web Toolkit (GWT) for AJAX code development, leveraging RPC calls to back-end Web Services for a document browser prototype. When GWT generates the JavaScript code, it automatically distinguishes between cacheable and non-cacheable content via file extensions (.cache. and .nocache.).
When running in a Tomcat environment with appropriate caching settings, the application runs extremely fast even on really poor-latency sites (>500ms round trip); however, on a NetWeaver stack I can't find any information on how to set an attribute on .cache. files to set an expiry of 1 year.
Anyone know how to do this on a NetWeaver J2EE stack?
Thanks,
Matt
PS. For reference, GWT is a very cool open source project worth watching in the Enterprise space for targeted high-usability, high-performance apps. Just the image bundle concept alone is an awesome approach to minimizing the impact of small images on performance.
Hi again,
I thought I should post the solution I came up with in case people search on GWT in the future. In terms of caching, the Portal does a good job of making nearly everything that GWT produces cached at the client; and for the life of me, I couldn't get the nocache files to not be cached at the client side.
So thanks to my friendly local SAP experts in the portal space, I discovered there's not much you can do to get this working except to try standard browser tags like the following to prevent caching on the browser:
Note - Can't seem to save this post in SDN with Meta tags that I'm trying to display so check out http://www.willmaster.com/library/tutorials/preventing_content_cache.php for more info...
Unfortunately, because of the way GWT creates the nocache files, it is not possible to modify the .js files to do this, so I needed an alternative approach.
Then I thought: the html file that includes the JS file is cached, but it's not very big, so how about I just write a script that modifies the html file when I deploy to production to point to the nocache file with a date suffix.
Let me explain in more detail:
1. The original html file that includes the nocache.js file gets modified to point to nocache.<date>.js.
2. The original nocache.js file gets changed to nocache.<date>.js
3. Now produce an external bat file to do this automatically for you before deployment which DevStudio can call as part of an Ant script or similar.
That's it. So when you are developing, you can manually delete your cache, but just before you go to production, you do this conversion, and you will never have an issue with clients having the wrong version of javascript files.
Note - GWT takes care of caching most of the files by using a GUID equivalent each time you compile, so it's just a couple of files that really need to not be cached.
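The renaming steps above can be sketched as a small utility; the file names and the date-suffix convention here are illustrative, not GWT's own:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.text.SimpleDateFormat;
import java.util.Date;

public class NocacheStamper {
    /** Renames <module>.nocache.js to <module>.nocache.<date>.js and patches the host html text. */
    public static String stamp(Path dir, String module, String html) throws IOException {
        String date = new SimpleDateFormat("yyyyMMddHHmmss").format(new Date());
        String oldName = module + ".nocache.js";
        String newName = module + ".nocache." + date + ".js";
        Path js = dir.resolve(oldName);
        if (Files.exists(js)) {
            Files.move(js, dir.resolve(newName));   // step 2: rename the js file
        }
        // step 1: point the host html at the stamped file name
        return html.replace(oldName, newName);
    }
}
```

Run as part of the deployment build (step 3), this guarantees clients never pick up a stale copy of the renamed file from their browser cache.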
Anyway, I'm not explaining this solution very well, but drop us a line if you need to understand this approach further.
And lastly, my feedback about GWT - It Rocks! Everyone loves the result, it's easy to build and maintain, it offers a great user experience, and it's fast. Not a replacement for transactional apps; but for the occasional user - go for it.
Matt
Edited by: Matt Harding on May 22, 2008 7:48 AM -
Controlling the client-side cache
At the current stage of my project I'm finding that I'm changing my mapping definitions fairly frequently (because the countryside is just the wrong shade of egg-shell). Clearing the server-side cache is fairly easy to do, provided you remember to do it, but my problem is on the client side.
Right now, the tiles are returned with a one week time to live. So if I change the mapping definitions it might not propagate through to all the clients until up to a week later. What I'd like to do, at least for now, is turn off storing of map tiles on the client-side. The easiest way to do that would be to specify the cache-control header but I can't seem to find a configuration option for that.
Has anyone set up MapViewer to use a different cache-control?
Hi Mark,
You can specify the cache control statements for the page itself.
If this does not help, try setting an Expires header for the images; for Apache, for example, see mod_expires for a directory/location setting.
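The mod_expires suggestion would look roughly like this in the Apache configuration (the Location path is an example, not MapViewer's actual tile URL):

```
<IfModule mod_expires.c>
  <Location /mapviewer/tiles>
    ExpiresActive On
    # Effectively disable client-side tile caching while tuning map styles
    ExpiresDefault "access plus 0 seconds"
  </Location>
</IfModule>
```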
regards, michael -
Let's say I have a FLV that "lives" on a server, and I serve
it up through, say, Ruby. The Ruby script takes care of obtaining
the FLV from the filesystem and renders it to the browser.
Inside my client-side SWF, my code to connect to the Ruby
application and get the FLV may look like this:
var nc:NetConnection = new NetConnection();
nc.connect(null);
var ns:NetStream = new NetStream(nc);
tvid.attachVideo(ns);
ns.setBufferTime(2);
statusID = setInterval(videoStatus, 200);
ns.onStatus = function(info) {
    trace(info.code);
    if (info.code == "NetStream.Buffer.Full") {
        bufferClip._visible = false;
        ending = false;
        clearInterval(statusID);
        statusID = setInterval(videoStatus, 200);
    }
    if (info.code == "NetStream.Buffer.Empty") {
        if (!ending) {
            bufferClip._visible = true;
        }
    }
    if (info.code == "NetStream.Play.Stop") {
        bufferClip._visible = false;
        //ending = true;
    }
    if (info.code == "NetStream.Play.Start") {
        ending = false;
    }
    if (info.code == "NetStream.Buffer.Flush") {
        ending = true;
    }
};
//Play it
ns.play("http://localhost:3000/stream");
==============
It seems to me that the "decision" of whether or not to cache
is entirely dependent on the access method within the client-side
SWF. So, if in this case, I'm using NetStream to stream the video,
will it still be cached on the client end? Or do I -have- to use
FMS to prevent client caching - and if so, why? How does FMS
prevent the client from caching the data (isn't it up to the client
to delete the data bits after they're viewed?)
Thanks a bunch for the help. -
Hi,
Is there any approved way to change the location of the offline files cache in Windows 8.1? I'm asking before moving client systems from Windows 7 where a registry change was required - is this still the way to go?
I believe that if the default user-profile directory is moved, this causes problems for updates to Windows 8/8.1 now, is the same true if you move CSC?
Just a bit of background. The systems I want to do this for have smallish SSD boot drives with a higher capacity spinning disk. The machines are members of a domain with Folder Redirection setup. I could setup group policies to disable
caching of some of the larger directories or even disable CSC entirely on the machines which have an SSD boot drive, but would prefer not to do so.
Thanks in advance.
Hi,
According to your description, in Windows 7 the tool is migwiz.exe, namely Windows Easy Transfer; you can modify the registry in the way listed in the post below:
http://social.technet.microsoft.com/Forums/en-US/bbf5890c-b3d7-4b38-83d8-d9a5e025fb2b/how-to-move-clientside-caching-to-a-new-location?forum=w7itproinstall
In Windows 8, Windows Easy Transfer is also a built-in tool to migrate user profiles and user settings, but for the offline transfer I suggest using USMT for your situation.
With USMT you can specify the folder that you want to migrate the files to; by using an offline.xml you can set the path as in the sample listed below:
<offline>
<winDir>
<path>C:\Windows</path>
<path>D:\Windows</path>
<path>E:\</path>
</winDir>
<failOnMultipleWinDir>1</failOnMultipleWinDir>
</offline>
You can also refer to the details at:
http://technet.microsoft.com/en-us/library/hh824880.aspx
Regards
Wade Liu
TechNet Community Support -
Client Side Cache Manipulation
This thread is to clarify a previous thread.
In JNLP, is there a directory that gets created when a program is installed, is never cleared by JNLP while the program remains installed, and, upon uninstallation of that program, is completely cleared of all files (not just jars) or removed entirely?
Thanks =)
Your comments make me suspect you have not discovered the
JNLP cache yet.. It is mentioned in the Java Console settings
and I have discussed how to find it, on messages to this
forum over the last 24-48 hours (cannot quite recall to whom).
Do you know where to find the cache?
(I am guessing that being able to explore the cache
would answer many of your questions) -
Hello,
I'm trying to get the client-side result set cache working, but I have no luck :-(
I'm using Oracle Enterprise Edition 11.2.0.1.0 and client driver 11.2.0.2.0.
Executing the query select /*+ result_cache*/ * from p_item via SQL*Plus or Toad generates a nice execution plan with a RESULT CACHE node, and v$result_cache_objects contains some rows.
So I've checked that the server-side cache works; now I want the client-side cache to work as well.
My simple Java Application looks like
private static final String ID = UUID.randomUUID().toString();
private static final String JDBC_URL = "jdbc:oracle:oci:@server:1521:ORCL";
private static final String USER = "user";
private static final String PASSWORD = "password";

public static void main(String[] args) throws SQLException {
    OracleDataSource ds = new OracleDataSource();
    ds.setImplicitCachingEnabled(true);
    ds.setURL( JDBC_URL );
    ds.setUser( USER );
    ds.setPassword( PASSWORD );
    String sql = "select /*+ result_cache */ /* " + ID + " */ * from p_item d " +
                 "where d.i_size = :1";
    for( int i=0; i<100; i++ ) {
        OracleConnection connection = (OracleConnection) ds.getConnection();
        connection.setImplicitCachingEnabled(true);
        connection.setStatementCacheSize(10);
        OraclePreparedStatement stmt = (OraclePreparedStatement) connection.prepareStatement( sql );
        stmt.setLong( 1, 176 );
        ResultSet rs = stmt.executeQuery();
        int count = 0;
        for(; rs.next(); count++ );
        rs.close();
        stmt.close();
        System.out.println( "Execution: " + getExecutions(connection) + " Fetched: " + count );
        connection.close();
    }
}

private static int getExecutions( Connection connection ) throws SQLException {
    String sql = "select executions from v$sqlarea where sql_text like ?";
    PreparedStatement stmt = connection.prepareStatement(sql);
    stmt.setString(1, "%" + ID + "%" );
    ResultSet rs = stmt.executeQuery();
    if( rs.next() == false )
        return 0;
    int result = rs.getInt(1);
    if( rs.next() )
        throw new IllegalArgumentException("not unique");
    rs.close();
    stmt.close();
    return result;
}
The same query is executed 100 times, and the statement execution count is incremented every time. I expected just 1 statement execution (one client-database round trip) and 99 hits in the client result set cache. The view CLIENT_RESULT_CACHE_STATS$ is empty :-(
I'm using the Oracle documentation at http://download.oracle.com/docs/cd/E14072_01/java.112/e10589/instclnt.htm#BABEDHFF and I don't know why it doesn't work :-(
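One setting worth verifying (a common cause of an empty CLIENT_RESULT_CACHE_STATS$, though not confirmed in this thread): the client result cache is only active when the server initialization parameter CLIENT_RESULT_CACHE_SIZE is nonzero. A sketch of the relevant settings:

```
-- server side (static parameters: require an instance restart to take effect)
ALTER SYSTEM SET CLIENT_RESULT_CACHE_SIZE = 10M  SCOPE = SPFILE;
ALTER SYSTEM SET CLIENT_RESULT_CACHE_LAG  = 3000 SCOPE = SPFILE;
```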
I'm thankful for every tip,
André Kullmann
I wanted to post a follow-up to (hopefully) clear up a point of potential confusion. That is, with the OCI Client Result Cache, the results are indeed cached on the client in memory managed by OCI.
As I mentioned in my previous reply, I am not a JDBC (or Java) expert so there is likely a great deal of improvement that can be made to my little test program. However, it is not intended to be exemplary, didactic code - rather, it's hopefully just enough to illustrate that the caching happens on the client (when things are configured correctly, etc).
My environment for this exercise is Windows 7 64-bit, Java SE 1.6.0_27 32-bit, Oracle Instant Client 11.2.0.2 32-bit, and Oracle Database 11.2.0.2 64-bit.
Apologies if this is a messy post, but I wanted to make it as close to copy/paste/verify as possible.
Here's the test code I used:
import java.sql.ResultSet;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import oracle.jdbc.pool.OracleDataSource;
import oracle.jdbc.OracleConnection;

class OCIResultCache {
    public static void main(String args[]) throws SQLException {
        OracleDataSource ods = null;
        OracleConnection conn = null;
        PreparedStatement stmt = null;
        ResultSet rset = null;
        String sql1 = "select /*+ no_result_cache */ first_name, last_name " +
                      "from hr.employees";
        String sql2 = "select /*+ result_cache */ first_name, last_name " +
                      "from hr.employees";
        int fetchSize = 128;
        long start, end;
        try {
            ods = new OracleDataSource();
            ods.setURL("jdbc:oracle:oci:@liverpool:1521:V112");
            ods.setUser("orademo");
            ods.setPassword("orademo");
            conn = (OracleConnection) ods.getConnection();
            conn.setImplicitCachingEnabled(true);
            conn.setStatementCacheSize(20);

            stmt = conn.prepareStatement(sql1);
            stmt.setFetchSize(fetchSize);
            start = System.currentTimeMillis();
            for (int i = 0; i < 10000; i++) {
                rset = stmt.executeQuery();
                while (rset.next());
                if (rset != null) rset.close();
            }
            end = System.currentTimeMillis();
            if (stmt != null) stmt.close();
            System.out.println();
            System.out.println("Execution time [sql1] = " + (end - start) + " ms.");

            stmt = conn.prepareStatement(sql2);
            stmt.setFetchSize(fetchSize);
            start = System.currentTimeMillis();
            for (int i = 0; i < 10000; i++) {
                rset = stmt.executeQuery();
                while (rset.next());
                if (rset != null) rset.close();
            }
            end = System.currentTimeMillis();
            if (stmt != null) stmt.close();
            System.out.println();
            System.out.println("Execution time [sql2] = " + (end - start) + " ms.");
            System.out.println();
            System.out.print("Enter to continue...");
            System.console().readLine();
        } finally {
            if (rset != null) rset.close();
            if (stmt != null) stmt.close();
            if (conn != null) conn.close();
        }
    }
}
In order to show that the results are cached on the client and thus server round-trips are avoided, I generated a 10046 level 12 trace from the database for this session. This was done using the following database logon trigger:
create or replace trigger logon_trigger
after logon on database
begin
if (user = 'ORADEMO') then
execute immediate
'alter session set events ''10046 trace name context forever, level 12''';
end if;
end;
/
With that in place I then did some environmental setup and executed the test:
C:\Projects\Test\Java\OCIResultCache>set ORACLE_HOME=C:\Oracle\instantclient_11_2
C:\Projects\Test\Java\OCIResultCache>set CLASSPATH=.;%ORACLE_HOME%\ojdbc6.jar
C:\Projects\Test\Java\OCIResultCache>set PATH=%ORACLE_HOME%\;%PATH%
C:\Projects\Test\Java\OCIResultCache>java OCIResultCache
Execution time [sql1] = 1654 ms.
Execution time [sql2] = 686 ms.
Enter to continue...
This is all on my laptop, so the results are not stellar in terms of performance; however, you can see that the portion of the test that uses the OCI client result cache executed in approximately half the time of the non-cached portion.
But, the more compelling data is in the resulting trace file which I ran through the tkprof utility to make it nicely formatted and summarized:
SQL ID: cqx6mdvs7mqud Plan Hash: 2228653197
select /*+ no_result_cache */ first_name, last_name
from
hr.employees
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 10000 0.10 0.10 0 0 0 0
Fetch 10001 0.49 0.54 0 10001 0 1070000
total 20002 0.60 0.65 0 10001 0 1070000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 94
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
107 107 107 INDEX FULL SCAN EMP_NAME_IX (cr=2 pr=0 pw=0 time=21 us cost=1 size=1605 card=107)(object id 75241)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 10001 0.00 0.00
SQL*Net message from client 10001 0.00 1.10
SQL ID: frzmxy93n71ss Plan Hash: 2228653197
select /*+ result_cache */ first_name, last_name
from
hr.employees
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 11 22 0
Fetch 2 0.00 0.00 0 0 0 107
total 4 0.00 0.01 0 11 22 107
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 94
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
107 107 107 RESULT CACHE 0rdkpjr5p74cf0n0cs95ntguh7 (cr=0 pr=0 pw=0 time=12 us)
0 0 0 INDEX FULL SCAN EMP_NAME_IX (cr=0 pr=0 pw=0 time=0 us cost=1 size=1605 card=107)(object id 75241)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
log file sync 1 0.00 0.00
SQL*Net message from client 2 1.13 1.13
The key differences here are the execute, fetch, and SQL*Net message values. Using the client-side cache, the values drop dramatically because the results come from client memory rather than round trips to the server.
Of course, corrections, clarifications, etc. welcome and so on...
Regards,
Mark -
Convergence - problem enabling browser-side caching
I am following the ["Enhancing Browser-Side Caching of Static Files" section of the Convergence Performance Tuning|http://wikis.sun.com/display/CommSuite/Convergence+Performance+Tuning#ConvergencePerformanceTuning-EnhancingBrowserSideCachingofStaticFiles] wiki document.
I installed the class file
-rw-r--r-- 1 root root 2724 Oct 2 08:50 /opt/SUNWappserver/domains/domain1/config/lib/classes/ExpiresFilter.class
And I added the following to default-web.xml (just below the last </servlet-mapping>):
<filter>
<filter-name>ExpiresFilter</filter-name>
<filter-class>iwc.ExpiresFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>ExpiresFilter</filter-name>
<url-pattern>/iwc_static/js/*</url-pattern>
<url-pattern>/iwc_static/layout/*</url-pattern>
<dispatcher>REQUEST</dispatcher>
<dispatcher>FORWARD</dispatcher>
</filter-mapping>
When I restart glassfish, I get this error in iwc_admin.log:
ADMIN: ERROR from com.sun.comms.client.admin.web.util.ConfigHelper Thread pool-1-thread-4 ipaddress= sessionid= at 09:21:35,355- Error occured while loading the configuration from the file Parsing Error
Line: 209
URI: file:/var/opt/wisc/wtcnv4/iwc/config/configuration.xml
Message: cvc-complex-type.2.4.b: The content of element 'BackendServiceDetails' is not complete. One of '{Enable, EnableSSL, ServiceURI, Timeout}' is expected.
ADMIN: ERROR from com.sun.comms.client.admin.web.util.ConfigHelper Thread pool-1-thread-4 ipaddress= sessionid= at 09:21:35,355- Parsing error while loading configuration: Error occured while loading the configuration from the file /var/opt/wisc/wtcnv4/iwc/config/configuration.xml due to:
Everything works fine if I remove the <filter> stanza from the config.
I'm guessing that it isn't loading the class correctly. Any ideas?
jessethompson wrote:
I am following the ["Enhancing Browser-Side Caching of Static Files" section of the Convergence Performance Tuning|http://wikis.sun.com/display/CommSuite/Convergence+Performance+Tuning#ConvergencePerformanceTuning-EnhancingBrowserSideCachingofStaticFiles] wiki document.
What version of Convergence are you using (./iwcadmin -V)?
I tried the steps with Convergence patch 10 (update 3) and Glassfish 2.1 (on Redhat Linux) and it worked fine -- I see "Expires" and "Cache-Control" headers for Convergence file requests.
I installed the class file
-rw-r--r-- 1 root root 2724 Oct 2 08:50 /opt/SUNWappserver/domains/domain1/config/lib/classes/ExpiresFilter.class
The class file should be in:
/opt/SUNWappserver/domains/domain1/lib/classes/iwc/
Regards,
Shane. -
Server Side Caching of pdf files
Hi all,
I've run into a rather odd problem that I can't find a way around. I've created some URL iViews to display some html files and some pdf files. I've adjusted the Load parameters for these iviews to the following:
Allow Client-Side caching: Yes
Cache Level: Shared
Cache Validity Period: 12 hrs
Fetch Mode: Server-Side
I've found that the html URL iViews work fine like this, but the pdf URL iViews do not. The pdf ones seem to just hang.
I've noticed that if I change the Fetch Mode to client-side for the pdf files, the iview then begins to work. However, I don't believe that this is the way that I want to configure my URL iviews. I believe that I want them all to be server-side cached.
I've followed the instructions on 'setting up system properties for proxy server' (System Administration -> System Configuration -> Service Configuration -> Applications -> com.sap.portal.ivs.httpservice -> Services -> Proxy). After making these configuration changes, I've stopped/restarted portal services. The problem still occurs.
Does anyone have any idea as to what may be causing this problem and how do I fix it?
Thanks!
-Stephen Spalding
Web Developer
Graybar
Hello.
I have the same problem. When I try to open a pdf file with Fetch Mode = server-side, I wait for 500 seconds... but if I open it with Fetch Mode = client-side, I only wait 2 seconds.
Did you find the solution?
Thanks.
Paco. -
Cache distribution - Java object cache
Hi.
I'm trying to use Oracle Java Object Cache (cache.jar), the one included in the 9iAS 9.0.3.
Everything works fine but the cache distribution between different JVM:s.
Anyone got this to work?
Regards
Jesper
package test;

import oracle.ias.cache.*;

/**
 * Singleton Cache class.
 */
public class Cache {

    /** The singleton instance of the object. */
    private static Cache instance = null;

    /** The root region. */
    private final static String APP_NAME = "Test";

    /**
     * Protected constructor - Use <code>getInstance()</code>.
     * @throws Exception if error
     */
    protected Cache() throws Exception {
        CacheAccess.defineRegion(APP_NAME);
    }

    /**
     * Gets the singleton instance.
     * @return The instance of the Cache object.
     */
    public static Cache getInstance() throws Exception {
        if (instance == null) {
            createInstance();
        }
        return instance;
    }

    /**
     * Creates the singleton instance in a thread-safe manner.
     */
    synchronized private static void createInstance() throws Exception {
        if (instance == null) {
            instance = new Cache();
        }
    }

    /**
     * Put an object on the cache.
     * @param name The object name
     * @param subRegion The sub region
     * @param object The object to cache
     * @throws Exception if error
     */
    public static void put(String name, String subRegion, Object object) throws Exception {
        CacheAccess appAcc = null;
        CacheAccess subAcc = null;
        try {
            appAcc = CacheAccess.getAccess(APP_NAME);
            // Create a group
            Attributes a = new Attributes();
            a.setFlags(Attributes.DISTRIBUTE);
            appAcc.defineSubRegion(subRegion, a);
            subAcc = appAcc.getSubRegion(subRegion);
            if (!subAcc.isPresent(name)) {
                subAcc.put(name, a, object);
            } else {
                subAcc.replace(name, object);
            }
        } catch (CacheException ex) {
            // handle exception
            System.out.println(ex.toString());
        } finally {
            if (subAcc != null) {
                subAcc.close();
            }
            if (appAcc != null) {
                appAcc.close();
            }
        }
    }

    /**
     * Gets a cached object from the specified sub region
     * @param name The object name
     * @param subRegion The sub region
     * @return The cached object
     * @throws Exception if requested object not in cache
     */
    public static Object get(String name, String subRegion) throws Exception {
        CacheAccess appAcc = null;
        CacheAccess subAcc = null;
        Object result = null;
        try {
            appAcc = CacheAccess.getAccess(APP_NAME);
            subAcc = appAcc.getSubRegion(subRegion);
            // define an object and set its attributes
            result = (Object) subAcc.get(name);
        } catch (CacheException ex) {
            // handle exception
            throw new Exception("Object '" + name + "' not in cache region '" + subAcc.getRegionName() + "'.");
        } finally {
            if (subAcc != null) {
                subAcc.close();
            }
            if (appAcc != null) {
                appAcc.close();
            }
        }
        return result;
    }

    /**
     * Invalidates all objects in all regions
     */
    public static void invalidateAll() throws Exception {
        CacheAccess appAcc = CacheAccess.getAccess(APP_NAME);
        appAcc.invalidate(); // invalidate all objects
        appAcc.close();      // close the CacheAccess access
    }

    // Main method for testing purposes.
    public static void main(String[] args) throws Exception {
        try {
            System.out.println(">> Caching object OBJ1 into region TEST1.");
            Cache.getInstance().put("OBJ1", "TEST1", "Object cached in TEST1.");
            System.out.println(">> Getting OBJ1 from cache region TEST1.");
            System.out.println(Cache.getInstance().get("OBJ1", "TEST1"));
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
    }
}
Contents of JAVACACHE.PROPERTIES:
# discoveryAddress is a list of cache servers and ports
discoveryAddress = host1.myserver.com:12345,host2.myserver.com:12345
logFileName = c:\javacache.log
logSeverity = DEBUG
distribute = true
I have the same problem.
Is there some reason?
I'm testing the cache with the isDistributed() method and I still get false!
Thanx -
Is NFS client data caching possible?
Yesterday, I was viewing an HD 1080 video with VLC, and noticed that Activity Monitor was showing about 34MB/sec from my NAS box. My NAS box runs OpenSolaris (I was at Sun for over 20 years, and some habits die hard), and the 6GB video file was mounted on my iMac 27" (10.7.2) using NFSv3 (yes, I have a gigabit network).
Being a long term UNIX performance expert and regular DTrace user, I was able to confirm that VLC on Lion was reading the file at about 1.8MB/sec, and that the NFS server was being hit at 34MB/sec. Further investigation showed that the NFS client (Lion) was requesting each 32KB block 20 times!
Note: the default read size for NFSv3 over TCP is 32KB).
Digging deeper, I found that VLC was reading the file in 1786-byte blocks. I have concluded that Lion's NFSv3 client issues at least one 32KB read for each application call to read(2), and that no data is cached between reads (this fully accounts for the 20x overhead in this case).
A workaround is to use say rsize=1024, which will increase the number of NFS ops but dramatically reduce the bandwidth consumption (which means I might yet be able to watch HD video over wifi).
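The rsize workaround, spelled out as an explicit mount table entry (the server and export names are examples):

```
# /etc/fstab entry forcing a 1KB NFS read size (example host and paths)
nas.example.com:/export/media  /Volumes/media  nfs  rsize=1024  0  0
```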
That VLC should start issuing such small reads is a bug, so I have also written some notes in the vlc.org forums. But client-side caching would hide the issue from the network.
So, the big question: is it possible to enable NFS client data caching in Lion?
The problem solved itself mysteriously overnight, without any interference from myself.
The systems are again happily mounting the file space (all 650 clients at the same time, mounting up to 6 filesystems from the same server) and the server is happily serving again, as it has been for the past 2 years.
My idea is that there was a network configuration clash, but considering that the last modification of the NIS hosts file was about 4 weeks ago, when the latest server was installed and has been serving ever since, I have no idea how such a clash could happen without interference in the config files. It is a mystery and I will have to make every effort to unravel it. One does not really like to sweep incidents like that, un-investigated, under the carpet.
If anybody has any suggestions and thoughts on this matter please post them here.
Lydia -
Hi,
The requirement is to create "Document Sets in Bulk" using JSOM. I am using the following posts:
http://blogs.msdn.com/b/mittals/archive/2013/04/03/how-to-create-a-document-set-in-sharepoint-2013-using-javascript-client-side-object-model-jsom.aspx
http://social.msdn.microsoft.com/Forums/sharepoint/en-US/1904cddb-850c-4425-8205-998bfaad07d7/create-document-set-using-ecma-script
But when I execute the code, I get the error "Cannot read property 'DocumentSet' of undefined". Please find my code below. I am using a Content Editor web part with my JS file attached:
<div>
<label>Enter the DocumentSet Name <input type="text" id="txtGetDocumentSetName" name="DocumentSetname"/> </label> <br/>
<input type="button" id="btncreate" name="bcreateDocumentSet" value="Create Document Set" onclick="javascript:CreateDocumentSet()"/>
</div>
<script type="text/javascript" src="//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.2.min.js"> </script>
<script type="text/javascript">
// SP.DocumentSet lives in SP.DocumentManagement.js, which is loaded on demand.
// Load sp.js first and then SP.DocumentManagement.js before touching
// SP.DocumentSet; otherwise SP.DocumentSet is undefined, which is exactly the
// "Cannot read property 'DocumentSet' of undefined" error.
SP.SOD.executeFunc('sp.js', 'SP.ClientContext', function () {
    SP.SOD.registerSod('SP.DocumentManagement.js',
        SP.Utilities.Utility.getLayoutsPageUrl('SP.DocumentManagement.js'));
    SP.SOD.executeFunc('SP.DocumentManagement.js', 'SP.DocumentSet.DocumentSet', function () { });
});

// Shared between CreateDocumentSet and its callbacks.
var ctx;
var parentFolder;
var newDocSetName;
var docsetContentType;

// This function is called on click of the "Create Document Set" button.
function CreateDocumentSet() {
    ctx = SP.ClientContext.get_current();
    newDocSetName = $('#txtGetDocumentSetName').val();
    var docSetContentTypeID = "0x0120D520"; // base Document Set content type
    var web = ctx.get_web();
    var list = web.get_lists().getByTitle('Current Documents');
    ctx.load(list);
    parentFolder = list.get_rootFolder();
    ctx.load(parentFolder);
    docsetContentType = web.get_contentTypes().getById(docSetContentTypeID);
    ctx.load(docsetContentType);
    ctx.executeQueryAsync(onRequestSuccess, onRequestFail);
}

// Runs once the list, root folder and content type have been loaded.
function onRequestSuccess() {
    SP.DocumentSet.DocumentSet.create(ctx, parentFolder, newDocSetName,
        docsetContentType.get_id());
    // create() only queues the operation; a second round trip commits it.
    ctx.executeQueryAsync(
        function () { alert('Document Set creation successful'); },
        onRequestFail);
}

// This function runs if an executeQueryAsync call fails.
function onRequestFail(sender, args) {
    alert('Document Set creation failed: ' + args.get_message());
}
</script>
Please help !!
Vipul Jain
Hello,
I have already tried your solution; however, in that case I get the error "Uncaught Sys.ArgumentNullException: Value cannot be null. Parameter name: context".
Also, I tried removing SP.SOD.executeFunc from my code, but no success :(
Kindly suggest !!!
Vipul Jain