HTTP caching headers - WL6
I'm evaluating WL6 as my web server but can't find a way to specify which caching headers should be sent with HTML files. I would like to be able to set an expiration for the HTML files so client browsers will pick up future updates instead of serving them from a non-expiring cache.
Tim Kuntz
[email protected]
Similar Messages
-
I've noticed a practical issue with iPhoto's File->"Subscribe to Photo Feed..." feature.
Every time that iPhoto checks a subscribed RSS Photo Feed for new content it will re-download each and every photo in the feed. This occurs even if iPhoto has already downloaded each image, they all have unique GUIDs set, and no content in the feed has changed since the last time it polled for it.
My best guess is that this happens if the server offering the feed does not produce the proper cache-control parameters in the HTTP Headers that are sent by the server when the RSS feed is accessed by iPhoto.
Does anyone know what parameter/value pairs need to be set in the HTTP headers to prevent iPhoto from re-downloading RSS enclosures that it already has?
This is a very practical problem for feeds with a large number of photos of large file size. Besides the obvious massive waste of bandwidth, the user receives an annoying error whenever they try to quit iPhoto before the feed and all of its images have been re-downloaded yet again.
Here is an example of the HTTP headers for two of the images in the feed.
Any red flags in there that you think might be causing iPhoto to think it needs to re-download the images after it's already downloaded them once?
HTTP/1.1 200 OK
Content-Length: 1251893
Date: Fri, 27 Jan 2012 15:37:49 GMT
Server: Apache
Set-Cookie: PHPSESSID=1b9d47ae170ba8c0a088ab7124a1677e; path=/
Expires: Mon, 24 Jan 2022 15:37:49 GMT
Cache-Control: max-age=315360000,public
Pragma: public
Last-Modified: Thu, 26 Jan 2012 23:34:38 GMT
Accept-Ranges: none
Connection: close
Content-Type: image/jpeg;
HTTP/1.1 200 OK
Content-Length: 744151
Date: Fri, 27 Jan 2012 15:39:01 GMT
Server: Apache
Set-Cookie: PHPSESSID=086c112f99ccecc266a47d66d6b47733; path=/
Expires: Mon, 24 Jan 2022 15:39:01 GMT
Cache-Control: max-age=315360000,public
Pragma: public
Last-Modified: Thu, 26 Jan 2012 02:54:40 GMT
Accept-Ranges: none
Connection: close
Content-Type: image/jpeg;
-
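For reference, the freshness decision these headers drive can be checked mechanically. The sketch below is illustrative only (the class and method names are mine, not any real API): it parses the `max-age` directive shown in the headers above and tests freshness the way an HTTP/1.1 cache would. With `max-age=315360000` these images should stay fresh for roughly ten years, so plain expiry seems unlikely to be the culprit from these headers alone.

```java
public class Freshness {
    // Parse the max-age value (in seconds) out of a Cache-Control header; -1 if absent.
    static long maxAge(String cacheControl) {
        for (String directive : cacheControl.split(",")) {
            directive = directive.trim();
            if (directive.startsWith("max-age="))
                return Long.parseLong(directive.substring("max-age=".length()));
        }
        return -1;
    }

    // A response is fresh while its age (now - Date header) is below max-age.
    static boolean isFresh(long dateMillis, long nowMillis, long maxAgeSeconds) {
        return (nowMillis - dateMillis) / 1000 < maxAgeSeconds;
    }

    public static void main(String[] args) {
        // The Cache-Control value from the headers above: ten years in seconds.
        long tenYears = maxAge("max-age=315360000,public");
        System.out.println(tenYears);
        // One day after the Date header, the response is still comfortably fresh.
        System.out.println(isFresh(0L, 86400000L, tenYears));
    }
}
```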
Firefox adding no-cache headers to all http POSTs?
Our server-side cache is configured to invalidate appropriate cached data on PUT/POST/DELETE, but is also set to obey no-cache headers and ignore the cache entirely.
Firefox is the only browser that forcefully appends no-cache headers to all POST requests, resulting in the cache not invalidating properly. Is there some standard that suggests a POST should include no-cache directives, or is this something I can't work around?
Reload the webpage while bypassing the cache using '''one''' of the following steps:
*Hold down the ''Shift'' key and click the ''Reload'' button with a left click.
OR
*Press ''Ctrl'' + ''F5'' or ''Ctrl'' + ''Shift'' + ''R'' (Windows and Linux)
*Press ''Command'' + ''Shift'' + ''R'' (Mac)
Let us know if this solves the issues you are having. -
Firefox 7 does not obey no-cache headers?
Whenever I visit my site that sends no-cache headers, Firefox still caches it. It's really annoying in the Admin CP for my forums, as it "seems" to revert the settings, and the success message is not shown. It worked fine with Firefox 6.
Have the same problem with my servlet.
Using these headers:
resp.setHeader("Cache-Control", "no-cache, no-store"); // HTTP 1.1
resp.setHeader("Pragma", "no-cache");                  // HTTP 1.0
resp.setDateHeader("Expires", 0);                      // prevents caching at the proxy server
Getting no love from FF7. No problems with FF 3.6, Safari 5, or Chrome 13 & 15. -
What is the HTTP caching flowchart/logic in Firefox?
Hello,
I am configuring a reverse HTTP proxy and I am trying to optimize it as much as possible. I found the following article which was written on Oct. 9, 2002 and it describes the HTTP caching logic in Firefox.
http://www-archive.mozilla.org/projects/netlib/http/http-caching-faq.html
However, the article is pretty old and I don't think the latest versions of Firefox use the same flowchart. Do you know exactly how Firefox decides whether to cache an object? What I mean is: does FF check the Cache-Control header first, then the Expires header, and finally Last-Modified? What happens if both a Cache-Control header and an Expires header are present?
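Whatever the current Firefox internals look like, RFC 2616 section 13.2.4 defines the precedence most HTTP caches follow: `Cache-Control: max-age` wins over `Expires`, and only when both are absent may a cache fall back to a heuristic, commonly around 10% of the interval between `Date` and `Last-Modified`. A sketch of that rule (the class, method, and the 10% figure are illustrative conventions, not Firefox's actual code):

```java
public class FreshnessLifetime {
    // RFC 2616 13.2.4 precedence: max-age beats Expires; with neither present,
    // caches may use a heuristic based on Last-Modified (10% of Date -
    // Last-Modified is a common choice). Millisecond inputs except maxAgeSec;
    // pass a negative/zero value for any header that is absent.
    static long lifetimeSeconds(long maxAgeSec, long expiresMs,
                                long dateMs, long lastModifiedMs) {
        if (maxAgeSec >= 0)
            return maxAgeSec;
        if (expiresMs > 0)
            return Math.max(0, (expiresMs - dateMs) / 1000);
        if (lastModifiedMs > 0)
            return (dateMs - lastModifiedMs) / 10 / 1000;
        return 0; // nothing usable: treat as immediately stale
    }
}
```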
Kind Regards,
Daniel K.
WebLogic Server is a single Java process that has two listen ports, one SSL and one non-SSL.
These two ports use protocol discrimination to handle multiple protocols on a single port:
NON-SSL --> http, t3 (proprietary RMI protocol), iiop
SSL --> https, t3s, iiops
So WebLogic comes with a built-in web server. Alternatively, you can put a third-party web server in front of WLS, using a plugin to proxy requests to WLS.
See;
http://edocs.bea.com/wls/docs81/plugins/
Cheers
mbg
"Manoj" <[email protected]> wrote in message
news:3edb0ba5$[email protected]..
>
Is the built-in web server in WebLogic Apache, or is it some other HTTP server that BEA owns?
-
Who can tell me how to set HTTP request headers, for example If-Modified-Since? I only know how to read If-Modified-Since in a servlet and use it.
Hi,
Headers are provided by the browser you use. The browser controls their content, e.g. the accepted file types, the browser name and type, the HTTP version used, and so on.
You can, however, add your own custom headers with the addHeader methods of HttpServletResponse. Then you can read them out on the client, e.g. using JSP, and react on the client based on the given values.
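To complement the reply above: when the "client" is Java code rather than a browser, request headers such as If-Modified-Since can be set directly on a URLConnection via setRequestProperty. A sketch (the fetchIfChanged helper and its URL argument are illustrative; httpDate produces the fixed RFC 1123 format HTTP requires):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class IfModifiedSince {
    // HTTP dates use the fixed RFC 1123 format, always expressed in GMT.
    static String httpDate(long epochMillis) {
        SimpleDateFormat f =
            new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);
        f.setTimeZone(TimeZone.getTimeZone("GMT"));
        return f.format(new Date(epochMillis));
    }

    // Ask the server to send the body only if the resource changed after 'since';
    // a 304 response code means the cached copy is still valid.
    static int fetchIfChanged(String url, long since) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        c.setRequestProperty("If-Modified-Since", httpDate(since));
        return c.getResponseCode();
    }
}
```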
Best regards
jherunter -
Allow-http-request-headers-from Question
I have a crossdomain.xml file in place, but am still getting this error:
"Security error accessing url"
I searched the web; is it true that because I am accessing a SOAP web service I need to add this to the crossdomain.xml file:
<allow-http-request-headers-from domain="MYDOMAINHERE" headers="SOAPAction"/>
Is there anything else, from a crossdomain.xml perspective or in general, that I need to do to be able to access these web services from my Flex app?
Thanks for your responses.
The SWF is at MyDomain.com (for example).
The crossdomain.xml file is on the 3rd party web service provider at the root of their web server, and it correctly has an entry to MyDomain.com.
The exact error is "Security error accessing url".
I read on the web that when accessing SOAP web services you need to add these entries to the crossdomain.xml file:
<allow-http-request-headers-from domain="MyDomain.com" headers="SOAPAction"/>
Is that the case and is there anything else extra I need to do to get web services to work in this case?
Thanks Alex, for helping with this. -
Hi-
What role is required for an Integration Server (ABAP) HTTP cache refresh?
This is accessible if you go to XI Administration -> Cache Overview and click on Full Cache Refresh for INTEGRATIONSERVER_. It calls this URL.
http://hostname:8000/sap/xi/cache?sap-client=800&mode=F
XISUPER has all SAP_XI* roles. I get a "403 Forbidden" unless I include SAP_ALL.
Thanks,
J Wolff
Hi,
Check this document:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1a69ea11-0d01-0010-fa80-b47a79301290
Also verify if the user has such roles:
SAP_XI_ID_SERV_USER
SAP_XI_IS_SERV_USER
SAP_XI_IR_SERV_USER
Regards,
Wojciech -
Programmatically disabling the HTTP cache in the JVM
How do I programmatically disable the HTTP cache in the JVM that is used by the URLConnection classes?
Working through the HTTP request properties did the trick, but I actually used the If-Modified-Since property, which was more appropriate for me for a number of reasons.
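For later readers: the relevant per-connection switches are setUseCaches and setDefaultUseCaches on java.net.URLConnection. A minimal sketch (the class and helper name are mine):

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;

public class NoJvmCache {
    // Open a connection with the JVM-level HTTP cache bypassed.
    static URLConnection uncached(URL url) throws IOException {
        URLConnection conn = url.openConnection();
        conn.setUseCaches(false);        // skip the cache for this connection
        conn.setDefaultUseCaches(false); // and for future connections, too
        return conn;
    }
}
```

Note that openConnection() does no network I/O, so the flags can be inspected before connect() is ever called.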
Edited by: beuchelt on Nov 25, 2008 9:02 PM -
HTTP Cache control / GZIP compression
Hi,
is it possible to add / enable cache control / compression in Connect Pro (7 or 8)? If so, how?
I suspect we'll have to tweak the underlying HTTP server for it, but I'd like to know if this is documented somewhere and whether it's supported. Specifically, we'd like to
- add / configure Cache-Control http headers
- add / configure Expires http headers
- activate GZIP/Deflate compression
If this isn't something Connect offers out of the box, any recommendations for proxy servers to put in front of Connect and things to watch out for?
Thanks,
Dirk.
Thanks. We solved it by adding a custom Servlet Filter to the web.xml.
Dirk. -
Help needed about HTTP Request Headers
HI All,
Regarding HTTP RFC 1945, I need to access the Request-Line and the message body. Is there any functionality in Servlets or JSP to access both of them? I used request.getHeaderNames(), which returns the headers ACCEPT, ACCEPT-ENCODING, ACCEPT-LANGUAGE, CACHE-CONTROL, CONNECTION, CONTENT-LENGTH, CONTENT-TYPE, COOKIE, HOST, REFERER, and USER-AGENT, but not the Request-Line or the message.
Kindly provide the help to access them.
Regards,
Kashif Bashir
AdamSoft Intl
Lahore, Pakistan
I had a look at RFC 1945. It gives an example of what it calls a Request-Line:
GET http://www.w3.org/pub/WWW/TheProject.html HTTP/1.0
But as far as I can see, it doesn't describe this anywhere as being a header, so it doesn't surprise me that getHeaderNames() doesn't return it.
I'm also surprised you're even trying to access this low-level information with a high-level tool like a servlet. You could probably reconstruct it with various servlet methods, though. You know the method (e.g. "GET") and you can find out the URL from various methods of the HttpServletRequest. -
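As the reply suggests, the Request-Line can be reconstructed from servlet accessors; a trivial sketch (pure string assembly, with the HttpServletRequest calls that supply each piece noted in comments):

```java
public class RequestLine {
    // In a servlet, the pieces come from request.getMethod(),
    // request.getRequestURI() (plus getQueryString(), if any),
    // and request.getProtocol().
    static String requestLine(String method, String uri, String protocol) {
        return method + " " + uri + " " + protocol;
    }
}
```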
How do I add data to HTTP response headers in Tomcat?
Hello
I am in the process of making our site Platform for Privacy Preferences (P3P) compliant. We are running Tomcat 3.2.2 (standalone) on a Linux box. I am looking for information on adding data to HTTP response headers (see example below). One of the aspects of P3P compliance is referencing where a policy is located and referencing the "Compact Policy" via the "CP" tag. I need to add these values, and as I am new to Tomcat I need to know where to find the header information.
<!-- example -->
HTTP/1.1 200 OK
P3P: policyref="http://somesite.com/P3P/PolicyReferences.xml",
CP="NON DSP COR CURa ADMa DEVa CUSa TAIa OUR SAMa IND"
Content-Type: text/html
Content-Length: 8104
Server: ...
...content...
<!-- end example -->
Any suggestions would be appreciated.
Joe Dalessandro
e: [email protected]
Hmm... If you check your server.xml file, you'll probably find that Tomcat has been configured to use org.apache.tomcat.service.http.HttpConnectionHandler as the connection handler (right under the "Connector classname" value)...
Would it be possible to tweak that handler or extend it in such a way that you could add your headers?
Unfortunately I don't work with Tomcat myself, otherwise I could try it...
If this seems too complicated, I'd just recommend you to install Apache, and do it there... -
Hey all,
I'm a fairly new Arch user, and to be honest, a fairly new Linux user in general. Over the years I tried so many times to make the transition from Windows to Linux and after finding Arch, I felt it was finally time, but I digress...
I now run 2 Arch machines here and one Arch virtual machine, but I found myself with a little problem. Having 3 machines meant that it required either 3 times the bandwidth to keep them updated, or a lot of hassle with copying packages around the network. I even experimented with using the Pacman cache directory on an NFS share, but none of these were acceptable to me.
So this afternoon I sat down and coded a solution I was happy with.
I run a small Linux server here, Debian based, which serves as a small file server to the local network and also hosts a few services for myself and a friend. This server also runs Lighttpd which I use for developing with PHP and Perl.
The idea was that this machine would be a mirror for Arch to update from, but I didn't want to mirror everything, just those packages which I used. After much searching through Google I discovered someone who had done something similar for Debian-based distributions, apt-cache. It's essentially a small web server which, when queried for a package, first checks its local cache, and if it doesn't find the file, it downloads it from an official mirror, both storing it locally and sending it to the client.
I've never coded in Java personally and I didn't want to have 2 web-servers running when one would suffice, so I set about coding something similar in PHP.
The end result is 130 lines of code, and a url.rewrite rule, which achieves exactly what I was after. It works like this:
1) Pacman requests a file from the local server
2) Local server checks to see if it has the file
3a) Local server cannot find the file so it requests it from an Arch mirror
4a) The file is simultaneously downloaded, written to disk and sent to Pacman.
3b) Local server has the file
4b) The file is sent to Pacman
The end result is a transparent cache which will only have to download the file once. An example of the speed increase is as follows:
# pacman -S --downloadonly kernel26
kernel26 21.7M 806.4K/s 00:00:28
# rm /var/cache/pacman/pkg/kernel26-2.6.21.5-1.pkg.tar.gz
# pacman -S --downloadonly kernel26
kernel26 21.7M 8.5M/s 00:00:03
As you can see, after the package was cached on the server, it didn't need re-downloading and as such it transferred to the local machine at LAN speeds.
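The hit/miss flow in steps 1-4b above can be sketched in a few lines (shown here in Java for concreteness, though the author's implementation is PHP; the PackageCache and Fetcher names are illustrative, and unlike the real script, which streams to disk and client simultaneously, this sketch buffers the whole file for brevity):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PackageCache {
    interface Fetcher {
        byte[] get(String name) throws IOException; // e.g. an HTTP GET to an Arch mirror
    }

    // Serve 'name' from the local cache, fetching and storing it on a miss.
    static byte[] fetch(Path cacheDir, String name, Fetcher upstream) throws IOException {
        Path cached = cacheDir.resolve(name);
        if (Files.exists(cached))              // 3b/4b: hit, serve the local copy
            return Files.readAllBytes(cached);
        byte[] data = upstream.get(name);      // 3a: miss, ask the mirror
        Files.createDirectories(cacheDir);
        Files.write(cached, data);             // 4a: store while serving
        return data;
    }
}
```

After the first fetch, every later request for the same file is satisfied from disk, which is exactly the LAN-speed second download shown in the pacman example above.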
My mirror entries for the repositories looks like this:
Server = http://192.168.0.1/pacman-cache/pkg/current/os/i686/
Server = http://192.168.0.1/pacman-cache/pkg/extra/os/i686/
etc....
So, my question is this: would anyone out there be interested in the code? Right now it still needs a lot of work before it could be made public, as there's very little error checking; I need to handle unexpected conditions like a broken download, and I also have to add handling for the db.tar.gz files being updated. But as and when I feel it's ready, would anyone use it?
I'd appreciate any input anyone felt like sharing, even feature requests =^.^=
PS: I hope this is in the right sub-forum... I didn't think it belonged in the actual Pacman forum, but if it did, apologies!
Just a little update:
I solved the timeout problem. It wasn't a misconfiguration but rather a bug in the code that was causing it to stall randomly when retrieving a remote package, it'll now happily retrieve multiple packages without a problem.
The following new features have been added since I last posted:
Logging support
It's a little primitive, but it works. The location of the log file is customizable so it should be possible for logrotate to work with it, hence I've not added any rotation system of my own.
Setup support
The cache script now functions correctly when initially installing Arch via the /arch/setup script. I've run it through once or twice, but it could do with further testing.
Testing & Unstable repository support
Self explanatory really
Currently the script only supports the i686 architecture (due to some hard-coded paths); I'd need to do some recoding to support x86_64 as well, which is something I'm considering, assuming I can get VMware to run a 64-bit Arch install. Resuming is still on the "maybe" pile, as I'm still trying to come up with a way of coding it cleanly; it's a case of balancing effort vs. reward on this one.
I'm also currently working on a quick-and-simple administration interface for the cache. It should let you see what files are cached, remove selected ones or an entire repository worth of cache. Perhaps even have the ability to verify the local files against the md5 summary files I build. It's in the early stages right now, but it'll hopefully be complete in the near future.
Overall it works very well and depending on how many bugs I run into when giving it a real test, I should be able to release it here in the not-too-distant future, then anyone who's interested can play with it and perhaps improve it beyond my original design. -
HTTP cache doesn't work as expected in an applet
Hi, everyone!
I wrote an applet which dynamically connects to an HTTP server and fetches content from it. I would like the content to be cached in the browser's cache system.
Here is my code. It works well in the MS JVM, but when I switch to the Sun plugin, the caching doesn't work: every time I get a response code 200 (I expect 304).
==================My code================================
URLConnection conn = url.openConnection();
conn.setUseCaches(true);
conn.setAllowUserInteraction(false);
conn.connect();
java.io.ObjectInputStream ois = null;
try {
    ois = new java.io.ObjectInputStream(
        new java.util.zip.InflaterInputStream(conn.getInputStream()));
    return ois.readObject();
} finally {
    if (ois != null)
        ois.close();
}
==================My code================================
I tested on Sun JDK 1.4.2 and IE6.
My HTTP server is Jetty, and I do add a Last-Modified header to the HTTP response.
Looking into the Java plugin control panel, only the jar file is cached.
Sorry for the confusion; of course you are right, and it seems to be a bug.
I meant that the method name setUseCaches itself might be confusing, because it does not force caching; it just allows it when available.
There are several bugs about plugin caching (4912903, 5109018, 6253678, 4845728, ...); even if they might not fit your problem 100%, they show that plugin caching is an issue which Sun really should improve.
You might have a look at them and vote for them or submit a new bug. -
CSS not load balancing HTTP DELETE requests
Software Version: SW Version: 4.00 Build 7
I cannot get a connection to any of the servers in our server farm when the HTTP method is DELETE. It works fine with GET and POST over the same port number. The process works if we bypass the CSS and go directly to a server. Does anyone know how we can support this through the CSS?
This is not supported by the CSS. The following text is taken from here:
http://www.cisco.com/en/US/products/hw/contnetw/ps792/products_configuration_guide_chapter09186a008029c621.html#wp1096916
"A Layer 5 content rule supports the HTTP CONNECT, GET, HEAD, POST, PUSH, and PUT methods. In addition, the CSS recognizes and forwards the following HTTP methods directly to the destination server in a transparent caching environment but does not load balance them: RFC 2068 - OPTIONS, TRACE and RFC 2518 - PROPFIND, PROPPATCH, MKCOL, MOVE, LOCK, UNLOCK, COPY, DELETE."
~Zach