Best practice to avoid Firefox cache collision

Hi everybody.
Reading through this page of Google Developers
https://developers.google.com/speed/docs/best-practices/caching#LeverageBrowserCaching
I found this line, whose exact meaning I can't figure out:
"avoid the Firefox hash collision issue by ensuring that your application generates URLs that differ on more than 8-character boundaries".
Can you please explain this with practical examples?
For example, are these two strings different enough?
"hello123hello"
"hello456hello"
or these
"1aaaaaa1"
"2aaaaaa2"
Also, which versions of Firefox have this problem?
Thank you
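I can't say which Firefox versions are affected, but one common way to sidestep the concern entirely is to fingerprint static asset URLs with a hash of the file contents, so that any two versions of a resource produce URLs that differ at many character positions rather than in one short substring. A minimal sketch in C# (a hypothetical helper; file names and paths are made up for illustration):

// Requires: using System; using System.IO; using System.Security.Cryptography;
// Builds a fingerprinted URL such as /static/app.3f786850e38755...css so that two
// versions of the same file differ across the whole URL, not just in one small spot.
static string FingerprintUrl(string directoryUrl, string fileName, string physicalPath)
{
    using (SHA1 sha1 = SHA1.Create())
    using (FileStream stream = File.OpenRead(physicalPath))
    {
        string hash = BitConverter.ToString(sha1.ComputeHash(stream))
                                  .Replace("-", "").ToLowerInvariant();
        string name = Path.GetFileNameWithoutExtension(fileName);
        string ext = Path.GetExtension(fileName);   // includes the leading dot
        return directoryUrl + "/" + name + "." + hash + ext;
    }
}

// Example: FingerprintUrl("/static", "app.css", @"C:\site\static\app.css")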


Similar Messages

  • HTTP Web Response and Request Best Practices to Avoid Common Errors

    I am currently investigating an issue with our web authentication server not working for a small subset of users. My investigation led me to try to figure out the best practices for web requests and responses.
    The code below allows me to get an HTTP status code for a website.
    // Requires: using System.Net;
    string statusCode;
    CookieContainer cookieContainer = new CookieContainer();
    HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create(url);
    // Allow auto-redirect.
    myHttpWebRequest.AllowAutoRedirect = true;
    // Make sure a cookie container is set up; without one, cookies are not kept
    // across redirects and the request can get caught in a redirect loop.
    myHttpWebRequest.CookieContainer = cookieContainer;
    // Dispose of the response so the underlying connection is released.
    using (HttpWebResponse webResponse = (HttpWebResponse)myHttpWebRequest.GetResponse())
    {
        statusCode = "Status Code: " + (int)webResponse.StatusCode + ", " + webResponse.StatusDescription;
    }
    Throughout my investigation, I encountered some error status codes, for example the "Too many automatic redirections were attempted" error. I fixed this by adding a cookie container, as you can see above.
    My question is: what are the best practices for making HTTP requests and handling the responses?
    I know my hypothesis that I was missing crucial web request setup (such as a cookie container) is correct, because adding one fixed the redirect error.
    I suspect my customers are hitting the same kind of issue when they use our software to authenticate with our server/URL. I would like to avoid as many web request issues as possible.
    Thank you.
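    One thing worth adding about the snippet above: GetResponse() throws a WebException for non-success status codes (4xx/5xx), so code that only calls GetResponse() never sees those codes directly. A minimal, illustrative way to capture them, reusing the myHttpWebRequest and statusCode variables from the snippet (not necessarily how the poster's application is structured):
    try
    {
        using (HttpWebResponse webResponse = (HttpWebResponse)myHttpWebRequest.GetResponse())
        {
            statusCode = "Status Code: " + (int)webResponse.StatusCode + ", " + webResponse.StatusDescription;
        }
    }
    catch (WebException ex) when (ex.Response != null)
    {
        // 4xx/5xx responses throw, but the response attached to the exception
        // still carries the status code and description.
        using (HttpWebResponse errorResponse = (HttpWebResponse)ex.Response)
        {
            statusCode = "Status Code: " + (int)errorResponse.StatusCode + ", " + errorResponse.StatusDescription;
        }
    }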

    Hello faroskalin,
    This issue is more regarding ASP.NET, so I suggest asking it at
    http://forums.asp.net/
    There are ASP.NET experts there who will help you better.
    Regards.

  • 'Best practice' for avoiding duplicate inserts?

    Just wondering if there's a 'best practice' approach for handling potential duplicate database inserts in CF. At the moment, I query the db first to work out if what I'm about to insert already exists. I figure I could also just send the SQL and catch the error, which would then tell me the data's already in there, but that seemed a bit dodgy to me. Which is the 'proper' way to handle this kind of thing?

    MrBonk wrote:
    > Just wondering if there's a 'best practice' approach for handling potential duplicate database inserts in CF. At the moment, I query the db first to work out if what I'm about to insert already exists. I figure I could also just send the SQL and catch the error, which would then tell me the data's already in there, but that seemed a bit dodgy to me. Which is the 'proper' way to handle this kind of thing?
    I wouldn't consider letting the db handle this as "dodgy". If you're seeing the majority of inserts go through OK, then you're saving at least one db interaction per insert, which can add up in high-transaction environments.
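    For what it's worth, here is a minimal sketch of the "send the SQL and catch the error" approach. It is C#/ADO.NET rather than CF, and it assumes a hypothetical SQL Server Users table with a unique constraint on Email; the point is simply that the database, not a prior query, decides whether the row is a duplicate:
    // Requires: using System.Data.SqlClient;
    static bool InsertUser(string connectionString, string email)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Users (Email) VALUES (@email)", conn))
        {
            cmd.Parameters.AddWithValue("@email", email);
            conn.Open();
            try
            {
                cmd.ExecuteNonQuery();
                return true;    // row inserted
            }
            catch (SqlException ex) when (ex.Number == 2627 || ex.Number == 2601)
            {
                return false;   // unique constraint/index violation: the value already exists
            }
        }
    }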

  • Project Server / Project Pro = Cut and Paste official best practices to avoid corruption

    It is pretty common knowledge among Project Server veterans that Cut and Paste between Project Pro files can cause corruption with the assignment table. This leads to Publish and Reporting (Project Publish)
    failures. I just went through an upgrade from Project Server 2007 to 2013 and a LOT of files had to be rebuilt from scratch. 
    I ask my users to not Cut and Paste within or between project files. They do not always listen.
    I have looked and cannot find official Microsoft text on the subject or at least some sort of official best practices doc. It would help me prevent some corruption if I had a doc with a Microsoft logo that says “It is not recommended….”
    Or “If you do this you will make your admin cry…”
    If this doc is right in front of me and I just missed it I apologize in advance.
    Thanks

    John --
    To the best of my knowledge, there is no such Best Practice document from Microsoft.  Based on my experience with Project Server from Project Central through Project Server 2013, following are some items that can cause corruption in your project:
    Use of special characters in the name of the project.
    Cutting/copying/pasting entire task rows within a single enterprise project or between multiple enterprise projects.
    Opening an enterprise project on your laptop while connected to a wireless network, closing the laptop lid, walking to another room in the building so that you pass from one wireless access point to another, and then opening the laptop lid again. 
    I learned this from one of our former clients.
    Beyond this, there are some other items that will not corrupt your project, but which will cause problems with saving or publishing a project.  These items include:
    Using blank rows in the project.
    Using special characters in task names.
    Task names that start with a space character (this sometimes happens when copying and pasting from an Excel workbook).
    In both Project Server 2010 and 2013, the recommended method for resolving a corrupted project is to use the Save for Sharing feature. Refer to my directions in the following thread on how to use the Save for Sharing feature:
    http://social.technet.microsoft.com/Forums/projectserver/en-US/ec2c5135-f997-40e1-8fe1-80466f86d373/task-default-changing?forum=projectprofessional2010general
    Beyond what guidance I have provided, I am hoping other members of this community will add their thoughts as well.  Hope this helps.
    Dale A. Howard [MVP]

  • Oracle's Best practice for avoiding data concurrency problems

    What is Oracle's recommendation for avoiding data concurrency problems? Is the TIMESTAMP data type good enough to prevent a record from being updated twice?
    Any feedback is welcome.

    It means you need to lock the records yourself.
    Three months ago, I tried to simulate Oracle Developer 2000 Forms 6i, and I found that it uses "select .. for update nowait where .." to lock the record in the Text_Change event. I did the same, and it seems to be working OK now in my VB.NET program.
    Jimy Ho
    [email protected]
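    The reply above is the pessimistic approach: "select ... for update nowait" locks the row while the user edits it. The alternative the original question hints at, a timestamp or version column, is optimistic locking: read the version along with the row, then make the UPDATE conditional on it. A minimal, illustrative ADO.NET sketch against a hypothetical ACCOUNTS table with a numeric VERSION column (the :name placeholders assume an Oracle provider, and conn is assumed to be open):
    // Requires: using System.Data;
    // Returns true if the row was updated; false means another session changed it first.
    static bool UpdateBalance(IDbConnection conn, int accountId, decimal newBalance, long expectedVersion)
    {
        using (IDbCommand cmd = conn.CreateCommand())
        {
            // Only matches if the row still carries the version we read earlier.
            cmd.CommandText =
                "UPDATE accounts SET balance = :balance, version = version + 1 " +
                "WHERE id = :id AND version = :version";
            AddParam(cmd, "balance", newBalance);
            AddParam(cmd, "id", accountId);
            AddParam(cmd, "version", expectedVersion);
            return cmd.ExecuteNonQuery() == 1;   // 0 rows means a lost-update conflict
        }
    }
    static void AddParam(IDbCommand cmd, string name, object value)
    {
        IDbDataParameter p = cmd.CreateParameter();
        p.ParameterName = name;
        p.Value = value;
        cmd.Parameters.Add(p);
    }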

  • Best practice - over issue of move orders in WIP

    Hi Business analyst gurus,
    I am working for an electronics manufacturing company. I am trying to use the work orders functionality across our organizations, and I am stuck on a piece of logic. Please advise me on what you would do in this scenario.
    BoM: I have a board assembly which requires 10 components, say one of each component on the BoM.
    Setups:
    All components' WIP supply type is Push, and the supply subinventory is blank for all BoM components.
    Release backflush components is not enabled in the WIP parameters.
    One fine day I want to build 100 board assemblies, so I create a work order for 100 and release the move order through the component pick release form with transaction type WIP component issue.
    Now the warehouse picker goes into the warehouse to pick 100 of each component with a move order with 10 lines on it. Two components are on reels which he cannot split, and a split reel cannot be used on the production floor (reel size is 1000). He can issue 1000 against the move order.
    Now all the material goes into production and 100 units of the assembly are built. But two reels are left over with 900 each. Here I can do a WIP component return against the job.
    But the expectation is that if I want to build another work order (say qty 100 again), these 900 of the two components should be available for the next work order. If I do a component pick release, the move order should request only the other 8 components.
    Is this possible? If not, what is the best practice to avoid the WIP return transaction and avoid splitting the reel?
    Thanks
    Veerakumar

    1) If you have these items as Push, you will get a move order line for 100. Therefore, even if you move a whole reel to the floor, only 100 will get transacted out of the warehouse. Someone will have to manually transact the additional 900 to the floor, otherwise your inventory accuracy will go for a toss.
    Have you considered making those items Pull?
    1) As and when the worker needs a reel, he raises a signal (there are different ways of doing this - it could be an Oracle kanban, a visual signal, a holler - whatever works for you).
    2) You transfer 1000 to the floor.
    3) As and when jobs complete, the 100 units get issued to the work order.
    4) Whatever is left over (say 800 after the 2 work orders) and not needed is transferred back to the warehouse with a subinventory transfer transaction.
    2) If you can't make them Pull, then you will be forced to move the 900 back to the warehouse when the first job is done.
    3) If you can't make them Pull, can you do a component pick release (CPR) for multiple jobs at a time? You can group your pick tickets by destination operation. This way, upon component release, you will have one move order line with qty = 200. The picker transacts the move order line for 200 and does a subinventory transfer to WIP for 800.
    4) Here is the best-case scenario. I don't know if your floor layout or factory processes will support this.
    You make the items Pull on the BOM. You have a temporary holding area on the floor (aka supermarket). When an operator needs the item, a visual signal is raised. The supervisor (aka spider) checks the supermarket and brings a reel to the operator. Upon completing the job(s), whatever is left of the reel goes to the supermarket. Once the reel is no longer needed for the day (or week or month), you do a subinventory transfer from the supermarket to the warehouse of whatever is left. The components get issued to the work order upon completion (of operation/assembly).
    Do the best you can out of this scenario. :)
    Sandeep Gandhi

  • Best practice to maintain code across different environments

    Hi All,
    We have a portal application and we use
    JDEV version: 11.1.1.6
    Fusion Middleware Control 11.1.1.6
    In our application we have created many portlets using iframes inside our jspx files, and a few are in the navigation file as well. The URLs corresponding to these portlets are different across the environments (dev, test and prod). We are using Subversion to maintain our code.
    The problem we are having is: apart from changing environment details while deploying to test and prod, we also have to manually change the portlet URLs from the dev URLs to the corresponding environment's URLs.
    So is there any best practice to avoid this cumbersome task? Can we achieve this by creating a deployment profile?
    Thanks
    Kotresh

    Hi.
    Please post a sample of the two different URLs. In any case, you can use an EL expression to get the current host instead of hardcoding it. In addition, you could consider a common DNS name mapped in the hosts file of each environment.
    Regards.
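    The first suggestion above boils down to deriving the host from the current request instead of hardcoding it. In this stack that would be an EL expression in the jspx, but here is a language-neutral sketch of the idea (a hypothetical helper, not an ADF API):
    // Requires: using System;
    // Builds the portlet URL relative to whichever host served the current page,
    // so dev, test and prod each resolve to their own environment without code changes.
    static Uri BuildPortletUrl(Uri currentRequestUrl, string portletPath)
    {
        UriBuilder builder = new UriBuilder(currentRequestUrl.Scheme,
                                            currentRequestUrl.Host,
                                            currentRequestUrl.Port,
                                            portletPath);
        return builder.Uri;
    }
    // Example: BuildPortletUrl(new Uri("https://dev-portal.example.com/app/page"), "/portlets/news")
    // yields https://dev-portal.example.com/portlets/news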

  • Best practice - Heartbeat discovery and Clear Install Flag settings (SCCM client) - SCCM 2012 SP1 CU5

    Dear All,
    Is there any best practice to avoid having around 50 clients where the client version number shows on the right side but the client doesn't show as installed? See attached screenshot.
    The SCCM version is 2012 SP1 CU5 (5.00.7804.1600); the server and admin console have been upgraded, and clients are being pushed to SP1 CU5.
    The following settings are set:
    Heartbeat Discovery every 2nd day
    Clear Install Flag maintenance task is enabled - Client Rediscovery period is set to 21 days
    Client Installation settings
    Software Update-based client installation is not enabled
    Automatic site-wide client push installation is enabled.
    Any advice is appreciated.

    Hi,
    I saw a case similar to yours; the clients were stuck in provisioning mode.
    "we finally figured out that the clients were stuck in provisioning mode causing them to stop reporting. There are two registry entries we changed under [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\CCM\CcmExec]:
    ProvisioningMode=false
    SystemTaskExcludes=*blank*
    When the clients were affected, ProvisioningMode was set to true and SystemTaskExcludes had several entries listed. After correcting those through a GPO and restarting the SMSAgentHost service the clients started reporting again."
    https://social.technet.microsoft.com/Forums/en-US/6d20b5df-9f4a-47cd-bdc3-2082c1faff58/some-clients-have-suddenly-stopped-reporting?forum=configmanagerdeployment
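    As an illustration of the quoted fix, here is a minimal sketch that applies the same two registry values on a single machine and then restarts the SMS Agent Host service. The poster above used a GPO instead, so treat this purely as an example (run it elevated, and note that forcing a client out of provisioning mode by hand is normally a last resort):
    // Requires: using Microsoft.Win32; using System.ServiceProcess; (reference System.ServiceProcess.dll)
    const string ccmKey = @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\CCM\CcmExec";
    // The two values described in the quoted post.
    Registry.SetValue(ccmKey, "ProvisioningMode", "false");
    Registry.SetValue(ccmKey, "SystemTaskExcludes", "");
    // Restart the SCCM client service (service name CcmExec, display name "SMS Agent Host").
    using (ServiceController agent = new ServiceController("CcmExec"))
    {
        agent.Stop();
        agent.WaitForStatus(ServiceControllerStatus.Stopped);
        agent.Start();
    }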
    Best Regards,
    Joyce

  • Best Practices to update Cascading Picklist mapping for Account record type

    1. Most of the existing picklist value names in the parent and related picklists have been modified in the external app's master list, so the same needs to be updated in CRMOD.
    2. If we need to update a picklist value, do we need to DISABLE the existing value and CREATE a new one?
    3. Is there any best practice to avoid doing the cascading picklist mapping manually for the Account record type? We have around 500 picklist values to be mapped between the parent and related picklists.
    Thanks!

    Mahesh, I would recommend disabling the existing values and creating new ones. This means manually remapping the cascading picklists.

  • Using cache - best practice

    We created a banner using TopLink. The buttons (images) are all stored in the database. The banner is on 300+ pages, and the buttons change depending on some business rules. I want the best performance available, yet I want to make sure that once a record is updated, so is the banner. What is the best cache option to use? Also, where should I set the cache? If I click TopLink Mapping (I'm using JDeveloper 10.1.2), should I double-click on the specific mapping? I only see these options:
    - default (checked)
    - always refresh
    - only refresh if newer version
    Is there some type of "best practices" of using TopLink's cache?
    Thanks,
    Marcelo

    Hello Marcelo,
    Can't be sure exactly, but are you modifying the database outside of TopLink? That would explain why the cached data is stale. If so, what is needed is a strategy to refresh the cache once changes are known to have been made outside TopLink, or to revise the caching strategy being used. This could be as easy as calling session.refreshObject(staleObject), or configuring specific class descriptors to always refresh when queried.
    Since this topic is rather large and application dependent, I'd recommend looking over the 10.1.3 docs:
    http://download-west.oracle.com/docs/cd/B25221_01/web.1013/b13593/cachun.htm#CHEJAEBH
    There are also a few other threads that have good discussions on how to avoid stale data, such as:
    Re: Only Refresh If Newer Version and ReadObjectQuery
    Best Regards,
    Chris

  • Best practice - caching objects

    What is the best practice when many transactions require a persistent object that does not change?
    For example, in an ASP model supporting many organizations, the organization is required by many persistent objects in the model. I would rather look the organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed, the organization can no longer be part of new transactions with other persistence managers. Aside from looking it up for every transaction, is there a better solution?
    Thanks in advance
    Gary

    The problem with using object id fields instead of PC object references in your object model is that it makes your object model less useful and intuitive. Taken to the extreme (replacing all object references with their IDs), you end up with objects that look like rows in a JDBC result set. Plus, if you use a PM per HTTP request it will not do you any good, since the organization data won't be in the PM anyway, so it might even be slower (no optimizations such as Kodo batch loads). So we do not do it.
    What you can do:
    1. Do nothing special; just use the JVM-level or distributed cache provided by Kodo. You will not need to access the database to get your organization data, but the object creation cost in each PM is still there (do not forget that the cache we are talking about is a state cache, not a PC object cache). Good because it is transparent.
    2. Designate a single application-wide PM for all your read-only big things (lookup screens etc.) and use a PM per request for the rest. Not transparent - it affects your application design.
    3. If a large portion of your system is read-only, use PM pooling. We did this pretty successfully. The requirement is to be able to recognize all PCs which are updateable and evict/makeTransient those when the PM is returned to the pool (Kodo has a nice extension in PersistenceManagerImpl for removing all managed objects of a certain class), so you do not have stale data in your PM. You can use Apache Commons Pool to do the pooling, and make sure your PM is able to shrink. It is transparent and increases performance considerably; this is one approach we use.
    "Gary" <[email protected]> wrote in message
    news:[email protected]...
    >
    What is the best practice when many transactions requires a persistent
    object that does not change?
    For example, in a ASP model supporting many organizations, organization is
    required for many persistent objects in the model. I would rather look the
    organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed the
    organization can no longer be part of new transactions with other
    persistence managers. Aside from looking it up for every transaction, is
    there a better solution?
    Thanks in advance
    Gary

  • Best practice to have cache and calclockblock setting?

    Hi,
    I want to implement Hyperion Planning.
    What are the best practices for Essbase settings to optimize performance?

    Personally, I would work out the application design before you consider performance settings. There are so many variables involved that to try to do it upfront is going to be difficult.
    That being said, each developer has their own preferred approach: some will automatically add certain settings to the Essbase.cfg file and set certain application-level settings via EAS (Index Cache, Data Cache, Data File Cache).
    There are many posts discussing these topics in this forum, so I suggest you do a search and gather some opinions.
    Regards
    Stuart

  • Best Practice Building Block Library not accessible in Firefox

    Hello SAP Documentation Team,
    I've just become aware of the Best Practice Building Block Library (http://help.sap.com/bp_bblibrary/500/BBlibrary_start.htm). Unfortunately it can't be used with Firefox 1.5.0.4 because of a script error. I see the dropdown lists, but when I select, for example, Country "Germany", nothing happens. In IE it works perfectly.
    Regards
    Gregor

    Hope that this will change with later Best Practice releases.

  • Best practice for IE cache

    Hi, all.
    Over the weekend, we applied SPS17 to the ECC 6.0 server running on a dual stack. We also updated the HCM EHP3 to stack 6.
    We have a lot of WD for ABAP and WD for Java applications running on the ECC dual-stack server. The contents are federated to the consumer portal running on EP 7.0 SPS21. Note that the consumer was NOT patched during the weekend.
    On Monday morning, we got many calls from users that their HCM apps were not working on the consumer portal. The error can show up in many different ways. The fix so far is to clear their IE cache, and then everything works again. Note that the problem doesn't happen to everybody, less than 10% of the user population, but that 10% is enough to flood our helpdesk with calls.
    I am not sure if any of you has run into this problem before. Is it a best practice to delete the IE cache for all users after an SP upgrade? Any idea what caused the error?
    Thanks,
    Jonathan.

    Hi Jonathan,
    I have encountered a similar situation before but have unfortunately never got to the root cause of it. One thing I did notice was that browser versions tended to affect how the cache was handled for local users. We noticed that IE7 handled changes in the WDA apps much better than certain versions of IE6. Not sure if this is relevant in your scenario.
    I assume also that you are not using ACCAD or other WAN acceleration devices (as these have their own cache that can break on upgrades) and that you've cleared out your portal caches for good measure. As far as I know in ITS, if you've stopped and started the WDA services during the upgrade then the caching shouldn't be a problem.
    Cheers,
    E

  • AD Built-In groups that should be avoided as best practice

    I am on a 2008 R2 domain. I spoke to a security engineer from Microsoft a few years ago. He mentioned some known issues that can occur from using some of the built-in security groups like Account Operators, Backup Operators, Server Operators, etc. I know all of the members of those protected groups get stamped by AdminSDHolder, but does anyone have a good webpage that talks about which built-in groups should really be avoided as best practice? I am not as concerned with the actual admin groups, as those are pretty obvious.
    Thanks,
    Dan Heim

    Because being a member of the groups mentioned means you can effectively become an Enterprise Admin of the entire forest, since those groups can reset passwords, log on to domain controllers, etc. Take his advice and don't use them; they exist as a legacy from Windows NT 4.0.
    Use delegation in Active Directory instead to grant the proper permissions.
    Enfo Zipper
    Christoffer Andersson – Principal Advisor
    http://blogs.chrisse.se - Directory Services Blog
