Measuring Performance Of CSS Optimizations?

Hi,
I'm relatively new to CSS and standards-based coding. I've been trying like crazy to get all my sites compliant and I am beginning now to wonder about optimizations for speed of execution. Things like...
1. Making class names shorter
2. Removing obsolete classes from style sheets
...and so on.
Are there some commonly used metrics for deciding whether or not this sort of thing is 'worth it'? I.e., how -much- does it improve performance to remove an obsolete class that is, say, 100 characters long? I know I'm being vague, but I maintain many sites; some have very little traffic and some a lot (well, to me it's a lot). This sort of cleanup is very time consuming, so I'm just trying to figure out what will give the most bang for the buck. It's hard for me to judge because of all the variables: different ISPs, number of users, size of site, size of stylesheets, size of pages, etc.
Any general thoughts?
TIA,
---JC
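
A concrete way to put a number on 'worth it' is to compare the gzipped size of a stylesheet before and after cleanup, since most servers serve CSS gzipped and the compressed size is what users actually download. A minimal sketch (the sample selectors are invented; point it at your real files):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class CssWeight {
    // Return the gzip-compressed size of a string, since most servers
    // serve CSS gzipped and that is the number users actually download.
    static int gzippedSize(String s) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(s.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return buf.size();
    }

    public static void main(String[] args) {
        // Invented before/after selectors for illustration only.
        String before = ".navigation-bar-item-selected { color: #333333; }";
        String after  = ".nav-sel { color: #333; }";
        System.out.printf("raw: %d -> %d bytes, gzipped: %d -> %d bytes%n",
            before.length(), after.length(),
            gzippedSize(before), gzippedSize(after));
    }
}
```

Because gzip compresses repeated long class names very well, the savings from shortening names are usually much smaller than the raw character count suggests, which is worth knowing before spending hours renaming.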

Thanks Nancy.
A couple of things, though...
1. Web Page Analyzer -
http://www.websiteoptimization.com/services/analyze/
---Can't get it to 'work'. Always replies with 'no data returned' for my web site http://jchmusic.com/discography.htm
2. CSS code optimizer. *Note: do a site test before you replace old code with optimized code.*
http://www.cssportal.com/generators/optimize.htm
---It replies with a list of optimizations, which is great, but the Copy To Clipboard doesn't work for me, and the Output To File option doesn't do anything either. Is there a way to 'see' the complete results all in one go?
3. I knew about Dust Me (thanks to you!) but it sure seems -slow-. I was hoping for other options (like the above) to make the process a bit faster.
TIA,
---JC
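
As a rough, offline alternative to Dust Me Selectors for the 'obsolete classes' hunt, you can diff the class selectors found in a stylesheet against the class attributes found in the HTML. A sketch with invented markup; the regexes are deliberately crude, and anything added by JavaScript at runtime will be falsely flagged as unused:

```java
import java.util.*;
import java.util.regex.*;

public class UnusedClasses {
    // Extract class selectors (".foo") from CSS text.
    static Set<String> cssClasses(String css) {
        Set<String> out = new TreeSet<>();
        Matcher m = Pattern.compile("\\.([A-Za-z_][\\w-]*)").matcher(css);
        while (m.find()) out.add(m.group(1));
        return out;
    }

    // Collect every token that appears in a class="..." attribute.
    static Set<String> htmlClasses(String html) {
        Set<String> out = new HashSet<>();
        Matcher m = Pattern.compile("class\\s*=\\s*\"([^\"]*)\"").matcher(html);
        while (m.find()) out.addAll(Arrays.asList(m.group(1).trim().split("\\s+")));
        return out;
    }

    // Classes defined in the CSS but never used in the HTML.
    static Set<String> unused(String css, String html) {
        Set<String> result = cssClasses(css);
        result.removeAll(htmlClasses(html));
        return result;
    }

    public static void main(String[] args) {
        String css = ".nav { } .old-banner { } .footer { }";
        String html = "<div class=\"nav\"><p class=\"footer\">hi</p></div>";
        System.out.println(unused(css, html)); // prints [old-banner]
    }
}
```

Reading all the site's HTML files into one string and running them against each stylesheet makes the per-site pass much faster than clicking through pages, at the cost of some false positives.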

Similar Messages

  • Use SE30 to measure performance in background

    Hi,
      I want to measure the performance of an existing program by running it in the background, because it takes a really long time to run.
      I'm trying to use the "Schedule Measurements for user service" option from the menu. But when I click on the New icon after going to that menu, I get a short dump with message
    MOVE_NOT_SUPPORTED.
    on the following line of code of standard SAP program SAPMS38T
    > convert time stamp l_ts time zone sy-zonlo.
    My question is
    1. Am I using the right option to measure the performance in the background, or is there another way (other than changing the code to add log statements)?
    2. How can I fix the above problem?
    Will give points to the right answer. Thanks for reading.

    I generally use ST05 to measure performance.
    Rob

  • DBA Staff KPI(s) - Measuring Performance

    Hi,
    I am an IT manager, managing two solutions, "SCM" and "WMS (Warehouse Management System)", and there is a team for each solution.
    DBA staff are involved with almost everything (database administration, handling system/application performance and stability, application support, creating and automating reports for all levels, monitoring interface issues, troubleshooting, handling investigations for operations/management/business/research/system issues, etc.).
    It's not easy to identify their KPIs and the additional responsibilities.
    If you know some common KPIs that have been used before, I would really appreciate it if you could share them with me, so I can evaluate my team properly.
    Thanks,
    Regards,
    Ala'

    Just a comment that while your subject says "DBA", the job responsibilities include many other things which are outside the definition of a DBA role, e.g. application support, creating "reports" for "all" levels, troubleshooting "business" issues, etc.
    So what does "all" levels mean? Does that mean reports from the Application? or just the technical aspects of database administration like I/O, memory, latches etc. What does troubleshooting "business" issues mean? Does that mean identifying why the accounting books don't balance?
    I understand that while the DBA role can be clearly defined in theory (Oracle Database Administrator's Guide - Task of a DBA), in practice many organizations, especially the smaller ones, are closer to what you have described. In such cases, the approach I'm familiar with has been to define as much as practicable the primary and secondary responsibilities of each position, and define the KPIs for the primary and make rather vague statements for the secondary role(s).
    Thank you that was valuable,
    So what does "all" levels mean? Answer: Application reports go to all levels of management, from department managers to senior management and directors.
    It includes,
    - Productivity measurements.
    - Identifying some of business requirements.
    - Creating/Building estimates for any CR.
    - Should be involved with warehouse expansions plans.
    - Reporting critical operations & inventory activities.
    - Building interactive reports.
    - Cost allocation, ... etc.
    Then comes the normal DBA work, which includes I/O, memory, DB performance, application & DB users, etc.
    You are right: I can take their primary responsibilities to define their KPIs and leave the secondary responsibilities as vague statements for the secondary role(s); that way I can evaluate them according to the primary part and leave the secondary part for them to compete on.
    Thanks again

  • Measuring Performance of Storage Systems

    Hi there
    I know, measuring the performance of storage systems with benchmarking tools is nothing compared to real-world scenarios, but they give a nice overview in test environments.
    Under Windows there is a de facto standard called IOMeter, which contains the traffic generator (dynamo) and the graphical test suite IOMeter.
    It seems that the dynamo part is available for Mac OS X, but only for the PPC architecture (funny, as IOMeter was originally developed by Intel), and the graphical part is available for Windows only.
    I've seen sites on the Apple homepage where they said they have tested the xraid with IOMeter on an Intel Xeon. Does anyone know how they did this, as IOMeter is not really available for Mac OS X on Intel?
    Are there any comparable tools, i.e. a traffic pattern generator and an analyzer? Maybe one can use DTrace / Instruments for the analyzer part, but what to use for the generator?
    I need something more specific than Activity Monitor, etc., as I need to define different block sizes, sequential/random reads/writes and their balance, etc.; namely, everything IOMeter can do. I can't believe there is nothing out there running on Mac OS X Intel.
    Thanks a lot in advance

    It seems like I can get it to compile correctly. However, when I try to start the IOManager I get the following errors...
    ./IOmanagerosx.cpp: line 52: /Applications: is a directory
    ./IOmanagerosx.cpp: line 53: /Applications: is a directory
    ./IOmanagerosx.cpp: line 65: //: is a directory
    ./IOmanagerosx.cpp: line 66: //: is a directory
    ./IOmanagerosx.cpp: line 67: //: is a directory
    ./IOmanagerosx.cpp: line 68: //: is a directory
    ./IOmanagerosx.cpp: line 69: //: is a directory
    ./IOmanagerosx.cpp: line 70: syntax error near unexpected token `('
    ./IOmanagerosx.cpp: line 70: `int Manager::ReportDisks(TargetSpec * disk_spec)'
    Any idea? Has anyone gotten IOMETER to work on OSX 10.5.x or 10.4.x?

  • Can anyone tell me the difference in performance between CSS URL switching

    First,
    can anyone tell me the difference in performance between CSS URL switching and F5 BIG-IP?
    Second,
    can anyone tell me the difference in performance between CSS URL switching and Alteon?
    Third, which is best overall?
    I think Alteon is best.
    Is that right?
    Best regards

    It looks like the primary question here is performance, in which case: performance is not an issue, so long as it is sufficient. In the case of CSS 11000 and Alteon, performance falls within the same order of magnitude (supporting web sites with several billion hits per day). F5 does not have sufficient performance, due to platform and OS limitations (I've heard as low as 50 connections per second in complex configurations). The Cisco CSM posts over 10x the performance at 200k flows/second.
    Typically the primary concern is features, and there CSS 11000 switches lead with a wide and flexible array of features that are not only helpful to network and web administrators, but well integrated too. CSS 11000 switches offer configuration through CLI, Web GUI, and XML. The CSS 11000 collects statistics that can be exported to non-Cisco applications for billing and management.
    The CSS11000 also supports URL load balancing and HTTP header balancing within one content rule, with complex matching. Further, it supports user agent, pragma/no-cache, host field, cookie field, language field, accept, accept charset, accept-encoding, and Connection within the http header field.
    In addition, the CSS 11000 matches up to 128 bytes, with support for wildcards anywhere within the string. On the other hand, with Nortel (Alteon), HTTP header load balancing is not supported on the same VIP as URL load balancing. E.g., this means that a simulated WAP user cannot be directed to a server while load balancing "normal" browsers to their servers based on URL, without using two separate VIPs. Only ONE HTTP header load balance is supported on the entire switch. This limits you to either User-Agent (WAP, Netscape, IE, Palm, etc.), OR pragma/no-cache (do not send the user to cache; allow the user to go to the origin), OR the Host field (allowing you to direct on domain name), OR Cookie. Also, Nortel does not support the language field.
    In regards to F5, many of the performance claims are based on HTTP 1.0 requests, but most web sites today are not using HTTP 1.0; many emerging applications rely on HTTP 1.1 instead. Also, the BIG-IP cannot spoof the connection to detect the URL, cannot do NAT on the flow, and cannot maintain state for persistent connections.
    Overall, I think CSS switches are the lowest cost to own, and most effective of all the load balancing platforms on the market.

  • Performance analysis and optimization tools

    Hello I am looking for some tools for Performance analysis and optimization for Oracle. For now I looked over Spotlight, Ignite and Embarcadero DB Optimizer. Can you please point out some links or something for comparing such tools?
    What tools do you use?
    Thanks,

    For performance analysis you can use AWR and ASH.
    -- How to analyze AWR/statpack
    http://jonathanlewis.wordpress.com/statspack-examples/
    -- how to take AWR and ASH report
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/autostat.htm#PFGRF02601
    http://www.oracle.com/technology/pub/articles/10gdba/week6_10gdba.html
    Regards
    Asif Kabir

  • Tool to measure performance of the web application

    Among these tool which one is most authentic to show correct data & is widely used to measure performance (both for on-premise and online):
    Fiddler
    DotTrace
    HttpWatch
    Developer Dashboard
    F12 in IE browser etc.

    Among the tools I have mentioned, which one is dependable to measure jQuery code?
    Fiddler
    DotTrace
    HttpWatch
    Developer Dashboard
    F12 in IE browser etc.
    Please share your experience only among the given choices. There are many tools, but people say they don't show correct results, so I am not interested in other 3rd-party tools.

  • Measuring Performance of Purchase department

    Is it possible to measure the performance of the purchasing department based on minimizing inventory?
    Regards
    Mahesh

    Hi,
    Yes, you can measure the performance of the purchase department.
    The purchase department is mainly linked with the purchasing groups.
    You can take the report based on the purchasing groups.
    Check with transaction MC01 --- Logistics Information Library.
    Here go to Purchasing -- select the option Purchasing Group.
    You will get a lot of reports based on the purchasing group.
    1) Purchasing values (purchasing group view)
    2) Purchasing activities (purchasing group view)
    3) Overview of purchase orders (purchasing group view)
    4) Overview of scheduling agreements (purchasing group view)
    5) Overview of RFQs (purchasing group view)
    6) Overview of contracts (purchasing group view)
    Some other standard reports are
    1) How many POs are created in a Month per Purchasing groups ( ME2N etc)
    2) How many POs are open in a month ( ME2N etc)
    3) Price comparison with the vendors ( ME49)
    4) Analysis of Net order values ( ME81N)
    Otherwise you can consolidate all the reports and take the help of an ABAPer to develop a Z report which can be run whenever you need information about the performance of the purchasing department.
    Hope this will help you
    rgds
    gsc

  • How to measure performance of supplier when using scheduling agreement ?

    Hello all,
    My client has an absolute need to be able to measure the performance of its suppliers based on delivery dates and delivered quantities. That is to say, he needs to be able to compare the dates and quantities that were requested with what has actually been delivered.
    Most of the procurement processes used are based on scheduling agreements : schedule lines are generated by MRP and forecast is sent to supplier while firm requirements are sent through JIT calls.
    It seems that when doing GR in MIGO, it is done against the outline agreement number, and not against the call. Therefore, we have no way to compare dates and quantity with what was expected (in the JIT call).
    Do you know if SAP offers a standard solution to this, and what could be a solution to this issue?
    Thanks for your help
    E. Vallez

    Hi,
    My client faced the same problem and we ended up developing our own analysis in LIS. Since the GR is not linked to a specific schedule line (SAP does some kind of apportioning, but it doesn't have to correlate to the correct match), one needs to make assumptions. Our assumption was the closest schedule line, i.e. each GR is related to the schedule line with the closest date. Then all GRs on the same day are totaled together before the quantity reliability is calculated, since the very same shipment can be reported through several GR transactions in SAP (one per pallet).
    If anybody has info about what SAP has to offer in this question (or is developing), please tell us!
    BR
    Raf
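
    Raf's closest-schedule-line assumption can be sketched outside SAP as well; the dates and quantities below are invented for illustration:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.*;

public class DeliveryMatch {
    // Assign a goods-receipt date to the schedule line whose delivery
    // date is closest (the assumption described in the thread).
    static LocalDate closestLine(LocalDate grDate, List<LocalDate> lines) {
        LocalDate best = null;
        long bestGap = Long.MAX_VALUE;
        for (LocalDate line : lines) {
            long gap = Math.abs(ChronoUnit.DAYS.between(line, grDate));
            if (gap < bestGap) { bestGap = gap; best = line; }
        }
        return best;
    }

    // Total all GR quantities landing on the same schedule line before
    // computing reliability, since one shipment can arrive as several GRs.
    static Map<LocalDate, Integer> totalByLine(Map<LocalDate, Integer> grs,
                                               List<LocalDate> lines) {
        Map<LocalDate, Integer> totals = new TreeMap<>();
        grs.forEach((date, qty) ->
            totals.merge(closestLine(date, lines), qty, Integer::sum));
        return totals;
    }

    public static void main(String[] args) {
        List<LocalDate> lines = List.of(
            LocalDate.of(2024, 1, 10), LocalDate.of(2024, 1, 20));
        Map<LocalDate, Integer> grs = Map.of(
            LocalDate.of(2024, 1, 11), 60,   // two GRs, one shipment
            LocalDate.of(2024, 1, 12), 40,
            LocalDate.of(2024, 1, 19), 100);
        System.out.println(totalByLine(grs, lines));
    }
}
```

    Comparing each line's total against the requested quantity then gives the reliability figure, with the same caveat Raf raises: the nearest-date match is an assumption, not a hard link.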

  • How do you measure performance of an item renderer?

    I'm creating an ItemRenderer in Flex 4.6 and I want to know how to measure total time to create, view and render an item renderer and how long it takes to view and render that item renderer when it's being reused.
    I just watched the video, Performance Tips and Tricks for Flex and Flash Development and it describes the creation time, validation time and render time and also the reset time. This is described at 36:43 and 40:25.
    I'm looking for a way to get numbers in milliseconds for total item renderer render time and reset time (what is being done in the video). 

    To answer your first question, in this video Ryan Frishberg recommends measuring and tuning your code. I'm trying to follow his example for my own item renderers.
    I've taken some key slides out to show you.

  • How to measure performance?

    Hi all,
    I have a scenario where I need to check the performance of the design being used.
    I have one InfoCube in which data is on a per-calendar-day basis. I have loaded that data into another cube on a fiscal year basis, with only the specific characteristics and key figures I wanted from the first cube.
    How do I check performance when data is fetched from the first cube, and compare it with the time taken to fetch data from the other cube?
    Can I measure it if I am fetching data in a function module using ABAP?

    Hi,
    If you want that, create a query on both cubes and then take query statistics.
    For this you can use transaction RSRT. It shows the raw time, not the percentage of time the query spent in each area.
    For the percentages, you can either calculate them yourself or use transaction ST03 (expert mode); this will show the breakdown by percentage.
    Or you can schedule the following chains in order to load BI Statistics data to the Technical Content:
    Master Data
    System Master Data - 0TCT_MD_S_FULL_P01
    This loads text for objects like 'Process Status', 'BI Object Type', 'Process Type'
    Content Master Data - 0TCT_MD_C_FULL_P01
    This loads attributes & text for objects like 'Process Variants', 'Process Chain'
    Initialization Loads
    Query Runtime Statistics - Init - 0TCT_C0_INIT_P01
    Data Load Statistics - Init - 0TCT_C2_INIT_P01
    These process chains need to run only once (immediate scheduling).
    Delta Loads
    Query Runtime Statistics - Delta 0TCT_C0_DELTA_P01
    Data Load Statistics - Delta 0TCT_C2_DELTA_P01
    These process chains can be scheduled for periodic execution.
    I have already given you a link; check that.
    Hope this helps.
    Regards,
    Debjani
    Edited by: Debjani Mukherjee on Nov 17, 2008 2:05 PM

  • How to measure performance in HourGlass Model and Modified HourGlass Model

    Hello All,
    I'm trying to understand as to how the HourGlass Model (which says that the outline should be designed as Dimension tagged as Account, Dimension tagged as Time, Dense Dimensions from most dense to least dense, Sparse Dimensions from least sparse to most sparse) exactly works in terms of optimizing performance and aggregation.
    Also I want to understand the working of the new Modified HourGlass Model on Stick (which says that the outline should be designed as Dimension tagged as Account, Dimension tagged as Time, Dense Dimensions from most dense to least dense, Aggregating Sparse Dimensions from least sparse to most sparse, Non Aggregating Sparse Dimensions).
    Why are these approaches better and how do they work internally in the system?
    How exactly does it pick up combinations during calculations and aggregations?
    In some documents I learned that we should keep the Time dimension as the first dimension in the outline since it is dense and there are more chances of having similar kind of data values across the same fiscal year, due to which compression takes place efficiently. So if this is the case doesn’t it conflict with the HourGlass model and at such times which model to go with?
    Thank You,
    MM

    Hi Damian,
    Here are a few more general tips for query performance:
    1) Always gather statistics for the query optimizer. In addition, we usually see better performance with column group statistics for PS and PC column groups.
    exec sem_apis.analyze_model('my_model',METHOD_OPT =>'FOR COLUMNS (P_VALUE_ID, CANON_END_NODE_ID) SIZE AUTO',DEGREE=>4);
    exec sem_apis.analyze_model('my_model',METHOD_OPT =>'FOR COLUMNS (P_VALUE_ID, START_NODE_ID) SIZE AUTO',DEGREE=>4);
    exec sem_perf.gather_stats(just_on_values_table=>true,degree=>4);
    Note: the DEGREE argument is for degree of parallelism
    Usually, you would load data, then gather statistics, and then periodically re-gather them as updates are done (maybe when 20% of the data is new).
    2) Create appropriate semantic network indexes. We generally recommend PCSM and PSCM indexes. PCSM is always there, and PSCM is created by default in the latest patch but not in 11.2.0.1.0 release (11.2.0.1.0 has a PSCF index that should be dropped and replaced with PSCM).
    Both of these items are covered in the documentation.
    You may also find the following presentation from SemTech 2010 helpful. It covers many best practices for load, query and inference.
    http://download.oracle.com/otndocs/tech/semantic_web/pdf/2010_ora_semtech_wkshp.pdf
    Thanks,
    Matt

  • Measure Performance

    I have an IDOC to XI to Vendor scenario.
    There is no need for a BPM at this point, because it is asynchronous.
    But what I feel is that introducing a BPM will save a lot of development cost in the future, because if the scope changes to build the IDoc, or if they are looking for a technical response, then we need to go for BPM.
    But bringing in the BPM will definitely hit performance.
    I want to study the two cases and submit a report on that. How can I do stress testing and measure the performance?
    Thanks

    Hi,
    One of the ways would be to get a stress tool
    like LoadRunner (http://www.mercury.com/us/products/loadrunner/),
    which comes with a free trial, and do a stress test with that;
    then you can just compare the results in XI.
    With such tools you can record a transaction in
    SAP, an HTTP call, or almost anything,
    and then reuse it many times.
    But you can also write your own scripts if you wish.
    Regards,
    michal
    <a href="/people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions"><b>XI / PI FAQ - Frequently Asked Questions</b></a>
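
    If a full LoadRunner setup is overkill for comparing the two designs, a crude stand-in is to fire the same call from many threads and time the whole batch. A sketch; the fake call below stands in for whatever transaction you would record:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class MiniStress {
    // Fire `clients` copies of the same task in parallel and report the
    // total wall-clock time in ms, a crude stand-in for a load scenario.
    static long run(int clients, Runnable task) {
        ExecutorService pool = Executors.newFixedThreadPool(clients);
        long start = System.nanoTime();
        List<Future<?>> futures = new ArrayList<>();
        for (int i = 0; i < clients; i++) futures.add(pool.submit(task));
        for (Future<?> f : futures) {
            try { f.get(); } catch (Exception e) { Thread.currentThread().interrupt(); }
        }
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Stand-in for one recorded transaction (an HTTP call, an IDoc post...).
        Runnable fakeCall = () -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
        };
        System.out.println("50 parallel clients finished in "
            + run(50, fakeCall) + " ms");
    }
}
```

    Running the same batch against the scenario with and without the BPM gives the two numbers for the report, though a real tool also captures per-request latency distributions that this sketch does not.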

  • Measuring performance of EJBs

    Hi,
    I am interested in measuring the performance of my beans. I realize that "performance" by itself is rather generic. To be very specific, I want to measure the time taken by my EJB to perform a database activity. For example, how does the update of my CMP bean compare with a JDBC call from a BMP bean ?
    Is the capture of such timing data possible from the affected bean itself ? I thought of using AOP for this but ran into issues which caused me to dump that approach.
    Are there any tools available which can be used to capture such data - even from a client perspective ?
    Thanks

    try:
    http://www.jamonapi.com - it can measure JSPs, servlets, EJBs and more.
    But just for EJBs it may be better to write your own tool; it cannot be that difficult.
    System.currentTimeMillis() at the beginning and end might be the easiest, and maybe the best,
    performance tool.
    The point is not to slow down the process with the measuring tool itself.
    Marian
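
    Marian's roll-your-own suggestion can be made reusable by wrapping the call in a small helper; a minimal sketch using System.nanoTime() (the "database call" here is simulated with a sleep):

```java
import java.util.function.Supplier;

public class CrudeTimer {
    // Wrap any call (a CMP update, a BMP JDBC call, ...) and report how
    // long it took. The timer's own overhead is a few nanoseconds, so it
    // will not visibly slow down the measured operation.
    static <T> T timed(String label, Supplier<T> call) {
        long start = System.nanoTime();
        T result = call.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(label + " took " + elapsedMs + " ms");
        return result;
    }

    public static void main(String[] args) {
        int rows = timed("fake db update", () -> {
            // Simulated database work; replace with the real bean call.
            try { Thread.sleep(50); } catch (InterruptedException e) { }
            return 1;
        });
        System.out.println("updated " + rows + " row(s)");
    }
}
```

    Wrapping the CMP update and the BMP JDBC call separately and comparing the printed numbers answers the original question from the bean side, without needing an external profiler.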

  • Bad Font-Measuring Performance under Windows 8.1

    Why does the following little loop (in C#) perform so badly using the current version of WPF (.NET 4.5.2) if I change the font family from "Segoe UI" to "Arial" (or something else; I tried "Times New Roman" and "Courier New" - same problem)?
    var tb = new TextBlock {Text = "Testtext", FontFamily = new FontFamily("Arial")};
    for (int i = 0; i < 100000; i++)
    {
        tb.InvalidateMeasure();
        tb.Measure(new Size(double.MaxValue, double.MaxValue));
    }
    With the font family set to Arial, this block of code takes about 7.6s on my machine. With the font family set to "Segoe UI" it takes about 1.9s. Why do (most) fonts other than Segoe UI perform so badly during measurement? Is there any tweak that avoids this enormous loss in performance?
    As I found out, "Tahoma", "Lucida Sans" and "Microsoft Sans Serif" are also measured really fast. Is this some "system-internal font" thing?
    Yes, I know, this is really constructed and broken down to a minimal reproducible example. The whole component is a custom datagrid with complex UI and data virtualization - much too big to post here. If I set the font of my grid to, let's say, "Arial", the scrolling performance gets really bad. Using Visual Studio's profiler I tracked the problem down to the measurement of my single grid cells, which basically measure single TextBlocks, and so I wrote the little test code above. Please keep in mind: my problem is NOT that the code above is slow. (I know, this loop is totally senseless; it's for demonstration purposes only.) My problem IS that changing the font impacts measurement by such an enormous amount...
    What is the difference between fonts like "Segoe UI", "Tahoma", "Lucida Sans" or "Microsoft Sans Serif" and fonts like "Arial", "Times New Roman" or "Courier New" that causes this huge
    impact in measurement?
    Btw.: This problem not only arises within my own grid component, it can also be reproduced with WPF's internal datagrid. Scrolling performance degrades dramatically when using "Arial" as the font-family.

    Hi Max
    I realise that WPF doesn't use Win32, but I suspected something similar (i.e. the font is being loaded and unloaded every time a measure takes place rather than using a cached font). I just looked up TextBlock in ILSpy and can see a MeasureOverride method which does quite a lot of work with the font before using it to measure text, and then discards all of this information when it's finished. That's obviously where the bottleneck is. I don't program in WPF so I have no idea how to fix it.
    protected sealed override Size MeasureOverride(Size constraint)
    {
        this.VerifyReentrancy();
        this._textBlockCache = null;
        this.EnsureTextBlockCache();
        // ...
    }
    Follow the path of EnsureTextBlockCache() to see how much information is being processed and then dumped every time a measure operation is processed.
    Mick Doherty
    http://dotnetrix.co.uk
    http://glassui.codeplex.com
