Cftrace speeding up page execution time

A bit of a weird one... I have a site where one page was
reporting execution times of over 4000ms. So I put in some cftrace
tags through the code to see what was taking so long, and it
started producing execution times of around 400ms, ten times
faster. I commented out the cftrace tags, and the execution time
went back up to 4000ms. Undid the commenting, and the page
execution again fell to 400ms.
Obviously it's not a big problem, but I can't understand why
it would cause this behaviour. Any ideas?

I hope you meant decrease your load time!
Optimizing your files before uploading to a server is guaranteed to help, as well as giving your site a chance to load in Internet Explorer.
See:
http://www.tonbrand.nl/
Using a simple Flash player for your music files will help, and they will be more universally playable than QuickTime:
http://roddymckay.com/Satellite/FlashPlayer.html

Similar Messages

  • Slow query execution time

    Hi,
    I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new table.
    The query executes within a second from RapidSQL. The problem I'm facing is it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions, it executes properly.
    The query:
    SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
    FROM MyTable
    WHERE SomeDate= date_entered_by_user  AND SomeString IN ("aaa","bbb")
    GROUP BY aaa, bbb
    I have an existing clustered index on the SomeDate and SomeString fields.
    To check I replaced the where clause with
    WHERE SomeDate= date_entered_by_user  AND SomeString = "aaa"
    No improvements.
    What could be the problem?
    Thank you,
    Lobo

    It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up the execution time inside the RDBMS is to streamline the internal operations inside the interpreter.
    When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement. These things take time. First, it checks to make sure there are no syntax errors in the SQL statement. Second, it checks to make sure all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time out of the three. But, they all take time. The speed of these processes may vary from product to product.
    When you create a stored procedure in a RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created it is stored and reused whenever the stored procedure is run. So, whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT|INSERT|UPDATE|DELETE statements and create the plan (over and over again).
    The stored execution plan will enable the engine to execute the query faster.
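The same plan-reuse effect applies to any parameterized statement, not just stored procedures: if the SQL text is identical on every call and only the bound values change, the engine can skip the re-parse and re-plan steps described above. A minimal sketch of the idea using Python's built-in sqlite3 (the table and column names mirror the question, but this is an illustration, not the poster's actual schema or driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (aaa TEXT, bbb TEXT, ccc REAL, ddd REAL,"
             " SomeDate TEXT, SomeString TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?, ?, ?, ?, ?)",
                 [("x", "y", 1.0, 2.0, "2024-01-01", "aaa") for _ in range(1000)])

# The SQL text never changes between calls; only the bound values do.
# The engine can therefore parse, validate, and plan the statement once
# and reuse that work, which is the same saving a stored procedure's
# cached execution plan gives you.
QUERY = ("SELECT aaa, bbb, SUM(ccc), SUM(ddd) FROM MyTable "
         "WHERE SomeDate = ? AND SomeString IN (?, ?) GROUP BY aaa, bbb")
result = conn.execute(QUERY, ("2024-01-01", "aaa", "bbb")).fetchall()
```

In the poster's Java application the equivalent would be reusing one PreparedStatement (or a stored procedure call) with bind parameters, rather than concatenating the user's date into a fresh SQL string on every request.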

  • How to reduce execution time ?

    Hi friends...
    I have created a report to display vendor opening balances,
    total debit, total credit, total balance & closing balance for the given date range. It is working fine, but it takes a long time to execute. How can I reduce the execution time?
    Please help me; it's a very urgent report...
    The coding is as below:
    report  yfiin_rep_vendordetail no standard page heading.
    tables : bsik,bsak,lfb1,lfa1.
    type-pools : slis .
    *--TABLE STRUCTURE--
    types : begin of tt_bsik,
            bukrs type bukrs,
            lifnr type lifnr,
            budat type budat,
            augdt type augdt,
            dmbtr type dmbtr,
            wrbtr type wrbtr,
            shkzg type shkzg,
            hkont type hkont,
            bstat type bstat_d ,
            prctr type prctr,
            name1 type name1,
         end of tt_bsik,
         begin of tt_lfb1,
             lifnr type lifnr,
             mindk type mindk,
         end of tt_lfb1,
        begin of tt_lfa1,
            lifnr type lifnr,
            name1 type name1,
            ktokk type ktokk,
        end of tt_lfa1,
        begin of tt_opbal,
            bukrs type bukrs,
            lifnr type lifnr,
            gjahr type gjahr,
            belnr type belnr_d,
            budat type budat,
            bldat type bldat,
            waers type waers,
            dmbtr type dmbtr,
            wrbtr type wrbtr,
            shkzg type shkzg,
            blart type blart,
            monat type monat,
            hkont type hkont,
            bstat type bstat_d ,
            prctr type prctr,
            name1 type name1,
            tdr type  dmbtr,
            tcr type  dmbtr,
            tbal type  dmbtr,
          end of tt_opbal,
         begin of tt_bs ,
            bukrs type bukrs,
            lifnr type lifnr,
            name1 type name1,
            prctr type prctr,
            tbal type dmbtr,
            bala type dmbtr,
            balb type dmbtr,
            balc type dmbtr,
            bald type dmbtr,
            bale type dmbtr,
            gbal type dmbtr,
        end of tt_bs.
    ************WORK AREA DECLARATION *********************
    data :  gs_bsik type tt_bsik,
            gs_bsak type tt_bsik,
            gs_lfb1 type tt_lfb1,
            gs_lfa1 type tt_lfa1,
            gs_ageing  type tt_ageing,
            gs_bs type tt_bs,
            gs_opdisp type tt_bs,
            gs_final type tt_bsik,
            gs_opbal type tt_opbal,
            gs_opfinal type tt_opbal.
    ************INTERNAL TABLE DECLARATION*************
    data :  gt_bsik type standard table of tt_bsik,
            gt_bsak type standard table of tt_bsik,
            gt_lfb1 type standard table of tt_lfb1,
            gt_lfa1 type standard table of tt_lfa1,
            gt_ageing type standard table of tt_ageing,
            gt_bs type standard table of tt_bs,
            gt_opdisp type standard table of tt_bs,
            gt_final type standard table of tt_bsik,
            gt_opbal type standard table of tt_opbal,
            gt_opfinal type standard table of tt_opbal.
    ****************ALV DECLARATIONS*******************
    data : gs_fcat type slis_fieldcat_alv ,
           gt_fcat type slis_t_fieldcat_alv ,
           gs_sort type slis_sortinfo_alv,
           gs_fcats type slis_fieldcat_alv ,
           gt_fcats type slis_t_fieldcat_alv.
    **********global data declaration***************
    data :   kb type dmbtr ,
              return like  bapireturn ,
              balancespgli like  bapi3008-bal_sglind,
              noteditems like  bapi3008-ntditms_rq,
              keybalance type table of  bapi3008_3 with header line,
             opbalance type p.
    ****************SELECTION SCREEN DECLARATIONS*********************
    selection-screen begin of block b1 with frame .
    select-options : so_bukrs for bsik-bukrs obligatory,
                     so_lifnr for bsik-lifnr,
                     so_hkont for bsik-hkont,
                     so_prctr for bsik-prctr ,
                     so_mindk for lfb1-mindk,
                     so_ktokk for lfa1-ktokk.
    selection-screen end of block b1.
    selection-screen : begin of block b1 with frame.
    parameters       : p_rb1 radiobutton group rad1 .
    select-options   : so_date for sy-datum .
    selection-screen : end of block b1.
    ********************************ASSIGNING ALV GRID
    ****field catalog for balance report
    gs_fcats-col_pos = 1.
    gs_fcats-fieldname = 'BUKRS'.
    gs_fcats-seltext_m =  text-001.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 2 .
    gs_fcats-fieldname = 'LIFNR'.
    gs_fcats-seltext_m = text-002.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 3.
    gs_fcats-fieldname = 'NAME1'.
    gs_fcats-seltext_m =  text-003.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 4.
    gs_fcats-fieldname = 'BALC'.
    gs_fcats-seltext_m =  text-016.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 5.
    gs_fcats-fieldname = 'BALA'.
    gs_fcats-seltext_m =  text-012.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 6.
    gs_fcats-fieldname = 'BALB'.
    gs_fcats-seltext_m =  text-013.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 7.
    gs_fcats-fieldname = 'TBAL'.
    gs_fcats-seltext_m =  text-014.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 8.
    gs_fcats-fieldname = 'GBAL'.
    gs_fcats-seltext_m =  text-015.
    append gs_fcats to gt_fcats .
    data : repid1 type sy-repid.
    repid1 = sy-repid.
    ****************INITIALIZATION EVENTS******************************
    initialization.
    *Clearing the work area.
    clear gs_bsik.
    *Refreshing the internal tables.
    refresh gt_bsik.
    ******************START OF  SELECTION EVENTS **************************
    start-of-selection.
    *get data for balance report.
      perform sub_openbal.
      perform sub_openbal_display.
    *&---------------------------------------------------------------------*
    *&      Form  sub_openbal
    *&---------------------------------------------------------------------*
    *       text
    *  -->  p1        text
    *  <--  p2        text
    *----------------------------------------------------------------------*
    form sub_openbal .
      if   so_date-low > sy-datum or so_date-high > sy-datum .
          message i005(yfi02).
         leave screen.
    endif.
         select bukrs lifnr gjahr belnr budat bldat
           waers dmbtr wrbtr shkzg blart monat hkont prctr
           from bsik into table gt_opbal
           where bukrs in so_bukrs and lifnr in so_lifnr
           and hkont in so_hkont and prctr in so_prctr
           and budat in so_date .
        select bukrs lifnr gjahr belnr budat bldat
           waers dmbtr wrbtr shkzg blart monat hkont prctr
           from bsak appending table gt_opbal
           for all entries in gt_opbal
           where lifnr = gt_opbal-lifnr
           and budat in so_date .
    if sy-subrc <> 0.
      message i007(yfi02).
      leave screen.
      endif.
    select lifnr mindk from lfb1 into table gt_lfb1
      for all entries in gt_opbal
        where lifnr = gt_opbal-lifnr and mindk in so_mindk.
    select lifnr name1 ktokk from lfa1 into table gt_lfa1
      for all entries in gt_opbal
       where lifnr = gt_opbal-lifnr and ktokk in so_ktokk.
       loop at gt_opbal into gs_opbal .
         loop at gt_lfb1 into gs_lfb1 where lifnr = gs_opbal-lifnr.
           loop at gt_lfa1 into gs_lfa1 where lifnr = gs_opbal-lifnr.
            gs_opfinal-bukrs = gs_opbal-bukrs.
            gs_opfinal-lifnr = gs_opbal-lifnr.
            gs_opfinal-gjahr = gs_opbal-gjahr.
            gs_opfinal-belnr = gs_opbal-belnr.
            gs_opfinal-budat = gs_opbal-budat.
            gs_opfinal-bldat = gs_opbal-bldat.
            gs_opfinal-waers = gs_opbal-waers.
            gs_opfinal-dmbtr = gs_opbal-dmbtr.
            gs_opfinal-wrbtr = gs_opbal-wrbtr.
            gs_opfinal-shkzg = gs_opbal-shkzg.
            gs_opfinal-blart = gs_opbal-blart.
            gs_opfinal-monat = gs_opbal-monat.
            gs_opfinal-hkont = gs_opbal-hkont.
            gs_opfinal-prctr = gs_opbal-prctr.
            gs_opfinal-name1 = gs_lfa1-name1.
        if gs_opbal-shkzg    = 'H'.
            gs_opfinal-tcr   =  gs_opbal-dmbtr * -1.
            gs_opfinal-tdr   =  '000000'.
        else.
            gs_opfinal-tdr   =  gs_opbal-dmbtr.
            gs_opfinal-tcr   =  '000000'.
        endif.
           append gs_opfinal to gt_opfinal.
           endloop.
           endloop.
           endloop.
    sort gt_opfinal by bukrs lifnr prctr .
    so_date-low = so_date-low - 1 .
    loop at gt_opfinal into gs_opfinal.
    call function 'BAPI_AP_ACC_GETKEYDATEBALANCE'
      exporting
        companycode        = gs_opfinal-bukrs
        vendor             =  gs_opfinal-lifnr
        keydate            = so_date-low
       balancespgli        = ' '
       noteditems          = ' '
      importing
        return             = return
      tables
        keybalance         = keybalance.
    clear kb .
    loop at keybalance .
       kb = keybalance-lc_bal + kb .
    endloop.
          gs_opdisp-balc = kb.
          gs_opdisp-bukrs =  gs_opfinal-bukrs.
          gs_opdisp-lifnr =  gs_opfinal-lifnr.
          gs_opdisp-name1 =  gs_opfinal-name1.
        at new lifnr .
          sum .
          gs_opfinal-tbal =  gs_opfinal-tdr + gs_opfinal-tcr  .
          gs_opdisp-tbal = gs_opfinal-tbal.
          gs_opdisp-bala = gs_opfinal-tdr .
          gs_opdisp-balb = gs_opfinal-tcr .
          gs_opdisp-gbal = keybalance-lc_bal + gs_opfinal-tbal .
          append gs_opdisp to gt_opdisp.
        endat.
        clear gs_opdisp.
        clear keybalance .
      endloop.
      delete adjacent duplicates from gt_opdisp.
    endform.                    " sub_openbal
    *&---------------------------------------------------------------------*
    *&      Form  sub_openbal_display
    *&---------------------------------------------------------------------*
    *       text
    *  -->  p1        text
    *  <--  p2        text
    *----------------------------------------------------------------------*
    form sub_openbal_display .
    call function 'REUSE_ALV_GRID_DISPLAY'
      exporting
    *   I_INTERFACE_CHECK                 = ' '
    *   I_BYPASSING_BUFFER                = ' '
    *   I_BUFFER_ACTIVE                   = ' '
        i_callback_program                = repid1
    *   I_CALLBACK_PF_STATUS_SET          = ' '
    *   I_CALLBACK_USER_COMMAND           = ' '
    *   I_CALLBACK_TOP_OF_PAGE            = ' '
    *   I_CALLBACK_HTML_TOP_OF_PAGE       = ' '
    *   I_CALLBACK_HTML_END_OF_LIST       = ' '
    *   I_STRUCTURE_NAME                  =
    *   I_BACKGROUND_ID                   = ' '
    *   I_GRID_TITLE                      =
    *   I_GRID_SETTINGS                   =
    *   IS_LAYOUT                         =
        it_fieldcat                       = gt_fcats
    *   IT_EXCLUDING                      =
    *   IT_SPECIAL_GROUPS                 =
    *   IT_SORT                           =
    *   IT_FILTER                         =
    *   IS_SEL_HIDE                       =
    *   I_DEFAULT                         = 'X'
    *   I_SAVE                            = 'X'
    *   IS_VARIANT                        =
    *   it_events                         =
    *   IT_EVENT_EXIT                     =
    *   IS_PRINT                          =
    *   IS_REPREP_ID                      =
    *   I_SCREEN_START_COLUMN             = 0
    *   I_SCREEN_START_LINE               = 0
    *   I_SCREEN_END_COLUMN               = 0
    *   I_SCREEN_END_LINE                 = 0
    *   IT_ALV_GRAPHICS                   =
    *   IT_HYPERLINK                      =
    *   IT_ADD_FIELDCAT                   =
    *   IT_EXCEPT_QINFO                   =
    *   I_HTML_HEIGHT_TOP                 =
    *   I_HTML_HEIGHT_END                 =
    * importing
    *   E_EXIT_CAUSED_BY_CALLER           =
    *   ES_EXIT_CAUSED_BY_USER            =
      tables
        t_outtab                          = gt_opdisp
      exceptions
        program_error                     = 1
        others                            = 2.
      if sy-subrc <> 0.
        message id sy-msgid type sy-msgty number sy-msgno
                with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      endif.
    endform.                    " sub_openbal_display

    I think you are using a FOR ALL ENTRIES clause in almost all of your SELECT statements, but I didn't see any condition check before the FOR ALL ENTRIES is used.
    If you use FOR ALL ENTRIES IN gt_opbal, make sure gt_opbal actually has records; otherwise the SELECT will read all records from the database table.
    Check the driver table before the SELECT, something like
    if gt_opbal[] is not initial.
      select fld1 fld2 fld3 into table itab
        from abcd
        for all entries in gt_opbal
        where keyfield = gt_opbal-keyfield.
    endif.
    I didn't see anything else wrong in your report, but this is a major time consumer when there are no records in the table you are using for FOR ALL ENTRIES.

  • Urgent. How may I update a properties file at execution time?

    Urgent, please. How may I update a properties file at execution time? I need to update the file by means of a web page, and I need the changes to be reflected immediately.
    Thanks

    Note the update must be made in memory. But I don't know how.

  • Execution time of a simple vi too long

    I'm working with LabVIEW 6.0.2 on a computer (AMD ~700MHz) under Windows 2000. The computer is connected to the instruments (e.g. a Keithley 2400 SourceMeter) via GPIB (NI PCI-GPIB card). When trying to read the output of the K2400 with a very simple vi (sending the string READ? to the instrument with GPIBWrite (mode 2) and subsequently reading 100 bytes with GPIBRead (mode 2) from the instrument), the execution time mostly exceeds 1 s (execution highlighting disabled). Sometimes it can be much faster, but this is very irreproducible. I played around with the GPIBRead and Write modes and with the number of bytes to be read from the device, as well as with the hardware settings of the Keithley 2400, but nothing seemed to work. The API calls captured by NI Spy mainly (lines 8 - 160) consist of ThreadIberr() and ibwait(UD0, 0x0000).
    As this problem is the main factor limiting our measurement speed, I would be grateful for any help.
    Thanks a lot
    Bettina Welter

    Hello,
    Thanks for contacting National Instruments. It seems like the 1 second delay that is occurring is due to the operation being called. ThreadIberr returns the value of iberr, while ibwait simply implements a wait. These two get called repeatedly while the GPIB device waits for the instrument (K2400, in your case) to finish its operation and respond back. It is quite possible that when you query the Keithley to send back 100 bytes of data, it has to gather them from its buffer (if it's already been generated). And if there aren't 100 bytes of data in the buffer, the Keithley will keep the NRFD line asserted while it gathers 100 bytes of data. After the data has been gathered, the NRFD line is deasserted, at which point ThreadIberr will detect the change in the ibsta status bit and read the 100 bytes.
    So make sure that the 100 bytes of data that you are requesting don't take too long to be gathered, since this is where the main delay lies. Hope this information helps. Please let us know if you have further questions.
    A Saha
    Applications Engineering
    National Instruments

  • Execution time is too high.

    Hi,
    I've got several xsql files that each take approx. 1 minute to execute by themselves. I want to run one xsql that includes them all, but it times out and I get a server error. Is there any way I can get the execution time down, or change the timeout setting?
    I am running the xsql that comes with 8.1.7
    Terje K.

    If Oracle8i JServer is included in the Oracle 8i package, then yes. The database itself is not large (approx. 50MB with data), but the results of the queries can get somewhat large. Here is an example:
    1. first I made a view:
    create view view_section2_issue as SELECT SPVSPC.OPN, OPE, PCA, PCS, PCSR, OSC, PWTT, OVSM, PLWD, PCSOD, PDC, TO_CHAR(ISOD,'dd.MM.YY') AS ISOD, TO_CHAR(PCSOD,'dd.MM.YY') AS OD, PSCN, PMDP1, PMDP2, PMDP3, PMDP4, PMDP5, PMDP6, PMDP7, PMDP8, PMDP9, PMDP10, PMDP11, PMDP12,PDT1, PDT2, PDT3, PDT4, PDT5, PDT6, PDT7, PDT8, PDT9, PDT10, PDT11, PDT12, PMDC, PMCA, PMMDP, PMDT, PMLWD, PMWTT, PMSCN, PMNS, PMWTH, PMSCH, PMOD
    from SPVSISSU, SPVSISS2, SPVSPCS2, SPVSPC
    where SPVSISSU.OPN = SPVSPC.OPN
    and SPVSISSU.ISS is not null
    and SPVSISS2.OPN = SPVSISSU.OPN
    and SPVSISS2.ISSUE = SPVSISSU.ISR2
    and SPVSPCS2.OPCS = SPVSISS2.IOPCS
    and SPVSPCS2.PCSR = SPVSISS2.IPCS_REV
    2. then I made the query (with some cursors):
    SELECT OPE, PCA, PCS, PCSR, OSC, PWTT, OVSM, PLWD, PCSOD, PDC, OD, PSCN, PMDP1, PMDP2, PMDP3, PMDP4, PMDP5, PMDP6, PMDP7, PMDP8, PMDP9, PMDP10, PMDP11, PMDP12, PDT1, PDT2, PDT3, PDT4, PDT5, PDT6, PDT7, PDT8, PDT9, PDT10, PDT11, PDT12, PMDC, PMCA, PMMDP, PMDT, PMLWD, PMWTT, PMSCN, PMNS, PMWTH, PMSCH, PMOD,
    CURSOR( SELECT PNS, POD, PWTH, PSCH
    FROM spvspcs4
    WHERE spvspcs4.opn = view_section2_issue.opn
    and spvspcs4.pcs = view_section2_issue.pcs
    and spvspcs4.pcsr = view_section2_issue.pcsr ) as wallThickness,
    CURSOR( SELECT PELM, SST, PDSTD, PFS, PTS, PTY, PMN, MDS, ESK, PRM, PAGEBREAK, PMELL, page, start_remark(opn,pcs,pcsr,pel,pell) starten,
    end_remark(opn,pcs,pcsr,pel,pell,start_remark(opn,pcs,pcsr,pel,pell)) as slutt
    FROM spvspcs6
    WHERE spvspcs6.opn = view_section2_issue.opn
    and spvspcs6.pcs = view_section2_issue.pcs
    and spvspcs6.pcsr = view_section2_issue.pcsr ) as elements,
    CURSOR( SELECT PVELM, VDS, PVFS, PVTS, PVRM, PMVELL
    FROM spvspcs7
    WHERE spvspcs7.opn = view_section2_issue.opn
    and spvspcs7.pcs = view_section2_issue.pcs
    and spvspcs7.pcsr = view_section2_issue.pcsr ) as vType,
    CURSOR( SELECT PBLP, PAGEBREAK, LTXT
    FROM spvspcs5
    WHERE spvspcs5.opn = view_section2_issue.opn
    and spvspcs5.pcs = view_section2_issue.pcs
    and spvspcs5.pcsr = view_section2_issue.pcsr ) as kommentar,
    CURSOR( SELECT count(*) as tot
    FROM spvspcs5
    WHERE pagebreak = 'P'
    and spvspcs5.opn = view_section2_issue.opn
    and spvspcs5.pcs = view_section2_issue.pcs
    and spvspcs5.pcsr = view_section2_issue.pcsr ) as kpages,
    CURSOR( SELECT count(*) as tot
    FROM spvspcs6
    WHERE pagebreak = 'P'
    and spvspcs6.opn = view_section2_issue.opn
    and spvspcs6.pcs = view_section2_issue.pcs
    and spvspcs6.pcsr = view_section2_issue.pcsr ) as tpages
    from view_section2_issue
    where OPN = {@opn}

  • Execution time difference between Statspack and DBM_Monitor trace

    Hi Everyone,
    We noticed that the query execution time differs quite a bit between the Statspack and session tracing (DBMS_MONITOR) reports. The query execution time in Statspack was 1402 sec, while in the session trace file it was 312.25 sec. FYI, the database version is 11.2.0.3, installed on platform OL 5.8.
    Both of the following reports (Statspack/tracing) were taken on the same system and at the same time. Could you suggest why the execution time differs between Statspack and session tracing?
    Statspack execution time :-
    Elapsed Elap per CPU Old
    Time (s) Executions Exec (s) %Total Time (s) Physical Reads Hash Value
    1402.50 1 1402.50 9.1 53.92 256,142 3247794574
    select * from ( select * from ( select resourcecontentslocati
    on,isprotocolname,ismimecontenttype,indexedmetatext,objectid,met
    atext,valueaddxml,resourcetype,resourceviewedtime,iscmaresultid,
    resourcelastviewedbyuser,issequencenumber,nvl(length(contents),0
    Session tracing time:-
    call count cpu elapsed disk query current rows
    Parse 1 10.58 256.44 43364 153091 0 0
    Execute 1 0.01 0.08 0 0 0 0
    Fetch 143 2.09 55.72 25440 32978 0 1000
    total 145 12.69 312.25 68804 186069 0 1000
    Thanks
    Rajdeep

    Hi,
    First of all, please read the [url https://wikis.oracle.com/display/Forums/Forums+FAQ]FAQ page and find out how to use the code tags to format your output properly so that it's readable.
    I don't want to work out the stats formatting but I'd guess that if you ran the query first time and the data was not cached, it would be slower than the 2nd query when it was cached. The stats should confirm this so please format them so we can see it properly.
    Rob

  • How can I speed up the load time?

    My website is completed and I am happy with everything.
    Everything except the time it takes to load. All the images have
    been optimized. What can I do to speed up the load time?
    http://www.liquidfirefishing.com

    On Tue, 6 Jan 2009 01:29:41 +0000 (UTC), "Team Liquid Fire" <[email protected]> wrote:
    > My website is completed and I am happy with everything. Everything except the time it takes to load. All the images have been optimized. What can I do to speed up the load time?
    > http://www.liquidfirefishing.com
    Typically, your budget is around 50k for the first page, total including all external files. When you add up all the external files you have used, like images, Flash, external JavaScript, etc., you have 98 separate HTTP requests in that page. Just looking at the headers, you have about 80k in requests and responses, not even counting the actual files.
    Browsers are limited to just a few simultaneous streams, so most of your files wind up having to wait until the prior files download.
    Included among your 98 requests, you have these six files that each individually exceed that total budget:
    headerLinked.swf 2,621,578 bytes
    2007_fishing_video_web.flv 95,989,898 bytes
    2008_aoy_graphic.png 65,582 bytes
    jensen_beach.png 59,105 bytes
    liquid_fire_logo_new.png 142,277 bytes
    DanSchaad.png 379,111 bytes
    Your entire total is 99,816,652 bytes. I've put a complete breakdown here:
    http://testing.apptools.com/newsgroup/liquidfire.php
    Files are requested from the server in the order they are encountered in the source code. They are displayed in the linked page in the order in which they are requested. You can see that some of the huge files are requested fairly early in the sequence. That means that those files and their extended download times are occupying streams and preventing other files from being downloaded more quickly.
    My suggestion would be to really trim down the content on the page. Break it up into several pages. Dump the huge Flash movie at the top of the page. Optimize the four .png's above.
    Gary

  • What can be done to reduce page loading time?

    Hi,
    I've built a site to showcase my photographs and pages load slowly. It has about 70 pages and each page uses the same custom template that contains graphics and type. Hyperlinks navigate from page to page or from section to section. Each page has a unique photograph. The site can be seen at...
    http://web.mac.com/peter_tangen/iWeb/pt/enter.html
    I'd like to reduce the time it takes to load a page.
    In other web design applications it's possible to have all pages access a single graphic file, this speeds up page loading as the cache "remembers" the contents of the file and eliminates the need to reload it. Other posts in this forum indicate that this capability is not currently available in iWeb, however hoped for in Ver 2.0.
    I'd appreciate any suggestions!
    FYI: A typical page contains the following files (from the page)...
    http://web.mac.com/peter_tangen/iWeb/pt/portrait01.html
    backgroundimage_1.png
    photo-filtered.jpg
    portrait01.css
    portrait01.js
    shapeimage_1.png
    shapeimage_2.png
    shapeimage_3.png
    shapeimage_4.png
    shapeimage_5.png
    shapeimage_6.png
    shapeimage_7.png
    shapeimage_8.png
    shapeimage_9.png
    shapeimage_10.png
    shapeimage_11.png
    transparent.gif
    Thanks for your time!
    g4 laptop   Mac OS X (10.4.4)  

    pvt:
    If you load the page you linked to and then open Safari's Activity window you'll see that those png files are all about 0.1 to 2.6 kb in size. That's not very large at all. The largest file there is 66kb, the jpg background, and again not big. The Elijah jpg is 53 kb.
    Those small png files are your links below the photo and probably the borders around it.
    Here are some tips I've learned from these sites:
    1 - do not use any frames or borders, etc., around photos.
    2 - don't use any reflections.
    3 - create your own navigation bar with linked text* and turn off the iWeb navigation bar. The nav bar is all png based.
    4 - use only the web safe fonts from the Font pane.
    5 - do not use drop shadow on fonts.
    6 - turn off smart quotes.
    The above will reduce the number and size of files associated with a web page quite a bit. Photos with fancy frames and reflections can generate a thumbnail png of around 110KB whereas the plain version will be a jpg of only 28KB. Although it doesn't sound like a lot, it will speed up loading of the page and be more darkside (i.e. PC) friendly.
    Run a test with a test site and publish to a folder. Then follow the hints above and publish to another folder and compare folders.
    *Put your linked text directly under the Navigation bar. Then turn off the nav bar in the Inspector window. The nav bar will disappear and the linked text will move up to the top of the page. (This wouldn't apply to your site)
    None of the pages I visited had any large png or unusually large files. One on the portrait 2 page was 448 kb and was the largest jpg I found.
    On your portrait 6 page the drop shadows on the photos produced png files of 2.1 and 0.9 kb. The background is 66 kb.
    As I said all the small png files are the borders and text links. I don't know if knocking out those drop shadows and eliminating those two files would make that big a difference for that page. One of the gurus here mentioned only adding a color background for the page and not the browser. I don't know how that would look or affect your site.
    I like it by the way. Nice customization.
    Tutorials

  • Execution times and other issues in changing TextEdit documents

    (HNY -- back to terrorize everyone with off-the-wall questions)
    --This relates to other questions I've asked recently, but that may be neither here nor there.
    --Basically, I want to change a specific character in an open TextEdit document, in text that can be pretty lengthy. But I don't want pre-existing formatting of the text to change.
    --For test purposes the front TextEdit document is simply an unbroken string of letters (say, all "q's") ranging in number from 5 and upwards. Following are some of the results I've gotten:
    --1) Using a do shell script routine (below), the execution is very fast, well under 0.1 second for changing 250 q's to e's and changing the front document. The problem is that the formatting of the first character becomes the formatting for the entire string (in fact, for the entire document if there is subsequent text). So that doesn't meet my needs, although I certainly like the speed.
    --SCRIPT 1
    tell application "TextEdit"
    set T to text of front document
    set Tnew to do shell script "echo " & quoted form of T & " | sed 's/q/e/g'"
    set text of front document to Tnew
    end tell
    --END SCRIPT 1
    --The only practical way I've found to change a character AND maintain formatting is the "set every character where it is "q" to "e"" routine (below). But, for long text, I've run into a serious execution speed problem. For example, if the string consists of 10 q's, the script executes in about 0.03 second. If the string is 40 characters, the execution is 0.14 second, a roughly linear increase. If the string is 100 characters, the execution is 2.00 seconds, which doesn't correlate to a linear increase at all. And if the string is 250 characters, I'm looking at 70 seconds. At some point, increasing the number of string characters leads to a timeout or stall. One interesting aspect of this is that, if only the last 4 characters (example) of the 250-character string are "q", then the execution time is again very quick.
    --SCRIPT 2
    tell application "TextEdit"
    set T to text of front document
    tell text of front document
    set every character where it is "q" to "e"
    end tell
    end tell
    --END SCRIPT 2
    --Any insight into this issue (or workaround) will be appreciated.
    --In the real world, I most often encounter the issue when trying to deal with spaces in long text, which can be numerous.

    OK, Camelot, helpful but maddening. Based on your response, I elected to look at this some more, even though I'm stuck with TextEdit on this project. Here's what I found, not necessarily in the order I did things:
    1) I ran your "repeat" script on my usual machine (2.7 PPC with 10.4.6) and was surprised to consistently get about 4.25 seconds -- I didn't think it should matter, but I happened to run it with Script Debugger.
    2) Then, curious as to what a slower processor speed would do, I ran it at ancient history speed -- a 7500 souped up to 700 MHz. On a 10.4.6 partition, the execution time was about 17 seconds, but on a 10.3.6 partition it was only about 9.5 seconds. (The other complication with this older machine is that it uses XPostFacto to accommodate OS X.) And I don't have Script Debugger for 10.3.x, so I ran the script in Script Editor on that partition.
    3) That got me wondering about Script Editor vs. Script Debugger, so (using 10.4.6) I ran the script on both the old machine and my (fast) usual machine using Script Editor. On the old machine, it was somewhat faster at about 14 seconds. But, surprise!, on the current machine it took twice as long at 8.6 seconds. The story doesn't end here.
    (BTW, I added a "ticks" routine to the script, so the method of measuring time should be consistent. And I've been copying and pasting the script to the various editors, so there shouldn't be any inconsistencies with it. I've consistently used a 250-character unbroken string of the target in a TextEdit document.)
    4) Mixed in with all these trials, I wrote a script to get a list of offsets of all the target characters; it can be configured to change the characters or not. But I found some intriguing SE vs. SD differences there also. In tests on the fast machine, running the script simply to get the offset list (without making any changes), the list is generated in under a second -- but sometimes barely so. The surprise was that SE ran it in about half the time of SD, although SD was about twice as fast with a script that called for changes. Go figure.
    5) Since getting the offset list is pretty fast in either case, I was hoping to think up some innovative way of using the offset list to make changes in the document more quickly. But running a repeat routine with the list simply isn't innovative, and the result is roughly what I get with your repeat script coupled with an added fraction of a second for generating the list. Changing each character as each offset is generated also yields about the same result.
    My conclusion from all this is that the very fast approaches (which lose formatting) are changing the characters globally, not one at a time as occurs visibly with techniques where the formatting isn't lost. I don't know what to make of SE vs. SD, but I repeated the runs several times in each editor, with consistent results.
    Finally, while writing the offset list script, I encountered a couple AS issues that I've seen several times in the past (having nothing specifically to do with this topic), but I'll present that as a new post.
    Thanks for your comments and any others will be welcome.

  • Tool for measuring execution time?

    Hi,
    I'm trying to measure the time of certain methods in my app.
    I've tried System.currentTimeMillis(), but I don't get the accuracy I need (System.nanoTime() from the 1.5.0 SDK won't help me either).
    Does anyone know a simple Java tool that I can use to do this?
    I thought about OptimizeIt (Borland), but I just don't have it :-( and I don't know if it is easy to use (and I need this measuring as soon as possible).
    Thanks for any suggestion,
    ltcmelo

    <<In Windows at least the resolution of System.currentTimeMillis() seems to be 10ms.
    If the operation that takes 9 ms is called a thousand times, and the one that takes 5 ms is called 20
    million times, and the one that takes 1 µs is called 200 million times, all will be fast to measure,
    all will come in at zero ms by currentTimeMillis(). It would be useful to know the respective execution times of each method.>>
    This is not correct. Windows does have 10 ms granularity; however, if you average many measurements this limitation disappears.
    For example let's say that a particular method takes 5 ms. on average and we take 10 measurements. You claim that the 10 measurements will all be 0, whereas the actual measurements will either be 0 or 10 depending on how close the clock was to ticking when currentTimeMillis() was called. For example the 10 measurements might look like 0, 0, 10, 0, 10, 0, 0, 10, 10, 0. If you take the average of these numbers you get quite good accuracy (i.e. 50/10=5 ms. the actual time we said the method took). In principle the average should even get you sub-ms. accuracy.
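    A quick way to convince yourself: feed the hypothetical readings from the paragraph above through an average. A minimal Java sketch (the sample values are the illustrative ones above, not real measurements):

```java
public class GranularityAverage {
    public static void main(String[] args) {
        // Hypothetical currentTimeMillis() deltas for a method that truly
        // takes 5 ms, read on a clock with 10 ms granularity: each sample
        // snaps to 0 or 10 depending on where the clock was in its tick.
        long[] samples = {0, 0, 10, 0, 10, 0, 0, 10, 10, 0};
        long total = 0;
        for (long s : samples) {
            total += s;
        }
        double average = (double) total / samples.length;
        System.out.println("average = " + average + " ms"); // prints 5.0
        // Averaging many coarse readings recovers the true sub-granularity
        // time -- which is why averaging-based monitors can report sub-ms
        // method times despite the coarse clock.
    }
}
```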
    Try it with jamon (http://www.jamonapi.com). JAMon calculates averages and so gets around the windows limitation. If you are coding a web app then jamon has a report page that displays all the results (hits, average time, totals time, min time, max time, ...). If not then you can get the raw data and display it as you like.
    One question for the original poster. Why do you think you need sub-millisecond timings? If you are coding a business app IO tends to be the bottleneck and much greater than sub-millisecond.
    Steve - http://www.jamonapi.com - a fast, free monitoring tool that is suitable for production applications.

  • "IMAQdxOpenCamera" function execution time is particularly long,why?

    I installed VAS 2011 in the CVI 2010 environment. When running the IMAQdx samples <Grab and AttributesSetup>, the "IMAQdxOpenCamera" function execution time is particularly long, more than 7 seconds. Why?
    Thanks!

    Thank you for your answers!
    I found that video capture initialization with VFW is fast; only with IMAQdx is it slow.
    thanks!

  • Capturing cfml execution time

    Hi, good day to all! Is there any way to capture the
    execution time of a CFML instruction? I have two situations, but I
    think they're similar, as shown below:
    1st situation:
    <!--- start query --->
    <cfquery name = "q1" datasource="#db#">
    SQL statement
    </cfquery>
    <cfquery name = "q2" datasource="#db#">
    SQL statement
    </cfquery>
    <cfquery name = "q3" datasource="#db#">
    SQL statement
    </cfquery>
    <cfquery name = "q4" datasource="#db#">
    SQL statement
    </cfquery>
    <!--- end query --->
    For the 1st situation above, I want to capture the total
    execution time from q1 to q3 only. Also, I want to get the time
    each query takes to execute. Is there any way to do this?
    2nd situation (could be any cfml instructions):
    <!--- start cfml --->
    instruction 1............................
    instruction 2............................
    instruction 3............................
    instruction 4............................
    <!--- end cfml --->
    In the 2nd situation, I want to get the total execution time
    from instruction 1 to instruction 3 only.
    Any help is really appreciated. Thanks.
    Update ----------------------------------------
    By the way, I am using CF 6.1. It seems the GetTickCount
    technique isn't returning the exact execution time. Actually, I
    have CF code that contains a VBScript that writes to an Excel
    file. This is what I did:
    start:
    ..... cf tags here ...
    <cfset startTimer = GetTickCount()>
    <script language="vbscript">
    --- some vbscript and cf statements .........
    <cfoutput query = "some_query">
    objXL.ActiveSheet.Cells(X_Pos,1).Value = "#some_cf_value#"
    objXL.ActiveSheet.Cells(X_Pos,1).Font.size = "#FSize#"
    --- some vbscript and cf statements .........
    </cfoutput>
    --- some vbscript and cf statements .........
    </script>
    <cfset endTimer = GetTickCount()>
    <cfset totalTime_ms = endTimer - startTimer >
    end:
    totalTime_ms doesn't seem to return the exact length of time
    of the process. In one test, I timed it with a stopwatch
    and found out it took about 30 seconds to complete filling the
    excel file. But totalTime_ms returns only about 1500 ms, which is
    only about 1.5 seconds. The result is far from reality. Why? Is
    there any other way? Thanks.
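    A likely explanation for the gap: GetTickCount runs on the server and brackets only the time ColdFusion spends generating the page; the VBScript inside the <script> block is presumably sent to the browser and runs there afterward, so the 30 seconds of Excel-filling never lands between the two tick counts. The bracketing pattern itself measures exactly what sits between start and stop, as this Java sketch (illustrative names) shows:

```java
public class Bracket {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Only work executed right here is measured. Anything deferred
        // to another process -- like script shipped to a browser --
        // happens after the second reading and is invisible to it.
        Thread.sleep(100); // stands in for the server-side work
        long elapsedMs = System.currentTimeMillis() - start;
        System.out.println("elapsed ~ " + elapsedMs + " ms");
    }
}
```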

    If you want to know the execution time of the whole page,
    here is the code.
    Put this at the top of your page:
    <cfset tickBegin = GetTickCount()>
    Put this at the bottom of your page:
    <cfset tickEnd = GetTickCount()>
    <cfset loopTime = tickEnd - tickBegin>
    <cfset loopTime = loopTime / 1000>
    <cfoutput>Page execution time: #loopTime# seconds</cfoutput>

  • Is there anything I can do to speed up iWeb download times?

    I was wondering if I could ask the forum a couple of questions. (I have had a look through previous posts but wasn't able to find the answers I need. Apologies if my questions have been answered before).
    I bought iWeb a few days ago and website building is virgin territory for me. Over the weekend I put together a site. This is albeit a skeleton site, but the final site will be pretty much the same, only with more picture and text content.
    The site is primarily about displaying photographs to picture editors (etc.), so fastish download times are important to me. On my G5 (with reasonable broadband), and after emptying the cache, the site opens very slowly (page opening times range from 5 to 10 seconds). This is unfortunately far too slow for my needs.
    I have tried to speed up the download times by a variety of means:
    (1) Reducing the file size to 30kb jpeg files in ImageReady using 'save for web'
    (2) Using the 'original size' facility in inspector.
    Nothing seems to be reducing the download times to acceptable speeds.
    Does anyone know what I can do, if anything?
    Please check out this skeleton site at:
    http://web.mac.com/gregoryclements/iWeb
    Any help or feedback with this would be very much appreciated.
    Cheers,
    Greg
    1.6GHz PowerPC G5   Mac OS X (10.3.9)  

    Hi Greg,
    Went to your web site. Page loading time wasn't too bad here, but I'm a photographer also and am sympathetic to your need for speed.
    Here's what I found and my current ideas for speeding up page loading.
    I grabbed two of your images and found that they were 450k PNG files, not 30k as mentioned. Plus the images, in my opinion, are soft and could be sharper.
    Some iWeb observations re: Photo Pages and Image Optimization Workflow
    When you drop hi-rez files into iWeb, it will automatically "Fit Image" to 800x800px with a jpeg compression of 10.
    Watching users open my web pages, I've noticed and feel:
    a) A lot of these folks are PC users, and their display resolutions are not set as high as the ones we use for editing photos.
    b) Because of this, images over 500px high keep some users from seeing the bottom of a page and controls that may be there.
    c) My current workflow includes using "Fit Image" to Width 700px and Height 500px.
    Following are Custom and Batch Image Processing Workflows
    These will work with Raw, Tif or Jpg images and will assume that you have reasonably worked up your images applying your own voodoo.
    Custom Image Processing Workflow. (using smallest file, quick version)
    1. Load images into Bridge
    2. Edit and manually sort images into the order you want.
    3. Renumber images with (sequence)(-)(Filename)
    4. Use Tools->Photoshop->Image Processor
    Save the files as sRGB Tiff and Fit Image H500px by 700px
    5. Open images in Photoshop
    6. Apply your custom image corrections to each image
    7. Sharpen the images
    8. Save for Web using jpeg with medium compression.
    9. Images are now ready for web use.
    Using this process a couple of times you will come away with a feel for the Custom image correction and amount of sharpening and compression works best for your images.
    Now you are ready to start batching images.
    First create a bunch of commonly used actions.
    I have a "Sharpen" folder containing actions named:
    30 SmartSharp
    40 SmartSharp
    50 SmartSharp
    60 SmartSharp
    etc.
    Also, I have a Convert to Profile folder of actions
    A Color Correct folder of actions that calls on some of my favorite nik Color Efex filters and presets.
    And a "Save As" folder of actions containing
    Save4Web high
    Save4Web medium
    --Don't let me lose you here, because here comes the good part
    Create a folder of "Combined Actions" and create some useful combinations.
    Example: "sRGB 50Sharp WebMedium" that calls on three of the actions previously created Convert to Profile, SmartSharpen and Save As
    Now you are ready for:
    Batch Image Processing Workflow
    1. Load images into Bridge
    2. Edit and manually sort images into the order you want.
    3. Renumber images with (sequence)(-)(Filename)
    4. Use Tools->Photoshop->Image Processor
    Save the files as sRGB Tiff and Fit Image H500px by 700px
    but this time use: Apply Action "sRGB 50Sharp WebMedium"
    5. Images are now ready for web use.
    Respectfully submitted,
    Junebug Clark

  • AJAX, how to check execution time of queries?

    CF8, using CFDIV to bind to a CFC. The CFC has several SQL
    queries, including select, insert, update, delete. I need to check
    the execution time of these queries. I have the CF Ajax logger
    turned on (cfdebug), but it doesn't seem to list query execution
    times. I see the query execution times from normal form pages in
    the log info on that page, at the bottom as normal, but not for the
    queries in Ajax-bound CFCs.
    Am I missing a server setting? I have everything checked in
    the Ajax logger. Global, LogReader, http, bind, debug, info,error,
    window.
    Thanks in advance for any help!
    Mike

    I use Ajax (in CF7) a lot to access a SQL Server database. To
    determine execution times for Ajax queries, I run the application
    in Firefox with the "Firebug" add-on. Firebug reports the execution
    time of each (and every) Ajax transaction in milliseconds.
