Summarization and Optimization

Hi Gurus,
Please help me find a way to optimize my program and extract the data. I have to extract data from VBRK and KONV, which hold a very large number of records, so even when I use SA38 to schedule and run the program in the background it does not finish and the job is always cancelled. I now filter the data and split it into monthly runs, and I want to sum up the values per condition type to reduce the space and data volume, but my code does not sum up those values. Please share some code or tips on how to sum them up. I am also using ALV format for the output. I already tried COLLECT and MODIFY itab TRANSPORTING kwert, but neither worked. Please help, guys.
Thanks,
nips

Hi,
Here's my code:
*& Report  ZSALES_BY_SALESGRP
REPORT  ZSALES_BY_SALESGRP no standard page heading.
type-pools: slis.
tables: vbrk,
        konv,
        knvv,
        vbrp.
types: begin of t_doc,
          fkart like vbrk-fkart,
          knumv like vbrk-knumv,
          rfbsk like vbrk-rfbsk,
          VKGRP like knvv-VKGRP,
       end of t_doc,
       begin of t_con,
          kschl like konv-kschl,
          sakn1 like konv-sakn1,
          kwert like konv-kwert,
       end of t_con,
       begin of t_itab,
          rfbsk like vbrk-rfbsk,
          fkart like vbrk-fkart,
          VKGRP like knvv-VKGRP,
          kschl like konv-kschl,
          sakn1 like konv-sakn1,
          kwert like konv-kwert,
          gross like konv-kwert,
          rud like konv-kwert,
          tradeinv like konv-kwert,
          trdiscret like konv-kwert,
          tr_ret like konv-kwert,
          frgoods like konv-kwert,
          sp_invoice like konv-kwert,
          sp_discret like konv-kwert,
          cm_disc_dist like konv-kwert,
          cm_discmark like konv-kwert,
          cmex_dist like konv-kwert,
          cmex_mark like konv-kwert,
          cm_tax like konv-kwert,
       end of t_itab.
data: doc type standard table of t_doc,
      wa_doc like line of doc,
      con type standard table of t_con,
      wa_con like line of con,
      itab type standard table of t_itab with header line,
      list type slis_fieldcat_alv,
      t_list type slis_t_fieldcat_alv,
      sort_field type slis_sortinfo_alv,
      sort_t_field type slis_t_sortinfo_alv,
      mess1(30).
selection-screen begin of block blk1 with frame title title1.
    select-options: VKGRP for knvv-VKGRP.
selection-screen end of block blk1.
selection-screen begin of block blk2 with frame title title2.
    select-options: fkdat for vbrk-fkdat default '20070901' to '20070930' obligatory,
                    erdat for vbrk-erdat.
selection-screen end of block blk2.
start-of-selection.
  perform get_and_display.
initialization.
title1 = 'Make an Entry'.
title2 = 'Date Selection'.
list-fieldname = 'VKGRP'.
list-just = 'C'.
list-seltext_l = 'Sales Group'.
append list to t_list.
list-fieldname = 'FKART'.
list-just = 'C'.
list-seltext_l = 'Billing Type'.
append list to t_list.
list-fieldname = 'KSCHL'.
list-just = 'C'.
list-seltext_l = 'Condition Type'.
append list to t_list.
list-fieldname = 'SAKN1'.
list-seltext_l = 'GL Account'.
append list to t_list.
list-fieldname = 'KWERT'.
list-seltext_l = 'Condition Value'.
list-do_sum    = 'X'.
list-outputlen = 20.
append list to t_list.
list-fieldname = 'GROSS'.
list-seltext_l = 'Gross Sales'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'RUD'.
list-seltext_l = 'RUD'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'TR_RET'.
list-seltext_l = 'Trade Returns'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'FRGOODS'.
list-seltext_l = 'Free Goods'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'TRADEINV'.
list-seltext_l = 'Trade Disc Invoice'.
list-just = 'C'.
list-outputlen = 20.
append list to t_list.
list-fieldname = 'TRDISCRET'.
list-seltext_l = 'Trade Disc Returns'.
list-just = 'C'.
list-outputlen = 20.
append list to t_list.
list-fieldname = 'SP_INVOICE'.
list-seltext_l = 'Special Disc Invoice'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'SP_DISCRET'.
list-seltext_l = 'Special Disc Returns'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'CM_DISC_DIST'.
list-seltext_l = 'CM Disc Dist'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'CM_DISCMARK'.
list-seltext_l = 'CM Disc Mktg'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'CMEX_DIST'.
list-seltext_l = 'CM Exp Dist'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'CMEX_MARK'.
list-seltext_l = 'CM Exp Mktg'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
list-fieldname = 'CM_TAX'.
list-seltext_l = 'CM Tax'.
list-just = 'C'.
list-outputlen = 18.
append list to t_list.
*CLEAR sort_field.
sort_field-fieldname = 'KSCHL'.
sort_field-spos      = 1.
sort_field-up        = 'X'.
sort_field-subtot    = 'X'.
APPEND sort_field TO sort_t_field.
form get_and_display.
select a~fkart a~knumv a~rfbsk b~vkgrp from vbrk as a
       inner join knvv as b on a~kunag = b~kunnr into table doc
       where a~rfbsk = 'C'
       and a~fkdat in fkdat
       and a~erdat in erdat
       and a~fkart in ('ZUF2','ZUIN','ZUVF','ZUS2','ZC23','ZC24','ZC26','ZUL2','ZUG2')
       and b~VKGRP in VKGRP.
if sy-subrc = 0.
select kschl sakn1 kwert from konv into table con
       for all entries in doc
       where knumv = doc-knumv
       and kinak = ' '
       and kschl in ('ZU01','R100','ZC25','ZDTR','ZDTL','ZDTP','ZDDA','ZRET','ZD06','ZDMI','ZDNP','ZDNE','ZDTD',
                     'ZC20','ZC01','ZC02','ZC03','ZC04','ZC05','ZC06','ZC07','ZC10','ZC14','ZC18','ZM09','ZC19',
                     'ZC27').
endif.
loop at doc into wa_doc.
  loop at con into wa_con.
    sort itab ascending by fkart kschl.
    delete adjacent duplicates from itab comparing fkart kschl.
    move: wa_doc-vkgrp to itab-vkgrp,
          wa_doc-fkart to itab-fkart,
          wa_doc-rfbsk to itab-rfbsk,
          wa_con-kschl to itab-kschl,
          wa_con-sakn1 to itab-sakn1,
          wa_con-kwert to itab-kwert.      " <<---- I have to sum up this value
    append itab.
  endloop.
endloop.
if sy-subrc = 0.
       loop at itab.
            if ( ( itab-fkart = 'ZUF2' or itab-fkart = 'ZUIN' or itab-fkart = 'ZUVF' or itab-fkart = 'ZUS2' ) )
               and itab-kschl = 'ZU01' and itab-sakn1 <> ' '.
               move: itab-kwert to itab-gross.
               modify itab.
            elseif itab-kschl = 'ZU01' and itab-fkart = 'ZUS2' and itab-sakn1 = ' '.
               move: itab-kwert to itab-rud.
               modify itab.
            elseif itab-kschl = 'ZU01' and ( ( itab-fkart = 'ZURE' or itab-fkart = 'ZUS2'
                       or itab-fkart = 'ZC23' or itab-fkart = 'ZC24' or itab-fkart = 'ZC26' ) ) and itab-sakn1 = ' '.
               move: itab-kwert to itab-tr_ret.
               modify itab.
            elseif ( ( itab-fkart = 'ZUF2' or itab-fkart = 'ZUG2' or itab-fkart = 'ZUL2'
                       or itab-fkart = 'ZURE' or itab-fkart = 'ZUVF') )
                       and ( ( itab-kschl = 'R100' or itab-kschl = 'ZC25' ) ) and itab-sakn1 <> ' '.
               move: itab-kwert to itab-frgoods.
               modify itab.
            elseif ( ( itab-fkart = 'ZUF2' or itab-fkart = 'ZUIN' or itab-fkart = 'ZUVF' ) )
               and ( ( itab-kschl = 'ZDTR' or itab-kschl = 'ZDTL' or itab-kschl = 'ZDTP'
                       or itab-kschl = 'ZM01' or itab-kschl = 'ZDDA' ) ).
               move: itab-kwert to itab-tradeinv.
               modify itab.
            elseif ( ( itab-fkart = 'ZURE' or itab-fkart = 'ZUS2' ) )
               and ( ( itab-kschl = 'ZDTR' or itab-kschl = 'ZDTL' or itab-kschl = 'ZDTP' or itab-kschl = 'ZDDA'
                       or itab-kschl = 'ZRET' ) ).
               move: itab-kwert to itab-trdiscret.
               modify itab.
            elseif ( ( itab-fkart = 'ZUF2' or itab-fkart = 'ZUIN'
                       or itab-fkart = 'ZUVF' or itab-fkart <> 'ZURE' or itab-fkart <> 'ZUS2' ) )
               and ( ( itab-kschl = 'ZD06' or itab-kschl = 'ZDMI' or itab-kschl = 'ZDNP'
                       or itab-kschl = 'ZDNE' or itab-kschl = 'ZDTD') ).
               move: itab-kwert to itab-sp_invoice.
               modify itab.
            elseif ( ( itab-fkart = 'ZURE' or itab-fkart = 'ZUS2' ) )
               and ( ( itab-kschl = 'ZD06' or itab-kschl = 'ZDMI' or itab-kschl = 'ZDNP'
                       or itab-kschl = 'ZDNE' or itab-kschl = 'ZDTD' ) ).
               move: itab-kwert to itab-sp_discret.
               modify itab.
            elseif itab-kschl = 'ZC20' or itab-kschl = 'ZC01' or itab-kschl = 'ZC02' or itab-kschl = 'ZC05'
                   or itab-kschl = 'ZC06' or itab-kschl = 'ZC07'.
               move: itab-kwert to itab-cm_disc_dist.
               modify itab.
            elseif itab-kschl = 'ZC27' OR itab-kschl = 'ZC19' or itab-kschl = 'ZM09'.
               move: itab-kwert to itab-cm_discmark.
               modify itab.
            elseif ( ( itab-fkart = 'ZUG2' or itab-fkart = 'ZUL2' ) )
               and ( ( itab-kschl = 'ZC01' or itab-kschl = 'ZC02' or itab-kschl = 'ZC18' or itab-kschl = 'ZC10'
               or itab-kschl = 'ZC11' ) ).
               move: itab-kwert to itab-cmex_dist.
               modify itab.
            elseif ( ( itab-fkart = 'ZUG2' or itab-fkart = 'ZUS2' ) )
               and ( ( itab-kschl = 'ZC09' or itab-kschl = 'ZC14' ) ).
               move: itab-kwert to itab-cmex_mark.
               modify itab.
            elseif itab-kschl = 'ZC03' or itab-kschl = 'ZC04'.
               move: itab-kwert to itab-cm_tax.
               modify itab.
            endif.
       endloop.
endif.
call function 'REUSE_ALV_LIST_DISPLAY'
  exporting
    it_fieldcat  =  t_list
   it_sort      =  sort_t_field
  tables
    t_outtab     = itab.
endform.
Thanks,
nips
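
Hi nips,
COLLECT can do the summation for you, but only when the work area you collect contains nothing except the fields you want to group by plus the numeric amount: COLLECT adds all numeric fields of the work area to the existing line whose character-type fields match, and appends a new line otherwise. Note also that the field catalog flag DO_SUM only produces a total line in the ALV output; it does not change the values in your internal table. Here is a minimal sketch with a reduced summary structure (T_SUM and SUM_TAB are made-up names, adjust them to your program):

types: begin of t_sum,
         vkgrp like knvv-vkgrp,            " character-type fields form the grouping key
         fkart like vbrk-fkart,
         kschl like konv-kschl,
         sakn1 like konv-sakn1,
         kwert like konv-kwert,            " numeric field, accumulated by COLLECT
       end of t_sum.
data: sum_tab type standard table of t_sum,
      wa_sum  like line of sum_tab.

loop at doc into wa_doc.
  loop at con into wa_con.
    clear wa_sum.
    wa_sum-vkgrp = wa_doc-vkgrp.
    wa_sum-fkart = wa_doc-fkart.
    wa_sum-kschl = wa_con-kschl.
    wa_sum-sakn1 = wa_con-sakn1.
    wa_sum-kwert = wa_con-kwert.
    collect wa_sum into sum_tab.           " sums kwert into the line with the same key fields
  endloop.
endloop.

A few more hints: in your nested loops every KONV record is combined with every billing document, because T_CON does not carry KNUMV; consider adding KNUMV to T_CON and looping with LOOP AT con INTO wa_con WHERE knumv = wa_doc-knumv. Remember to CLEAR list before you fill each field catalog entry, otherwise DO_SUM, JUST and OUTPUTLEN stick to all following columns. And to keep the memory footprint down, keep processing the data in smaller portions (for example month by month, as you already do) and collect each portion into the summary table before reading the next one.
Regards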

Similar Messages

  • Differentiate between mapping and optimization.

    Hi
    tell me something about this.
    Differentiate between mapping and optimization.
    please
    urgent. imran

    user571615 wrote:
    Hi
    tell me something about this.
    Differentiate between mapping and optimization.
    please
    urgent. imran
    This is a forum of volunteers. There is no urgent here. For urgent, buy yourself a support contract and open an SR on MetaLink.

  • Code Generation and Optimization

    hi,
    we have been looking into a problem with our app, running in a JRockit JVM, pausing, sometimes for up to a second or so at a time.
    We enabled the verbose logging of codegen and opt.
    And we see entries like:
    [Fri Sep  3 09:51:38 2004][28810][opt    ] #114 0x200 o com/abco/util/Checker.execute(Lcom/abco/util/;)V
    [Fri Sep  3 09:51:39 2004][28810][opt    ] #114 0x200 o @0x243c4000-0x243c7740 1186.23 ms (14784.88 ms)
    So the optimization took 1186 ms.
    Does this optimization happen in the main thread?
    I.e., is the above message an indication that the app had to stop for 1186ms while the optimization occurred?
    Any help on this would be greatly appreciated!
    Also, does anyone have any more pointers to info on the code generation and optimization in JRockit?
    I have only managed to find the following:
    http://edocs.beasys.com/wljrockit/docs142/intro/understa.html#1015273
    thanks,
    JN

    Hi,
    The optimization is done in its own thread and should not cause pauses in your application.
    The probable cause for long pause times is garbage collection. Enable verbose output for gc (-Xverbose:memory), or use the JRockit Management Console to monitor your application. Try using the generational concurrent gc, -Xgc:gencon, or the gc strategy -Xgcprio:pausetime. Read more on: http://edocs.beasys.com/wljrockit/docs142/userguide/memman.html
    If you allocate a lot of small, short-lived objects, you might want to configure a nursery where small objects are allocated. When the nursery is full, only that small part of the heap is garbage collected, at a smaller cost than scanning the whole heap.
    If you need help tuning your application, you could make a JRA recording and send to us: http://e-docs.bea.com/wljrockit/docs142/userguide/jra.html.
    Good luck,
    Cecilia
    BEA WebLogic JRockit

  • Relaunch and optimize taking forever - can I get some realtime help?

    I have a large, fast system running Vista 64 with 8GB RAM. Suddenly Outlook started running really slow, so I ran relaunch and optimize. Optimization has been running for more than half an hour using 95-100% of memory; however, only 3 to 5% of CPU is ever used.
    Is this normal?
    What will happen to my catalog if I reset the system?
    As soon as I started the optimization I realized that I did not back up the catalog before I started it.
    Jim Groan

    The optimization finished - it took almost an hour and used virtually all memory during that time. Since there was nothing else going on in the system, I still wonder whether this should be considered normal behavior.
    Jim

  • SQL Tuning and OPTIMIZER - Execution Time with  " AND col .."

    Hi all,
    I have a question about SQL tuning and the OPTIMIZER.
    There are three samples with EXPLAIN PLAN and execution time.
    This "tw_pkg.getMaxAktion" is a PLSQL Package.
    1.) Execution Time : 0.25 Second
    2.) Execution Time : 0.59 Second
    3.) Execution Time : 1.11 Second
    The only difference is some additional "AND col <> .."
    Why does the execution time grow so much?
    Many Thanks,
    Thomas
    ----[First example]---
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as dbadmin2
    SQL>
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900 ;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |   220 |   880 |     5  (40)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |   220 |   880 |     5  (40)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900)
    13 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 0.25 seconds
    ----[/First]---
    ----[Second example]---
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as dbadmin2
    SQL>
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900
      6    AND max_aktion.max_aktion_id <> 692;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |    11 |    44 |     6  (50)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |    11 |    44 |     6  (50)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692)
    14 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 0.59 seconds
    ----[/Second]---
    ----[Third example]---
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900
      6    AND max_aktion.max_aktion_id <> 692
      7    AND max_aktion.max_aktion_id <> 392;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |     1 |     4 |     6  (50)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |     1 |     4 |     6  (50)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>392)
    15 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 1.11 seconds
    ----[/Third]---
    Edited by: thomas_w on Jul 9, 2010 11:35 AM
    Edited by: thomas_w on Jul 12, 2010 8:29 AM

    Hi,
    this is likely because SQL Developer fetches and displays only a limited number of rows from query results.
    This number is a parameter called 'sql array fetch size'; you can find it in SQL Developer preferences under the Tools/Preferences/Database/Advanced tab, and its default value is 50 rows.
    The query scans the table from the beginning and continues scanning until the first 50 rows are selected.
    If the query conditions are more selective, then more table rows (or index entries) must be scanned to fetch the first 50 results and the execution time grows.
    This effect is usually unnoticeable when the query uses simple and fast built-in comparison operators (like = and <>) or Oracle built-in functions, but your query uses a PL/SQL function that is much slower than built-in functions/operators.
    Try to change this parameter to 1000 and most likely you will see that execution time of all 3 queries will be similar.
    Look at this simple test to figure out how it works:
    CREATE TABLE studie AS
    SELECT row_number() OVER (ORDER BY object_id) studie_id,  o.*
    FROM (
      SELECT * FROM all_objects
      CROSS JOIN
      (SELECT 1 FROM dual CONNECT BY LEVEL <= 100)
    ) o;
    CREATE INDEX studie_ix ON studie(object_name, studie_id);
    ANALYZE TABLE studie COMPUTE STATISTICS;
    CREATE OR REPLACE FUNCTION very_slow_function(action IN NUMBER)
    RETURN NUMBER
    IS
    BEGIN
      RETURN action;
    END;
    /
    'SQL array fetch size' parameter in SQL Developer has been set to 50 (default). We will run 3 different queries on the test table.
    Query 1:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      1.22       1.29          0       1310          0          50
    total        3      1.22       1.29          0       1310          0          50
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         50  INDEX FAST FULL SCAN STUDIE_IX (cr=1310 pr=0 pw=0 time=355838 us cost=5536 size=827075 card=165415)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          50   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 2:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
          AND max_aktion.max_aktion_id > 800
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      8.40       8.62          0       9351          0          50
    total        3      8.40       8.64          0       9351          0          50
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         50  INDEX FAST FULL SCAN STUDIE_IX (cr=9351 pr=0 pw=0 time=16988202 us cost=5552 size=41355 card=8271)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          50   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 3:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id = 600
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.72      19.16          0      19315          0           1
    total        3     18.73      19.16          0      19315          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
          1  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)Query 1 - 1,29 sec, 50 rows fetched, 1310 index entries scanned to find these 50 rows.
    Query 2 - 8,64 sec, 50 rows fetched, 9351 index entries scanned to find these 50 rows.
    Query 3 - 19,16 sec, only 1 row fetched, 19315 index entries scanned (full index).
    Now 'SQL array fetch size' parameter in SQLDeveloper has been set to 1000.
    Query 1:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.35      18.46          0      19315          0         899
    total        3     18.35      18.46          0      19315          0         899
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
        899  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=20571272 us cost=5536 size=827075 card=165415)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         899   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 2:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
          AND max_aktion.max_aktion_id > 800
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.79      18.86          0      19315          0          99
    total        3     18.79      18.86          0      19315          0          99
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         99  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=32805696 us cost=5552 size=41355 card=8271)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          99   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 3:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id = 600
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.69      18.84          0      19315          0           1
    total        3     18.69      18.84          0      19315          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
          1  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
           1   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    And now:
    Query 1 - 18.46 sec, 899 rows fetched, 19315 index entries scanned.
    Query 2 - 18.86 sec, 99 rows fetched, 19315 index entries scanned.
    Query 3 - 18.84 sec, 1 row fetched, 19315 index entries scanned.

  • Analyze and optimization

    Hello everyone,
    I have been on this subject for 3 weeks now and I need help.
    I am a trainee in a company where I have to analyze and optimize their GPOs, as simple as that, so I learned in detail how this tool works and other useful things about Active Directory.
    I went through their 60 GPOs (some with up to 600 settings...) and their thousands of parameters, which is essential for me, and during my searches I found many, many tools to detect parameter conflicts or duplicated settings, but after all my tries I am still not satisfied with what I found.
    I used a trial version of GPOAdmin, the GPO Reporting Pack from SDM, probably all the Microsoft tools, ActiveAdministrator etc. I mean, all these tools are very powerful and offer many features, but I just need something that will find and tell me where all the conflicts in my domain are, so that I can correct those settings, have a fully optimized domain, and users won't complain anymore because they'll have a faster logon.
    Maybe I don't use the products as I should, or maybe such a tool doesn't even exist, but it seems to take very long to analyze everything by myself, write down every parameter on each object it applies to, and check whether there is a conflict or another GPO for that setting.
    Maybe PowerShell can help me with this, but I don't know how to use it for that.
    So here I am; if you have any idea about the best practice, or if someone has had to do the same job as I have, tell me, I'll be very happy to receive your information.
    Thanks and sorry for my English.

    > I mean that if there are 10 gpo for the domain and 10 others on children
    > UO, some parameters will be overwritten (Conflict) or the same
    > parameters will be set 5 times (Duplication).
    Yes, that's true. But setting a simple registry key takes so little time that Windows cannot even log it in the gpsvc.log file. This is from a VM running on a desktop system concurrently with 4 other VMs:
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 1 =>
    Microsoft.CredentialManager  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 2 => Microsoft.GetPrograms
     [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 3 => Microsoft.HomeGroup  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 4 =>
    Microsoft.iSCSIInitiator  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 5 =>
    Microsoft.ParentalControls  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 6 =>
    Microsoft.PeopleNearMe  [OK]
    GPSVC(478.d68) 11:48:19:813 SetRegistryValue: 7 =>
    Microsoft.UserAccounts  [OK]
    GPSVC(478.d68) 11:48:19:829 SetRegistryValue: 8 =>
    Microsoft.WindowsAnytimeUpgrade  [OK]
    And even here it takes only about 1 ms on average - on a real system, this is about 50 times faster.
    Martin
    How about reading a GOOD book about GPOs?
    NO THEY ARE NOT EVIL, if you know what you are doing:
    Good or bad GPOs?
    And if IT bothers me - coke bottle design refreshment :))

  • Problem Modify and optimize an application

    We have problem using modify application and optimize application on a specific application.
    The application name is GPFormat.
    When we do a modify application and choose "Reassign SQL Index" we get the following error message:
    Error message:: Cannot drop the index 'dbo.tblFACTGPFormat.IX_tblFACTGPFormat', because it does not exist or you do not have permission.
    And if we do an optimize application with "Full Optimize" and "Compress database" we get the following error message:
    Error message:There is already an object named 'CONSTTBLFACTGPFORMAT' in the database.
    Could not create constraint. See previous errors.
    We are using BPC 5.1 SP5 and SQL Server 2005.

    It seems a previous run of optimize with compress failed.
    So you have to rename the CONSTTBLFACTGPFORMAT table and make sure that the installation user of SAP BPC has the correct access to the table tblFactGPFormat.
    Are you using custom indexes for this table?
    I suggest dropping the existing clustered index for table tblFactGPFormat and then running another optimize with compress. This should fix all your problems.
    Regards
    Sorin Radulescu

  • [svn:osmf:] 12659: A few code cleanup and optimization tasks for the Manifest.

    Revision: 12659
    Author:   [email protected]
    Date:     2009-12-08 10:56:56 -0800 (Tue, 08 Dec 2009)
    Log Message:
    A few code cleanup and optimization tasks for the Manifest.
    Modified Paths:
        osmf/trunk/framework/MediaFramework/org/osmf/net/F4MLoader.as
        osmf/trunk/framework/MediaFramework/org/osmf/net/ManifestParser.as

    Many thanks for the fast reply.
    I've got a follow-up question.
    What will happen if I modify the reconnect code in the OSMF NetLoader class as recommended and then load multiple third-party OSMF plugins, which may include the original OSMF version of the NetLoader class?
    Which one will be used at runtime?
    Thanks in advance!

  • New site about J2ME game programming and optimization

    Hello!
    I just wanted to tell you about a new J2ME development site
    SupremeJ2ME found at
    http://supremej2me.bambalam.se
    It has a lot of useful guides and tips about mobile J2ME game development and optimization, a forum and information about the best J2ME tools.
    Check it out!
    cheers,
    Cranky

    J2ME Polish has licensing costs associated with it:
    http://www.j2mepolish.org/licenses.html
    As for Canvas VS GameCanvas:
    GameCanvas is MIDP 2.0 and up, so depending on which devices your app supports you might want to stick with Canvas. GameCanvas eases graphics flushing, but double buffering can be implemented on Canvas like so:
    Image offscreen = isDoubleBuffered () ? null :
               Image.createImage (getWidth (), getHeight ());
    See http://www.developer.com/java/j2me/article.php/10934_1561591_8
    section: Avoiding Flickering

  • [svn:fx-trunk] 5028: IGraphicElement interface clean-up and optimizations.

    Revision: 5028
    Author: [email protected]
    Date: 2009-02-20 16:02:17 -0800 (Fri, 20 Feb 2009)
    Log Message:
    IGraphicElement interface clean-up and optimizations.
    Animating a GraphicElement that doesn't share the Group's DO should now be faster and smoother, since redrawing it won't redraw the Group anymore.
    1. Group doesn't always clear the first sequence of display objects now
    2. Moved the shared DO logic almost entirely into Group
    3. More granular invalidation for GraphicElements
    QE Notes: Make sure we have test that count the number of display objects for a given set of graphic elements and a group
    Doc Notes:
    Bugs: None
    Reviewer: Glenn, Ryan, Jason
    tests: None
    Modified Paths:
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/components/Group.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/components/baseClasses/GroupBase.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/core/InvalidatingSprite.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/BitmapGraphic.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/IGraphicElement.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/StrokedElement.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/baseClasses/GraphicElement.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/baseClasses/TextGraphicElement.as
    flex/sdk/trunk/frameworks/projects/framework/src/mx/core/IVisualElement.as
    Added Paths:
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/ISharedDisplayObject.as
    flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/baseClasses/ISharedGraphicsDisplayObject.as

  • Performance analysis and optimization tools

    Hello, I am looking for some tools for performance analysis and optimization for Oracle. So far I have looked at Spotlight, Ignite and Embarcadero DB Optimizer. Can you please point out some links or other resources for comparing such tools?
    What tools do you use?
    Thanks,

    For performance analysis you can use AWR and ASH.
    -- How to analyze AWR/statpack
    http://jonathanlewis.wordpress.com/statspack-examples/
    -- how to take AWR and ASH report
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/autostat.htm#PFGRF02601
    http://www.oracle.com/technology/pub/articles/10gdba/week6_10gdba.html
    Regards
    Asif Kabir

  • Stabilization and Optimize Video

    If the stabilization box is checked when at 100% is there any stabilization taking place? It appears as though at 100%, unchecked or checked gives the same result - no stabilization.
    At 101% and above though a visible reduction in shaking can be noticed.
    If "optimize video" is then chosen after stabilization, iMovie alters the frames ie. applies a tone curve without user control (iMovie '09 ver 8.0.4 (807)).
    It appears both stabilization and "optimize video" cannot be reversed or converted back to the original file and both stabilized only and "optimize video" have converted the video.
    Questions for advanced users:
    1. At 100% checked is there any stabilization taking place?
    2. When optimized are you also finding tone curve shifts?
    3. Is there any way to convert back to the original file once converted after stabilization or "optimize video" have been applied?
    4. Is it best then to stabilize and optimize copies instead, so that original iPhoto videos are preserved in their original formats for future renditions?
    Any responses would be appreciated.

    I can only comment on your first question and hypothesize about your other questions...
    Stabilization works by zooming into the footage and applying translations that counteract the shakiness that you typically see. If you apply stabilization in some more advanced systems but don't zoom in at all, you can see a black border around your video that shakes. In iMovie you must zoom in at least 1% to get any stabilization. The more you zoom in, the more stable iMovie can make your clip, but you lose more of your video. You'll need to find an appropriate trade-off to make your movies look their best.
    Stabilization shouldn't have any effect on the clip on your drive. It should remain untouched. The same goes for optimization. I've seen that optimize video creates another clip in the Apple Intermediate Codec format in a folder called iMovie Speed Conversion. Look for those folders. If you delete them and go into iMovie it will use your original, unoptimized clip.

  • How does iCloud Photo Library and Optimize iPhone Storage Work?

    Even Apple Phone support can't answer these questions:
    How does iCloud Photo Library (Beta) from yesterday's 8.1 update with Optimize iPhone Storage turned on work? (I'm on an iPhone 6)
    It only stores your iPhone/iPad versions on your device, which is amazing. Why would you need anything else? It freed up around 8-10GB of space on my device which is amazing. But how does it work when you use it?
    When you email it seems to ask you, when sending what size you want to send like it's always done. It gives you the original 2.4MB version. This works like I'd imagine and when I received the test email the photo was correct. In this case I assume it hits iCloud and sends it from there, not your actual device. (Like it doesn't re-download it to email it). Correct?
    When posting to Facebook, does it do this same thing? Or is this something Facebook has to build into their iOS app in an update? Or does it just work? Correct?
    When posting to Instagram, does it grab the right version? Obviously the phone-optimized version should be fine, but when viewing on the desktop version of Instagram at instagram.com, will it show the right version?
    Sometimes I backup photos using Image Capture on the Mac. If I did that right now, would it only grab the iPhone versions? Obviously I can just test it but it'd be amazing if someone who knows these things could explain all of these use cases.
    Do you need to be on
    In closing it's more of how does this work? Does it work? Why wouldn't you want to use this feature if it saved space and made more sense? I guess I just would love some real clarity since not even the blogs are explaining how it works in every application.
    In closing part 2. It does seem to work really well, it's just those fringe cases. I just viewed all my photos on https://beta.icloud.com/#photos and they're all there, easy download of the high res, delete, it deletes from my camera roll on my phone, it's amazing, it works, it's magic. Just tell me what issues there are with storing only low res iPhone on my device and if there are none, game on!
    Thanks!

    santiagofrommiami wrote:
    And of those 4.3GB of backup, my Photo Library is 1.4GB.
    So my question is, how do I prevent my Photo Library from being backed up to the cloud, since I already have a backup of it everytime I connect to iPhoto and it copies all those photos to my mac?
    I don't know, sorry. It's supposed to work alongside Photos & iCloud, so that photos are stored only once. When it doesn't work you have a couple of options as I see it –
    Delete the old backups on iCloud to hopefully start over (I don't like deleting the only backup of anything).
    Forget about iCloud backups & use iTunes to backup instead, it might not be suitable if you have no computer etc.
    Buy more storage for iCloud.
    I don't use iCloud backup; it bemuses me with issues like this, and there is no easy way to resolve it when it fails.
    And one more question while we are at it... can I erase iPhoto from my computer now that I have the new Photos app?
    That is up to you. If you are happy that you do not need it and have a backup of it, go ahead and remove it. You can re-download it from the App Store if it is already in your Mac App Store account.

  • "iCloud Photo Sharing" and "Optimize iPhone Storage"

    Before updating to iOS 8, my wife and I shared the same iCloud account. It was convenient for us to use the same Photo Stream, so pictures of our kids were always gathered together. But iOS 8 has too many features that make sharing the same account troublesome, so we have finally split up (digitally...)
    Our question is: if we share albums, will our iPhones treat the pictures and videos in those albums the same way, when it comes to Optimize iPhone Storage, as those in the Camera Roll? We have 10000+ pics and videos, so being able to stream those from iCloud instead of local storage would be fantastic...
    If that's the case, is there a time or storage limit for how many items we can have shared? Apple hasn't explained those terms very well, understandably, since these features are still in beta. I would be very grateful if someone out there has more insight on this.

    I don't see a way to identify the new photos added to a smart album since I last shared it. iTunes' smart playlists allow you to do this easily, but I don't see a way to do it in iPhoto.
    It is not well supported. You would have to mark the photos you already published in some way, for example add them to a regular album or add a keyword, after you published the photos.
    When you add new photos to the smart album, you can add a rule "and has not keyword 'publish'" to find only the new photos that need publishing.

  • Performance and optimization from R - SQL in Oracle

    Hello,
    I'm assessing the adoption of Oracle R. I've glanced over some of the Learning R Series files and forum threads but didn't find the answers I need.
    My questions are:
    1. Given any R source code, how do I view the resulting SQL (or maybe PL/SQL) script?
    2. Would "my" optimization of db objects (e.g. creating an index, ALTERing a procedure) disrupt functionality?
    3. Are Vectorize, tapply, lapply, etc. fully implemented in Oracle R? I'm guessing the answer will be yes.
    4. Is every line in R submitted to the db, or do any portions run purely in R? This relates to scenarios (if any) where an operation might be more efficient in SQL than in R or vice versa.
    Thank you.

    1. Given any R source code, how do I view the resulting SQL (or maybe PL/SQL) script?
    We do not expose this.
    2. Would "my" optimization of db objects (e.g. creating an index, ALTERing a procedure) disrupt functionality?
    It will be considered by the query optimizer when building a plan.
    3. Are Vectorize, tapply, lapply, etc. fully implemented in Oracle R? I'm guessing the answer will be yes.
    There is support for some predefined functions. To work with custom functions we have a family of ore.*Apply functions.
    4. Is every line in R submitted to the db, or do any portions run purely in R? This relates to scenarios (if any) where an operation might be more efficient in SQL than in R or vice versa.
    It is a mixed mode. ORE makes the decision. But in general not every line is executed right away. The execution is deferred to the point when the results are requested on the client side.
    Denis
