Regarding Performance of the Index

Hi all,
Generally an index should increase the performance of a search, but I saw the reverse of that when accessing data from a table with and without an index.
The query ran faster without the index. Under what circumstances does the database behave like this? Can anybody explain?
SQL> set autotrace on explain
SQL> select * from emp where empno = 7934;
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10
Elapsed: 00:00:00.18
Execution Plan
Plan hash value: 2949544139
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |        |     1 |    37 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    37 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | PK_EMP |     1 |       |     0   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   2 - access("EMPNO"=7934)
SQL> alter table emp disable primary key;
Table altered.
Elapsed: 00:00:01.53
SQL> select * from emp where empno = 7934;
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10
Elapsed: 00:00:00.15
Execution Plan
Plan hash value: 3956160932
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |     1 |    37 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    37 |     3   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   1 - filter("EMPNO"=7934)Regards,
Sri Ram.

SQL> alter table emp disable primary key;
Table altered.
Elapsed: 00:00:00.15
SQL> select * from emp where empno =7566;
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
      7566 JONES      MANAGER         7839 02-APR-81       2975                    40
Elapsed: 00:00:00.17
Execution Plan
Plan hash value: 3956160932
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |     1 |    37 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    37 |     3   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   1 - filter("EMPNO"=7566)
SQL> alter table emp enable primary key;
Table altered.
Elapsed: 00:00:00.14
SQL> select * from emp where empno =7566;
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
      7566 JONES      MANAGER         7839 02-APR-81       2975                    40
Elapsed: 00:00:00.17
Execution Plan
Plan hash value: 2949544139
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |        |     1 |    37 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    37 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | PK_EMP |     1 |       |     0   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   2 - access("EMPNO"=7566)Edited by: BluShadow on 20-Jun-2011 10:34
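A note on measuring this: with a 14-row demo table like EMP, elapsed times around 0.15-0.18 seconds are dominated by client round-trips and parsing, so wall-clock time says little about the access path. A less noisy comparison (a sketch using standard SQL*Plus autotrace; the values it reports will vary) is to look at logical I/O instead:
SQL> set autotrace traceonly statistics
SQL> select * from emp where empno = 7934;
With the unique index this should show only a couple of consistent gets (an index block plus one table block), while the full scan must read every block below the high-water mark. On a table this small both are tiny, which is why the full scan can appear just as fast or faster; the difference only becomes visible as the table grows.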

Similar Messages

  • Regarding performance of report

    hi SDNs,
    my report runs for more than 30 minutes and then short-dumps,
    so I am trying to improve its performance by considering everything:
    using indexes in SELECT statements,
    using BINARY SEARCH when reading internal tables (READ TABLE),
    and sorting before reading.
    Still, it shows poor performance.
    What else should I consider in order to increase the performance?
    The report has to extract a large amount of data (at least 50,000 records).
    Shall I use HASHED or SORTED tables? How do I use these?
    I am using STANDARD tables now.
    Please help me.
    Thanking you,
    ramu

    hi,
    please go through this; it will definitely help.
    Performance tuning for Data Selection Statement 
    For all entries
    The FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
    The plus
    Large amount of data 
    Mixing processing and reading of data 
    Fast internal reprocessing of data 
    Fast 
    The Minus
    Difficult to program/understand 
    Memory could be critical (use FREE or PACKAGE size) 
    Some steps that might make FOR ALL ENTRIES more efficient:
    Removing duplicates from the driver table 
    Sorting the driver table 
    If possible, convert the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement:
    FOR ALL ENTRIES IN i_tab
      WHERE mykey >= i_tab-low and
            mykey <= i_tab-high.
    Nested selects
    The plus:
    Small amount of data 
    Mixing processing and reading of data 
    Easy to code - and understand 
    The minus:
    Large amount of data 
    when mixed processing isn’t needed 
    Performance killer no. 1
    Select using JOINS
    The plus
    Very large amount of data 
    Similar to Nested selects - when the accesses are planned by the programmer 
    In some cases the fastest 
    Not so memory critical 
    The minus
    Very difficult to program/understand 
    Mixing processing and reading of data not possible 
    Use the selection criteria
    Not recommended:
    SELECT * FROM SBOOK.
      CHECK: SBOOK-CARRID = 'LH' AND
             SBOOK-CONNID = '0400'.
    ENDSELECT.
    Recommended:
    SELECT * FROM SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    ENDSELECT.
    Use the aggregate functions
    Not recommended:
    C4A = '000'.
    SELECT * FROM T100
      WHERE SPRSL = 'D' AND
            ARBGB = '00'.
      CHECK: T100-MSGNR > C4A.
      C4A = T100-MSGNR.
    ENDSELECT.
    Recommended:
    SELECT MAX( MSGNR ) FROM T100 INTO C4A
      WHERE SPRSL = 'D' AND
            ARBGB = '00'.
    Select with view
    Not recommended:
    SELECT * FROM DD01L
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T
        WHERE   DOMNAME    = DD01L-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    Recommended:
    SELECT * FROM DD01V
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    Select with index support
    Not recommended:
    SELECT * FROM T100
      WHERE     ARBGB = '00'
            AND MSGNR = '999'.
    ENDSELECT.
    Recommended:
    SELECT * FROM T002.
      SELECT * FROM T100
        WHERE     SPRSL = T002-SPRAS
              AND ARBGB = '00'
              AND MSGNR = '999'.
      ENDSELECT.
    ENDSELECT.
    Select ... Into table
    Not recommended:
    REFRESH X006.
    SELECT * FROM T006 INTO X006.
      APPEND X006.
    ENDSELECT.
    Recommended:
    SELECT * FROM T006 INTO TABLE X006.
    Select with selection list
    Not recommended:
    SELECT * FROM DD01L
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
    ENDSELECT.
    Recommended:
    SELECT DOMNAME FROM DD01L
      INTO DD01L-DOMNAME
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
    ENDSELECT.
    Key access to multiple lines
    Not recommended:
    LOOP AT TAB.
      CHECK TAB-K = KVAL.
    ENDLOOP.
    Recommended:
    LOOP AT TAB WHERE K = KVAL.
    ENDLOOP.
    Copying internal tables
    Not recommended:
    REFRESH TAB_DEST.
    LOOP AT TAB_SRC INTO TAB_DEST.
      APPEND TAB_DEST.
    ENDLOOP.
    Recommended:
    TAB_DEST[] = TAB_SRC[].
    Modifying a set of lines
    Not recommended:
    LOOP AT TAB.
      IF TAB-FLAG IS INITIAL.
        TAB-FLAG = 'X'.
      ENDIF.
      MODIFY TAB.
    ENDLOOP.
    Recommended:
    TAB-FLAG = 'X'.
    MODIFY TAB TRANSPORTING FLAG
               WHERE FLAG IS INITIAL.
    Deleting a sequence of lines
    Not recommended:
    DO 101 TIMES.
      DELETE TAB_DEST INDEX 450.
    ENDDO.
    Recommended:
    DELETE TAB_DEST FROM 450 TO 550.
    Linear search vs. binary search
    Not recommended:
    READ TABLE TAB WITH KEY K = 'X'.
    Recommended (the table must be sorted by K):
    READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
    Comparison of internal tables
    Not recommended:
    DESCRIBE TABLE: TAB1 LINES L1,
                    TAB2 LINES L2.
    IF L1 <> L2.
      TAB_DIFFERENT = 'X'.
    ELSE.
      TAB_DIFFERENT = SPACE.
      LOOP AT TAB1.
        READ TABLE TAB2 INDEX SY-TABIX.
        IF TAB1 <> TAB2.
          TAB_DIFFERENT = 'X'. EXIT.
        ENDIF.
      ENDLOOP.
    ENDIF.
    IF TAB_DIFFERENT = SPACE.
    ENDIF.
    Recommended:
    IF TAB1[] = TAB2[].
    ENDIF.
    Modify selected components
    Not recommended:
    LOOP AT TAB.
      TAB-DATE = SY-DATUM.
      MODIFY TAB.
    ENDLOOP.
    Recommended:
    WA-DATE = SY-DATUM.
    LOOP AT TAB.
      MODIFY TAB FROM WA TRANSPORTING DATE.
    ENDLOOP.
    Appending two internal tables
    Not recommended:
    LOOP AT TAB_SRC.
      APPEND TAB_SRC TO TAB_DEST.
    ENDLOOP.
    Recommended:
    APPEND LINES OF TAB_SRC TO TAB_DEST.
    Deleting a set of lines
    Not recommended:
    LOOP AT TAB_DEST WHERE K = KVAL.
      DELETE TAB_DEST.
    ENDLOOP.
    Recommended:
    DELETE TAB_DEST WHERE K = KVAL.
    Tools available in SAP to pin-point a performance problem
    The runtime analysis (SE30)
    SQL Trace (ST05)
    Tips and Tricks tool
    The performance database
    Optimizing the load of the database
    Using table buffering
    Using buffered tables improves performance considerably. Note that in some cases a statement cannot be used against a buffered table; such statements bypass the buffer. These statements are:
    SELECT DISTINCT 
    ORDER BY / GROUP BY / HAVING clause 
    Any WHERE clause that contains a subquery or IS NULL expression 
    JOINs 
    SELECT ... FOR UPDATE 
    If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition to the SELECT clause.
    Use the ABAP SORT Clause Instead of ORDER BY
    The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
    If you are not sorting by the primary key (i.e. using the ORDER BY PRIMARY KEY statement) but by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note however that for very large result sets this might not be feasible, and you would want to let the database server sort it.
    Avoid the SELECT DISTINCT Statement
    As with the ORDER BY clause, it can be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows.
    http://www.erpgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    For any clarification, please mail me.
    Please reward points if this helped you.
    regards,
    anversha.
    [email protected]

  • Regarding Performance Tuning of Mapping & Process Flow

    Hi,
    I have around 60-70 GB of data in the target database.
    I need to improve the performance of a mapping and process flow.
    I have used a lookup transformation in the mapping.
    Please give me some tips for improving the performance of the process.
    Thanks,

    Please go through a Performance Tuning Book for Oracle 10gR2.
    Most importantly, remember that in Oracle 10g the performance of your mappings can be increased manifold by following these steps:
    1. Do not design a mapping wherein you load a table and then select from that table to load another table, and so on. This is a bad design.
    2. Keep mappings as simple as possible. In other words, if a mapping is complicated in terms of joins or other operators, then split the mapping into more than one part.
    3. Also check that all your source tables are analyzed using DBMS_STATS. Ensuring this one single step can make your work very easy.
    4. Put indexes where you find your predicate has a very high selectivity. Also keep in mind the column ordering of the index.
    5. Use set-based operation, since it is always a good idea to achieve the result by running one single query rather than a loop and multiple inserts.
    6. Use the APPEND PARALLEL hint while loading the target tables (see the sketch after this list). This will not generate any redo and saves time.
    7. Please recheck when using some performance-intensive operators like UNION, DISTINCT and AGGREGATION.
    8. When using a sequence operator to load a large table, check that the sequence is cached with some values.
    9. When loading large data volumes, HASH JOINs are the most appropriate more often than not, so you can use USE_HASH as the hint for selecting from large tables.
    10. Filter out as much unrequired data from a table as soon as possible in the mapping, before doing multiple joins.
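    A minimal sketch of point 6 (the hint syntax is standard Oracle; the table and column names here are hypothetical):
    INSERT /*+ APPEND PARALLEL(tgt, 4) */ INTO target_table tgt
    SELECT /*+ PARALLEL(src, 4) */ *
      FROM source_table src;
    COMMIT;
    -- APPEND requests a direct-path load (blocks are written above the
    -- high-water mark); redo is minimized only for NOLOGGING targets, and
    -- the table is locked against other DML until the COMMIT.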
    I am sure there are many more ... the above is just a random list that I could remember now. Please go through Oracle Performance Tuning Guide, Tom Kyte's Oracle Expert One on One. Knowledge of performance tuning will grow with experience. I am also learning each day !!!
    Regards
    AP

  • Performance of the query is poor

    Hi All,
    This is Prasad. I have a problem with a query: it is taking too long to retrieve data from the cube. The query uses a variable of type customer exit. The cube is not compressed. I think the issue with the F fact table is due to the high number of table partitions (requests) it has to select from. If I compress the cube, will the performance of the query increase or not? Is there any alternative for improving the performance of the query? Somebody suggested a "result set query"; I am not aware of this technique, so if you know it, please let me know.
    Thanks in advance

    Hi Prasad,
    Query performance will depend on many factors like
    1. Aggregates
    2. Compression of requests
    3. Query read mode setting
    4. Cache memory setting
    5. By Creating BI Accelerator Indexes on Infocubes
    6. Indexes
    Proposing aggregates to improve query performance:
    First execute in RSRT the query on which you want to build aggregates, and check how long it takes to execute and whether it needs an aggregate at all. To get this information, go to SE11, enter the table name RSDDSTAT_DM in BI 7.0 (or RSDDSTAT in BW 3.x), choose Display -> Contents, enter today's date as the from/to date values, your user name, and the query name, then execute.
    You will get a list with fields like object name (report name), time read, InfoProvider name (MultiProvider), PartProvider name (cube), aggregate name, etc. If the time read is less than 100,000,000 (100 sec) it is acceptable. If the time read is more than 100 sec, it is recommended to create aggregates for that query to increase performance. Keep this time read in mind.
    Again go to RSRT, give the query name, and choose Execute + Debug. In the popup, select the checkbox "display aggregates found" and continue. If any aggregates exist for that query they will be displayed first; if you press the continue button, it will display which fields are coming from which cube. Copy this list of objects on which an aggregate can be created into a text file.
    Then select that particular cube in RSA1 -> context menu -> Maintain Aggregates -> Create by yourself -> click the create aggregate button on the top left -> give a description for the aggregate -> continue -> take the first object from the list, click the find button in the aggregate creation screen, give the object name and search, then drag and drop that object into the aggregate on the right side (drag and drop all the fields like this into the aggregate).
    Activate the aggregate; it will take some time. Once the activation finishes, make sure the aggregate is in switched-on mode.
    Execute the query from RSRT again, find the time read, and compare it with the first time read. If it is less than the first time read, you can propose this aggregate to increase the performance of the query.
    I hope this helps. Go through the links below to understand aggregates more clearly.
    http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    Follow this thread for creation of BIA Indexes:
    Re: BIA Creation
    Hope this helps...
    Regards,
    Ramki.

  • Best practices regarding generation of the EOImpl and VORowImpl java files

    Greetings,
    I was wondering what is the best practice/suggested approach regarding generation of the EOImpl and the VORowImpl java files for ADF Business Components.
    From the maintenance point of view, it makes sense to generate them only if you want to put some custom code in those e.g. validation or transient attribute.
    However, from a performance point of view, does it help to have these EOImpl and VORowImpl java files generated?
    Any thoughts on this will be a lot helpful.
    regards - rutwik

    Hi
    Sections:
    * 5.11.3.3 Attribute Indexes and InvokeAccessor Generated Code
    * 6.9.4.4 Attribute Indexes and InvokeAccessor Generated Code
    explain the details of the generated code for view object row classes and entity object row classes, respectively.
    As described in the guide, the key benefit of the generated code is not performance, but type-safe access and having a place to put custom code when needed.
    When you don't have a generated row class, then there is no typesafe method like getSal() to get the Sal attribute.
    Section 4.4.9 Generic Versus Strongly-Typed APIs gives a few more details on that and both 5.11.1.1 Generating Bind Variable Accessors and 5.11.1.2 Generating View Row Attribute Accessors give some concrete examples of the value proposition of the strongly-typed API's.
    Without the typed API's of course, you need to get an attribute value by calling:
    yourRow.getAttribute("SomeAttrName");
    or
    yourRow.getAttribute(3); // integer attribute index (zero-based) [see section 5.11.3.3 in the guide]
    Since it's rare that someone will like to use the numerical approach, then let's assume that people would go with the generic getAttribute("SomeAttrName") way.
    Internally, the framework will need to find the attribute index based on the attribute name before accessing the attribute. If you perform this attribute-name-to-attribute-index lookup a zillion times, then of course it would end up representing some overhead that you could avoid by either using the numerical index yourself, or generating the strongly-typed class. The generated code avoids the need to lookup the attribute by string name and also provides the compile-time checking (i.e. typesafe access) in your code.
    So it is the attribute-index access (which the generated code uses) that is ever-so-slightly faster than attribute access by string name; it is not the generation of the code itself that makes things faster. Performance shouldn't be the basis of the strategy decision, but if they see the value of typesafe access then it might be worth the extra files -- some perhaps containing no custom code -- being generated for them.
    Hope this helps
    rgds
    susan

  • Boost the Performance of the database

    I am getting user calls saying that the performance of the database is poor. I want to increase the performance of the database; can anyone list the checks and changes I have to make to increase it?
    I am using the topas command to find the top consuming processes in Unix; apart from this, what areas do I have to look at to boost performance?
    Help me in this regard.
    Vel

    there is no one area where you can pinpoint and say "this needs tuning". performance tuning needs to be addressed from all fronts, but you make one change at a time and see if it gives the desired improvement. the areas that you have to look at are:
    1. table design
    2. sql tuning, proper use of indexes
    3. sizing the tables and indexes
    4. setting up proper SGA parameters; if you have memory in the machine, make optimal use of it by allocating it to oracle (see the sketch below).
    5. use of procedures and functions.
    you may or may not get a call from the user, but if you feel that something could be improved by tuning, i guess it's the job of the DBA to do that and squeeze every bit of performance from the hardware and the software.
    check out oracle performance tuning docs for more detailed info.
    Mukundan.
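    For point 4, a quick way to see how the SGA is currently carved up (a sketch; v$sgainfo exists in Oracle 10g and later, and you need SELECT privilege on the v$ views):
    SELECT name, ROUND(bytes/1024/1024) AS mb
      FROM v$sgainfo
     ORDER BY bytes DESC;
    -- Shows buffer cache, shared pool, log buffer, etc. Compare against the
    -- advisories (v$db_cache_advice, v$shared_pool_advice) before resizing.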

  • What space will the index take?

    My table has 3 lakh (300,000) records.
    I have 5 GB of free space in my tablespace.
    I need to create an index on a column which is of DATE type.
    How much space will the index take?

    hi,
    I cannot say exactly how much space the index will take, but:
    1) It is always better to create an index after the table is populated with data. Since you are doing the same, it will boost performance.
    2) Choose the proper tablespace for the index: either the same tablespace your table is in, or a separate tablespace for indexes.
    Regards,
    Darshan
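    A rough estimate is also possible before creating the index (a sketch; DBMS_SPACE.CREATE_INDEX_COST is available in 10g and later, and the table/column names here are hypothetical):
    SET SERVEROUTPUT ON
    DECLARE
      l_used  NUMBER;  -- bytes of actual index data
      l_alloc NUMBER;  -- bytes the segment would allocate
    BEGIN
      DBMS_SPACE.CREATE_INDEX_COST(
        ddl         => 'CREATE INDEX my_idx ON my_table (my_date_col)',
        used_bytes  => l_used,
        alloc_bytes => l_alloc);
      DBMS_OUTPUT.PUT_LINE('used: '||l_used||'  allocated: '||l_alloc);
    END;
    /
    -- The estimate is based on the table's optimizer statistics, so gather
    -- them first. An index on 300,000 DATE values is only a few MB, so
    -- 5 GB of free space is ample.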

  • After purging the data... is it necessary to rebuild the index?

    Hi All,
    In our production DB, application-level incident purging has finished. (Meaning the first 3 years of data, which is not usable by the client, got purged; this was done for a performance issue.)
    Now, in this situation, is it necessary to rebuild the indexes?
    Before that FYI,
    oracle 9i(Enterprise Edition)
    hp-ux b11
    dbsize=30G
    Application:
    tool:HP-ov
    Please throw your input.
    Thanks in advance :-)
    Regards,
    DB.

    >
    In our production DB, application-level incident purging has finished. (Meaning the first 3 years of data, which is not usable by the client, got purged; this was done for a performance issue.)
    Now, in this situation, is it necessary to rebuild the indexes?
    >
    Please clarify.
    You said you did the purge for a 'performance issue' and now, after the purge, you still don't know whether your performance issue went away?
    If you no longer have whatever 'performance issue' you had (unknown to us, since you didn't post it), then why do you need to do anything?
    If you do still have a performance issue then you need to follow the same steps you should follow whenever you have a performance issue:
    1. validate that you REALLY have an issue
    2. determine the cause of the issue
    3. identify solutions to mitigate/eliminate the issue
    4. test one or two of the possible solutions
    5. implement your 'best' tested solution from step #4
    6. go back to step #1
    Based on what you posted it sounds like you implemented a 'purge' without really knowing that it was even necessary or was even causing a performance issue.
    And you make no mention at all of whether you recollected statistics after the purge was done.
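    A sketch of that last point (hypothetical owner/table names): after a large purge one would typically regather statistics so the optimizer sees the new row counts:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'APP_OWNER',
        tabname => 'INCIDENTS',
        cascade => TRUE);  -- also regathers stats on the table's indexes
    END;
    /
    -- On 9i, whether a rebuild actually helps depends on how the index is
    -- used; ANALYZE INDEX ... VALIDATE STRUCTURE followed by a look at
    -- INDEX_STATS can show how much deleted space an index is carrying.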

  • BRSPACE to compress all the indexes

    Dear expert,
    Currently no table and no index is compressed in our ECC system.
    So we want to first compress all the indexes, and later think about compressing the top 20 tables.
    I would like to run the following script:
    brspace -u / -c force -f idrebuild -o sapsr3 -s PSAPSR3 -i allsel -c cind -ic ora_proc -p 4
    But do you suggest I rather compress all the indexes via a brspace table reorg?
    Do you suggest I run the index compression directly on the productive system?
    Thank you!

    Hi Kate,
    You can perform index compression online on Production, provided you perform it during a low-load period, as it may impact performance.
    Refer to this SAP note for more details on the options available in brtools to perform index compression:
    1464156 - Support for index compression in BRSPACE 7.20
    An alternate option is available:
    You may perform table compression, which will automatically rebuild all the indexes of that table. Refer to SAP note
    1907700 - Different approach to compress tables
    Hope this helps.
    Regards,
    Deepak Kori
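    For reference, a plain-SQL sketch of what an online compressed index rebuild amounts to (hypothetical index name; brspace wraps this in its own handling, so treat it purely as an illustration):
    ALTER INDEX "SAPSR3"."SOME_INDEX" REBUILD ONLINE COMPRESS;
    -- COMPRESS (optionally COMPRESS n, with n the number of leading key
    -- columns to deduplicate) enables index key compression; ONLINE keeps
    -- the table available for DML during the rebuild.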

  • The plan doesn't use the index but the cost of INDEX FULL SCAN looks better

    Hi,
    Well, I'm sure I missed the boat... and if the question is pretty tricky, the answer is probably: "You're stupid, Greg!". Anyway, you'll probably be interested in trying to answer it, as I've spent some time on it without any result! I use Oracle XE on Windows...
    1) Below is my query and its plan. You'll find the full schema generation script at the end of this message. Look at the cost of this plan (468) and the cost of the same query when you force the use of the index (437; note the 116 in both plans is the Bytes column, not the cost). Why is this?
    select count(distinct col5)
      2    from demo
      3      where col1 between 1 and 50000
      4        and col2=col1
      5        and col3=col1
      6        and col4=col1;
    Execution Plan
    Plan hash value: 2072716547
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   116 |   468   (2)| 00:00:06 |
    |   1 |  SORT GROUP BY     |      |     1 |   116 |            |          |
    |*  2 |   TABLE ACCESS FULL| DEMO |     1 |   116 |   468   (2)| 00:00:06 |
    Predicate Information (identified by operation id):
       2 - filter("COL2"="COL1" AND "COL3"="COL1" AND "COL4"="COL1" AND
                  "COL1"<=50000 AND "COL2"<=50000 AND "COL3"<=50000 AND "COL4"<=50000 AND
                  "COL1">=1 AND "COL2">=1 AND "COL3">=1 AND "COL4">=1)2) When I force the use of an index (with a Hint), You'll see the cost of the plan is 116 which is definitly better than the TABLE ACCESS FULL (468) :
    SQL> l
      1  select /*+ index(demo demo_idx)*/ count(distinct col5)
      2    from demo
      3      where col1 between 1 and 50000
      4        and col2=col1
      5        and col3=col1
      6*       and col4=col1
    SQL> /
    Execution Plan
    Plan hash value: 189561699
    | Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |          |     1 |   116 |   437   (2)| 00:00:06 |
    |   1 |  SORT GROUP BY               |          |     1 |   116 |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| DEMO     |     1 |   116 |   437   (2)| 00:00:06 |
    |*  3 |    INDEX FULL SCAN           | DEMO_IDX |     1 |       |   436   (2)| 00:00:06 |
    Predicate Information (identified by operation id):
       3 - filter("COL2"="COL1" AND "COL3"="COL1" AND "COL4"="COL1" AND
                  "COL1"<=50000 AND "COL2"<=50000 AND "COL3"<=50000 AND "COL4"<=50000 AND
                  "COL1">=1 AND "COL2">=1 AND "COL3">=1 AND "COL4">=1)3) My question is why is plan1 used while plan2 should be considered better by the optimizer regarding the cost (to make the case even more complex, plan1 is actually more efficient but this is out of the scope of my question. I know that and I know why !).
    You'll find a script to generate the structures and data below. I can send you the 10053 traces if you what to go furthermore. Take care the index is a REVERSE index (Don't know if query rewrite should be enabled in order to take advantage of this type of index but it is set to "true" (and "trusted") :
    drop table demo;
    create table demo (col1 number not null,
        col2 number,
        col3 number,
        col4 number,
        col5 varchar2(500));
    begin
      for i in 1..100000 loop
        insert into demo values (i,i,i,i,'This column is used to raise the High Water Mark and '||
                                 ' the cost of an TABLE ACCESS FULL operation');
      end loop;
    end;
    /
    commit;
    create index demo_idx on demo(col1,col2,col3,col4) reverse;
    exec dbms_stats.gather_table_stats(USER, 'DEMO', cascade=>true, -
      method_opt=>'FOR ALL COLUMNS SIZE 254', no_invalidate=>false)
    Any comments are welcome! Best Regards,
    Gregory

    I suspect this has something to do with db_file_multiblock_read_count.
    After running the creation statements you provided, I got the following results:
    SQL> show parameter multiblock
    NAME                                 TYPE        VALUE
    db_file_multiblock_read_count        integer     16
    SQL> set autotrace on
    SQL> select count(distinct col5)
      2   from demo
      3   where col1 between 1 and 50000
      4   and col2=col1
      5   and col3=col1
      6   and col4=col1
      7  /
    COUNT(DISTINCTCOL5)
                      1
    Execution Plan
    Plan hash value: 2072716547
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   116 |   375   (1)| 00:00:05 |
    |   1 |  SORT GROUP BY     |      |     1 |   116 |            |          |
    |*  2 |   TABLE ACCESS FULL| DEMO |     1 |   116 |   375   (1)| 00:00:05 |
    Predicate Information (identified by operation id):
       2 - filter("COL2"="COL1" AND "COL3"="COL1" AND "COL4"="COL1" AND
                  "COL1"<=50000 AND "COL2"<=50000 AND "COL3"<=50000 AND "COL4"<=5000
    0 AND
                  "COL1">=1 AND "COL2">=1 AND "COL3">=1 AND "COL4">=1)
    Statistics
            196  recursive calls
              0  db block gets
           1734  consistent gets
            850  physical reads
              0  redo size
            422  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              7  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> select /*+ index(demo demo_idx)*/ count(distinct col5)
      2   from demo
      3   where col1 between 1 and 50000
      4   and col2=col1
      5   and col3=col1
      6   and col4=col1
      7  /
    COUNT(DISTINCTCOL5)
                      1
    Execution Plan
    Plan hash value: 189561699
    | Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |          |     1 |   116 |   431   (1)| 00:00:06 |
    |   1 |  SORT GROUP BY               |          |     1 |   116 |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| DEMO     |     1 |   116 |   431   (1)| 00:00:06 |
    |*  3 |    INDEX FULL SCAN           | DEMO_IDX |     1 |       |   430   (1)| 00:00:06 |
    Predicate Information (identified by operation id):
       3 - filter("COL2"="COL1" AND "COL3"="COL1" AND "COL4"="COL1" AND
                  "COL1"<=50000 AND "COL2"<=50000 AND "COL3"<=50000 AND "COL4"<=5000
    0 AND
                  "COL1">=1 AND "COL2">=1 AND "COL3">=1 AND "COL4">=1)
    Statistics
              1  recursive calls
              0  db block gets
          50426  consistent gets
            428  physical reads
              0  redo size
            422  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Now I modify db_file_multiblock_read_count: the full scan cost goes up, although Oracle still chooses the full scan over index access by default.
    SQL> alter session set db_file_multiblock_read_count = 8;
    Session altered.
    SQL> select count(distinct col5)
      2   from demo
      3   where col1 between 1 and 50000
      4   and col2=col1
      5   and col3=col1
      6   and col4=col1
      7  /
    COUNT(DISTINCTCOL5)
                      1
    Execution Plan
    Plan hash value: 2072716547
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   116 |   463   (1)| 00:00:06 |
    |   1 |  SORT GROUP BY     |      |     1 |   116 |            |          |
    |*  2 |   TABLE ACCESS FULL| DEMO |     1 |   116 |   463   (1)| 00:00:06 |
    Predicate Information (identified by operation id):
       2 - filter("COL2"="COL1" AND "COL3"="COL1" AND "COL4"="COL1" AND
                  "COL1"<=50000 AND "COL2"<=50000 AND "COL3"<=50000 AND "COL4"<=5000
    0 AND
                  "COL1">=1 AND "COL2">=1 AND "COL3">=1 AND "COL4">=1)
    Statistics
              1  recursive calls
              0  db block gets
           1697  consistent gets
            850  physical reads
              0  redo size
            422  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> select /*+ index(demo demo_idx)*/ count(distinct col5)
      2   from demo
      3   where col1 between 1 and 50000
      4   and col2=col1
      5   and col3=col1
      6   and col4=col1
      7  /
    COUNT(DISTINCTCOL5)
                      1
    Execution Plan
    Plan hash value: 189561699
    | Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |          |     1 |   116 |   431   (1)| 00:00:06 |
    |   1 |  SORT GROUP BY               |          |     1 |   116 |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| DEMO     |     1 |   116 |   431   (1)| 00:00:06 |
    |*  3 |    INDEX FULL SCAN           | DEMO_IDX |     1 |       |   430   (1)| 00:00:06 |
    Predicate Information (identified by operation id):
       3 - filter("COL2"="COL1" AND "COL3"="COL1" AND "COL4"="COL1" AND
                  "COL1"<=50000 AND "COL2"<=50000 AND "COL3"<=50000 AND "COL4"<=5000
    0 AND
                  "COL1">=1 AND "COL2">=1 AND "COL3">=1 AND "COL4">=1)
    Statistics
              1  recursive calls
              0  db block gets
          50426  consistent gets
              0  physical reads
              0  redo size
            422  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
              1  rows processed
    So I don't know what the default value of db_file_multiblock_read_count is in XE, and I have not gone deep enough into how, for example, system statistics may change your situation.
    Gints Plivna
    http://www.gplivna.eu
    P.S. BTW I used Oracle Database 10g Enterprise Edition Release 10.2.0.1.0.
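    One more thing worth checking is whether workload system statistics are set, since they change how the optimizer costs multiblock reads (a sketch; querying sys.aux_stats$ requires the appropriate privilege):
    SELECT pname, pval1
      FROM sys.aux_stats$
     WHERE sname = 'SYSSTATS_MAIN';
    -- If MBRC, SREADTIM and MREADTIM are populated, they drive the full-scan
    -- costing instead of db_file_multiblock_read_count.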

  • Difference between open loop and closed loop mode when finding the index on a stepper motor axis.

    I have a test system which uses a PXI-7352 motion controller with a MID-7602 stepper motor drive to position a 200 step/revolution stepper motor with an attached 1000 line encoder.  The MID-7602 is configured for 64 microsteps per step, giving 12,800 microsteps and 4000 encoder counts per revolution.  The system is controlled using LabView 7.1 and NI-Motion 6.13.  To home the motor, I've defined a sequence in MAX which first finds the reverse limit switch and then moves forward to the encoder index pulse.
    When finding its reference in open loop mode, the system finds its reference at the same microstep each time. When finding its reference in closed loop mode, the system finds its reference somewhere within about a 20 microstep wide range.
    Thanks,
    Mark Moss

    Hello Mark,
    I suspect that the behavior you are experiencing has to do with the resolution difference between your encoder and your microstepping. Let me illustrate with an example:
    Example Setup:
    Stepper Motor
    NI Motion Controller
    10 steps per encoder pulse (every 10 steps the controller gets an encoder pulse, therefore positions 0-9 all appear to be 0 to the controller)
    The system has a maximum of 3 pull-in moves set
    Open Loop Scenario
    1. The system is commanded to go to position 9.
    2. The controller sends out 9 pulses.
    3. The motor moves to position 9.
    Closed Loop Scenario
    1. The system is commanded to go to position 9.
    2. The controller sends out 9 pulses.
    3. The motor moves to position 9.
    4. The controller checks its position and sees that it is still at position 0, because it has not received an encoder pulse.
    5. Because the controller thinks it is still at position 0, it sends another 9 pulses as its first pull-in move.
    6. The motor moves to position 18.
    7. The controller checks its position and sees that it is at position 10, because it has only received one encoder pulse.
    8. Because the controller thinks it is at position 10, it sends 1 reverse pulse as its second pull-in move.
    9. The motor moves to position 17.
    10. The controller checks its position and still sees position 10, because it has only received one encoder pulse.
    11. Because the controller thinks it is still at position 10, it sends an additional single reverse pulse as its third pull-in move.
    12. The motor moves to its final position of 16.
    I believe something similar is happening with your application as it looks for the index pulse. Because the controller does not compensate for what it perceives as position error in open loop mode, the motor always goes to the same commanded position. In closed loop mode, the motor is bouncing around inside the single index encoder pulse trying to find a certain position.
    There are several recommendations I can make towards correcting this behavior, in order of effectiveness:
    1. Set up your system so that there are more encoder counts per revolution than steps per revolution.
    2. Turn off pull-in moves by setting them to 0 in MAX.
    3. Use an open loop configuration.
    4. Play around with the pull-in window in MAX.
    Regards,
    Luke H

  • How to update a large (over 4 million item) List(Of Byte) quickly by altering indexes contained in a Dictionary(Of Integer, Byte), where the Dictionary's keys are the indexes in the List(Of Byte) that need to be changed to the values for those indexes?

       I'm having some difficulty with transferring images from a UDP Client to a UDP Server. The issue is receiving the bytes necessary to update an original image sent from the Client to the Server, and updating the Server's List(Of Byte) by replacing bytes in that list with the new bytes. This is a simplex connection where the Server receives and the Client sends, to use the least amount of bandwidth for a "Remote Desktop" style application where the Server side needs image updates of whatever occurs on the Client desktop.
       So far I can transfer images with no issue. The images can be any image type (.Bmp, .Gif, .JPeg, .Png, etc). I was working with sending .JPegs, as they appear to be the smallest image when a Bitmap is saved to a memory stream as type .JPeg; then I use GZip to compress that byte array again so it is much smaller. However, on a loopback on my NIC, the speed for sending a full-size screen capture is not very fast: the Server updates fairly slowly unless the Client's screen capture Bitmap is reduced to about 1/3 of the original size. Then about 12000 bytes or less are sent for each update.
       Due to .JPeg compression I suppose there is no way to get the difference in bytes between two .JPegs and only send those when something occurs on the desktop that alters the desktop screen capture image. Therefore I went to using .Bmp's as each
    .Bmp contains the same number of bytes in its array regardless of the image alterations on the desktop. So I suppose the difference in bytes from a second screen capture and an inital screen capture are what is different in the second image from the initial
    image.
       What I have done so far is save an initial Bitmap of a screen capture using a memory stream and saving as type .Bmp which takes less than 93 milliseconds for 4196406 bytes. Compressing that takes less than 118 milliseconds to 197325 bytes for
    the current windows on the desktop. When that is done PictureBox1 is updated from nothing to the captured image as the PictureBox's background image with image layout zoom and the PictureBox sized at 1/2 my screens width and 1/2 my screens height.
       Then I save a new Bitmap the same way which now contains different image information as the PictureBox is now displaying an image so its back color is no longer displayed (solid color Aqua) and the cursor has moved to a different location. The
    second Bitmap is also 4196406 in bytes and compressed it was 315473 bytes in size.
       I also just found code from this link Converting a Bitmap to a Byte Array (and Byte Array to Bitmap) which gets a byte array
    directly from a Bitmap and the size of that is 3148800 for whatever is full screen captured on my laptop. So I should be able to work with smaller byte arrays at some point.
       The issue I'm having is that once the Client sends an image of the desktop to the Server, I only want to update the Server with any differences occurring on the Client's desktop. So what I have done is compare the first screen capture's bytes (stored in a List(Of Byte)) to the second screen capture's bytes (stored in a List(Of Byte)) using a For/Next from 0 to 4196405, where if a byte in the first capture's List is not equal to a byte in the second capture's List, I add the index and byte of the second capture's list to a Dictionary(Of Integer, Byte). The Dictionary then only contains the indexes and bytes that are different between the first screen capture and the second screen capture. This takes about 125 milliseconds, which I think is pretty fast for a 4196406 byte comparison using a For/Next and adding all the different bytes and indexes to a Dictionary.
        The difference in Bytes between the inital screen capture and the second screen capture is 242587 as an example which changes of course. For that amount of bytes the Dictionary contains 242587 integers as indexes and 242587 bytes as different
    bytes, totaling 485174 bytes for both arrays (keys, values). Compressed, the indexes go from 242587 to 43489 bytes and the values go from 242587 to 34982 bytes. This means I will have to send 78,481 bytes from the Client to the Server to update the display
    on the server. That is much smaller than the original 4196406 bytes of the second Bitmap saved as type .Bmp, or the compressed size of that array, which was 315473 bytes. Plus a few bytes I add as overhead so the server knows when an image array ends and how many packets were sent for the array, so it can discard incomplete arrays if necessary, since UDP is lossy (although probably not as much in current networks as it may originally have been when the internet started).
       In reality, the data from the Client to the Server will mostly be the cursor as it moves, updating the Server image with only a few hundred bytes at a time, I would imagine. Or when the cursor selects a Button, for example, and the Button's color changes, causing those differences in the original screen capture.
       But the problem is, if I send the Dictionary's indexes and bytes to the Server, then I need to update the original Bitmap List(Of Byte) on the Server by removing the bytes at the received index locations from the Server's Bitmap List(Of Byte) and replacing those bytes with the bytes in the received byte array. This takes too long using a For/Next over however many indexes are in the received index array, updating the Bitmap List(Of Byte) on the server with "Bmp1Bytes.RemoveAt(Index As Integer)" followed by "Bmp1Bytes.Insert(Index As Integer, Item As Byte)" in the For/Next.
        I've tried various For/Next statements, including using a new List(Of Byte) with If statements: if the integer for the For/Next equals a Key in the Dictionary(Of Integer, Byte), using a counter to provide the Dictionary's Key value, then the Dictionary's byte value is added to the List(Of Byte) and the counter increases by one; Else the List(Of Byte) adds the original "Bmp1Bytes" byte at that index to the new List(Of Byte). This takes forever also.
       I also tried the same For/Next adding to a new Dictionary(Of Integer, Byte), but that takes forever too.
       I think I could use RemoveRange and AddRange to speed things up. But I don't know how to retrieve a contiguous range of indexes, out of the received indexes, that needs to be updated in the Server's "Bmp1Bytes" List(Of Byte), given that the received
    indexes and bytes are in a Dictionary(Of Integer, Byte). And I suspect even that would be slower than some realistic method for replacing all the bytes in a List(Of Byte) when I already have the indexes that need to be replaced and the bytes to replace
    them with.
       Even if I just used AddRange on a new List(Of Byte) to add ranges of bytes from the original "Bmp1Bytes" plus the changes from the Dictionary(Of Integer, Byte), I think that would be rather slow. Although I don't know how to do that either, since
    it means getting contiguous ranges of indexes from the Dictionary's keys.
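    For illustration (a sketch, not code from the thread): the Dictionary's keys could first be collapsed into contiguous runs, and each run could then drive a single AddRange or Array.Copy instead of a per-byte operation:

    Dim keys As List(Of Integer) = BmpComparisonDict.Keys.ToList()
    keys.Sort() ' Keys were added in ascending order, so this is just a safeguard.
    Dim runs As New List(Of Tuple(Of Integer, Integer)) ' (start index, run length)
    If keys.Count > 0 Then
        Dim runStart As Integer = keys(0)
        Dim runLength As Integer = 1
        For i = 1 To keys.Count - 1
            If keys(i) = keys(i - 1) + 1 Then
                runLength += 1 ' Still contiguous; extend the current run.
            Else
                runs.Add(Tuple.Create(runStart, runLength))
                runStart = keys(i)
                runLength = 1
            End If
        Next
        runs.Add(Tuple.Create(runStart, runLength)) ' Close out the final run.
    End If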
       So I was wondering if there is some method, perhaps using LINQ or IEnumerable, that could do this; I've been unable to figure anything out.
       I do have some copied-and-pasted code that I'm using without understanding how it works, which I'd guess could be altered to do something like this, but I can't find information explaining how it works. Or even if I did,
    maybe I couldn't understand it. Like the code below, which is extremely fast.
       Dim strArray() As String = Array.ConvertAll(Of Integer, String)(BmpComparisonDict.Keys.ToArray, Function(x) x.ToString())
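    ' (Added note: Array.ConvertAll applies the lambda once per element, converting each Integer key to its String form in a single pass over the array.)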
    La vida loca

    Monkeyboy,
    That was quite a bit to read, but still a bit unclear. Could you put a specific list of goals/questions, asked in the smallest possible form?
    It seems like either you're making a program that monitors activity on your computer, or you're writing some kind of remote pc app.
    When you do get your bytes from using lockbits, keep in mind all the files header info would be lost. I think retaining the header info is worth the extra bytes.
    The other, thing: I'm not sure if you're taking 32bpp screen shots, but also keep in mind that the "whole desktop" is the final destination for blended graphics, if that makes sense. What I mean is that there is no need to capture an "alpha"
    channel for a desktop screenshot, as alpha would always be 255, this could save you 1 byte per pixel captured... Theres nothing "behind" the desktop, therefore no alpha, and every window shown above the desktop is already blended. I suggest using
    24Bpp for a full screen capture.
    Your X,Y information for the mouse could be stored as UINT16, this would save you a measly 2 bytes per location update/save.
    When you update your byte arrays, maybe you can turn the array into a stream and write to whatever index, however many bytes, that should prevent a "Shift" of bytes, and instead overwrite any bytes that "get in the way".
    ex
    Dim example As String = "This is an example."
    Dim insertString As String = "was"
    Dim insertBytes As Byte() = System.Text.Encoding.ASCII.GetBytes(insertString)
    Dim bytes As Byte() = System.Text.Encoding.ASCII.GetBytes(example)
    Dim modifiedBytes As Byte() = {}
    Using ms As New System.IO.MemoryStream(bytes)
        ms.Position = 5
        ms.Write(insertBytes, 0, 3)
        modifiedBytes = ms.ToArray
    End Using
    Dim newString As String = System.Text.Encoding.ASCII.GetString(modifiedBytes)
    'Notice how below there isn't the word "is" anymore, and there isn't a space:
    'this demonstrates overwriting existing data, versus shifting everything to the right.
    'Returns: This wasan example.
    MsgBox(newString)
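    The same overwrite idea could be applied to the screen-byte update itself. A sketch, where ServerBytes and Changes are hypothetical names for the server's byte array and the received difference dictionary:

    Using ms As New System.IO.MemoryStream(ServerBytes)
        For Each kvp As KeyValuePair(Of Integer, Byte) In Changes
            ms.Position = kvp.Key   ' Seek to the changed index
            ms.WriteByte(kvp.Value) ' Overwrite in place; nothing shifts
        Next
    End Using
    ' ServerBytes itself is updated, because the MemoryStream wraps the array directly.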
    “If you want something you've never had, you need to do something you've never done.”
    Don't forget to mark helpful posts and answers! Answer an interesting question? Write a new article about it! My Articles
    *This post does not reflect the opinion of Microsoft, or its employees.
    Well, it was too much to read; I was really tired when I wrote it. Even the below is too much to read, but perhaps it gets across what I would like to do, which I think
    Joel Engineer may have answered, though I'm not sure, as I'm still too tired to understand and research what he said in order to figure it out.
    But maybe the code below, with the comments in it, can convey the concept of the operation. Seeing as how I'm still tired, it may be confused.
    Option Strict On
    Imports System.Windows.Forms
    Imports System.IO
    Imports System.IO.Compression
    Imports System.Drawing.Imaging
    Imports System.Runtime.InteropServices
    Public Class Form1
    Dim Bmp1Bytes As New List(Of Byte)
    Dim Bmp1BytesCompressed As New List(Of Byte)
    Dim Bmp2Bytes As New List(Of Byte)
    Dim BmpComparisonDict As New Dictionary(Of Integer, Byte)
    Dim BmpDifferenceIndexesCompressed As New List(Of Byte)
    Dim BmpDifferenceBytesCompressed As New List(Of Byte)
    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
    SomeSub()
    End Sub
    Private Sub SomeSub()
    ' Pretend this code is in UDP Client app. A screen capture is performed of the desktop. Takes about 90 milliseconds.
    Bmp1Bytes.Clear()
    Using BMP1 As New Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height)
    Using g1 As Graphics = Graphics.FromImage(BMP1)
    g1.CopyFromScreen(0, 0, 0, 0, BMP1.Size)
    Cursor.Draw(g1, New Rectangle(Cursor.Position.X, Cursor.Position.Y, Cursor.Size.Width, Cursor.Size.Height))
    Using MS As New MemoryStream
    BMP1.Save(MS, System.Drawing.Imaging.ImageFormat.Bmp)
    Bmp1Bytes.AddRange(MS.ToArray)
    End Using
    End Using
    End Using
    Bmp1BytesCompressed.AddRange(Compress(Bmp1Bytes.ToArray))
    ' UDP Client app sends Bmp1BytesCompressed.ToArray to UDP Server which is the entire image of the desktop that the UDP
    ' Client is on. This image takes a while to send since, compressed, it is about 177000 bytes from over 4000000 bytes.
    ' I will be using different code just to get the bytes from the actual Bitmap in the future. That is not important for now.
    ' Pretend the UDP Server has received the bytes, decompressed the array received into a List(Of Byte) and is displaying
    ' the image of the UDP Clients desktop in a PictureBox.
    ' Now the image on the UDP Clients desktop changes due to the mouse cursor moving as an example. Therefore a new Bitmap
    ' is created from a screen capture. This takes about 90 milliseconds.
    Bmp2Bytes.Clear()
    Using BMP2 As New Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height)
    Using g1 As Graphics = Graphics.FromImage(BMP2)
    g1.CopyFromScreen(0, 0, 0, 0, BMP2.Size)
    Cursor.Draw(g1, New Rectangle(Cursor.Position.X, Cursor.Position.Y, Cursor.Size.Width, Cursor.Size.Height))
    Using MS As New MemoryStream
    BMP2.Save(MS, System.Drawing.Imaging.ImageFormat.Bmp)
    Bmp2Bytes.AddRange(MS.ToArray)
    End Using
    End Using
    End Using
    ' Now I have the original images bytes in Bmp1Bytes and the new images bytes in Bmp2Bytes on the UDP Client. But I don't
    ' want to send all of the bytes in Bmp2Bytes to the UDP Server. Just the indexes of and the bytes that are different in
    ' Bmp2Bytes from Bmp1Bytes.
    ' This takes less than 100 milliseconds for what I've tested so far where over 500000 bytes in Bmp2Bytes are different
    ' than the bytes in Bmp1Bytes. Usually that amount would be much less. But during testing I was displaying the image
    ' from Bmp1 bytes in a PictureBox so a large amount of data would change between the first screen shot, the PictureBox
    ' then displaying an image on the same PC and then the second screen shot.
    BmpComparisonDict.Clear()
    For i = 0 To Bmp1Bytes.Count - 1
    If Bmp1Bytes(i) <> Bmp2Bytes(i) Then
    BmpComparisonDict.Add(i, Bmp2Bytes(i))
    End If
    Next
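    ' Note (added): this comparison assumes Bmp1Bytes and Bmp2Bytes have the same
    ' length, which holds when both captures use the same screen size and pixel format.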
    ' So now I have all the difference bytes and their indexes from Bmp2Bytes in the BmpComparisonDict. So I compress
    ' the indexes into one List and the Bytes into another List.
    BmpDifferenceIndexesCompressed.Clear()
    BmpDifferenceBytesCompressed.Clear()
    BmpDifferenceIndexesCompressed.AddRange(Compress(BmpComparisonDict.Keys.SelectMany(Function(d) BitConverter.GetBytes(d)).ToArray()))
    BmpDifferenceBytesCompressed.AddRange(Compress(BmpComparisonDict.Values.ToArray))
    ' Now pretend the UDP Client has sent both those arrays to the UDP Server which has added both decompressed arrays
    ' to a Dictionary(Of Integer, Byte). And the server has the original image decompressed bytes received in a List
    ' called Bmp1Bytes also.
    ' This is where I am stuck. The UDP Server has the Dictionary. That part was fast. However there is no
    ' fast method I have found for creating a new List(Of Byte) where bytes in the originally received List(Of Byte) that
    ' do not have to be altered are placed into a new List(Of Byte) except for the indexes listed in the
    ' Dictionary(Of Integer, Byte) that need to be placed into the appropriate index locations of the new List(Of Byte).
    ' The below example for doing so is exceptionally slow. Pretend UpdateDictionary has all of the decompressed indexes
    ' and bytes received by the UDP Server for the update contained within it.
    Dim UpdateDictionary As New Dictionary(Of Integer, Byte)
    Dim UpdatedBytes As New List(Of Byte)
    Dim Counter As Integer = 0
    For i = 0 To Bmp1Bytes.Count - 1
    ' Keys.ElementAt/Values.ElementAt walk the collections from the start on every
    ' call, which is a large part of why this approach is exceptionally slow.
    If Counter < UpdateDictionary.Count AndAlso i = UpdateDictionary.Keys.ElementAt(Counter) Then
    UpdatedBytes.Add(UpdateDictionary.Values.ElementAt(Counter))
    Counter += 1
    Else
    UpdatedBytes.Add(Bmp1Bytes(i))
    End If
    Next
    ' So what I'm trying to do is find an extremely fast method for performing something similar to what the
    ' above operation performs.
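    ' A much faster alternative (a sketch added for illustration, not from the
    ' original post): copy the List to an array once and overwrite the changed
    ' bytes in place by index, avoiding per-element shifting entirely.
    Dim FastBytes() As Byte = Bmp1Bytes.ToArray()
    For Each kvp As KeyValuePair(Of Integer, Byte) In UpdateDictionary
    FastBytes(kvp.Key) = kvp.Value
    Next
    ' If a List(Of Byte) is still needed afterwards:
    ' UpdatedBytes = New List(Of Byte)(FastBytes)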
    End Sub
    Private Function Compress(BytesToCompress() As Byte) As List(Of Byte)
    Dim BytesCompressed As New List(Of Byte)
    Using compressedStream = New MemoryStream()
    Using zipStream = New GZipStream(compressedStream, CompressionMode.Compress)
    zipStream.Write(BytesToCompress, 0, BytesToCompress.Length)
    End Using ' Disposing the GZipStream flushes the final compressed block.
    BytesCompressed.AddRange(compressedStream.ToArray)
    End Using
    Return BytesCompressed
    End Function
    Private Function Decompress(BytesToDecompress() As Byte) As List(Of Byte)
    ' A GZipStream in Decompress mode is read from, not written to, so the compressed
    ' bytes are wrapped in a MemoryStream and the decompressed output is copied out.
    Dim BytesDecompressed As New List(Of Byte)
    Using compressedStream = New MemoryStream(BytesToDecompress)
    Using zipStream = New GZipStream(compressedStream, CompressionMode.Decompress)
    Using decompressedStream = New MemoryStream()
    zipStream.CopyTo(decompressedStream)
    BytesDecompressed.AddRange(decompressedStream.ToArray)
    End Using
    End Using
    End Using
    Return BytesDecompressed
    End Function
    End Class
    La vida loca

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool, the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available to you, however, if you would like to analyze your own large log files.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID linking it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6), built from source, on a Win2K3 server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
    BDB is configured as a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, while just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first it runs fairly fast, until the BDB cache fills up; then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance: even though the OS cache was filling up, it was being flushed as well, and SSW eventually finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K up to 64K. Using larger pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling the save of the database at the end, took 281 seconds. Interestingly enough, this method called the ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up the records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that periodically calls DbEnv::memp_trickle. This works especially well on multi-core machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What was the % of clean pages that you specified?
    2. At what interval were you calling memp_trickle from this thread?
    This would give me a rough idea about how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
    >
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports are generated, which use these secondary databases. This improved speed from 4K rec/sec to 14K rec/sec.

  • To improve the system performance of the code

    Please help me to improve the system performance of the following program. It's very urgent.
    report zsdr0125
              no standard page heading
    *         LINE-SIZE  170                                   " SIR 061880
              line-size  210                                   " SIR 061880
              line-count 58
              message-id zz.
    ** Report header ******************************************************
    ** Report name: Activity Costing Report
    ** Report id:   RO-01148
    ** Designed/Coded. Tori Chandler. Reporting Team.
    ** Date: March 01, 2000.
    ** Original SIR:  016113
    ** Application Area: V - Sales & Distribution  (OTC)
    ** Correction/Transport:  D10K951579
    ** Description:  It is normal business practice for logistics
    **               operations to charge business units for the activity
    **               incurred on their behalf.  This is consistent with
    **               activity based costing principles between BU and
    **               shared resources.  The activities involved are picking
    **               shared resources.  The activities involved are picking,
    **               storage, shipping and receiving.  The purpose of this
    **      QRB2 - 03/13/2000 - Tracy, Antoine, Christian, Tori
    ** History:
    * 06/14/2001 SIR 032383 CTS                             Antoine Dailly
    * A plant (WERKS) can have several Distrib points (VSTEL)
    * SIGN = 032383
    * Modification History:
    * Date       Modified by  SIR       CTS          Description
    *11/14/2001  J.CAMPION   034606   D10K979189  Logistics Activity report
    * Logistics Activity reports
    * We added fields ship to customer and country and we also make another
    * total
    * QRB2 Tracy L. Obrien
    * Modification History:
    * Date       Modified by  SIR       CTS          Description
    *11/14/2001  J.CAMPION   37838   D10K982890      Select only
    *                                                material type HALB
    *06/12/2002  T OBrien    38784   D10K988181
    *        Allow option to get material weight from Master data or
    *        from the delivery.
    *  Modified by    SIR       CTS                         Date
    *  Jim Harwood    42730     D10K993119               10 Oct 2002
    * Description:  Code amended to default the Goods Issue Date range to
    * the previous month.  Also amended to print out the Select Options.
    *  Modified by    SIR       CTS                         Date
    *  Jim Harwood    44381     D10K994598               18 Nov 2002
    * Description:  Correct date range processing.  APPEND statement added
    * so that record is added to internal table S_WADAT.  Also S_VKORG
    * removed as it was NO DISPLAY and nothing was assigned to it. Its use
    * in an SQL call may be causing the wrong optimization.
    *  Modified by    SIR       CTS                         Date
    *  Tori Chandler  45567     D10K995875               03 Jan 2003
    * Description:  Correct reporting of weights for non-pickable items
    *    when the Material Master Data radiobutton is selected. Also found
    *    from SIR 37838, that material type is hardcoded on the LIPS.  I
    *    created a new select option and the person needing the report
    *    can control if they want only HALB or all line items.  Also,
    *        because of this the delivery weight is obtained from the header,
    *    changing to accumulate from LIPS to match which lines are selected
    *  QRB2:  1/15/2003:  Eileen, Jerome and Tori
    *{   INSERT         D11K901833                                        1
    *  Modified by    Sir             CTS                    Date
    *  Sue Kastan    48712/054042    D11K901833         28 Aug, 2003
    *  Fix overcounting of records from LIPS
    *}   INSERT
    *  Modified by    SIR       CTS                         Date          *
    *  Vijay Andea   061880    D11K918628               04/20/2006        *
    *                          D11K946194                                 *
    *  Description: Enhance ZSDR0125 Activity Cost Driver Report to allow *
    *  Analysis by Product Groupings.                                     *
    *}   INSERT
    *  Modified by            SIR       CTS                    Date       *
    *  Prakash Arunachalam  091510   D11K950288             09/26/2006    *
    *  Description: Correct Activity Cost report  - material weight       *
    *  calculation                                                        *
    *  Modified by            SIR       CTS                    Date       *
    *  Murali Krishna        301978    D50K903293            01/20/2008
    *  Description: Improve the system performance of this report and
    *               clean-up of code into various form routine
    * Table declaration.
    tables: likp,    " SD Document: Delivery Header Data
            lips,    " SD document: Delivery: Item data
            vepo,    " SD Document: Shipping Unit Item (Content)
            vekp,    " SD Document: Shipping Unit Header
            knvv,    " Customer Master Sales Data
            kna1,    " General Data in Customer Master            SIR 34606
            marm,    " Units of Measure
            mara,    " Material Master: General Data              SIR 38784
            t001l,   " Stge locs
            t001k,   " Valuation area
            tvswz,   " Shipping Points per Plant
            t134,    " Material types
            z0234.   " Alternative Unit of Measure
    *** Selection screen.
    selection-screen begin of block b1 with frame title text-001.
    parameters:     p_werks  like t001l-werks obligatory memory id wrk.
    select-options: s_lgort  for  t001l-lgort,
    *               S_VKORG  FOR  TVKO-VKORG NO-DISPLAY ," SIR 34606, 42730
    *               S_WADAT  FOR  LIKP-WADAT_IST OBLIGATORY NO-EXTENSION,
                    s_wadat  for  likp-wadat_ist no-extension,  " SIR 42730
                    s_mtart  for  t134-mtart,                   " SIR 45567
                    s_lfart  for  likp-lfart no-display no intervals.
    selection-screen skip 2.
    selection-screen comment 1(21) text-002.                    " SIR 38784
    parameters:     p_delwt  radiobutton group grp1,            " SIR 38784
                    p_mstwt  radiobutton group grp1.            " SIR 38784
    *--------------------------------------------------*Start of SIR 061880
    selection-screen skip 2.
    selection-screen comment 1(21) text-006.
    parameters:     p_voldl radiobutton group 2,     " Volume from Delivery
                    p_volmd radiobutton group 2.  " Volume from Master Data
    *-----------------------------------------------------End of SIR 061880
    selection-screen end of block b1.
    *--------------------------------------------------*Start of SIR 061880
    selection-screen begin of block b2 with frame title text-007.
    select-options: s_cbuun for knvv-kvgr1 no intervals,      " Customer BU
                    s_mbuun for mara-prdha+1(2) no intervals, " Material BU
                    s_lobus for mara-prdha+3(3) no intervals,         " LOB
                    s_pac1  for mara-prdha+6(3) no intervals.   " PAC1
    selection-screen end of block b2.
    *----------------------------------------------------*End of SIR 061880
    *---Type Declaration for Internal Tables------------------------------*
    types: begin of t_likp,
              vbeln      like likp-vbeln,  " delivery
              vstel      like likp-vstel,  " shipping point
              lfart      like likp-lfart,  " delivery type
              vkorg      like likp-vkorg,  " Sales organization
              kunag      like likp-kunag,  " sold-to party
              kunnr      like likp-kunnr,  " ship to party        SIR 34606
              btgew      like likp-btgew,  " Delivery weight
              gewei      like likp-gewei,  " Unit of weight
              anzpk      like likp-anzpk,  " Number of Packages   SIR 61880
              volum      like likp-volum,  " Delivery Volume      SIR 61880
              voleh      like likp-voleh,  " Volume Unit          SIR 61880
              vtwiv      like likp-vtwiv,  " Distribution channel
              spaiv      like likp-spaiv,  " Division
              wadat_ist  like likp-wadat_ist,  " actual goods issue date
              del_flg(1) type c,                            "(+) SIR 301978
           end of t_likp.
    types: begin of t_lips,
              vbeln      like lips-vbeln,  " delivery
              posnr      like lips-posnr,  " delivery line
              matnr      like lips-matnr,  " material
              lgort      like lips-lgort,  " storage location
              prodh      like lips-prodh,  " product hierarchy
              meins      like lips-meins,  " base UoM
              brgew      like lips-brgew,  " Material weight
              gewei      like lips-gewei,  " Unit of weight
              volum      like lips-volum,  " Material Volume      SIR 61880
              voleh      like lips-voleh,  " Volume Unit          SIR 61880
              lgmng      like lips-lgmng,  " actual delivery quantity
              komkz      like lips-komkz,  " Indicator for picking control
              mtart      like lips-mtart,  " Material type      " SIR 37838
              del_flg(1) type c,                            "(+) SIR 301978
           end of t_lips.
    types: begin of t_vepo,
              venum      like vepo-venum,  " shipping unit number
              vbeln      like vepo-vbeln,  " delivery
           end of t_vepo.
    types: begin of t_vekp,
              venum      like vekp-venum,  " shipping unit number
              brgew      like vekp-brgew,  " actual weight
              gewei_max  like vekp-gewei_max,  " unit of weight
              vpobjkey   like vekp-vpobjkey,   " key for assigned object
           end of t_vekp.
    types: begin of t_knvv,
              kunnr      like knvv-kunnr,  " customer number
              ktgrd      like knvv-ktgrd,  " acct assign group
              kvgr1      like knvv-kvgr1,  " customer group 1
           end of t_knvv.
    types: begin of t_kna1,                                     " SIR 34606
              kunnr      like kna1-kunnr,  " customer number    " SIR 34606
    land1      like kna1-land1,  " country            " SIR 34606
           end of t_kna1.                                       " SIR 34606
    types: begin of t_marm,
              matnr      like marm-matnr,  " material
              meinh      like marm-meinh,  " Alt unit of measure     032383
              umrez      like marm-umrez,  " numerator
              umren      like marm-umren,  " denominator
           end of t_marm.
    types: begin of t_mara,                                     " SIR 38784
              matnr      like mara-matnr,  " material           " SIR 38784
              prdha      like mara-prdha,  " Product Hierarchy  " SIR 61880
              brgew      like lips-brgew,  " gross weight       " SIR 38784
              gewei      like mara-gewei,  " Unit of weight     " SIR 38784
              volum      like mara-volum,  " Volume             " SIR 61880
              voleh      like mara-voleh,  " Volume Unit        " SIR 61880
              del_flg(1) type c,                            "(+) SIR 301978
           end of t_mara.                                       " SIR 38784
    types: begin of t_tvswz,
              vstel      like tvswz-vstel, " shipping point
           end of t_tvswz.
    types: begin of t_z0234,                                        "032383
              vstel      like z0234-vstel, " shipping point          032383
              zpaluom    like z0234-zpaluom," pallet unit of measure 032383
              zcsuom     like z0234-zcsuom," Case unit of measure    032383
           end of t_z0234.                                          "032383
    types: begin of t_output_dt,
              wadat_ist    like likp-wadat_ist, " Goods issue date
              ktgrd        like knvv-ktgrd," acct assign group
              bu           like knvv-kvgr1," business unit
              kunnr        like kna1-kunnr," ship to location     SIR 34606
              land1        like kna1-land1," ship to location     SIR 34606
              d_btgew      like likp-btgew," delivery weight
              m_brgew      like lips-brgew," material weight
              a_brgew      like vekp-brgew," actual weight of ship unit
              num_del      type i,         " counter of deliveries
              num_pallets  type i,         " number of pallets
              num_cases    type i,         " number of cases
              num_loose    type i,         " loose quantity
              num_delln    type i,         " counter of delivery lines
    *         packages     like likp-anzpk," Number of Packages " SIR 61880
              packages(3)  type p,         " Number of Packages " SIR 61880
              volume       like lips-volum," Volume             " SIR 61880
              lobus(3)     type c,         " Line of Business   " SIR 61880
              pac1(3)      type c,                    " PAC1    " SIR 61880
           end of t_output_dt.
    types: begin of t_output_ag,
              ktgrd        like knvv-ktgrd," acct assign group
              bu           like knvv-kvgr1," business unit
              land1        like kna1-land1," country              SIR 34606
              d_btgew      like likp-btgew," delivery weight
              m_brgew      like lips-brgew," material weight
              a_brgew      like vekp-brgew," actual weight of ship unit
              num_del      type i,         " counter of deliveries
              num_pallets  type i,         " number of pallets
              num_cases    type i,         " number of cases
              num_loose    type i,         " loose quantity
              num_delln    type i,         " counter of delivery lines
    *         packages     like likp-anzpk," Number of Packages " SIR 61880
              packages(3)  type p,         " Number of Packages " SIR 61880
              volume       like lips-volum," Volume             " SIR 61880
              lobus(3)     type c,         " Line of Business   " SIR 61880
              pac1(3)      type c,                   " PAC1     " SIR 61880
           end of t_output_ag.
    types: begin of t_output_gs,                                " SIR 34606
              ktgrd        like knvv-ktgrd," acct assign group  " SIR 34606
              bu           like knvv-kvgr1," business unit      " SIR 34606
              d_btgew      like likp-btgew," delivery weight    " SIR 34606
              m_brgew      like lips-brgew," material weight    " SIR 34606
              a_brgew      like vekp-brgew," actual weight      " SIR 34606
              num_del      type i,         " counter of deliv   " SIR 34606
              num_pallets  type i,         " number of pallets  " SIR 34606
              num_cases    type i,         " number of cases    " SIR 34606
              num_loose    type i,         " loose quantity     " SIR 34606
              num_delln    type i,         " counter of deliv   " SIR 34606
    *         packages     like likp-anzpk, " Number of Package " SIR 61880
              packages(3)  type p,          " Number of Packages" SIR 61880
              volume       like lips-volum, " Volume            " SIR 61880
              lobus(3)     type c,          " Line of Business  " SIR 61880
              pac1(3)      type c,           " PAC1             " SIR 61880
           end of t_output_gs.                                  " SIR 34606
    *-------------------------------------------------* Begin of SIR 061880
    * Material Type
    types: begin of t_mtart,
           mtart like t134-mtart,                           " Material Type
           end of t_mtart.
    * Customer Business Unit.
    types: begin of t_kvgr1,
           kvgr1 like knvv-kvgr1,                        " Customer Group 1
           end of t_kvgr1.
    * sales Organization.
    types: begin of t_lgort,
           lgort like t001l-lgort,                     " Sales Organization
           end of t_lgort.
    *                                                   Begin of SIR 301978
    *      Header: Material Document
    types: begin of t_mkpf,
             vgart    type mkpf-vgart,
             xblnr    type likp-vbeln,
           end   of t_mkpf.
    *                                                     End of SIR 301978
    *---------------------------------------------------* End of SIR 061880
    *---Internal Tables---------------------------------------------------*
    data:  i_likp        type  t_likp occurs 0 with header line,
           i_temp_likp   type  t_likp occurs 0 with header line,
           v_likp        type  t_likp,
           i_lips        type  t_lips occurs 0 with header line,
           i_temp_lips   type  t_lips occurs 0 with header line,
           v_lips        type  t_lips,
           i_vepo        type  t_vepo occurs 0,
    *      V_VEPO        TYPE  T_VEPO,
           i_vekp        type  t_vekp occurs 0,
           v_vekp        type  t_vekp,
           i_knvv        type  t_knvv occurs 0 with header line,
           v_knvv        type  t_knvv,
           i_kna1        type  t_kna1 occurs 0,                 " SIR 34606
           v_kna1        type  t_kna1,                          " SIR 34606
           i_z0234       type  t_z0234 occurs 0,                "    032383
           v_z0234       type  t_z0234,                         "    032383
           i_z0234_uom   type  t_z0234 occurs 0,                "    032383
           v_z0234_uom   type  t_z0234,                         "    032383
           i_marm        type  t_marm occurs 0 with header line," SIR 61880
           i_marm_pallet type  t_marm occurs 0 with header line,
           v_marm_pallet type  t_marm,
           i_marm_case   type  t_marm occurs 0 with header line,
           v_marm_case   type  t_marm,
    *-------------------------------------------------* Begin of SIR 061880
    *      I_MARA        TYPE  T_MARA OCCURS 0,                 " SIR 38784
           i_mara1       type  t_mara  occurs 0 with header line,
           i_mtart       type  t_mtart occurs 0 with header line,
           i_kvgr1       type  t_kvgr1 occurs 0 with header line,
           i_lgort       type  t_lgort occurs 0 with header line,
           v_kvgr1       type  t_kvgr1,
    *---------------------------------------------------* End of SIR 061880
           v_mara        type  t_mara,                          " SIR 38784
           i_tvswz       type  t_tvswz occurs 0,
           v_tvswz       type  t_tvswz,                     "(+) SIR 301978
           i_output_dt   type  t_output_dt occurs 0,
           v_output_dt   type  t_output_dt,
           i_output_ag   type  t_output_ag occurs 0,
           v_output_ag   type  t_output_ag,
           i_output_gs   type  t_output_gs occurs 0,            " SIR 34606
           v_output_gs   type  t_output_gs,                     " SIR 34606
           i_mkpf        type  table of t_mkpf with header line."SIR 301978
    *** Data Declarations *
    data:  v_page(3)        type c,                          " Page Counter
           v_comp           like t001k-bukrs,       " zbsn0001 company code
           v_title(24)      type c,                 " zbsn0001 report title
           v_rpttyp         type c,                           " report type
           v_ok             type c,             " control While... endwhile
           v_diff_date      type p,                " days between selection
           v_werks          like t001l-werks,                       " plant
    *      v_z0234_zpaluom  like z0234-zpaluom,  " Pallet Unit of Measure
    *      v_z0234_zcsuom   like z0234-zcsuom,   " Case Unit of Measure
           v_palwto         type p decimals 6,                 " SIR 091510
    *       "like likp-btgew,       " Weight after conversion
           v_vekp_tabix     like sy-tabix,                  " index on read
           v_vekp_sum_brgew like vekp-brgew,                " actual weight
           v_pallet_qty     like lips-lgmng,        " calculated pallet qty
           v_pallet_integer type i,                       " true pallet qty
           v_case_qty       like lips-lgmng,          " calculated case qty
           v_case_integer   type i,                         " true case qty
           v_qty_not_pallets like lips-lgmng,                  " calculated
           v_num_pallets    like lips-lgmng,    " calculated nbr of pallets
           v_num_pallets_int type i,                  " true nbr of pallets
           v_num_cases      like lips-lgmng,      " calculated nbr of cases
           v_num_cases_int  type i,                     " true nbr of cases
           v_total_case_qty like lips-lgmng,                " case quantity
           v_loose_qty      type i,             " calculated loose quantity
    *-------------------------------------------------* Begin of SIR 061880
    v_volume         like lips-volum,      " Volume After Conversion
           v_cbuun          like knvv-kvgr1,                  " Customer BU
           v_mbuun          like knvv-kvgr1,                  " Material BU
           v_lobus(3)       type c,                      " Line of Business
           v_pac1(3)        type c,                                  " PAC1
           v_flag           type c.            " Flag Indicator for No Data
    *---------------------------------------------------* End of SIR 061880
    *                                                   Begin of SIR 301978
    * Ranges
    ranges : r_vstel for tvswz-vstel.
    *                                                     End of SIR 301978
    * Constants
    data:  c_uom(3)        type c value 'KG'.    " Kilogram Unit of Meas.
    data : c_vom like mara-voleh value 'M3'.   " Cubic meter.  " SIR 061880
    constants : c_wl(2) type c value 'WL'.                  "(+) SIR 301978
    * Initialization.
    initialization.
      s_lfart-sign   = 'I'.
      s_lfart-option = 'EQ'.
      s_lfart-low    = 'LF  '.
      append s_lfart.
      s_lfart-low    = 'NL  '.
      append s_lfart.
      s_lfart-low    = 'NLCC'.
      append s_lfart.
      s_lfart-low    = 'ZLFI'.
      append s_lfart.
    * AT SELECTION-SCREEN.
    at selection-screen.
    * SIR 42730 - If no Goods Issue Date has been specified in the
    *             Select Options set the range to the previous month.
      if s_wadat-low is initial.
        s_wadat-sign      = 'I'.
        s_wadat-option    = 'BT'.
        s_wadat-high      = sy-datum.              " Today's date
        s_wadat-high+6(2) = '01'.                  " First of this month
        s_wadat-high      = s_wadat-high - 1.      " End of last month
        s_wadat-low       = s_wadat-high.          " End of last month
        s_wadat-low+6(2)  = '01'.                  " First of last month
        append s_wadat.
      endif.     " SIR 42730 IF S_WADAT-LOW IS INITIAL.
      clear v_werks.                                        "(+) SIR 301978
    * Validate Plant/Storage Location from selection screen
      select werks up to 1 rows into v_werks from t001l
            where werks =  p_werks  and
                  lgort in s_lgort.
      endselect.
      if sy-subrc ne 0.
        message e045 with text-e01.
      endif.
    * Validate Storage Location
      if not s_lgort[] is initial.
        select lgort from t001l into table i_lgort where lgort in s_lgort.
        if sy-subrc ne 0.
          message e045 with text-e09.
        endif.
      endif.
    * Validate date range.  Do not allow more than 31 days
      if not s_wadat-high is initial.
        v_diff_date = s_wadat-high - s_wadat-low.
        if v_diff_date >= '31'.
          message e045 with text-e02.
        endif.
      endif.
    *-------------------------------------------------* Begin of SIR 061880
    * Validation for Material Type in Selection Screen
      if not s_mtart[] is initial.
        select mtart from t134 into table i_mtart where mtart in s_mtart.
        if sy-subrc ne 0.
          message e045 with text-e07.
        endif.
      endif.
    at selection-screen on block b2.
    * Validation for Material Business Unit and Customer Business Unit.
      if s_cbuun-low is not initial and s_mbuun-low is not initial.
        message e045 with text-e05.
      endif.
    * Validation for Possible combinations of Material BU, Line of Business
    * and PAC1
      if ( s_mbuun-low is not initial and s_lobus-low is not initial
                                    and s_pac1-low is not initial )
      or ( s_mbuun-low is not initial and s_lobus-low is not initial )
      or ( s_lobus-low is not initial and s_pac1-low is not initial )
      or ( s_mbuun-low is not initial and s_pac1-low is not initial ).
        message e045 with text-e06.
      endif.
    * Validation for Customer Business Unit.
      if not s_cbuun[] is initial.
        select kvgr1 from tvv1 into table i_kvgr1 where kvgr1 in s_cbuun.
        if sy-subrc ne 0.
          message e045 with text-e08.
        endif.
    *    free: i_kvgr1.
      endif.
    *---------------------------------------------------- End of SIR 061880
    * TOP-OF-PAGE
    * Top of the page routine to print the headers and columns
    top-of-page.
      format color col_heading on.
      if v_rpttyp = 'D'.
        v_title = text-h01.
      else.
        v_title = text-h02.
      endif.
      perform zbsn0001_standard_header     " Standard Report Heading Form
               using v_comp v_title 'U' v_page.
      skip.
      if v_rpttyp = 'D'.
        perform write_dtlvl_headings.
      else.
        perform write_aglvl_headings.
      endif.
    *       FORM WRITE_DTLVL_HEADINGS                                     *
    *   for date detail level, print the column headers                   *
    form write_dtlvl_headings.
      write: /001 text-h03,                " Acct
              038 text-h04,                " ------ P I C K I N G ------
              105 text-h05,                " ----- S H I P P I N G -----
             /001 text-h06,                " Group
              007 text-h07,                " Date
              017 text-h08,                " BU
              021 text-h17,                " ship to party            34606
              030 text-h18,                " country
              039 text-h09,                " Pallets
              057 text-h10,                " Cases
              075 text-h11,                " Loose
              093 text-h12,                " Lines
              105 text-h13,                " Material Kg
              124 text-h16,                " Delivery Kg
              143 text-h14,                " Actual Kg
              161 text-h15,                " Deliveries
    *----------------------------------------------------Start of SIR 61880
              173 text-h20,                " Packages
              189 text-h21,                " Volume
              201 text-h22,                " LOB
              207 text-h23.                " PAC1
    *-----------------------------------------------------*End of SIR 61880
      skip.
    endform.                               " end of write_dtlvl_headings.
    *       FORM WRITE_AGLVL_HEADINGS                                     *
    *      for account group detail level, print the column headers       *
    form write_aglvl_headings.
      write: /001 text-h03,                " Acct
              038 text-h04,                " ------ P I C K I N G ------
              105 text-h05,                " ----- S H I P P I N G -----
             /001 text-h06,                " Group
              017 text-h08,                " BU
              030 text-h18,                " country              SIR 34606
              039 text-h09,                " Pallets
              057 text-h10,                " Cases
              075 text-h11,                " Loose
              093 text-h12,                " Lines
              105 text-h13,                " Material Kg
              124 text-h16,                " Delivery Kg
              143 text-h14,                " Actual Kg
              161 text-h15,                " Deliveries
    *---------------------------------------------------*Start of SIR 61880
              173 text-h20,                " Packages
              189 text-h21,                " Volume
              201 text-h22,                " LOB
              207 text-h23.                " PAC1
    *-----------------------------------------------------*End of SIR 61880
      skip.
    endform.                               " end of write_aglvl_headings.
    *--------------------------- SUBROUTINES -----------------------------*
    include zbsn0001.      " Include to print all Standard Report Titles
    include zsdn0004.      " Print the Select Options
    *** MAIN SELECTION.                                                   *
    start-of-selection.
      refresh:  i_likp, i_temp_likp, i_lips, i_temp_lips, i_vepo,
                i_vekp, i_knvv, i_marm_pallet, i_marm_case, i_tvswz,
                i_output_dt, i_output_ag, i_mara1.
      clear:    v_likp, v_lips, v_vekp, v_knvv, v_mara,
                v_marm_pallet, v_marm_case, v_output_dt, v_output_ag.
      v_rpttyp = 'D'.
    * SIR 42730 - Echo the Select Options to the output listing and print
    *             the Goods Issue Date range being used.
      write: /,/.     " SIR 43730 - Skip a couple of lines to centre it.
      perform zsdn0004_print_select_options using sy-cprog ' '.
      if s_wadat-high is initial.
        write: /,/, text-003, 34 s_wadat-low.
      else.
        write: /,/, text-003, 34 s_wadat-low, 44 text-004, 47 s_wadat-high.
      endif.
      new-page.
    * Main Processing.                                                    *
      perform get_data.
      if not i_lips[] is initial.
        perform create_output.
      endif.
    * END-OF-SELECTION
    end-of-selection.
    *-------------------------------------------------* Start of SIR 061880
    *   PERFORM WRITE_REPORT.
      if not i_output_dt[] is initial and not i_output_ag[] is initial and
                                          not i_output_gs[] is initial.
        perform write_report.
      else.
        skip 2.
        write:/ text-e03.
        skip 2.
        uline.
        clear: v_flag.
      endif.
    *---------------------------------------------------* End of SIR 061880
    *       FORM GET_DATA                                                 *
    * build all of the internal tables needed for the report              *
    * selects on t001k, z0234, likp, lips, vepo, vekp, knvv & marm, mara  *
    form get_data.
      perform get_data_t001k.                                  " SIR 301978
      perform get_data_tvswz.                                  " SIR 301978
      if sy-subrc = 0.                     " Build shipping point range
        perform get_data_Z0234.                                " SIR 301978
        if sy-subrc ne 0.
          v_flag = 'X'.
    *      stop.
        else.
          sort i_z0234 by vstel.                                   " 032383
        endif.
    *                                                  Begin of  SIR 301978
    *   The data retrieval from LIKP has been modified for performance
    *   reasons
        perform get_data_mkpf.                                 " SIR 301978
        if sy-subrc eq 0.
    *     Deleting data other than Goods issued for delivery
          delete i_mkpf where vgart ne c_wl.
          sort i_mkpf by xblnr.
        endif.
    *   Deleting the data from the internal table i_likp by comparing
    *   shipping point
        r_vstel-sign = 'I'.
        r_vstel-option = 'EQ'.
        clear : v_tvswz.
        loop at i_tvswz into v_tvswz.
          r_vstel-low = v_tvswz-vstel.
          append r_vstel.
        endloop.
        if not i_mkpf[] is initial.
          perform get_data_likp.                             " SIR 301978
        endif.
        if sy-subrc = 0.
          delete i_likp where not lfart in s_lfart.
    *                                                  End    of SIR 301978
    *     Get data for the delivery lines
          if not i_likp[] is initial.
            perform get_data_lips.                         " SIR 301978
    *                                                  Begin of SIR 301978
            if i_lips[] is initial.
              v_flag = 'X'.
              message i089 with text-i02.
              leave list-processing.
            endif.                    " Return code for LIPS select
    *                                                     End of SIR 301978
          endif.
    *-------------------------------------------------* Begin of SIR 061880
    *     Get Data From MARA (Material Master) to Read Material Weight and
    *     Material Volume from Master Data.
          if not i_lips[] is initial.
    *                                                   Begin of SIR 301978
    *       Delete the duplicate material from delivery item table
            i_temp_lips[] = i_lips[].
            sort i_temp_lips by matnr.
            delete adjacent duplicates from i_temp_lips comparing matnr.
            perform get_data_mara.                         " SIR 301978
            if sy-subrc = 0.
              sort i_mara1 by matnr.
              clear i_temp_lips.
              refresh i_temp_lips.
    *                                                     End of SIR 301978
            else.
              v_flag = 'X'.
              message i089 with text-i01.
              leave list-processing.
            endif.
          endif.
    *     Filter I_LIPS and I_MARA1 when either Material BU, Line of
    *     Business (LOB) or PAC1 is not left blank
          perform filter_data_for_prdha.
    *     Get data for the sold-to customer
          perform i_knvv_fill_data.
    *    Filter I_LIKP & I_LIPS & I_MARA for Cust-BU, when Cust-BU is not
    *    left blank.
          perform filter_likp_lips_mara_custbu.
    *---------------------------------------------------- End of SIR 061880
    *-------------------------------------------------* Begin of SIR 034606
    *     Get data for the ship to party
          i_temp_likp[]  = i_likp[].
          sort i_temp_likp by kunnr.
          delete adjacent duplicates from i_temp_likp comparing kunnr.
          perform get_data_kna1.                             " SIR 301978
          if sy-subrc = 0.
            sort i_kna1 by kunnr.
          endif.
    *---------------------------------------------------* End of SIR 034606
        else.
          v_flag = 'X'.
          message i089 with text-i04.
          leave list-processing.
        endif.                               " return code to LIKP select
    *    endif.                                                " SIR 061880
        free: i_temp_likp.
      endif.                               " return code for TVSWZ select
    * Process table LIPS
      if not i_lips[] is initial.
        i_temp_lips[] = i_lips[].
        sort i_temp_lips by vbeln.
        delete adjacent duplicates from i_temp_lips comparing vbeln.
    *   Get actual weight
        perform get_data_vepo.                                 " SIR 301978
        if sy-subrc = 0.
          sort i_vepo by venum.
          delete adjacent duplicates from i_vepo comparing venum.
          perform get_data_vekp.                               " SIR 301978
          sort i_vekp by vpobjkey.
        endif.
        free:  i_vepo, i_temp_lips.
        i_temp_lips[] = i_lips[].
        sort i_temp_lips by matnr.
        delete adjacent duplicates from i_temp_lips comparing matnr.
    *   Get Units of Measure for Material
        perform read_data_from_marm.                           " SIR 061880
    *                                                   Begin of SIR 032383
        i_z0234_uom = i_z0234.
        sort i_z0234_uom by zpaluom.
        delete adjacent duplicates from i_z0234_uom comparing zpaluom.
    *                                                   End   of SIR 032383
        clear i_marm_pallet.                                   " SIR 061880
        refresh  i_marm_pallet.                                " SIR 061880
        loop at i_z0234_uom into v_z0234_uom.             "LOOP Z0234 032383
    *-------------------------------------------------* Begin of SIR 061880
    *     Get Alternative Unit of Measure for Pallets
          perform i_marm_pallet_fill.
        endloop.                                      "ENDLOOP Z0234 032383
    ** get alternative UoM for pallets
    *      SELECT MATNR                     " material
    *             MEINH                     " Alt unit of measure  " 032383
    *             UMREZ                     " numerator
    *             UMREN                     " denominator
    **     into table i_marm_pallet                                " 032383
    *      APPENDING TABLE I_MARM_PALLET                            "032383
    *      FROM MARM
    *      FOR ALL ENTRIES IN I_TEMP_LIPS
    *      WHERE MATNR = I_TEMP_LIPS-MATNR  AND
    **           meinh = v_z0234_zpaluom.                          " 032383
    *            MEINH = V_Z0234_UOM-ZPALUOM.                      " 032383
    *---------------------------------------------------* End of SIR 061880
        sort i_marm_pallet by matnr meinh.
        free i_z0234_uom.                                           "032383
        i_z0234_uom = i_z0234.                                      "032383
        sort i_z0234_uom by zcsuom.                                 "032383
        delete adjacent duplicates from i_z0234_uom comparing zcsuom."32383
        clear i_marm_case.                                     " SIR 061880
        refresh i_marm_case.                                   " SIR 061880
        loop at i_z0234_uom into v_z0234_uom.            "LOOP Z0234 032383
    *-------------------------------------------------* Begin of SIR 061880
    *     Get Alternative Unit of Measure for Cases
          perform i_marm_case_fill.
        endloop.                                      "ENDLOOP Z0234 032383
    *---------------------------------------------------* End of SIR 061880
        sort i_marm_case by matnr meinh.
      else.
        v_flag = 'X'.
        message i089 with text-i02.
    *      leave list-processing.
      endif.                               " table LIPS is empty
    endform.                               " get_data
    *       FORM CREATE_OUTPUT                                            *
    * process internal table LIPS, for each delivery/delivery lines create*
    * an output record and collect into i_output internal table.  Fields  *
    * used for header are goods issue date, acct assign, BU for customer, *
    * total weight, total weight for shipping unit and count of           *
    * deliveries, the remaining fields will have a value of zero for the  *
    * collect.  For each delivery line, fields used are goods issue date, *
    * acct assign, BU for material, number of pallets, number of cases,   *
    * loose quantity and number of delivery lines.  The remaining header  *
    * fields will be zero for the collect.                                *
    form create_output.
    *-------------------------------------------------* Begin of SIR 061880
      sort i_mara1 by matnr.
      sort i_lips by vbeln posnr matnr.
      sort i_likp by vbeln.
      sort i_z0234 by vstel.
      sort i_knvv by kunnr.
      sort i_kna1 by kunnr.
      sort i_vekp by vpobjkey.
      sort i_marm_pallet by matnr meinh.
      sort i_marm_case by matnr meinh.
    * LOOP AT I_LIPS INTO V_LIPS.
      loop at i_lips.
        clear v_lips.
        v_lips = i_lips.
    *---------------------------------------------------* End of SIR 061880
        at new vbeln.
          perform collect_header_data.
        endat.
        if not v_lips-komkz is initial.                         " SIR 45567
          perform collect_item_data.
        endif.                                                  " SIR 45567
      endloop.
    endform.                               " create_output
    *       FORM COLLECT_HEADER_DATA                                      *
    * Fields used for header are goods issue date, acct assign,           *
    * BU for customer, total weight, total weight for shipping unit       *
    * and count of deliveries, the remaining fields will have a value of  *
    * zero for the collect.                                               *
    form collect_header_data.
      clear:  v_likp, v_knvv, v_vekp, v_vekp_sum_brgew.
      v_ok = 'Y'.
      read table i_likp into v_likp with key vbeln = v_lips-vbeln
                                                   binary search.
      if sy-subrc = 0.                                         " SIR 061880
        clear v_z0234.                                      "(+) SIR 301978
        read table i_z0234 into v_z0234                              "32383
             with key vstel = v_likp-vstel                           "32383
             binary search.                                          "32383
        read table i_knvv into v_knvv with key kunnr = v_likp-kunag
                                                     binary search.
        if sy-subrc ne 0.
          select single ktgrd kvgr1 into (v_knvv-ktgrd, v_knvv-kvgr1)
            from knvv where kunnr = v_likp-kunag  and
                            vkorg = v_likp-vkorg  and
                           vtweg = '01' and          " intercompany values
                           spart = '01'.             " intercompany values
        endif.
        clear v_kna1.                                           " SIR 34606
        read table i_kna1 into v_kna1                           " SIR 34606
           with key kunnr = v_likp-kunnr                        " SIR 34606
           binary search.                                       " SIR 34606
        read table i_vekp into v_vekp
           with key vpobjkey(10) = v_likp-vbeln
           binary search.
        if sy-subrc = 0.
          v_vekp_tabix = sy-tabix.
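    * Sum the gross weight of every handling unit (VEKP) belonging to
    * this delivery, converting into the report unit (c_uom) as needed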
          while v_ok = 'Y'.
            if v_vekp-gewei_max ne c_uom.
              perform z_unit_conversion
                using v_vekp-brgew v_vekp-gewei_max c_uom v_palwto.
              v_vekp_sum_brgew = v_vekp_sum_brgew + v_palwto.
            else.
              v_vekp_sum_brgew = v_vekp_sum_brgew + v_vekp-brgew.
            endif.
            v_vekp_tabix = v_vekp_tabix + 1.
            read table i_vekp into v_vekp
               index v_vekp_tabix.
            if sy-subrc = 0.
              if v_vekp-vpobjkey(10) ne v_likp-vbeln.
                v_ok = 'N'.
              endif.
            else.
              v_ok = 'N'.
            endif.
          endwhile.
        endif.
      endif.                                                   " SIR 061880
    * populate output tables
      clear: v_output_dt, v_output_ag, v_output_gs.             " SIR 34606
      v_output_dt-wadat_ist = v_likp-wadat_ist.
      v_output_dt-ktgrd     = v_knvv-ktgrd.
      v_output_ag-ktgrd     = v_knvv-ktgrd.
      v_output_gs-ktgrd     = v_knvv-ktgrd.                     " SIR 34606
    *-------------------------------------------------* Begin of SIR 061880
    *  V_OUTPUT_DT-BU        = V_KNVV-KVGR1.
    *  V_OUTPUT_AG-BU        = V_KNVV-KVGR1.
    *  V_OUTPUT_GS-BU        = V_KNVV-KVGR1.                    " SIR 34606
    * Populate Business Unit, Line of Business and PAC1 Values in Header Data
      perform get_busunit_lobus_pac1_data1.
    *---------------------------------------------------* End of SIR 061880
      v_output_dt-kunnr = v_kna1-kunnr.                         " SIR 34606
    * V_OUTPUT_AG-KUNNR = V_KNA1-KUNNR.                         " SIR 34606
      v_output_dt-land1 = v_kna1-land1.                         " SIR 34606
      v_output_ag-land1 = v_kna1-land1.                         " SIR 34606
      v_output_dt-a_brgew     = v_vekp_sum_brgew.
      v_output_ag-a_brgew     = v_vekp_sum_brgew.
      v_output_gs-a_brgew     = v_vekp_sum_brgew.               " SIR 34606
      if s_mbuun[] is initial and                               " SIR 61880
         s_lobus[] is initial and                               " SIR 61880
         s_pac1[] is  initial.                                  " SIR 61880
        v_output_dt-num_del   = 1.
        v_output_ag-num_del   = 1.
        v_output_gs-num_del   = 1.                              " SIR 34606
    *-------------------------------------------------* Begin of SIR 061880
    * Number of Packages
        v_output_dt-packages   = v_likp-anzpk.
        v_output_ag-packages   = v_likp-anzpk.
        v_output_gs-packages   = v_likp-anzpk.
      else.
        clear: v_output_dt-num_del,
               v_output_ag-num_del,
               v_output_gs-num_del,
               v_output_dt-packages,
               v_output_ag-packages,
               v_output_gs-packages.
      endif.
    *---------------------------------------------------* End of SIR 061880
      collect v_output_dt into i_output_dt.
      collect v_output_ag into i_output_ag.
      collect v_output_gs into i_output_gs.                     " SIR 34606
    endform.                               " collect_header_data
    *       FORM COLLECT_ITEM_DATA                                        *
    * For each delivery line, fields used are goods issue date,           *
    * acct assign, BU for material, number of pallets, number of cases,   *
    * loose quantity and number of delivery lines.  The remaining header  *
    * fields will be zero for the collect.                                *
    form collect_item_data.
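    * Break the delivery line quantity into full pallets, then full
    * cases, and flag any remainder as loose quantity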
      clear:  v_pallet_qty,      v_pallet_integer,
              v_case_qty,        v_case_integer,
              v_num_pallets,     v_num_pallets_int,
              v_num_cases,       v_num_cases_int,
              v_qty_not_pallets, v_total_case_qty, v_loose_qty.
      read table i_marm_pallet into v_marm_pallet
                 with key matnr = v_lips-matnr
                          meinh = v_z0234-zpaluom binary search."32382
      if sy-subrc = 0.
        v_pallet_qty = v_marm_pallet-umrez / v_marm_pallet-umren.
    * round down partial pallets
        v_pallet_integer = v_pallet_qty - '.499'.
      endif.
      read table i_marm_case into v_marm_case
                  with key matnr = v_lips-matnr
                           meinh = v_z0234-zcsuom binary search."32382
      if sy-subrc = 0.
        v_case_qty = v_marm_case-umrez / v_marm_case-umren.
    * round down partial cases
        v_case_integer = v_case_qty - '.499'.
      endif.
      if v_pallet_integer > 0.
        v_num_pallets     = v_lips-lgmng / v_pallet_integer.
        v_num_pallets_int = v_num_pallets - '.499'.
      endif.
      v_qty_not_pallets = v_lips-lgmng -
                          ( v_num_pallets_int * v_pallet_integer ).
      if v_case_integer > 0.
        v_num_cases     = v_qty_not_pallets / v_case_integer.
        v_num_cases_int = v_num_cases - '.499'.
      endif.
      v_total_case_qty     = v_num_cases_int * v_case_integer.
      if v_qty_not_pallets = v_total_case_qty.
        v_loose_qty = 0.
      else.
        v_loose_qty = 1.
      endif.
    * populate output tables
      clear: v_output_dt, v_output_ag, v_output_gs.             " SIR 34606
      v_output_dt-wadat_ist   = v_likp-wadat_ist.
      v_output_dt-ktgrd       = v_knvv-ktgrd.
      v_output_ag-ktgrd       = v_knvv-ktgrd.
      v_output_gs-ktgrd       = v_knvv-ktgrd.                   " SIR 34606
      v_output_dt-kunnr = v_kna1-kunnr.                         " SIR 34606
    * V_OUTPUT_AG-KUNNR = V_KNA1-KUNNR.                         " SIR 34606
      v_output_dt-land1 = v_kna1-land1.                         " SIR 34606
      v_output_ag-land1 = v_kna1-land1.                         " SIR 34606
    *-------------------------------------------------* Begin of SIR 061880
    *  V_OUTPUT_DT-BU          = V_LIPS-PRODH+1(2).
    *  V_OUTPUT_AG-BU          = V_LIPS-PRODH+1(2).
    *  V_OUTPUT_GS-BU          = V_LIPS-PRODH+1(2).             " SIR 34606
    * Populate Business Unit, Line of Business and PAC1 Values for Item Data.
      perform get_busunit_lobus_pac1_data2.
      if v_output_dt-bu is initial.
        clear v_output_dt.
        exit.
      endif.
    *---------------------------------------------------* End of SIR 061880
    * get delivery weight from delivery lines instead of header " SIR 45567
      if v_lips-gewei = c_uom.
        v_output_dt-d_btgew  = v_lips-brgew.
        v_output_ag-d_btgew  = v_lips-brgew.
        v_output_gs-d_btgew  = v_lips-brgew.                    " SIR 34606
      else.
        perform z_unit_conversion
             using v_lips-brgew v_lips-gewei c_uom v_palwto.
        v_output_dt-d_btgew  = v_palwto.
        v_output_ag-d_btgew  = v_palwto.
        v_output_gs-d_btgew  = v_palwto.                        " SIR 34606
      endif.
    * Get material weight from delivery (LIPS)(IF P_DELWT = 'X')" SIR 38784
      if p_delwt = 'X'.                                         " SIR 38784
        if v_lips-gewei = c_uom.
          v_output_dt-m_brgew  = v_lips-brgew.
          v_output_ag-m_brgew  = v_lips-brgew.
          v_output_gs-m_brgew  = v_lips-brgew.                  " SIR 34606
        else.
          perform z_unit_conversion
               using v_lips-brgew v_lips-gewei c_uom v_palwto.
          v_output_dt-m_brgew  = v_palwto.
          v_output_ag-m_brgew  = v_palwto.
          v_output_gs-m_brgew  = v_palwto.                      " SIR 34606
        endif.
      else.                                                     " SIR 38784
    * Get material weight from Master data (MARA)               " SIR 38784
        read table i_mara1 into v_mara                          " SIR 38784
                               with key matnr = v_lips-matnr    " SIR 38784
                               binary search.                   " SIR 38784
        if v_mara-gewei = c_uom.                                " SIR 38784
          v_output_dt-m_brgew  = v_mara-brgew * v_lips-lgmng.   " SIR 38784
          v_output_ag-m_brgew  = v_mara-brgew * v_lips-lgmng.   " SIR 38784
          v_output_gs-m_brgew  = v_mara-brgew * v_lips-lgmng.   " SIR 38784
        else.                                                   " SIR 38784
          perform z_unit_conversion                             " SIR 38784
               using v_mara-brgew v_mara-gewei c_uom v_palwto.  " SIR 38784
          v_output_dt-m_brgew  = v_palwto * v_lips-lgmng.       " SIR 38784
          v_output_ag-m_brgew  = v_palwto * v_lips-lgmng.       " SIR 38784
          v_output_gs-m_brgew  = v_palwto * v_lips-lgmng.       " SIR 38784
        endif.                                                  " SIR 38784
      endif.                                                    " SIR 38784
    *-------------------------------------------------* Begin of SIR 061880
    * To Get the Volume Data.
      perform get_volume_data.
    *---------------------------------------------------* End of SIR 061880
      v_output_dt-num_pallets = v_num_pallets_int.
      v_output_ag-num_pallets = v_num_pallets_int.
      v_output_gs-num_pallets = v_num_pallets_int.              " SIR 34606
      v_output_dt-num_cases   = v_num_cases_int.
      v_output_ag-num_cases   = v_num_cases_int.
      v_output_gs-num_cases   = v_num_cases_int.                " SIR 34606
      v_output_dt-num_loose   = v_loose_qty.
      v_output_ag-num_loose   = v_loose_qty.
      v_output_gs-num_loose   = v_loose_qty.                    " SIR 34606
    *{   INSERT         D11K901833                                        1
    * Sir 054042/48712 insert code to put in 0 instead of 1 for collect
    * value for delivery lines
      if v_lips-lgmng = 0.
        v_output_dt-num_delln   = 0.
        v_output_ag-num_delln   = 0.
        v_output_gs-num_delln   = 0.
      else.
    * end of insert for 054042/48712
    *}   INSERT
        v_output_dt-num_delln   = 1.
        v_output_ag-num_delln   = 1.
        v_output_gs-num_delln   = 1.
      endif.
      collect v_output_dt into i_output_dt.
      collect v_output_ag into i_output_ag.
      collect v_output_gs into i_output_gs.                 " SIR 34606
    endform.                               " collect_item_data

    Since this is your first post, let me clue you in. This forum is meant for short, specific questions. For example, if you had asked how to improve one particular SELECT statement, I'm sure many people would have helped you.
    But you cannot seriously expect someone on this forum to spend his or her time reviewing such a long program. People normally get paid for that, you know.

  • Materialized view is not using the index

    Hi Friends,
    I have a materialized view for my sales details. I created a view on the materialized view and I am using it in my report. The performance is very slow. After a little research,
    I found that the indexes are not used when the MV is accessed through the SQL. I ran an explain plan on a very simple query against the view to prove that.
    Here is my simple query.
    select month_num, sum(qty) from v_mv_sales_table
    where month_num = 201301
    group by month_num;
    When I check with count(*) instead, it uses the index and performance is good.
    select month_num, count(*) from v_mv_sales_table
    where month_num = 201301
    group by month_num;
    Then I tried using a hint to force the index. Again it works only for COUNT, not for SUM.
    select /* +INDEX(tr, IDX2_MV_SALES_TABLE) */ month_num, count(*)
    from v_mv_sales_table tr
    where month_num = 201301
    group by month_num;
    Is there any way I can force my SQL to use the index on the month_num column?
    I also have a question about using a hint to force it.
    My indexes are on the materialized view, but I am putting the hint in a query on the view built over the Mat View.
    Is it okay to use a hint like that?
    You may ask why I am using a view instead of the Mat View directly. At a later stage we are planning to apply some user-level restrictions to the data
    by joining the view to some other tables/views, so I am forced to use the view instead of the MV.
    Any ideas or suggestions are welcome.
    Thanks in advance,
    Vimal....
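
    A likely reason COUNT(*) uses the index while SUM(qty) does not: if the index contains only month_num, COUNT(*) can be answered from the index alone, but SUM(qty) has to visit the table for every matching row to fetch qty, and the optimizer may decide a full table scan is cheaper than all those extra table accesses. A composite index covering both columns removes the table visits entirely. Here is a sketch; only month_num and qty come from the post, while the underlying MV name mv_sales_table and the index name are assumptions:
    -- Hypothetical names: mv_sales_table and idx_mv_sales_mn_qty are
    -- assumptions; month_num and qty are taken from the question.
    -- With both columns in the index, SUM(qty) can be computed from an
    -- INDEX RANGE SCAN alone, with no TABLE ACCESS step, just like the
    -- COUNT(*) query.
    create index idx_mv_sales_mn_qty
        on mv_sales_table (month_num, qty);

    select month_num, sum(qty)
      from v_mv_sales_table
     where month_num = 201301
     group by month_num;
    After creating the index, regather statistics with dbms_stats.gather_table_stats so the optimizer can cost the new access path.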

    Here's an example:
    SQL> set autot off
    SQL>
    SQL> create table t
      2  (dt date)
      3  /
    Table created.
    SQL>
    SQL> insert into t
      2  select trunc (sysdate) + rownum
      3    from all_objects
      4  /
    25707 rows created.
    SQL>
    SQL>
    SQL> create index tidx on t (trunc (dt))
      2  /
    Index created.
    SQL>
    SQL> begin
      2     dbms_stats.gather_table_stats (user, 'T');
      3  end;
      4  /
    PL/SQL procedure successfully completed.
    SQL>
    SQL>
    SQL> set autot trace expl
    SQL> select *
      2    from t
      3   where dt = to_date ('24-07-2009', 'dd-mm-yyyy')
      4  /
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=1 Bytes=8)
       1    0   TABLE ACCESS (FULL) OF 'T' (Cost=6 Card=1 Bytes=8)
    SQL>
    SQL> select *
      2    from t
      3   where dt = trunc (sysdate)
      4  /
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=1 Bytes=8)
       1    0   TABLE ACCESS (FULL) OF 'T' (Cost=6 Card=1 Bytes=8)
    SQL>
    SQL> select *
      2    from t
      3   where trunc (dt) = trunc (sysdate)
      4  /
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=8)
       1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=2 Card=1 Bytes=
              8)
       2    1     INDEX (RANGE SCAN) OF 'TIDX' (NON-UNIQUE) (Cost=1 Card=1
    SQL>
    SQL>
    SQL> drop table t
      2  /
    Table dropped.
    SQL>
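
    Two notes on the hinted query above. First, as posted the hint begins with "/* +INDEX"; Oracle only treats a comment as a hint when it opens with exactly /*+ (no space after /*), so that hint is silently ignored as an ordinary comment. Second, because the index lives on the materialized view while the query references a view built on top of it, a hint that names only the view alias cannot reach the MV; Oracle's global hint form uses dot notation to name the table through the view. A sketch, assuming v_mv_sales_table is a simple select over an MV named mv_sales_table (that underlying name is an assumption):
    -- Global (dot-notation) hint: tr is the view alias in this query,
    -- mv_sales_table is the object referenced inside the view, and
    -- idx2_mv_sales_table is the index name from the post.
    -- Note the hint opener is exactly /*+ with no space.
    select /*+ INDEX(tr.mv_sales_table idx2_mv_sales_table) */
           month_num, count(*)
      from v_mv_sales_table tr
     where month_num = 201301
     group by month_num;
    Even with a valid hint, the optimizer will only use the index if it can actually satisfy the query, which is why the covering-index idea above matters for the SUM version.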
