Spatial Performance Problem

I'm new to GIS development, and here is a performance comparison I ran.
Technology used -> time taken to display the whole map in one request:
ESRI SDE + Oracle Spatial: 6 min
ESRI SDE + Oracle: 1 min 20 sec
ESRI SDE + SQL Server: 50 sec
ESRI SDE + Shapefile: 30 sec
Shapefile size: 550 MB, about 480k rows in total
Can anyone tell me why SDE + Oracle Spatial performs so much worse than SDE + SQL Server, and what I can do to improve performance when using just SDE + Oracle (purely for storage, like a normal RDBMS)?
I don't think Oracle should perform worse than SQL Server in an SDE environment. But what should I pay attention to when using Oracle to store map data for SDE?
Is it not a normal scenario to use SDE with Oracle Spatial? Why did I get such bad performance? I just migrated the data from shapefiles to the SDO_GEOMETRY type and added the indexes that Spatial requires.
Please leave a comment if you know something about this.
Thanks in advance.

Here is a partitioning example as promised. Note an upcoming talk at Oracle Open World will show how to extend this model to include spatial partitioning, where we will describe some really important optimizations that will bring scalability (already a differentiator) to new levels. After Oracle Open World we'll post that information to OTN as well (please come talk to us if you happen to go).
-- First, remove table partition_sales if it exists
drop table partition_sales;
-- Create table partition_sales. This table is partitioned
-- by date range, breaking each year into 4 quarters.
-- What users see is a single table called partition_sales
-- but in reality there is one smaller table created for
-- each partition
-- For simplicity we are loading all partitions into the same
-- tablespace. Most often they are loaded into separate tablespaces
-- for performance, scalability, and manageability reasons
CREATE TABLE partition_sales
( customer_id number,
sale_date date,
sale_amount number(10,2),
geom mdsys.sdo_geometry)
PARTITION BY RANGE (sale_date)
(PARTITION BEFORE_2003
VALUES LESS THAN (TO_DATE('01-JAN-2003','DD-MON-YYYY')),
PARTITION Q1_2003 VALUES LESS THAN (TO_DATE('01-APR-2003','DD-MON-YYYY')),
PARTITION Q2_2003 VALUES LESS THAN (TO_DATE('01-JUL-2003','DD-MON-YYYY')),
PARTITION Q3_2003 VALUES LESS THAN (TO_DATE('01-OCT-2003','DD-MON-YYYY')),
PARTITION Q4_2003 VALUES LESS THAN (TO_DATE('01-JAN-2004','DD-MON-YYYY')),
PARTITION Q1_2004 VALUES LESS THAN (TO_DATE('01-APR-2004','DD-MON-YYYY')),
PARTITION Q2_2004 VALUES LESS THAN (TO_DATE('01-JUL-2004','DD-MON-YYYY')),
PARTITION Q3_2004 VALUES LESS THAN (TO_DATE('01-OCT-2004','DD-MON-YYYY')),
PARTITION Q4_2004 VALUES LESS THAN (TO_DATE('01-JAN-2005','DD-MON-YYYY')),
PARTITION AFTER_2004
VALUES LESS THAN (MAXVALUE));
-- Each of these inserts goes into a separate partition.
-- Although there are only 9 rows inserted, each into a separate
-- partition, if you had 50 million rows they would be divided into
-- smaller, more manageable pieces
insert into partition_sales values (61243, '31-DEC-2002', 18764.23,
mdsys.sdo_geometry(2001, 8307,
mdsys.sdo_point_type(-73.943849, 40.66980, NULL), NULL, NULL) );
insert into partition_sales values (4576, '15-JAN-2003', 27797.05,
mdsys.sdo_geometry(2001, 8307,
mdsys.sdo_point_type(-71.017892, 42.336029, NULL), NULL, NULL) );
insert into partition_sales values (1161, '22-MAY-2003', 16222.50,
mdsys.sdo_geometry(2001, 8307,
mdsys.sdo_point_type(-76.610616, 39.30080, NULL), NULL, NULL) );
insert into partition_sales values (55033, '09-SEP-2003', 1211.33,
mdsys.sdo_geometry(2001, 8307,
mdsys.sdo_point_type(-77.016167, 38.90505, NULL), NULL, NULL) );
insert into partition_sales values (768, '15-DEC-2003', 84397.61,
mdsys.sdo_geometry(2001, 8307,
mdsys.sdo_point_type(-73.943851, 40.66975, NULL), NULL, NULL) );
insert into partition_sales values (61243, '05-JAN-2004', 21764.26,
mdsys.sdo_geometry(2001, 8307,
mdsys.sdo_point_type(-73.943849, 40.66980, NULL), NULL, NULL) );
insert into partition_sales values (6474, '17-JUN-2004', 76411.81,
mdsys.sdo_geometry(2001, 8307,
mdsys.sdo_point_type(-85.256956, 35.066209, NULL), NULL, NULL) );
insert into partition_sales values (505, '27-JUL-2004', 104100.89,
mdsys.sdo_geometry(2001, 8307,
mdsys.sdo_point_type(-74.173245, 40.7241, NULL), NULL, NULL) );
insert into partition_sales values (9151, '11-OCT-2004', 44298.66,
mdsys.sdo_geometry(2001, 8307,
mdsys.sdo_point_type(-79.976702, 40.43920, NULL), NULL, NULL) );
commit;
-- Let's make sure what we thought happened actually did happen
-- The following allows us to see how many rows are stored in each partition
exec dbms_stats.gather_table_stats('SCOTT','PARTITION_SALES');
select partition_name,num_rows
from user_tab_partitions
where table_name='PARTITION_SALES';
-- Now the data is loaded, we can spatially index it
-- First, insert into user_sdo_geom_metadata
delete from user_sdo_geom_metadata where table_name = 'PARTITION_SALES'
and column_name = 'GEOM';
insert into user_sdo_geom_metadata values ('PARTITION_SALES', 'GEOM',
mdsys.sdo_dim_array (
mdsys.sdo_dim_element ('LONG', -180, 180, 1),
mdsys.sdo_dim_element ('LAT', -90, 90, 1)),
8307);
-- Now create the index. Note the keyword local is used
-- which creates a separate index for each partition
-- Because this example is simple, we didn't show storage
-- of each spatial index partition in a separate tablespace,
-- which is the common usage, again for performance, scalability,
-- and manageability.
drop index partition_sales_sidx;
create index partition_sales_sidx on partition_sales (geom)
indextype is mdsys.spatial_index parameters ('layer_gtype=point')
local;
-- Any query that uses the partition key (the date) will only search the
-- partition associated with that date:
select customer_id, sale_amount
from partition_sales
where sale_date = '17-JUN-2004'
and sdo_relate(geom,
mdsys.sdo_geometry(2003, 8307, null, mdsys.sdo_elem_info_array(1,1003,1),
mdsys.sdo_ordinate_array(-86,34, -85,34, -85,36, -86,36, -86,34)),
'mask=anyinteract querytype=window') = 'TRUE';
-- The previous query only touched a single partition.
-- You don't have to specify the partition key, though.
-- All partitions can be searched if required. To speed up
-- these kinds of queries, you can search all partitions in
-- parallel by specifying the keyword parallel in the
-- create index statement, or by altering the index:
alter index partition_sales_sidx parallel 4;
-- No date specified in this query, so all partitions may be searched
select customer_id, sale_amount, sale_date
from partition_sales
where sdo_relate(geom,
mdsys.sdo_geometry(2003, 8307, null, mdsys.sdo_elem_info_array(1,1003,1),
mdsys.sdo_ordinate_array(-75,38, -70,38, -70,45, -75,45, -75,38)),
'mask=anyinteract querytype=window') = 'TRUE';
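-- A quick, hedged sanity check (a sketch, not part of the original script): confirm that
-- the date predicate really prunes to a single partition by looking at the plan, which
-- should show PARTITION RANGE SINGLE rather than PARTITION RANGE ALL.
explain plan for
select customer_id, sale_amount
from partition_sales
where sale_date = TO_DATE('17-JUN-2004','DD-MON-YYYY');
select * from table(dbms_xplan.display());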

Similar Messages

  • Oracle Spatial Performance with 10-20,000 users

    Does anyone have any experience with Oracle Spatial being used by, say, 20,000 concurrent users? I am not interested in MapViewer response time, but let's say there is:
    - an app using 800 different tables each having an sdo_geometry column
    - the app is configured with different tables visible on different view scales
    - let's say an average of 40-50 tables is visible at any given time
    - some tables will have only a few records, while other can hold millions.
    - there is no client side caching
    - clients can zoom in/out and pan.
    Answers I am interested in:
    - What sort of server would be required
    - How can Oracle serve all that data (each Refresh renders the map and retrieves the data over the wire as there is no client side caching).
    - What sort of network infrastructure would be required.
    - Can clients connect to different servers and hence use load balancing or does Oracle have an automatic mechanism for that?
    Thanks in advance,
    Patrick

    Patrick, et al.
    There are lots of things one can do to improve performance in mapping environments, because a lot of the visualisation is based on "background" or read-only data. Here are some tips:
    1. Spatially sort read-only data.
    This tip makes sure that data that is close together in space is also next to each other on disk! Dan gave a good suggestion when he referenced Chapter 14, "Reorganize the Table Data to Minimize I/O" pp 580-582, Pro Oracle Spatial. But just as easily one can do a create table as select ... where sdo_filter(...), where the filtering object is an optimized rectangle across the whole of the dataset. (This is quite quick on 10g and above but much slower on earlier releases.)
    When implementing this make sure that the created table is created such that its blocks are next to each other in the tablespace. (Consider tablespace defragmentation beforehand.) Also, if the data is READ ONLY set the PCTFREE to 0 in order to pack the data up into as small a number of blocks as possible.
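    A minimal sketch of that create-table-as-select approach (the table, column and extent values are placeholders; adjust for your own layer and SRID):
    -- Rewrite a read-only layer so rows that are close in space are stored close together.
    -- The optimized rectangle should cover the whole dataset; PCTFREE 0 packs the blocks.
    create table roads_sorted pctfree 0 as
    select *
    from   roads r
    where  sdo_filter(r.geom,
             mdsys.sdo_geometry(2003, &layer_srid, null,
               mdsys.sdo_elem_info_array(1,1003,3),
               mdsys.sdo_ordinate_array(&xmin,&ymin, &xmax,&ymax))) = 'TRUE';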
    2. Generalise data
    Rendering spatial data can be expensive where the data is geometrically detailed (many vertices), especially where the data is being visualised at smaller scales than it was captured at. So, if your "zoom thresholds" allow 1:10,000 data to be used at 1:100,000 then you are going to have problems. Consider pre-generalising the data (see sdo_util.simplify) before deployment. You can add multiple columns to your base table to hold this data. Be careful with polygon data, because generalising polygons that share boundaries will create gaps etc. as the data is more generalised. Often it is better to export the data to a GIS which can maintain the boundary relationships when generalising (say via topological relationships).
    Oracle's MapViewer has excellent on-the-fly generalisation but here one needs to be careful. Application tier caching (cf Bryan's comments) can help here a lot.
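    A hedged sketch of pre-generalising into an extra column (the names and the 50-unit threshold are placeholders; pick the threshold to suit the smallest display scale):
    alter table roads add (geom_100k mdsys.sdo_geometry);
    update roads r
    set    r.geom_100k = sdo_util.simplify(r.geom, 50);
    commit;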
    3. Don't draw data that is sub-pixel.
    As one zooms out objects become smaller and smaller until they reach a point where the whole object can be drawn within a single pixel. If you have control over your map visualisation application you might want to consider setting the SDO_FILTER parameter "min_resolution" flag dynamically so that its value is the same as the number of meters / pixel (eg min_resolution=10). If this is set Oracle Spatial will only include spatial objects in the returned search set if one side of a geometry's MBR is greater than or equal to this value. Thus any geometries smaller than a pixel will not be returned. Very useful for large scale data being drawn at small scales and for which no selection (eg identify) is required. With Oracle MapViewer this behaviour can be set via the generalized_pixels parameter.
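    For example (a sketch; the bind window and the 10 units-per-pixel figure are placeholders):
    select r.id, r.geom
    from   roads r
    where  sdo_filter(r.geom, :map_window,
                      'min_resolution=10') = 'TRUE';
    -- geometries whose MBR has no side >= 10 units are simply not returned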
    4. SDO_TOLERANCE, Clean Data
    If you are querying data other than via MBR (eg find all land parcels that touch each other) then make sure that your sdo_tolerance values are appropriate. I have seen sites where data captured to 1cm had an sdo_tolerance value set to a millionth of a meter!
    A corollary to this is make sure that all your data passes validation at the chosen sdo_tolerance value before deploying to visualisation. Run sdo_geom.validate_geometry()/validate_layer()...
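    A minimal sketch of such a layer-level check (the table, column and 0.005 tolerance are placeholders):
    create table parcels_val (sdo_rowid rowid, result varchar2(2000));
    exec sdo_geom.validate_layer_with_context('PARCELS', 'GEOM', 'PARCELS_VAL', 0.005);
    select result, count(*) from parcels_val group by result;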
    5. RTree Spatial Indexing
    At 10g and above lots of great work went in to the RTree indexing. So, make sure you are using RTrees and not QuadTrees. Also, many GIS applications create sub-optimal RTrees by not using the additional parameters available at 10g and above.
    5.1 If your table/column sdo_geometry data contains only points, lines or polygons then let the RTree indexer know (via layer_gtype) as it can implement certain optimizations based on this knowledge.
    5.2 With 10g you can set the RTree's spatial index data block use via sdo_pct_free. Consider setting this parameter to 0 if the table/column sdo_geometry data is read only.
    5.3 If a table/column is in high demand (eg it is the most commonly used table in all visualisations) you can consider loading (a part of) the RTree index into memory. Now, with the RTree indexing, the sdo_non_leaf_tbl=true parameter will split the RTree index into its leaf (contains actual rowid reference) and non-leaf (the tree built on the leaves) components. Most RTrees are built without this so only the MDRT*** secondary tables are built. But if sdo_non_leaf_tbl is set to true you will see the creation of an additional MDNT*** secondary table (for the non_leaf part of the rtree index). Now, if appropriate, the non_leaf table can be loaded into memory via the following:
    ALTER TABLE MDNT*** STORAGE(BUFFER_POOL KEEP);
    This is NOT a general panacea for all performance problems. One should investigate other options before embarking on this (cf Tom Kyte's books such as Expert Oracle Database Architecture, 9i and 10g Programming Techniques and Solutions.)
    5.4 Don't forget to check your spatial index data quality regularly. Because many sites use GIS package GUI tools to create tables, load data and index them, there is a real tendency to not check what they have done or regularly monitor the objects. Check the SDO_RTREE_QUALITY column in USER_SDO_INDEX_METADATA and look for indexes with an SDO_RTREE_QUALITY setting that is > 2. If > 2 consider rebuilding or recreating the index.
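    A short sketch pulling the layer_gtype, sdo_pct_free and index-quality points together (index, table and column names are placeholders):
    create index parcels_sidx on parcels(geom)
      indextype is mdsys.spatial_index
      parameters ('layer_gtype=POLYGON sdo_pct_free=0');
    -- periodic health check: anything over 2 is a candidate for a rebuild
    select sdo_index_name, sdo_rtree_quality
    from   user_sdo_index_metadata
    where  sdo_rtree_quality > 2;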
    6. The rendering engine.
    Whatever rendering engine one uses make sure you try and understand fully what it can and cannot do. AutoDesk's MapGuide is an excellent product but I have seen it simply cache table/column data and never dynamically access it. Also, I have been at one site which was running Deegree and MapViewer and MapViewer was so fast in comparison to Deegree that I was called in to find out why. I discovered that Deegree was using SDO_RELATE(... ANYINTERACT ...) for all MBR queries while MapViewer was using SDO_FILTER. Just this difference was causing some queries to perform at < 10% of the speed of MapViewer!!!!
    7. Consider "denormalising" data
    There is an old adage in databases that is "normalise for edit, denormalise for performance". When we load spatial data we often get it from suppliers in a fairly flat or normalised form. In concert with spatial sorting, consider denormalising the data via aggregations based on a rendering attribute and some sort of spatial unit. For example, if you have 1 million points stored as single points in SDO_GEOMETRY.SDO_POINT which you want to render by a single attribute containing 20 values, consider aggregating the data using this attribute AND some sort of spatial BUCKET or BIN. So, consider using SDO_AGGR_UNION coupled with Spatial Analysis and Mining package functions to GROUP the data BY <<column_name>> and a set of spatial extents.
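    A hedged sketch of that kind of aggregation (the category column and the precomputed bin_id bucket are placeholders):
    create table points_binned as
    select p.category, p.bin_id,
           sdo_aggr_union(mdsys.sdoaggrtype(p.geom, 0.005)) as geom
    from   points p
    group  by p.category, p.bin_id;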
    8. Tablespace use
    Finally, talk to your DBA in order to find out how the oracle database's physical and logical storage is organised. Is a SAN being used or SAME arranged disk arrays? Knowing this you can organise your spatial data and indexes using more effective and efficient methods that will ensure greater scalability.
    9. Network fetch
    If your rendering engine (app server) and database are on separate machines you need to investigate what sort of fetch sizes are being used when returning data from queries to the middle tier. Fetch sizes for attribute-only data rows and rows containing spatial data can be, and normally are, radically different. Accepting the default settings for these sizes could be killing you (as could the sort_area_size of the Oracle session the application server has created on the database). For example, I have been informed that MapInfo Pro uses a fixed value of 25 records per fetch when communicating with Oracle. I have done some testing to show that this value can be too small for certain types of spatial data. SQL Developer's GeoRaptor uses 100, which is generally better (but one can modify this). Most programmers accept defaults for network properties when programming in ADO/ODBC/OLEDB/JDBC: just be careful as to what is being set here. (This is one of the great strengths of ArcSDE: its TCP/IP network transport is well written, tuneable and very efficient.)
    10. Physical Format
    Finally, while Oracle's excellent MapViewer requires its spatial data to be in Oracle, other commercial rendering engines do not. So, consider using alternate physical file formats that are more optimal for your rendering engine. For example, Google Earth Enterprise "compiles" all the source data into an optimal format which the server then serves to Google Earth Enterprise clients. Similarly, a shapefile on disk local to the application server (with spatial indexing) may be faster than storing the data back in Oracle on a database server that is being shared with other business databases (eg Oracle Financials). If you don't like this approach and want to use Oracle only, consider using a dedicated Oracle XE on the application server for the data that is read only and used in most of your generated maps, eg contour or drainage data.
    Just some things to think about.
    regards
    Simon

  • Performance problem with sdo_nn - new 10g install

    I am having a performance problem with sdo_nn after migrating to a new machine. The old Oracle version was 9.0.1.4; the new one is 10g. The new machine is faster in general, and most (non-spatial) batch processes run in half the time. However, the statement below is radically slower: it ran in 45 minutes before, but on the new machine it hasn't finished after 10 hours. I am able to get a 5% sample of the customers to finish in 45 minutes.
    Does anyone have any ideas on how to approach this problem? Any chance something isn't installed correctly on the new machine (the nth version of the query finished, albeit 20 times slower)?
    Appreciate any help. Thanks.
    - Jack
    create table nearest_store
    as
    select /*+ ordered */
    a.customer_id,
    b.store_id nearest_store,
    round(mdsys.sdo_nn_distance(1),4) distance
    from customers a,
    stores b
    where mdsys.sdo_nn(
    b.geometry,
    a.geometry,
    'sdo_num_res=1, unit=mile',
    1
    ) = 'TRUE'
    ;

    Dan,
    customers 110,000 (involved in this query)
    stores 28,000
    Here is the execution plan on the current machine:
    CREATE TABLE STATEMENT cost = 81947
    LOAD AS SELECT
    PX COORDINATOR
    PX SEND QC (RANDOM) :TQ10000
    ROW NESTED LOOPS
    1 1 PX BLOCK ITERATOR
    1 1ROW TABLE ACCESS FULL CUSTOMERS
    1 3 PARTITION RANGE ALL
    1 3 TABLE ACCESS BY LOCAL INDEX ROWID STORES
    DOMAIN INDEX STORES_SIDX
    I can't capture the execution plan on the old database. It is gone. I don't remember it being any different from the above (full scan customers, probe stores index once for each row in customers).
    I am trying the query without the create table (just doing a count). I'll let you know on that one.
    I am at 10.0.1.3.
    Here is how I created the index:
    create index stores_sidx
    on stores(geometry)
    indextype is mdsys.spatial_index LOCAL
    Note that the stores table is partitioned by range on store type. There are three store types (each in its own partition). The query returns the nearest store of each type (three rows per customer). This is by design (based on the documented behavior of sdo_nn).
    In addition to running the query without the CTAS, I am also going to try running it on a different machine tonight. I'll let you know how that turns out.
    The reason I ask about the install, is that the Database Quick Installation Guide for Solaris says this:
    "If you intend to use Oracle JVM or Oracle interMedia, you must install the Oracle Database 10g Products installation type from the Companion CD. This installation optimizes the performance of those products on your system."
    And, the Database Installation Guide says:
    "If you plan to use Oracle JVM or Oracle interMedia, Oracle strongly recommends that you install the natively compiled Java libraries (NCOMPs) used by those products from the Oracle Database 10g Companion CD. These libraries are required to improve the performance of the products on your platform."
    Based on that, I am suspicious that maybe I have the product installed on the new machine, but not correctly (forgot to set fast=true).
    Let me know if you think of anything else you'd like to see. Thanks.
    - Jack
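    One quick, hedged check of whether the relevant components on the new machine are at least valid (a sketch, not something suggested in the thread):
    select comp_name, version, status
    from   dba_registry
    where  comp_name in ('Spatial', 'JServer JAVA Virtual Machine', 'Oracle interMedia');
    -- every row should report STATUS = 'VALID'; anything else points back at the install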

  • PL/SQL Performance problem

    I am facing a performance problem with my current application (a PL/SQL packaged procedure).
    My application takes data from 4 temporary tables, does a lot of validation, and
    puts them into permanent tables (updates if present, else inserts).
    One of the temporary tables is the parent table, and each parent row can have 0 or more rows in
    the other tables.
    I have analyzed all my tables and indexes and checked all my SQLs
    They all seem to be using the indexes correctly.
    There are 1.6 million records combined in all 4 tables.
    I am using Oracle 8i.
    How do I determine what is causing the problem and which part is taking the time?
    Please help.
    The skeleton of the code which we have written looks like this
    MAIN LOOP ( 255308 records)-- Parent temporary table
    -----lots of validation-----
    update permanent_table1
    if sql%rowcount = 0 then
    insert into permanent_table1
    Loop2 (0-5 records)-- child temporary table1
    -----lots of validation-----
    update permanent_table2
    if sql%rowcount = 0 then
    insert into permanent_table2
    end loop2
    Loop3 (0-5 records)-- child temporary table2
    -----lots of validation-----
    update permanent_table3
    if sql%rowcount = 0 then
    insert into permanent_table3
    end loop3
    Loop4 (0-5 records)-- child temporary table3
    -----lots of validation-----
    update permanent_table4
    if sql%rowcount = 0 then
    insert into permanent_table4
    end loop4
    -- COMMIT after every 3000 records
    END MAIN LOOP
    Thanks
    Ashwin N.

    Do this instead of ditching the PL/SQL.
    DECLARE
    TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
    TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
    pnums NumTab;
    pnames NameTab;
    t1 NUMBER(5);
    t2 NUMBER(5);
    t3 NUMBER(5);
    BEGIN
    FOR j IN 1..5000 LOOP -- load index-by tables
    pnums(j) := j;
    pnames(j) := 'Part No. ' || TO_CHAR(j);
    END LOOP;
    t1 := dbms_utility.get_time;
    FOR i IN 1..5000 LOOP -- use FOR loop
    INSERT INTO parts VALUES (pnums(i), pnames(i));
    END LOOP;
    t2 := dbms_utility.get_time;
    FORALL i IN 1..5000 -- use FORALL statement
    INSERT INTO parts VALUES (pnums(i), pnames(i));
    t3 := dbms_utility.get_time;
    dbms_output.put_line('Execution Time (secs)');
    dbms_output.put_line('---------------------');
    dbms_output.put_line('FOR loop: ' || TO_CHAR(t2 - t1));
    dbms_output.put_line('FORALL: ' || TO_CHAR(t3 - t2));
    END;
    Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723
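    Applied to the skeleton above, a hedged sketch (table and column names are placeholders, and on 8i only the simpler FORALL forms are available):
    DECLARE
      TYPE id_tab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
      v_ids id_tab;
    BEGIN
      -- read the parent temporary table once instead of row by row
      SELECT parent_id BULK COLLECT INTO v_ids FROM temp_parent;
      -- one bulk-bound statement instead of 255,308 single-row updates
      FORALL i IN 1..v_ids.COUNT
        UPDATE permanent_table1 SET processed_flag = 'Y' WHERE parent_id = v_ids(i);
      -- rows that updated nothing still need the insert branch: collect those keys
      -- separately and FORALL-insert them (on 9i and later a single MERGE can do both).
      COMMIT;
    END;
    /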

  • Performance problem in Zstick report...

    Hi Experts,
    I am facing a performance problem in a custom stock report of Material Management.
    In this report I am fetching all the materials with their batches to get the desired output; in a single run this report processes 36,000-plus unique combinations of material and batch.
    This report takes around 30 minutes to execute, and it needs to be viewed regularly, every 2 hours.
    To read the batch characteristic values I am using FM -> '/SAPMP/CE1_BATCH_GET_DETAIL'.
    Is there any way to increase the performance of this report? The output of the report is in ALV.
    Can I have a refresh button in the report so that the data gets refreshed automatically without executing it again, or is there any caching concept?
    Note: I have declared all the itabs as sorted tables, and all the select queries fetch using key and index.
    Thanks
    Rohit Gharwar

    Hello,
    SE30 is old. Switch on a trace in ST12 while running this program and identify where exactly most of the time is being spent. If you see high CPU time, the problem is in the ABAP code, and you can pinpoint the exact program/function module from the ST12 trace. If you see high database time in ST12, the problem is database related, so you basically have to analyze the SQL statements from the performance traces in ST12. This should resolve your issue.
    Yours Sincerely
    Dileep

  • SQL report performance problem

    I have a SQL classic report in Apex 4.0.2 and database 11.2.0.2.0 with a performance problem.
    The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
    The generated query runs in 1-2 sec in sqlplus (logged in as the Apex parsing schema user), but takes many minutes in Apex. I have found, by monitoring the database sessions via TOAD, that the explain plan in the Apex and sqlplus sessions are very different.
    The summary:
    In sqlplus: SELECT STATEMENT ALL_ROWS Cost: 3,695
    In Apex:    SELECT STATEMENT ALL_ROWS Cost: 3,108,551
    What could be the cause of this?
    I found a blog and Metalink note about different explain plans for different users. They suggested to set optimizer_secure_view_merging='FALSE', but that didn't help.

    Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
    Only the explain plan doesn't show anything.
    To add: I changed the report source to the query the pl/sql function would generate, so the selects are the same in SQL Workshop and in the application. Still in the application it's horribly slow.
    So, Apex does do something different in the application compared to SQL Workshop.
    Edited by: InoL on Aug 5, 2011 4:50 PM
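    One hedged way to see the plan each environment actually used (a sketch; it assumes access to the shared SQL area) is to read it from the cursor cache rather than relying on EXPLAIN PLAN:
    select sql_id, child_number, plan_hash_value, parsing_schema_name
    from   v$sql
    where  sql_text like '%<a distinctive fragment of the report query>%';
    select * from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));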

  • Performance problem with WPF Viewer CRVS2010

    Hi,
    We are using Crystal Reports 2010 and the new WPF Viewer. Last week when we set up a test machine to run our integration tests (several hundred) all report tests failed (about 30 tests) with a timeout exception.
    The testmachine setup:
    HP DL 580 G5
    WMWare ESXi 4.0
    Guest OS: Windows 7 Enterprise 64-bit
    Memory (guest OS): 3GB
    CPU: 1
    Visual Studio 2010
    Crystal Reports for Visual Studio 2010 with 64 bit runtime installed
    Visual Studio 2008 installed
    Microsoft Office 2010 installed
    McAfee antivirus
    There are about 10 other virtual machines on the same HW.
    I think the performance problem is related to text objects on a report document viewed in the WPF Viewer. I made a simple WPF GUI with 2 buttons; the first button executes a very simple report that only has a text object with a few words in it, and the other button is also a simple report with only 1 text object with approx. 100 words (about 800 characters).
    The first report executes and displays almost instantly, and the second report executes instantly but displays after approx. 1 min 30 sec.
    Execute in this context means that all the VB.Net code runs without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
    I did another test on the second report and replaced the text object with a formula field containing the same text as the text object, and this test executed and displayed the report instantly.
    So the performance problem seems to have something to do with rendering of text objects in the WPF Viewer on a virtual machine with the above setup.
    I've made several tests on local machines with Windows XP (32-bit) or Windows 7 (64-bit) installed and none of them have this performance problem. It's not a critical issue for us, because our users will run this application on their local PCs with Windows 7 64-bit, but it's a bit problematic for our project not being able to run all of our integration tests; I will probably solve this by using a local PC instead.
    Here is the VB.Net code I'm using to view the reports:
    Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
        Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
    The reports are 2 empty default reports with only 1 text object in the details section.
    // Thomas

    See if the KB [
    [1448013  - Connecting to Oracle database. Error; Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
    Also the following may not hurt to have a look at (if only for ideas):
    [1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
    [1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
    [1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup

  • JRC 2: Performance Problem

    Hi.
    Our reporting component used JRC 1.x before we upgraded to JRC 2.x. We got two issues after upgrading.
    The first issue I already solved with a workaround, which I published on stackoverflow.com (1). Does anyone know where I can find the issue management system to report this issue?
    The second issue is a big performance problem in our project. We opened a report with 6 subreports (each including 1 to 3 tables) in 2-4 seconds using JRC 1. If we open the same report using JRC 2, we wait up to 60 seconds.
    These methods require more time with JRC 2 compared to JRC 1:
    ReportClientDocument#open(String, int);
    SubreportController#setTableLocation(String, ITable, ITable)
    DatabaseController#setTableLocation(ITable, ITable)
    Each invocation of one of these methods requires 2-4 seconds.
    Thank you in advance.
    Best regards
    Thomas
    (1) http://stackoverflow.com/questions/479405/replace-a-database-connection-for-subreports-with-jrc

    hello ....
    my report is  ''crystal report 11'' => "OLE DB"  => "Add Command(select * from table) " .
    code(JRC) : eclipse + crystal report for eclipse version 2 =>  "cr4e-all-in-one-win_2.0.1.zip"
    <%@ page contentType="text/html; charset=UTF-8"
    import="
    com.crystaldecisions.report.web.viewer.CrystalReportViewer,
    com.crystaldecisions.reports.sdk.ReportClientDocument,
    com.crystaldecisions.sdk.occa.report.lib.ReportSDKExceptionBase,
    java.sql.Connection,
    java.sql.DriverManager,
    java.sql.ResultSet,
    java.sql.SQLException,
    java.sql.Statement" %>
    <%
         try {
              String reportName = "report.rpt";
              ReportClientDocument clientDoc = new ReportClientDocument();
              clientDoc.open(reportName, 0);
              String tableAlias = "Command";
              clientDoc.getDatabaseController().setDataSource(myResult("SELECT * FROM table"), tableAlias,tableAlias);
              CrystalReportViewer crystalReportPageViewer = new CrystalReportViewer();
              crystalReportPageViewer.setReportSource(clientDoc.getReportSource());
              crystalReportPageViewer.processHttpRequest(request, response, application, null);
         } catch (ReportSDKExceptionBase e) {
              e.printStackTrace();
             out.println(e);
          }
     %>
    I simplified the code; *myResult("SELECT * FROM table")* is absolutely not the problem,
    and this code works with absolutely no problem in "Crystal Reports for Eclipse" version 1,
    but in version 2 it fails with this error:
    com.crystaldecisions.sdk.occa.report.lib.ReportSDKException: u7121u6CD5u9810u671Fu7684u8CC7u6599u5EABu9023u7DDAu5668u932Fu8AA4---- Error code:-2147467259 Error code name:failed
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.if(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.a(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter$2.call(Unknown Source)
         at com.crystaldecisions.reports.common.ThreadGuard.syncExecute(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.for(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.int(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.request(Unknown Source)
         at com.businessobjects.sdk.erom.jrc.a.a(Unknown Source)
         at com.businessobjects.sdk.erom.jrc.a.execute(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.RemoteAgent$a.execute(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.CommunicationChannel.a(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.RemoteAgent.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.if(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportClientDocument.new(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.b9.onDataSourceChanged(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.DatabaseController.setDataSource(Unknown Source)
         at org.apache.jsp.No_005f1.Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp._jspService(Eclipse_005fJTDS_005fSQL2005_005fTable_002dviewer_jsp.java:106)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
         at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
         at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
         at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
         at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
         at java.lang.Thread.run(Unknown Source)
    Caused by: com.crystaldecisions.reports.common.QueryEngineException: u7121u6CD5u9810u671Fu7684u8CC7u6599u5EABu9023u7DDAu5668u932Fu8AA4
         at com.crystaldecisions.reports.queryengine.Connection.bf(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.z3(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.bL(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Rowset.zM(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Connection.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.if(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.try(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.a(Unknown Source)
         at com.crystaldecisions.reports.queryengine.Table.u7(Unknown Source)
         at com.crystaldecisions.reports.datafoundation.DataFoundation.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.dfadapter.DFAdapter.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.dfadapter.CheckDatabaseHelper.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.datafoundation.CheckDatabaseCommand.new(Unknown Source)
         at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
         at com.crystaldecisions.reports.common.Document.a(Unknown Source)
         at com.crystaldecisions.reports.dataengine.VerifyDatabaseCommand.new(Unknown Source)
         at com.crystaldecisions.reports.common.CommandManager.a(Unknown Source)
         at com.crystaldecisions.reports.common.Document.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.f.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.a(Unknown Source)
         at com.businessobjects.reports.sdk.requesthandler.DatabaseRequestHandler.if(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.do(Unknown Source)
         ... 39 more
    Please help me and tell me why....

  • Performance problem with S873,S691,S679,S716

    Hi All,
    I am using tables S873, S691, S679 and S716 in my report. Since these hold huge amounts of data, they are creating a performance problem. Right now I am selecting data from the tables with SELECT statements inside a LOOP ... ENDLOOP.
    If I try to take the selects out and use FOR ALL ENTRIES, it takes even longer.
    I don't have all the keys available and I am trying to use an index wherever possible.
    Any hints or comments are really appreciated.
    My code goes like this.
    This is not the full code, only the part where I am fetching data from the 'S' tables.
    loop at i_mvke.
    SELECT kunnr zzobklgq zzcbklgq
    FROM s679 INTO
    (s679-kunnr,s679-zzobklgq,s679-zzcbklgq) WHERE
    matnr EQ i_mvke-matnr AND
    vrsio EQ p_versn AND
    sptag IN s_billed AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    SELECT SUM( zznslqty ) FROM s691 INTO
    w_s691_net_bill_qty WHERE
    * Use index Z5
    matnr EQ i_mvke-matnr AND
    spmon EQ w_spmon AND
    vrsio EQ p_versn AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr."
    SELECT SUM( zznslqty ) SUM( zzninvnum ) FROM s873 INTO
    (work01-zznslqty, work01-zzninvnum) WHERE
    * use index Z02
    vkorg EQ i_mvke-vkorg AND
    werks EQ p_werks AND
    matnr EQ i_mvke-matnr AND
    sptag IN s_billed AND
    vrsio EQ p_versn AND
    kunnr IN s_kunnr.
    SELECT zcrdotov FROM s716 INTO s716-zcrdotov WHERE
    matnr EQ i_mvke-matnr AND
    sptag IN s_billed AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr AND
    werks = p_werks AND
    vrsio EQ p_versn.
    * GET THE RECORD TOTAL
    ADD 1 TO w_s716_total_count.
    * GET THE ONTIME TOTAL
    IF s716-zcrdotov = 1.
    ADD 1 TO w_s716_ontime_count.
    ENDIF.
    ENDSELECT.
    PERFORM fill_output_table.
    endloop. "i_mvke
    Message was edited by: Agasti Kale

    Hi,
    Wrong:
    SELECT kunnr zzobklgq zzcbklgq
    FROM s679 INTO
    (s679-kunnr,s679-zzobklgq,s679-zzcbklgq) WHERE
    matnr EQ i_mvke-matnr AND
    vrsio EQ p_versn AND
    sptag IN s_billed AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    Write this instead:
    SELECT kunnr zzobklgq zzcbklgq
    FROM s679 INTO CORRESPONDING FIELDS OF TABLE I_MVKE
    WHERE
    matnr EQ i_mvke-matnr AND
    vrsio EQ p_versn AND
    sptag IN s_billed AND
    werks EQ p_werks AND
    vkorg EQ i_mvke-vkorg AND
    kunnr IN s_kunnr.
    Make the changes accordingly in the select statements below. You have not posted the full report, otherwise I could have helped you with the other select statements as well.
    thanks
    mrutyun

  • Performance problem in ABAP code

    Hi guys,
    I created a report using tables like BSIS, T001 etc. (a tax report).
    I have a performance problem in this report.
    Could you please tell me how to analyse the report and find out where the process is taking more memory etc.?
    I did an ABAP trace and runtime analysis, but could not find the exact point.
    How do I do this?
    I want to analyse each subroutine, internal table and query.
    Could you please give me some ideas?
    ambichan

    There is an excellent tool available in SAP - <b>Code Inspector.
    </b>
    Transaction is SCII
    Try the following link and I am sure you will find a bunch of useful documents.
    <a href="http://www.google.co.in/search?hl=en&safe=off&q=site%3Asdn.sap.comfiletype%3ApdfCode+Inspector&btnG=Search&meta=">ABAP Performance</a>
    I use the Code Inspector to search for
    a) All the select statements which are present within the loop
    b) Nested Loops
    c) Select query without providing criteria for primary keys, depending upon situation
    d) Can the search be narrowed with extra conditions
    e) Using READ .. BINARY SEARCH if internal table has lots of records.
    The list is actually endless, but this is something to start with.
    You can actually keep a checklist and go through your code against it. The more you adhere to the checklist, the more you will find that performance improves dramatically.
    Also use <b>ST05</b> transaction, for SQL Trace and find out which select query is taking the maximum time for response.
    Regards,
    Subramanian V.

  • Report program Performance problem

    Hi All,
       One object is taking 30 hours to execute. Someone developed this in 1998, but now it is a big performance problem. Please can someone help with what to do? I am giving the code below.
    *--DOCUMENTATION--
    Programe written by :  31.03.1998 .
    Purpose : this programe updates the car status into the table zsdtab1
    This programe is to be schedule in the backgroud periodically .
    Querries can be fired on the table zsdtab1 to get the details of the
        Car .
    This programe looks at the changes made in the material master from
    last updated date and the new entries in material master and updates
    the tables zsdtab1 .
    Changes in the Sales Order are not taken into account .
    To get a fresh data set the value of zupddate in table ZSTATUS as
    01.01.1998 . All the data will be refreshed from that date .
    Program Changed on 23/7/2001 after version upgrade 46b by jyoti
    Addition of New tables for Ibase
    tables used -
    tables : mara ,                        " Material master
             ausp ,                        " Characteristics table .
             zstatus ,                     " Last updated status table .
             zsdtab1 ,    " Central database table to be maintained .
         vbap ,                        " Sales order item table .
         vbak ,                        " Sales order header table .
             kna1 ,                        " Customer master .
             vbrk ,
             vbrp ,
             bkpf ,
             bseg ,
             mseg ,
             mkpf ,
             vbpa ,
             vbfa ,
             t005t .                         " Country details tabe .
    *--NEW TABLES ADDED FOR VERSION 4.6B--
    tables :   ibsymbol ,ibin , ibinvalues .
    data : vatinn like ibsymbol-atinn , vatwrt like ibsymbol-atwrt ,
           vatflv like ibsymbol-atflv .
    *--types definition--
    types : begin of mara_itab_type ,
               matnr like mara-matnr ,
               cuobf like mara-cuobf ,
            end of mara_itab_type ,
            begin of ausp_itab_type ,
               atinn like ausp-atinn ,
               atwrt like ausp-atwrt ,
               atflv like ausp-atflv ,
            end of ausp_itab_type .
    data : mara_itab type mara_itab_type occurs 500 with header line ,
           zsdtab1_itab like zsdtab1 occurs 500 with header line ,
           ausp_itab type ausp_itab_type occurs 500 with header line ,
           last_date type d ,
           date type d .
    data: length type i.
    clear mara_itab . refresh mara_itab .
    clear zsdtab1_itab . refresh zsdtab1_itab .
    select single  zupddate into last_date from zstatus
           where programm = 'ZSDDET01' .
    select matnr cuobf into (mara_itab-matnr , mara_itab-cuobf) from mara
          where mtart eq 'FERT' or mtart = 'ZCBU'.
    *    where MATNR IN MATERIA
    *     and ERSDA IN C_Date
    *     and MTART in M_TYP.
        append mara_itab .
    endselect .
    loop at mara_itab.
    clear zsdtab1_itab .
    zsdtab1_itab-commno = mara_itab-matnr .
    *   Get the detailed data into internal table ausp_itab .----------->>>
    clear ausp_itab . refresh ausp_itab .
    *--change starts--
    select atinn atwrt atflv into (ausp_itab-atinn , ausp_itab-atwrt ,
                               ausp_itab-atflv) from ausp
          where objek = mara_itab-matnr .
          append ausp_itab .
       endselect .
       clear ausp_itab .
    select  atinn  atwrt atflv  into (ausp_itab-atinn , ausp_itab-atwrt ,
    ausp_itab-atflv) from ibin as a inner join ibinvalues as b
                      on a~in_recno = b~in_recno
           inner join  ibsymbol as c
                      on b~symbol_id = c~symbol_id
        where a~instance = mara_itab-cuobf  .
      append ausp_itab .
    endselect .
    *----CHANGE ENDS HERE -
    sort ausp_itab by atwrt.
    loop at ausp_itab .
    clear date .
    case ausp_itab-atinn .
      when '0000000094' .
        zsdtab1_itab-model = ausp_itab-atwrt .  " model  .
      when '0000000101' .
        zsdtab1_itab-drive = ausp_itab-atwrt .  " drive
      when '0000000095' .
        zsdtab1_itab-converter = ausp_itab-atwrt . "converter
      when '0000000096' .
        zsdtab1_itab-transmssn = ausp_itab-atwrt . "transmission
      when '0000000097' .
        zsdtab1_itab-colour = ausp_itab-atwrt .    "colour
      when '0000000098' .
        zsdtab1_itab-ztrim = ausp_itab-atwrt .     "trim
      when '0000000103' .
    *=========Sujit 14-Mar-2006
       IF AUSP_ITAB-ATWRT(3) EQ 'WDB' OR AUSP_ITAB-ATWRT(3) EQ 'WDD'
       OR AUSP_ITAB-ATWRT(3) EQ 'WDC' OR AUSP_ITAB-ATWRT(3) EQ 'KPD'.
           ZSDTAB1_ITAB-CHASSIS_NO = AUSP_ITAB-ATWRT+3(14).
       ELSE.
           ZSDTAB1_ITAB-CHASSIS_NO = AUSP_ITAB-ATWRT .     "chassis no
       ENDIF.
        zsdtab1_itab-chassis_no = ausp_itab-atwrt .     "chassis no
    *=========14-Mar-2006
      when '0000000166' .
    *----25.05.04
      length = strlen( ausp_itab-atwrt ).
      if length < 15.                       "***aded by patil
       zsdtab1_itab-engine_no = ausp_itab-atwrt .     "ENGINE NO
      else.
    zsdtab1_itab-engine_no = ausp_itab-atwrt+13(14)."Aded on 21.05.04 patil
      endif.
    *----25.05.04
      when '0000000104' .
        zsdtab1_itab-body_no = ausp_itab-atwrt .     "BODY NO
      when '0000000173' .                                          "21.06.98
        zsdtab1_itab-cockpit = ausp_itab-atwrt .     "COCKPIT NO . "21.06.98
      when '0000000102' .
        zsdtab1_itab-dest = ausp_itab-atwrt .     "destination
      when '0000000105' .
        zsdtab1_itab-airbag = ausp_itab-atwrt .     "AIRBAG
      when '0000000110' .
        zsdtab1_itab-trailer_no = ausp_itab-atwrt .     "TRAILER_NO
      when '0000000109' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-fininspdat = date .   "FIN INSP DATE
      when '0000000108' .
        zsdtab1_itab-entrydate = ausp_itab-atwrt .     "ENTRY DATE
      when '0000000163' .
        zsdtab1_itab-regist_no = ausp_itab-atwrt .     "REGIST_NO
      when '0000000164' .
        zsdtab1_itab-mech_key = ausp_itab-atwrt .     "MECH_KEY
      when '0000000165' .
        zsdtab1_itab-side_ab_rt = ausp_itab-atwrt .     "SIDE_AB_RT
      when '0000000171' .
        zsdtab1_itab-side_ab_lt = ausp_itab-atwrt .     "SIDE_AB_LT
      when '0000000167' .
        zsdtab1_itab-elect_key = ausp_itab-atwrt .     "ELECT_KEY
      when '0000000168' .
        zsdtab1_itab-head_lamp = ausp_itab-atwrt .     "HEAD_LAMP
      when '0000000169' .
        zsdtab1_itab-tail_lamp = ausp_itab-atwrt .     "TAIL_LAMP
      when '0000000170' .
        zsdtab1_itab-vac_pump = ausp_itab-atwrt .     "VAC_PUMP
      when '0000000172' .
        zsdtab1_itab-sd_ab_sn_l = ausp_itab-atwrt .     "SD_AB_SN_L
      when '0000000174' .
        zsdtab1_itab-sd_ab_sn_r = ausp_itab-atwrt .     "SD_AB_SN_R
      when '0000000175' .
        zsdtab1_itab-asrhydunit = ausp_itab-atwrt .     "ASRHYDUNIT
      when '0000000176' .
        zsdtab1_itab-gearboxno = ausp_itab-atwrt .     "GEARBOXNO
      when '0000000177' .
        zsdtab1_itab-battery = ausp_itab-atwrt .     "BATTERY
      when '0000000178' .
        zsdtab1_itab-tyretype = ausp_itab-atwrt .     "TYRETYPE
      when '0000000179' .
        zsdtab1_itab-tyremake = ausp_itab-atwrt .     "TYREMAKE
      when '0000000180' .
        zsdtab1_itab-tyresize = ausp_itab-atwrt .     "TYRESIZE
      when '0000000181' .
        zsdtab1_itab-rr_axle_no = ausp_itab-atwrt .     "RR_AXLE_NO
      when '0000000183' .
        zsdtab1_itab-ff_axl_nor = ausp_itab-atwrt .     "FF_AXLE_NO_rt
      when '0000000182' .
        zsdtab1_itab-ff_axl_nol = ausp_itab-atwrt .     "FF_AXLE_NO_lt
      when '0000000184' .
        zsdtab1_itab-drivairbag = ausp_itab-atwrt .     "DRIVAIRBAG
      when '0000000185' .
        zsdtab1_itab-st_box_no = ausp_itab-atwrt .     "ST_BOX_NO
      when '0000000186' .
        zsdtab1_itab-transport = ausp_itab-atwrt .     "TRANSPORT
      when '0000000106' .
        zsdtab1_itab-trackstage = ausp_itab-atwrt .  " tracking stage
      when '0000000111' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_1 = date .    " tracking date for 1.
      when '0000000112' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_5 = date .    " tracking date for 5.
      when '0000000113' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_10 = date .   "tracking date for 10
      when '0000000114' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_15 = date .   "tracking date for 15
      when '0000000115' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_20 = date .   " tracking date for 20
      when '0000000116' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_25 = date .   " tracking date for 25
      when '0000000117' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_30 = date .   "tracking date for 30
      when '0000000118' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_35 = date .   "tracking date for 35
      when '0000000119' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_40 = date .   " tracking date for 40
      when '0000000120' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_45 = date .   " tracking date for 45
      when '0000000121' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_50 = date .   "tracking date for 50
      when '0000000122' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_55 = date .   "tracking date for 55
      when '0000000123' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_60 = date .   " tracking date for 60
      when '0000000124' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_65 = date .   " tracking date for 65
      when '0000000125' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_70 = date .   "tracking date for 70
      when '0000000126' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_75 = date .   "tracking date for 75
      when '0000000127' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_78 = date .   " tracking date for 78
      when '0000000203' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_79 = date .   " tracking date for 79
      when '0000000128' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_80 = date .   " tracking date for 80
      when '0000000129' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_85 = date .   "tracking date for 85
      when '0000000130' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_90 = date .   "tracking date for 90
      when '0000000131' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_95 = date .   "tracking date for 95
      when '0000000132' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_100 = date .   " tracking date for100
      when '0000000133' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_110 = date .   " tracking date for110
      when '0000000134' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_115 = date .   "tracking date for 115
      when '0000000135' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_120 = date .   "tracking date for 120
      when '0000000136' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_105 = date .   "tracking date for 105
      when '0000000137' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_1 = date .     "plan trk date for 1
      when '0000000138' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_5 = date .     "plan trk date for 5
      when '0000000139' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_10 = date .    "plan trk date for 10
      when '0000000140' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_15 = date .    "plan trk date for 15
      when '0000000141' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_20 = date .    "plan trk date for 20
      when '0000000142' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_25 = date .    "plan trk date for 25
      when '0000000143' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_30 = date .    "plan trk date for 30
      when '0000000144' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_35 = date .    "plan trk date for 35
      when '0000000145' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_40 = date .    "plan trk date for 40
      when '0000000146' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_45 = date .    "plan trk date for 45
      when '0000000147' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_50 = date .    "plan trk date for 50
      when '0000000148' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_55 = date .    "plan trk date for 55
      when '0000000149' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_60 = date .    "plan trk date for 60
      when '0000000150' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_65 = date .    "plan trk date for 65
      when '0000000151' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_70 = date .    "plan trk date for 70
      when '0000000152' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_75 = date .    "plan trk date for 75
      when '0000000153' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_78 = date .    "plan trk date for 78
      when '0000000202' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_79 = date .    "plan trk date for 79
      when '0000000154' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_80 = date .    "plan trk date for 80
      when '0000000155' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_85 = date .    "plan trk date for 85
      when '0000000156' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_90 = date .    "plan trk date for 90
      when '0000000157' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_95 = date .    "plan trk date for 95
      when '0000000158' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_100 = date .   "plan trk date for 100
      when '0000000159' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_105 = date .   "plan trk date for 105
      when '0000000160' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_110 = date .   "plan trk date for 110
      when '0000000161' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_115 = date .   "plan trk date for 115
      when '0000000162' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_120 = date .   "plan trk date for 120
    ********Additional fields / 24.05.98**********************************
      when '0000000099' .
        case ausp_itab-atwrt .
          when '540' .
            zsdtab1_itab-roll_blind = 'X' .
          when '482' .
            zsdtab1_itab-ground_clr = 'X' .
          when '551' .
            zsdtab1_itab-anti_theft = 'X' .
          when '882' .
            zsdtab1_itab-anti_tow = 'X' .
          when '656' .
            zsdtab1_itab-alloy_whel = 'X' .
          when '265' .
            zsdtab1_itab-del_class = 'X' .
          when '280' .
            zsdtab1_itab-str_wheel = 'X' .
          when 'CDC' .
            zsdtab1_itab-cd_changer = 'X' .
          when '205' .
            zsdtab1_itab-manual_eng = 'X' .
          when '273' .
            zsdtab1_itab-conn_handy = 'X' .
          when '343' .
            zsdtab1_itab-aircleaner = 'X' .
          when '481' .
            zsdtab1_itab-metal_sump = 'X' .
          when '533' .
            zsdtab1_itab-speaker = 'X' .
          when '570' .
            zsdtab1_itab-arm_rest = 'X' .
          when '580' .
            zsdtab1_itab-aircond = 'X' .
          when '611' .
            zsdtab1_itab-exit_light = 'X' .
          when '613' .
            zsdtab1_itab-headlamp = 'X' .
          when '877' .
            zsdtab1_itab-readlamp = 'X' .
          when '808' .
            zsdtab1_itab-code_ckd = 'X' .
          when '708' .
            zsdtab1_itab-del_prt_lc = 'X' .
          when '593' .
            zsdtab1_itab-ins_glass = 'X' .
          when '955' .
            zsdtab1_itab-zelcl = 'Elegance' .
          when '593' .   " unreachable: '593' is already handled above (ins_glass)
            zsdtab1_itab-zelcl = 'Classic' .
        endcase .
    endcase .
    endloop .
    *--Update the sales data .--
    perform get_sales_order using mara_itab-matnr .
    perform get_cartype using mara_itab-matnr .
    append zsdtab1_itab .
    endloop.
    loop at zsdtab1_itab .
      if zsdtab1_itab-cartype <> 'W-203'
     and zsdtab1_itab-cartype <> 'W-210'
     and zsdtab1_itab-cartype <> 'W-211'.   " AND: clear ZELCL only when none of these types
          clear zsdtab1_itab-zelcl.
      endif.
    select single * from zsdtab1 where commno = zsdtab1_itab-commno.
    if sy-subrc <> 0 .
        insert into zsdtab1 values zsdtab1_itab .
    else .
        update zsdtab1 set vbeln = zsdtab1_itab-vbeln
                       bill_doc = zsdtab1_itab-bill_doc
                       dest = zsdtab1_itab-dest
                       lgort = zsdtab1_itab-lgort
                       ship_tp = zsdtab1_itab-ship_tp
                       country = zsdtab1_itab-country
                       kunnr = zsdtab1_itab-kunnr
                       vkbur = zsdtab1_itab-vkbur
                       customer = zsdtab1_itab-customer
                       city   = zsdtab1_itab-city
                       region = zsdtab1_itab-region
                       model = zsdtab1_itab-model
                       drive = zsdtab1_itab-drive
                       converter = zsdtab1_itab-converter
                       transmssn = zsdtab1_itab-transmssn
                       colour = zsdtab1_itab-colour
                       ztrim = zsdtab1_itab-ztrim
                       commno = zsdtab1_itab-commno
                       trackstage = zsdtab1_itab-trackstage
                       chassis_no    =   zsdtab1_itab-chassis_no
                       engine_no     =   zsdtab1_itab-engine_no
                       body_no       =   zsdtab1_itab-body_no
                       cockpit       =   zsdtab1_itab-cockpit
                       airbag        =   zsdtab1_itab-airbag
                       trailer_no    =   zsdtab1_itab-trailer_no
                       fininspdat    =   zsdtab1_itab-fininspdat
                       entrydate     =   zsdtab1_itab-entrydate
                       regist_no     =   zsdtab1_itab-regist_no
                       mech_key      =   zsdtab1_itab-mech_key
                       side_ab_rt    =   zsdtab1_itab-side_ab_rt
                       side_ab_lt    =   zsdtab1_itab-side_ab_lt
                       elect_key     =   zsdtab1_itab-elect_key
                       head_lamp     =   zsdtab1_itab-head_lamp
                       tail_lamp     =   zsdtab1_itab-tail_lamp
                       vac_pump      =   zsdtab1_itab-vac_pump
                       sd_ab_sn_l    =   zsdtab1_itab-sd_ab_sn_l
                       sd_ab_sn_r    =   zsdtab1_itab-sd_ab_sn_r
                       asrhydunit    =   zsdtab1_itab-asrhydunit
                       gearboxno     =   zsdtab1_itab-gearboxno
                       battery       =   zsdtab1_itab-battery
                       tyretype      =   zsdtab1_itab-tyretype
                       tyremake      =   zsdtab1_itab-tyremake
                       tyresize      =   zsdtab1_itab-tyresize
                       rr_axle_no    =   zsdtab1_itab-rr_axle_no
                       ff_axl_nor    =   zsdtab1_itab-ff_axl_nor
                       ff_axl_nol    =   zsdtab1_itab-ff_axl_nol
                       drivairbag    =   zsdtab1_itab-drivairbag
                       st_box_no     =   zsdtab1_itab-st_box_no
                       transport     =   zsdtab1_itab-transport
*-------------------------- option flags --------------------------
                       roll_blind    = zsdtab1_itab-roll_blind
                       ground_clr    = zsdtab1_itab-ground_clr
                       anti_theft    = zsdtab1_itab-anti_theft
                       anti_tow      = zsdtab1_itab-anti_tow
                       alloy_whel    = zsdtab1_itab-alloy_whel
                       del_class     = zsdtab1_itab-del_class
                       str_wheel     = zsdtab1_itab-str_wheel
                       cd_changer    = zsdtab1_itab-cd_changer
                       manual_eng    = zsdtab1_itab-manual_eng
                       conn_handy    = zsdtab1_itab-conn_handy
                       aircleaner    = zsdtab1_itab-aircleaner
                       metal_sump    = zsdtab1_itab-metal_sump
                       speaker       = zsdtab1_itab-speaker
                       arm_rest      = zsdtab1_itab-arm_rest
                       aircond       = zsdtab1_itab-aircond
                       exit_light    = zsdtab1_itab-exit_light
                       headlamp      = zsdtab1_itab-headlamp
                       readlamp      = zsdtab1_itab-readlamp
                       code_ckd      = zsdtab1_itab-code_ckd
                       del_prt_lc    = zsdtab1_itab-del_prt_lc
                       ins_glass     = zsdtab1_itab-ins_glass
                       dat_trk_1 = zsdtab1_itab-dat_trk_1
                       dat_trk_5 = zsdtab1_itab-dat_trk_5
                       dat_trk_10 = zsdtab1_itab-dat_trk_10
                       dat_trk_15 = zsdtab1_itab-dat_trk_15
                       dat_trk_20 = zsdtab1_itab-dat_trk_20
                       dat_trk_25 = zsdtab1_itab-dat_trk_25
                       dat_trk_30 = zsdtab1_itab-dat_trk_30
                       dat_trk_35 = zsdtab1_itab-dat_trk_35
                       dat_trk_40 = zsdtab1_itab-dat_trk_40
                       dat_trk_45 = zsdtab1_itab-dat_trk_45
                       dat_trk_50 = zsdtab1_itab-dat_trk_50
                       dat_trk_55 = zsdtab1_itab-dat_trk_55
                       dat_trk_60 = zsdtab1_itab-dat_trk_60
                       dat_trk_65 = zsdtab1_itab-dat_trk_65
                       dat_trk_70 = zsdtab1_itab-dat_trk_70
                       dat_trk_75 = zsdtab1_itab-dat_trk_75
                       dat_trk_78 = zsdtab1_itab-dat_trk_78
                       dat_trk_79 = zsdtab1_itab-dat_trk_79
                       dat_trk_80 = zsdtab1_itab-dat_trk_80
                       dat_trk_85 = zsdtab1_itab-dat_trk_85
                       dat_trk_90 = zsdtab1_itab-dat_trk_90
                       dat_trk_95 = zsdtab1_itab-dat_trk_95
                       dattrk_100 = zsdtab1_itab-dattrk_100
                       dattrk_105 = zsdtab1_itab-dattrk_105
                       dattrk_110 = zsdtab1_itab-dattrk_110
                       dattrk_115 = zsdtab1_itab-dattrk_115
                       dattrk_120 = zsdtab1_itab-dattrk_120
                       pdt_tk_1 = zsdtab1_itab-pdt_tk_1
                       pdt_tk_5 = zsdtab1_itab-pdt_tk_5
                       pdt_tk_10 = zsdtab1_itab-pdt_tk_10
                       pdt_tk_15 = zsdtab1_itab-pdt_tk_15
                       pdt_tk_20 = zsdtab1_itab-pdt_tk_20
                       pdt_tk_25 = zsdtab1_itab-pdt_tk_25
                       pdt_tk_30 = zsdtab1_itab-pdt_tk_30
                       pdt_tk_35 = zsdtab1_itab-pdt_tk_35
                       pdt_tk_40 = zsdtab1_itab-pdt_tk_40
                       pdt_tk_45 = zsdtab1_itab-pdt_tk_45
                       pdt_tk_50 = zsdtab1_itab-pdt_tk_50
                       pdt_tk_55 = zsdtab1_itab-pdt_tk_55
                       pdt_tk_60 = zsdtab1_itab-pdt_tk_60
                       pdt_tk_65 = zsdtab1_itab-pdt_tk_65
                       pdt_tk_70 = zsdtab1_itab-pdt_tk_70
                       pdt_tk_75 = zsdtab1_itab-pdt_tk_75
                       pdt_tk_78 = zsdtab1_itab-pdt_tk_78
                       pdt_tk_79 = zsdtab1_itab-pdt_tk_79
                       pdt_tk_80 = zsdtab1_itab-pdt_tk_80
                       pdt_tk_85 = zsdtab1_itab-pdt_tk_85
                       pdt_tk_90 = zsdtab1_itab-pdt_tk_90
                       pdt_tk_95 = zsdtab1_itab-pdt_tk_95
                       pdt_tk_100 = zsdtab1_itab-pdt_tk_100
                       pdt_tk_105 = zsdtab1_itab-pdt_tk_105
                       pdt_tk_110 = zsdtab1_itab-pdt_tk_110
                       pdt_tk_115 = zsdtab1_itab-pdt_tk_115
                       pdt_tk_120 = zsdtab1_itab-pdt_tk_120
                       cartype = zsdtab1_itab-cartype
                       zelcl = zsdtab1_itab-zelcl
                       excise_no = zsdtab1_itab-excise_no
    where commno = zsdtab1_itab-commno .
*--------- update table zsdtab1 ---------
    endif .
    endloop .
    perform update_excise_date .
    perform update_post_goods_issue_date .
    perform update_time.
*///////////////////// end of program /////////////////////////////////
*-------------------------- Get sales data --------------------------
    form get_sales_order using matnr .
      data : corr_vbeln like vbrk-vbeln .
* ADDED BY ADITYA / 22.06.98 **************************************
    perform get_order using matnr .
    select single vbeln lgort into (zsdtab1_itab-vbeln , zsdtab1_itab-lgort)
*                from vbap where matnr = matnr .   " C-22.06.98 (old clause)
                  from vbap where vbeln = zsdtab1_itab-vbeln .
      if sy-subrc = 0 .
    ************Get the Excise No from Allocation Field*******************
        select single * from zsdtab1 where commno = matnr .
        if zsdtab1-excise_no =  '' .
          select * from vbrp where matnr = matnr .
            select single vbeln into corr_vbeln from vbrk where
            vbeln = vbrp-vbeln and vbtyp = 'M'.
            if sy-subrc eq 0.
              select single * from vbrk where vbtyp = 'N'
              and sfakn = corr_vbeln.      "cancelled doc.
              if sy-subrc ne 0.
                select single * from vbrk where vbeln = corr_vbeln.
                if sy-subrc eq 0.
                  data : year(4) .
                  move sy-datum+0(4) to year .
      select single * from bkpf where awtyp = 'VBRK' and awkey = vbrk-vbeln
                                      and  bukrs = 'MBIL' and gjahr = year .
                  if sy-subrc = 0 .
      select single * from bseg where bukrs = 'MBIL' and belnr = bkpf-belnr
                                       and gjahr = year and koart = 'D' and
                                                               shkzg = 'S' .
                    zsdtab1_itab-excise_no = bseg-zuonr .
                  endif .
                endif.
              endif.
            endif.
          endselect.
        endif .
        select single kunnr vkbur into (zsdtab1_itab-kunnr ,
                zsdtab1_itab-vkbur) from vbak
                where vbeln = zsdtab1_itab-vbeln .
        if sy-subrc = 0 .
          select single name1 ort01 regio into (zsdtab1_itab-customer ,
             zsdtab1_itab-city , zsdtab1_itab-region) from kna1
             where kunnr = zsdtab1_itab-kunnr .
        endif.
*     Get Ship to Party **************************************************
        select single * from vbpa where vbeln = zsdtab1_itab-vbeln and
                        parvw = 'WE' .
        if sy-subrc = 0 .
            zsdtab1_itab-ship_tp = vbpa-kunnr .
*     Get Destination Country of Ship to Party ************
            select single * from kna1 where kunnr = vbpa-kunnr .
            if sy-subrc = 0 .
               select single * from t005t where land1 = kna1-land1
                                       and spras = 'E' .
               if sy-subrc = 0 .
                   zsdtab1_itab-country = t005t-landx .
               endif .
            endif .
        endif .
      endif .
    endform.                               " GET_SALES
    form update_time.
      update zstatus set zupddate = sy-datum
                         uzeit = sy-uzeit
      where programm = 'ZSDDET01' .
    endform.                               " UPDATE_TIME
    *&      Form  DATE_CONVERT
form date_convert using atflv changing date .
* ATFLV holds the characteristic value as a float (e.g. 20030515.0);
* convert it via an integer into an 8-character date (YYYYMMDD).
  data : dt(8) , dat type i .
  dat = atflv .
  dt = dat .
  date = dt .
endform.                               " DATE_CONVERT
    *&      Form  UPDATE_POST_GOODS_ISSUE_DATE
    form update_post_goods_issue_date .
      types : begin of itab1_type ,
                mblnr like mseg-mblnr ,
                budat like mkpf-budat ,
              end of itab1_type .
      data : itab1 type itab1_type occurs 10 with header line .
      loop at mara_itab .
        select single * from zsdtab1 where commno = mara_itab-matnr .
        if sy-subrc =  0  and zsdtab1-postdate =  '00000000' .
          refresh itab1 . clear itab1 .
        select * from mseg where matnr = mara_itab-matnr and bwart = '601' .
            itab1-mblnr = mseg-mblnr .
            append itab1 .
          endselect .
          loop at itab1 .
            select single * from mkpf where mblnr = itab1-mblnr .
            if sy-subrc = 0 .
              itab1-budat = mkpf-budat .
              modify itab1 .
            endif .
          endloop .
          sort itab1 by budat .
          read table itab1 index 1 .
          if sy-subrc = 0 .
            update zsdtab1 set postdate = itab1-budat
                         where commno = mara_itab-matnr .
          endif .
        endif .
      endloop .
    endform.                               " UPDATE_POST_GOODS_ISSUE_DATE
    *&      Form  UPDATE_EXCISE_DATE
    form update_excise_date.
      types : begin of itab2_type ,
                mblnr like mseg-mblnr ,
                budat like mkpf-budat ,
              end of itab2_type .
      data : itab2 type itab2_type occurs 10 with header line .
      loop at mara_itab .
        select single * from zsdtab1 where commno = mara_itab-matnr .
        if sy-subrc =  0  and zsdtab1-excise_dat  = '00000000' .
          refresh itab2 . clear itab2 .
          select * from mseg where matnr = mara_itab-matnr and
                                  (  bwart = '601' or  bwart = '311' ) .
            itab2-mblnr = mseg-mblnr .
            append itab2 .
          endselect .
          loop at itab2 .
            select single * from mkpf where mblnr = itab2-mblnr .
            if sy-subrc = 0 .
              itab2-budat = mkpf-budat .
              modify itab2 .
            endif .
          endloop .
          sort itab2 by budat .
          read table itab2 index 1 .
          if sy-subrc = 0 .
            update zsdtab1 set excise_dat = itab2-budat
                         where commno = mara_itab-matnr .
          endif .
        endif .
      endloop .
    endform.                               " UPDATE_EXCISE_DATE
    form get_order using matnr .
    types :  begin of itab_type ,
                vbeln like vbap-vbeln ,
                posnr like vbap-posnr ,
             end of itab_type .
    data : itab type itab_type occurs 10 with header line .
    refresh itab . clear itab .
    select * from vbap where matnr = mara_itab-matnr .
       itab-vbeln = vbap-vbeln .
       itab-posnr = vbap-posnr .
       append itab .
    endselect .
    loop at itab .
      select single * from vbak where vbeln = itab-vbeln .
      if vbak-vbtyp <> 'C' .
        delete itab .
      endif .
    endloop .
    loop at itab .
    select single * from vbfa where vbelv = itab-vbeln and
             posnv = itab-posnr and vbtyp_n = 'H' .
    if sy-subrc = 0 .
      delete itab .
    endif .
    endloop .
    clear :  zsdtab1_itab-vbeln ,  zsdtab1_itab-bill_doc .
    loop at itab .
      zsdtab1_itab-vbeln = itab-vbeln .
      select single * from vbfa where vbelv = itab-vbeln and
             posnv = itab-posnr and vbtyp_n = 'M' .
    if sy-subrc = 0 .
      zsdtab1_itab-bill_doc = vbfa-vbeln .
    endif .
    endloop .
    endform .
    *&      Form  GET_CARTYPE
    form get_cartype using matnr .
    select single * from mara where matnr = matnr .
    zsdtab1_itab-cartype = mara-satnr .
    endform.                    " GET_CARTYPE

    Hi,
    I have analysed your program and would like to share the following points to improve the performance of this report:
    (a) Use explicit field lists instead of SELECT * or SELECT SINGLE *. Field lists consume far fewer resources inside loops, and this report contains many SELECT SINGLE * statements against very big tables such as VBAP.
    (b) Run an SQL trace in ST05 (or ST12 in current mode) with a reduced input set that still runs for 20-30 minutes, so you can see which queries are affecting the system and taking most of the time.
    (c) For internal tables, sort the data properly and use READ TABLE ... BINARY SEARCH to retrieve entries (see the sketch below).
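    For example, a rough sketch of points (a) and (c), reusing names from the report above (the exact WHERE conditions here are assumptions, not the original logic):
    * (a) explicit field list instead of SELECT SINGLE *
    select single vbeln lgort
           into (zsdtab1_itab-vbeln , zsdtab1_itab-lgort)
           from vbap
           where vbeln = zsdtab1_itab-vbeln .
    * (c) buffer MKPF once, sort it, then read it with BINARY SEARCH
    *     instead of a SELECT SINGLE on every loop pass
    data : begin of mkpf_itab occurs 0 ,
             mblnr like mkpf-mblnr ,
             budat like mkpf-budat ,
           end of mkpf_itab .
    if not itab1[] is initial .
      select mblnr budat into table mkpf_itab
             from mkpf
             for all entries in itab1
             where mblnr = itab1-mblnr .
      sort mkpf_itab by mblnr .
    endif .
    loop at itab1 .
      read table mkpf_itab with key mblnr = itab1-mblnr binary search .
      if sy-subrc = 0 .
        itab1-budat = mkpf_itab-budat .
        modify itab1 .
      endif .
    endloop .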
    I think this will help.
    Thanks and Regards,
    Harsh

  • Report painter performance problem...

    I have a client that runs a report group consisting of 14 reports. When we run this program, it takes about 20 minutes to get results. I was assigned to optimize this report.
    This is what I've done so far
    (this is an SAP-generated program):
    1. I've checked the tables that the program is using (a customized table with more than 20,000 entries, and many others).
    2. I've created secondary indexes on the main customized table (the one with 20,000 entries). It improved performance a bit (results in about 18 minutes); see the sketch after this list.
    3. I divided the report group into 4 groups, 3 reports per group. That greatly improved performance (but this is not what the client wants).
    4. I've read an article about report group performance saying that it is a bug.
    (sap support recognized the fact that we are dealing with a bug in the sap standard functionality)
    http://it.toolbox.com/blogs/sap-on-db2/sap-report-painter-performance-problem-26000
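    For reference, the secondary index from step 2 was along these lines; this is only a sketch with placeholder table and field names (in practice the index is defined on the customized table in SE11, so that the database index is generated automatically):
    CREATE INDEX zcustom_i1
      ON zcustom (bukrs, gjahr, kostl);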
    Anyone have the same problem as mine?
    Edited by: christopher mancuyas on Sep 8, 2008 9:32 AM
    Edited by: christopher mancuyas on Sep 9, 2008 5:39 AM

    Report Painter/Writer reports always create performance issues; I have never preferred them when a custom Z report is an option.
    For now there is only one thing you can do: put more checks on the selection screen to filter the data (see the sketch below). I think that is the only way.
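    For example, in a custom Z report that filtering could look roughly like this (BKPF and its fields are used purely as an illustration, not taken from your report group):
    * sketch only: mandatory selection-screen restrictions so that far
    * less data is read before any processing starts
    tables : bkpf .
    select-options : s_bukrs for bkpf-bukrs obligatory ,
                     s_gjahr for bkpf-gjahr obligatory ,
                     s_monat for bkpf-monat .
    start-of-selection .
      select * from bkpf
             where bukrs in s_bukrs
             and   gjahr in s_gjahr
             and   monat in s_monat .
    *   ... process only the restricted document set ...
      endselect .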
    Amit.

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
    The issue I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by about 45 seconds.
    The query looks like this (due to sensitivity issues, I cannot disclose the real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-Oracle) which are connected using Oracle Gateway.
    Now, if I run the above query without the apex_item.checkbox function, the response time is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have worked around the issue by creating a collection (roughly as sketched below), but it's not good practice.
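    The workaround was along these lines; this is only a sketch (the collection name is a placeholder, and the block has to run inside an APEX session, e.g. as a page process):
    BEGIN
      -- refresh the collection: the remote join is executed once and cached
      IF apex_collection.collection_exists(p_collection_name => 'REMOTE_ROWS') THEN
        apex_collection.delete_collection(p_collection_name => 'REMOTE_ROWS');
      END IF;
      apex_collection.create_collection_from_query(
        p_collection_name => 'REMOTE_ROWS',
        p_query           => 'SELECT b.col3, a.col1, a.col2
                                FROM table_one a, table_two b
                               WHERE a.col3 = 12345
                                 AND a.col4 = 100
                                 AND b.col5 = a.col5');
    END;
    -- interactive report source: apex_item now operates on local data only
    SELECT apex_item.checkbox(1, c001), c002, c003
      FROM apex_collections
     WHERE collection_name = 'REMOTE_ROWS'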
    I would like to get ideas from people on how to resolve this or speed up the query.
    Any idea how to use sub-factoring for the above scenario, or other methods? (Creating a view or materialized view is not an option.)
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source, it looks like they're possibly not?), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. In most circumstances this 'materialises' the results of the inner select statement, meaning we 'get' the results and then do something with them afterwards. It's a handy trick when dealing with remote sites, as sometimes you want the remote database to do the work. The reason I ask you to use the MATERIALIZE hint for testing is just to force this; in 99.99% of cases it can be removed later. A WITH clause is also handled differently from an inline view like SELECT * FROM (SELECT..., but the same result can be mimicked with a NO_MERGE hint (see the sketch below).
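    For illustration, the inline-view equivalent with a NO_MERGE hint would look roughly like this (same placeholder tables as below; only a sketch):
    SELECT /*+ NO_MERGE(v) */
           apex_item.checkbox(1, v.x), v.y, v.z
    FROM  (SELECT b.col3 x
                , a.col1 y
                , a.col2 z
           FROM   table_one a
                , table_two b
           WHERE  a.col3 = 12345
           AND    a.col4 = 100
           AND    b.col5 = a.col5) v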
    Looking at your case, I would be interested to see what the explain plans and results would be for something like the following two statements (sorry, you're going to have to check them, it's late!):
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two),
    sourceqry AS
    (SELECT  b.col3 x
           , a.col1 y
           , a.col2 z
    FROM a
        , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1,x), y , z
    FROM sourceqry
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two)
    SELECT  apex_item.checkbox(1, b.col3), a.col1, a.col2
    FROM a
        , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5
    If the remote tables are at the same site, then both statements should behave the same. If they aren't, you should still get the same results, but they will be handled differently from the original query.
    We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This normally hinders tuning, but I don't think it is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus, etc.?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Performance problems with abap report SAPLPRGN_STRUCTURE during users login

    Hello,
    after importing patches for SAP BASIS level 21 and SAP ABAP level 21, our SAP production system (ECC6 Unicode, kernel 240) shows performance problems. During the logon phase, a lot of our users have to wait 10-12 minutes before the navigation menu is displayed. The problem is caused by the slow performance of the standard SAP report SAPLPRGN_STRUCTURE, which takes a lot of time. Any idea?
    Thanks

    Hi Pat,
    I would stop SAP and run DLTR3PKG. If this doesn't help, you need to investigate in ST03 whether it is a CPU or DB-time issue, and then handle it accordingly ...
    Regards
    Volker Gueldenpfennig, consolut international ag
    http://www.consolut.net - http://www.4soi.de - http://www.easymarketplace.de

  • Performance problems with Leopard 10.5.1

    Hello,
    I use a 24-inch aluminium iMac (2.8 GHz) and upgraded to Leopard. There are some major performance problems and bugs in the current version of Leopard:
    1. While accessing USB devices, display speed, window movement, animations, etc. slow down.
    2. Adobe CS3 Photoshop 10.01 and Flash CS3 are sometimes extremely slow. I tried the recent demo packages from Adobe:
    2.1. The Photoshop "Save for Web" dialogue slows down the system completely, and the problem persists after quitting Photoshop. A restart is then necessary.
    2.2. Flash CS3 movie preview is very slow and stuttering. It's so slow you cannot imagine how the real movie flow will be.
    2.3. The recent Flash Player 9,0,115,0 with hardware acceleration enabled doesn't really work with QuartzGL-enabled Leopard: the movies slow down a lot. Try www.neave.tv, for example.
    3. Safari, Mail and other bundled software hang sometimes. You have to force quit them then. It doesn't matter whether QuartzGL is enabled or not. This especially happens to my system if it is online for some hours.
    4. A lot of Apple applications don't seem to work with 2dextreme enabled. Why is this? Apple supporters told me there would be much better 2dextreme support in Leopard. Also, Quartz2dExtreme in OS X 10.4 worked with all applications, and I guess it's the same feature as "QuartzGL" in Leopard. So Leopard isn't finished here. It would be nice if Apple could make its own software QuartzGL-compatible.
    5. Very often the desktop slows down or lags. This is the main reason I often still switch to a Windows XP PC to get work done in a faster, less annoying way.
    6. Safari crashes randomly sometimes. It is unstable still. Also it crashes more often if you resize/move the window a lot, so I guess it is a graphics extension-related problem.
    I hope you people from Apple will fix these annoying points and optimize your new system in the next update release.
    Best regards

    Thanks for your answer. I repaired the disk as described above. There were some errors; a file index was wrong (I don't remember the exact phrase). Now Disk Utility reports the partition was successfully repaired and that the volume appears to be OK.
    The crashes in Safari are gone, but all other described problems still exist. Adobe CS3 is not really usable for me.
    By the way, iMacSoftwareUpdate 1.3, which was replaced by the OS X 10.5.1 update, contains an extension called AppleVADriver.kext that does not exist in the OS X 10.5.1 update. Is it an important extension?
