Performance Problem Again

Hi all,
We are encountering a performance problem again.
The batch process deletes 1M rows every night, which usually takes about 30 minutes.
But last night (12 AM) it took more than 2 hours and then hung.
Does it help if I run gather_schema_stats regularly when there are constant DELETEs on the table?
Please help me check our ASH, AWR, and ADDM reports to resolve the issue.
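For reference, a minimal sketch of what I mean (OWNER/BIG_TABLE are placeholders for our actual schema and table, and the time window is only an example):
-- Refresh optimizer statistics on the table the batch deletes from
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'OWNER',
    tabname          => 'BIG_TABLE',
    cascade          => TRUE,                        -- include indexes
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
-- Top wait events recorded by ASH during the batch window (adjust the times)
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time BETWEEN TIMESTAMP '2014-01-01 00:00:00'
                       AND TIMESTAMP '2014-01-01 02:00:00'
GROUP  BY event
ORDER  BY samples DESC;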
ADDM
https://app.box.com/s/7o734e70aa2m2zg087hf
ASH
https://app.box.com/s/xadlxfk0r5y7jvtxfsz7
AWR
https://app.box.com/s/x8ordka2gcc6ibxatvld
Thanks....
zxy

Hi ARM,
***What is the SGA_TARGET or MEMORY_TARGET that the database is running on?
Our server has 8 GB physical memory and 8 GB swap.
What should the ideal SGA_TARGET and MEMORY_TARGET be?
Our current settings are:
========
SQL> show parameter memory
NAME                                 TYPE        VALUE
hi_shared_memory_address             integer     0
memory_max_target                    big integer 5936M
memory_target                        big integer 5936M
shared_memory_address                integer     0
SQL> show parameter sga_
NAME                                 TYPE        VALUE
sga_max_size                         big integer 5936M
sga_target                           big integer 0
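Is the right way to change it something like the sketch below? The 4G figure is just an example I picked to leave headroom for the OS, not a recommendation, and it assumes an SPFILE is in use (memory_max_target is a static parameter, so a restart would be needed).
ALTER SYSTEM SET memory_max_target = 4G SCOPE=SPFILE;
ALTER SYSTEM SET memory_target     = 4G SCOPE=SPFILE;
-- With memory_target set and sga_target left at 0, Automatic Memory
-- Management sizes the SGA and PGA within the cap after a restart.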
Thanks

Similar Messages

  • Performance problem again 9i db

    Hi all,
    I am confused about how to answer the users' common question: what causes their batch job query to slow down?
    They always claim they run it in the same manner, but sometimes it is very slow and sometimes it is fast.
    Am I correct that they can get the answer to their question if they run a Statspack report?
    Thanks a lot,
    Kins
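    What I have in mind is something like this minimal sketch (assuming Statspack is already installed under the PERFSTAT schema), so we have snapshots bracketing both a fast run and a slow run to compare:
    -- Take a snapshot before and after the batch job (run as PERFSTAT)
    EXEC statspack.snap
    -- ... batch job runs ...
    EXEC statspack.snap
    -- Then generate the report between the two snapshot IDs that bracket the run
    @?/rdbms/admin/spreport.sql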

    Here is the example query :)
    CREATE OR REPLACE FORCE VIEW "JSCUS"."VW_USR_AR_CUST_OFFC_SLS_INVC" ("TRANSACTION_BRANCH", "BRANCH_ADDRESS_1", "BRANCH_ADDRESS_2", "BRANCH_PHONES", "BRANCH_FAX", "BRANCH_TIN", "TRX_NUMBER", "TRX_NUMBER_DISP", "TRX_DATE", "CREATION_DATE", "TRX_DATE_DISP", "ORDER_NUMBER", "CUSTOMER_ORDER_NUMBER", "PURCHASE_ORDER", "SALES_ORDER_DATE", "SALES_ORDER_DATE_DISP", "ORDER_REQUEST_DATE", "ORDER_REQUEST_DATE_DISP", "ORDER_DELIVERY_DATE", "ORDER_DELIVERY_DATE_DISP", "ORDER_TYPE", "SHIP_DATE_ACTUAL_DISP", "TERMS_NAME", "TERMS_NAME_DISP", "SALESPERSON_NAME", "SALESPERSON_PHONE", "LINE_TAX_CODE", "DOCUMENT_TYPE", "DOCUMENT_NAME", "BILL_TO_CUSTOMER_ID", "BILL_TO_CUSTOMER_NAME", "BILL_TO_TAX_REFERENCE", "SHIP_TO_LOCATION_ID", "SHIP_TO_NAME", "SHIP_TO_ADDRESS1", "SHIP_TO_ADDRESS2", "SHIP_TO_ADDRESS3", "SHIP_TO_ADDRESS4", "SHIP_TO_CITY", "TIN_LOCATION", "SHIP_TO_PROVINCE", "SHIP_TO_CITY_PROVINCE", "SHOW_DELIVERY_ADDRESS", "DELIVERY_TO_ADDRESS1", "DELIVERY_TO_ADDRESS2", "DELIVERY_TO_CITY_PROV",
      "LINE_SALES_ORDER", "LINE_NUMBER", "LINE_TYPE", "INVENTORY_ITEM_ID", "INVENTORY_ITEM", "LINE_DESCRIPTION", "ITEM_DESCRIPTION", "ITEM_ALT_DESCRIPTION", "ITEM_CUST_NUMBER", "UOM_CODE", "QUANTITY_INVOICED", "QUANTITY_ORDERED", "CASE_QUANTITY_INVOICED", "LN", "WD", "HT", "UNIT_CASE_VOLUME", "EXTENDED_CASE_VOLUME", "CUBIC_METER", "UNIT_STANDARD_PRICE", "UNIT_SELLING_PRICE", "LINE_AMOUNT", "GROSS_LINE_AMOUNT", "NET_TAX_LINE_AMOUNT", "REVENUE_AMOUNT", "GROSS_UNIT_SELLING_PRICE", "TAX_RATE", "UNIT_PRICE", "GROSS_UNIT_PRICE", "NET_TAX_UNIT_PRICE", "LINE_TAX_AMOUNT", "SALES_TAX_ID", "DISCOUNT", "DISCOUNT_STRING_DISP", "SEGMENT1", "ATTRIBUTE1", "ATTRIBUTE2", "ATTRIBUTE3", "A", "B", "SPEC_INSTR")
    AS
      SELECT
        -- CREATED BY: abc
        ----- header information -----
        DECODE(cust_trx.cust_trx_type_id,1,'Main Office','Cebu Branch') TRANSACTION_BRANCH ,
        DECODE(cust_trx.cust_trx_type_id,1,'abcr','Mandaue North Central, M.L. Quezon Avenue') BRANCH_ADDRESS_1 ,
        DECODE(cust_trx.cust_trx_type_id,1,' Pasig City 1605','Cabangcalan, Mandaue City, Cebu') BRANCH_ADDRESS_2 ,
        DECODE(cust_trx.cust_trx_type_id,1,'Tel.: 916-1111','yxz') BRANCH_PHONES ,
        DECODE(cust_trx.cust_trx_type_id,1,'yxz','xyz') BRANCH_FAX ,
        DECODE(cust_trx.cust_trx_type_id,1,'VAT Reg. TIN',' ') BRANCH_TIN ,
        cust_trx.trx_number TRX_NUMBER ,
        TO_CHAR(cust_trx.trx_number,'0000000') TRX_NUMBER_DISP ,
        cust_trx.trx_date TRX_DATE ,
        cust_trx.creation_date creation_date,
        TO_CHAR(cust_trx.trx_date,'mm/dd/yyyy') TRX_DATE_DISP ,
        trx_line.interface_line_attribute1 ORDER_NUMBER ,
        cust_trx.purchase_order CUSTOMER_ORDER_NUMBER ,
        cust_trx.purchase_order PURCHASE_ORDER ,
        trx_line.sales_order_date SALES_ORDER_DATE ,
        TO_CHAR(trx_line.sales_order_date,'MM/DD/YYYY') SALES_ORDER_DATE_DISP ,
        --,     TO_CHAR(oe_line.request_date,'MM/DD/YYYY') ORDER_REQUEST_DATE
        --,     TO_CHAR(oe_line.request_date,'MM/DD/YYYY') ORDER_REQUEST_DATE_DISP
        --,     oe_line.request_date ORDER_DELIVERY_DATE
        --,     TO_CHAR(oe_line.request_date,'MM/DD/YYYY') ORDER_DELIVERY_DATE_DISP
        --- NOTE : REQUEST DATE OR DELIVERY DATE OF HEADER AND LINE MAY BE DIFFERENT ----
        TO_CHAR(oe_header.request_date,'MM/DD/YYYY') ORDER_REQUEST_DATE ,
        TO_CHAR(oe_header.request_date,'MM/DD/YYYY') ORDER_REQUEST_DATE_DISP ,
        oe_header.request_date ORDER_DELIVERY_DATE ,
        TO_CHAR(oe_header.request_date,'MM/DD/YYYY') ORDER_DELIVERY_DATE_DISP ,
        trx_line.interface_line_attribute2 ORDER_TYPE ,
        TO_CHAR(cust_trx.ship_date_actual,'MM/DD/YYYY') SHIP_DATE_ACTUAL_DISP ,
        terms.name TERMS_NAME ,
        DECODE(terms.name, NULL, ' ', trim(terms.name)
        || ' day(s)') TERMS_NAME_DISP ,
        sales_rep.name SALESPERSON_NAME ,
        TO_CHAR(COALESCE(resources.source_phone,' ')) SALESPERSON_PHONE ,
        vat_tax.tax_code LINE_TAX_CODE ,
        'INV' DOCUMENT_TYPE ,
        'SALES INVOICE' DOCUMENT_NAME ,
        ----- customer information -----
        cust_trx.bill_to_customer_id BILL_TO_CUSTOMER_ID ,
        bill_cust.customer_name BILL_TO_CUSTOMER_NAME ,
        bill_cust.tax_reference BILL_TO_TAX_REFERENCE ,
        ----- customer address information -----
        ship_addr.location_id SHIP_TO_LOCATION_ID ,
        ship_addr.address_lines_phonetic SHIP_TO_NAME ,
        ship_addr.address1 SHIP_TO_ADDRESS1 ,
        ship_addr.address2 SHIP_TO_ADDRESS2 ,
        ship_addr.address3 SHIP_TO_ADDRESS3 ,
        ship_addr.address4 SHIP_TO_ADDRESS4 ,
        ship_addr.city ship_to_city ,
        ship_addr.attribute5 tin_location, -- for tin location of SVI
        ship_addr.province SHIP_TO_PROVINCE ,
        COALESCE(ship_addr.city, ship_addr.province) SHIP_TO_CITY_PROVINCE ,
        COALESCE(ship_addr.attribute3, 'N') SHOW_DELIVERY_ADDRESS ,
        jscus.FUNC_USR_GET_DELIVERY_ADDR1(trx_line.interface_line_attribute1) DELIVERY_TO_ADDRESS1 ,
        jscus.FUNC_USR_GET_DELIVERY_ADDR2(trx_line.interface_line_attribute1) DELIVERY_TO_ADDRESS2 ,
        jscus.FUNC_USR_GET_DELV_CITY_PROV(trx_line.interface_line_attribute1) DELIVERY_TO_CITY_PROV ,
        ----- line item information -----
        trx_line.sales_order LINE_SALES_ORDER ,
        trx_line.line_number LINE_NUMBER ,
        trx_line.line_type LINE_TYPE ,
        sys_item.inventory_item_id INVENTORY_ITEM_ID ,
        sys_item.segment1 INVENTORY_ITEM ,
        trx_line.description LINE_DESCRIPTION ,
        trx_line.description ITEM_DESCRIPTION ,
        jscus.func_usr_item_alt_desc(cust_trx.bill_to_customer_id, trx_line.inventory_item_id) ITEM_ALT_DESCRIPTION ,
        jscus.func_usr_item_cust_number(cust_trx.bill_to_customer_id, trx_line.inventory_item_id) ITEM_CUST_NUMBER ,
        ----- ordered item quantity information -----
        trx_line.uom_code UOM_CODE ,
        trx_line.quantity_invoiced QUANTITY_INVOICED ,
        trx_line.quantity_invoiced QUANTITY_ORDERED ,
        DECODE(trx_line.uom_code, 'CS', trx_line.quantity_invoiced, jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,trx_line.uom_code,'PC',to_date(oe_header.ordered_date)) * trx_line.quantity_invoiced / jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,'CS','PC',to_date(oe_header.ordered_date))) case_quantity_invoiced,
        /*CASE
        WHEN trx_line.uom_code = 'CS'
        THEN trx_line.quantity_invoiced
        ELSE (jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,trx_line.uom_code,'PC',to_date(oe_header.ordered_date)) * trx_line.quantity_invoiced / jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,'CS','PC',to_date(oe_header.ordered_date)))
        END CASE_QUANTITY_INVOICED , */
        sys_item.attribute1 LN ,
        sys_item.attribute2 WD ,
        sys_item.attribute3 HT ,
        COALESCE(((sys_item.attribute1                                 * sys_item.attribute2 * sys_item.attribute3) / 1000000), 0) UNIT_CASE_VOLUME ,
        DECODE(trx_line.uom_code, 'CS', COALESCE(((sys_item.attribute1 * sys_item.attribute2 * sys_item.attribute3) / 1000000), 0) * trx_line.quantity_invoiced, (jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,trx_line.uom_code,'PC',to_date(oe_header.ordered_date)) * trx_line.quantity_invoiced / jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,'CS','PC',to_date(oe_header.ordered_date))) * COALESCE(((sys_item.attribute1 * sys_item.attribute2 * sys_item.attribute3) / 1000000), 0)) extended_case_volume,
        /*CASE
        WHEN trx_line.uom_code = 'CS'
        THEN COALESCE(((sys_item.attribute1                                                                                     * sys_item.attribute2 * sys_item.attribute3) / 1000000), 0) * trx_line.quantity_invoiced
        ELSE (jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,trx_line.uom_code,'PC',to_date(oe_header.ordered_date)) * trx_line.quantity_invoiced / jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,'CS','PC',to_date(oe_header.ordered_date))) * COALESCE(((sys_item.attribute1 * sys_item.attribute2 * sys_item.attribute3) / 1000000), 0)
        END EXTENDED_CASE_VOLUME ,*/
        to_number(TO_CHAR(DECODE(trx_line.uom_code, 'CS', COALESCE(((sys_item.attribute1 * sys_item.attribute2 * sys_item.attribute3) / 1000000), 0) * trx_line.quantity_invoiced, (jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,trx_line.uom_code,'PC',oe_header.ordered_date) * trx_line.quantity_invoiced / jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,'CS','PC',oe_header.ordered_date)) * COALESCE(((sys_item.attribute1 * sys_item.attribute2 * sys_item.attribute3) / 1000000), 0))
        /*CASE
        WHEN trx_line.uom_code = 'CS'
        THEN COALESCE(((sys_item.attribute1                                                                                     * sys_item.attribute2 * sys_item.attribute3) / 1000000), 0) * trx_line.quantity_invoiced
        ELSE (jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,trx_line.uom_code,'PC',to_date(oe_header.ordered_date)) * trx_line.quantity_invoiced / jscus.FUNC_USR_CONVERSION_RATE(sys_item.inventory_item_id,'CS','PC',to_date(oe_header.ordered_date))) * COALESCE(((sys_item.attribute1 * sys_item.attribute2 * sys_item.attribute3) / 1000000), 0)
        END */
        ,'999999.9999')) CUBIC_METER ,
        ----- ordered item amount information -----
        trx_line.unit_standard_price UNIT_STANDARD_PRICE ,
        trx_line.unit_selling_price UNIT_SELLING_PRICE ,
        COALESCE(trx_line.gross_extended_amount,trx_line.extended_amount) LINE_AMOUNT ,
        COALESCE(trx_line.gross_extended_amount,trx_line.extended_amount) GROSS_LINE_AMOUNT ,
        trx_line.extended_amount NET_TAX_LINE_AMOUNT ,
        trx_line.revenue_amount REVENUE_AMOUNT ,
        trx_line.gross_unit_selling_price GROSS_UNIT_SELLING_PRICE ,
        tax_line.tax_rate TAX_RATE ,
        to_number(REPLACE(trim(list_line.attribute1),',')) UNIT_PRICE ,
        TO_NUMBER(REPLACE(TRIM(LIST_LINE.ATTRIBUTE1),',')) GROSS_UNIT_PRICE ,
        trx_line.unit_selling_price / ( NVL(1-LIST_LINE.ATTRIBUTE2 ,1) * NVL(1-LIST_LINE.ATTRIBUTE3 ,1) * NVL(1-LIST_LINE.ATTRIBUTE4 ,1) * NVL(1-LIST_LINE.ATTRIBUTE5 ,1) * NVL(1-LIST_LINE.ATTRIBUTE6 ,1)) NET_TAX_UNIT_PRICE ,
        --( to_number(REPLACE(trim(list_line.attribute1),',')) / (1 + (COALESCE(tax_line.tax_rate, 0) / 100)) ) NET_TAX_UNIT_PRICE ,
        tax_line.extended_amount LINE_TAX_AMOUNT ,
        trx_line.sales_tax_id SALES_TAX_ID ,
        ( NVL(1                                    -LIST_LINE.ATTRIBUTE2 ,1) * NVL(1-LIST_LINE.ATTRIBUTE3 ,1) * NVL(1-LIST_LINE.ATTRIBUTE4 ,1) * NVL(1-LIST_LINE.ATTRIBUTE5 ,1) * NVL(1-LIST_LINE.ATTRIBUTE6 ,1) ) DISCOUNT ,
        COALESCE(trim(TO_CHAR(list_line.attribute2 * 100,'990.0')),' ')
        || nvl2(list_line.attribute2,'%','')
        || nvl2(list_line.attribute3, ',', '')
        || COALESCE(trim(TO_CHAR(list_line.attribute3 * 100,'990.0')),' ')
        || nvl2(list_line.attribute3,'%','')
        || nvl2(list_line.attribute4, ',', '')
        || COALESCE(trim(TO_CHAR(list_line.attribute4 * 100,'990.0')),' ')
        || nvl2(list_line.attribute4,'%','')
        || nvl2(list_line.attribute4, ',', '')
        || COALESCE(trim(TO_CHAR(list_line.attribute5 * 100,'990.0')),' ')
        || nvl2(list_line.attribute5,'%','')
        || nvl2(list_line.attribute5, ',', '')
        || COALESCE(trim(TO_CHAR(list_line.attribute6 * 100,'990.0')),' ')
        || nvl2(list_line.attribute6,'%','') DISCOUNT_STRING_DISP,
        sys_item.segment1,
        sys_item.attribute1,
        sys_item.attribute2,
        sys_item.attribute3,
        sys_item.attribute1  * sys_item.attribute2 * sys_item.attribute3 a,
        (sys_item.attribute1 * sys_item.attribute2 * sys_item.attribute3) / 1000000 b,
        oe_header.attribute16 spec_instr
      FROM apps.ra_customer_trx_all cust_trx ,
        apps.ra_customer_trx_lines_all tax_line ,
        apps.ra_customer_trx_lines_all trx_line ,
        apps.hr_all_organization_units org_unit ,
        apps.ra_customers bill_cust ,
        apps.oe_order_headers_all oe_header ,
        apps.oe_order_lines_all oe_line ,
        apps.ra_site_uses_all ship_site ,
        apps.ra_addresses_all ship_addr ,
        apps.ra_terms terms ,
        apps.ra_salesreps_all sales_rep ,
        apps.jtf_rs_defresources_vl resources ,
        apps.mtl_system_items_b sys_item ,
        apps.ar_vat_tax_all vat_tax ,
        apps.qp_list_lines_v list_line
      WHERE cust_trx.customer_trx_id         = trx_line.customer_trx_id
      AND trx_line.line_type                 = 'LINE'
      AND trx_line.customer_trx_line_id      = tax_line.link_to_cust_trx_line_id
      AND trx_line.customer_trx_id           = tax_line.customer_trx_id
      AND cust_trx.org_id                    = org_unit.organization_id
      AND cust_trx.bill_to_customer_id       = bill_cust.customer_id
      AND trx_line.interface_line_attribute6 = oe_line.line_id
      AND oe_header.header_id                = oe_line.header_id
      AND oe_header.order_number             = trx_line.interface_line_attribute1
      AND cust_trx.ship_to_site_use_id       = ship_site.site_use_id
      AND ship_site.address_id               = ship_addr.address_id
      AND cust_trx.term_id                   = terms.term_id(+)
      AND cust_trx.primary_salesrep_id       = sales_rep.salesrep_id(+)
      AND cust_trx.org_id                    = sales_rep.org_id(+)
      AND sales_rep.resource_id              = resources.resource_id(+)
      AND trx_line.inventory_item_id         = sys_item.inventory_item_id
      AND sys_item.organization_id           = 105
      AND trx_line.vat_tax_id                = vat_tax.vat_tax_id(+)
      AND (oe_line.inventory_item_id         = list_line.product_attr_value(+)
      AND oe_line.price_list_id              = list_line.list_header_id(+)
      AND oe_line.pricing_quantity_uom       = list_line.product_uom_code(+)
      AND OE_LINE.PRICING_DATE BETWEEN COALESCE(LIST_LINE.START_DATE_ACTIVE, OE_LINE.PRICING_DATE) AND COALESCE(LIST_LINE.END_DATE_ACTIVE, OE_LINE.PRICING_DATE) )
      AND trim(cust_trx.bill_to_customer_id) NOT IN
        -- Customers excluded in Sales Invoice
        ( '1042'
        --and oe_header.order_number = '10115049'
        --and oe_header.order_number = '10115409'
        --AND cust_trx.ct_reference = '10016712'
        );
    select      TRANSACTION_BRANCH
    ,     BRANCH_ADDRESS_1
    ,     BRANCH_ADDRESS_2
    ,     BRANCH_PHONES
    ,     BRANCH_FAX
    ,     BRANCH_TIN
    ,     TRX_NUMBER
    ,     TRX_NUMBER_DISP
    ,     TRX_DATE
    ,     TRX_DATE_DISP
    ,     BILL_TO_CUSTOMER_ID
    ,     BILL_TO_CUSTOMER_NAME
    ,     SHIP_DATE_ACTUAL_DISP
    ,     SHIP_TO_NAME
    ,     SHIP_TO_ADDRESS1
    ,     SHIP_TO_ADDRESS2
    ,     SHIP_TO_ADDRESS3
    ,     SHIP_TO_ADDRESS4
    ,     SHIP_TO_CITY_PROVINCE
    ,     ITEM_ALT_DESCRIPTION
    ,     ITEM_CUST_NUMBER
    ,     CUBIC_METER
    ,     INVENTORY_ITEM
    ,     ITEM_DESCRIPTION
    ,     LINE_NUMBER
    ,     LINE_DESCRIPTION
    ,     UOM_CODE
    ,     QUANTITY_ORDERED
    ,     QUANTITY_INVOICED
    ,     UNIT_STANDARD_PRICE
    ,     UNIT_SELLING_PRICE
    ,     LINE_SALES_ORDER
    ,     LINE_TYPE
    ,     NET_TAX_LINE_AMOUNT
    ,     REVENUE_AMOUNT
    ,     TAX_RATE
    ,     GROSS_UNIT_SELLING_PRICE
    ,     GROSS_LINE_AMOUNT
    ,     LINE_AMOUNT
    ,     LINE_TAX_AMOUNT
    ,     PURCHASE_ORDER
    ,     TERMS_NAME
    ,     SALES_TAX_ID
    ,     SALESPERSON_NAME
    ,     SALESPERSON_PHONE
    ,     CUSTOMER_ORDER_NUMBER
    ,     DISCOUNT_STRING_DISP
    ,     GROSS_UNIT_PRICE
    ,     UNIT_PRICE
    ,     NET_TAX_UNIT_PRICE
    ,     DOCUMENT_TYPE
    ,     DOCUMENT_NAME
    ,     SALES_ORDER_DATE
    ,     SALES_ORDER_DATE_DISP
    ,     LINE_TAX_CODE
    ,     ORDER_REQUEST_DATE
    ,       ORDER_NUMBER
    ,     BILL_TO_TAX_REFERENCE
    ,     SHOW_DELIVERY_ADDRESS
    ,     DELIVERY_TO_ADDRESS1
    ,     DELIVERY_TO_ADDRESS2
    ,     DELIVERY_TO_CITY_PROV
    ,     decode(tin_location,null,' ','-'||tin_location) tin_location
    ,     spec_instr
    from jscus.VW_USR_AR_CUST_OFFC_SLS_INVC
    where quantity_invoiced > 0
    and order_type = coalesce($P{OE_TRAN_TYPE},order_type)
    and trim(order_number) between coalesce($P{OE_START_ORDNBR},trim(order_number))
    and coalesce($P{OE_END_ORDNBR},$P{OE_START_ORDNBR},trim(order_number))
    and to_date(sales_order_date) between coalesce($P{OE_START_ORDDATE},to_date(sales_order_date))
    and coalesce($P{OE_END_ORDDATE},$P{OE_START_ORDDATE},to_date(sales_order_date))
    and to_date(order_delivery_date) between coalesce($P{OE_START_REQDATE},to_date(order_delivery_date))
    and coalesce($P{OE_END_REQDATE},$P{OE_START_REQDATE},to_date(order_delivery_date))
    order by trx_number
    ,     document_type
    ,     inventory_item

  • Performance problem in Zstick report...

    Hi Experts,
    I am facing a performance problem in a custom stock report of Materials Management.
    In this report I am fetching all the materials with their batches to get the desired output; at a time this report processes 36,000-plus unique combinations of material and batch.
    This report takes around 30 minutes to execute, and it needs to be viewed regularly every 2 hours.
    To read the batch characteristic values I am using FM '/SAPMP/CE1_BATCH_GET_DETAIL'.
    Is there any way to improve the performance of this report? The output of the report is in ALV.
    Can I have a refresh button in the report so that the data gets refreshed automatically without executing it again, or is there any caching concept?
    Note: I have declared all the internal tables as sorted tables, and all the SELECT queries fetch by key and index.
    Thanks
    Rohit Gharwar

    Hello,
    SE30 is old. Switch on a trace in ST12 while running this program and identify where exactly most of the time is being spent. If you see high CPU time, the problem is in the ABAP code; from the ST12 trace you can figure out exactly which program or function module the time is being spent in. If you see high database time in ST12, the problem is a database-related issue, so you basically have to analyze the SQL statements from the performance traces in ST12. This should help you resolve your issue.
    Yours Sincerely
    Dileep

  • SQL report performance problem

    I have a classic SQL report in Apex 4.0.2 on database 11.2.0.2.0 with a performance problem.
    The report is based on a PL/SQL function returning a query. The query is based on a view and pl/sql functions. The Apex parsing schema has select grant on the view only, not the underlying objects.
    The generated query runs in 1-2 sec in sqlplus (logged in as the Apex parsing schema user), but takes many minutes in Apex. I have found, by monitoring the database sessions via TOAD, that the explain plan in the Apex and sqlplus sessions are very different.
    The summary:
    In sqlplus: SELECT STATEMENT ALL_ROWS Cost: 3,695
    In Apex:    SELECT STATEMENT ALL_ROWS Cost: 3,108,551
    What could be the cause of this?
    I found a blog and a Metalink note about different explain plans for different users. They suggested setting optimizer_secure_view_merging='FALSE', but that didn't help.
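    For comparison, here is a minimal sketch of how I could pull the plan the Apex session actually used from the cursor cache (the schema name and text filter below are placeholders for my report query, not actual names):
    -- Find the statement issued by the Apex session
    SELECT sql_id, child_number, elapsed_time, sql_text
    FROM   v$sql
    WHERE  parsing_schema_name = 'MY_PARSING_SCHEMA'   -- placeholder schema
    AND    sql_text LIKE '%my_report_view%';           -- placeholder filter
    -- Show the actual plan used by that cursor
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));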

    Hmmm, it runs fast again in SQL Workshop. I didn't expect that, because both the application and SQL Workshop use SYS.DBMS_SYS_SQL to parse the query.
    Only the explain plan doesn't show anything.
    To add: I changed the report source to the query the pl/sql function would generate, so the selects are the same in SQL Workshop and in the application. Still in the application it's horribly slow.
    So, Apex does do something different in the application compared to SQL Workshop.
    Edited by: InoL on Aug 5, 2011 4:50 PM

  • iPad 1 email and performance problems on iOS 5

    Since I upgraded my iPad 1 (Wi-Fi only) to iOS 5, I have been having numerous performance problems, and my email continuously says "Checking Email". I have tried restarting my iPad with no luck. Deleting and re-adding the accounts works briefly, but after 15-20 minutes the email problem comes back. The update has really crippled my iPad.
    Has anyone else had this problem? Any ideas to resolve it?
    Regards,
    Jason

    While downgrading is not actually supported by Apple, I found a video on YouTube that walked me through it. Just do a search on YouTube and you will find it. The one I used was by lizards821. His videos cover any errors you might encounter.
    Once I reverted, I then did a restore from the backup I made when upgrading to iOS 5, and all of my apps were restored while keeping my downgraded iOS.
    Good luck, it was well worth it for me. I have the stable iPad I love so much back again.

  • 6321 performance problems

    Hello again,
    we are experiencing performance problems with the 6321 in our environment - 10 kHz system clock, i.e. the analog input channels should be read every 100 µs. This works fine with one card (differentially cabled), but if we plug in a second card the system just comes to a grinding halt.
    It appears that one AI channel register access takes us about 4 µs, which would mean 64 µs for 16 channels. We have tried RLP, "pure" DMA, and DMA using interrupts.
    Am I getting something wrong or was the 6321 just not designed for this kind of application?
    Best regards,
    Philip

  • (new?) performance problem using jDriver after a Sql Server 6.5 to 2000 conversion

    Hi,
    This is similar - yet different - to a few of the old postings about performance
    problems with using jdbc drivers against Sql Server 7 & 2000.
    Here's the situation:
    I am running a standalone java application on a Solaris box using BEA's jdbc driver
    to connect to a Sql Server database on another network. The application retrieves
    data from the database through joins on several tables for approximately 40,000
    unique ids. It then processes all of this data and produces a file. We tuned
    the app so that the execution time for a single run through the application was
    24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
    a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
    Sql Server 2000 version. I ran the app and got an alarming execution time of
    5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
    and set the "useVarChars" property to "true" on the driver. The execution time
    for a single run through the application is now 56 minutes.
    56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
    over twice the execution time that I was seeing against the 6.5 database. Theoretically,
    I should be able to switch out my jdbc driver and the DBMS conversion should be
    invisible to my application. That would also mean that I should be seeing the
    same execution times with both versions of the DBMS. Has anybody else seen a
    similar situation? Are there any other settings or fixes that I can put into place
    to get my performance back down to what I was seeing with 6.5? I would rather
    not have to go through and perform another round of performance tuning after having
    already done this when the app was originally built.
    thanks,
    mike

    Mike wrote:
    Joe,
    This was actually my next step. I replaced the BEA driver with
    the MS driver and let it run through with out making any
    configuration changes, just to see what happened. I got an
    execution time of about 7 1/2 hrs (which was shocking). So,
    (comparing apples to apples) while leaving the default unicode
    property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
    I then set the 'SendStringParametersAsUnicode' to 'false' on the
    MS driver and ran another test. This time the application
    executed in just over 24 minutes. The actual runtime was 24 min
    16 sec, which is still ever so slightly above the actual runtime
    against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
    56 minutes that BEA's driver was giving me.
    I think that this is very interesting. I checked to make sure that
    there were no outside factors that may have been influencing the
    runtimes in either case, and there were none. Just to make sure,
    I ran each driver again and got the same results. It sounds like
    there are no known issues regarding this?
    We have people looking into things on the DBMS side and I'm still
    looking into things on my end, but so far none of us have found
    anything. We'd like to continue using BEA's driver for the
    support and the fact that we use Weblogic Server for all of our
    online applications, but this new data might mean that I have to
    switch drivers for this particular application. Thanks.
    No, there is no known issue, and if you put a packet sniffer
    between the client and DBMS, you will probably not see any appreciable
    difference in the content of the SQL sent be either driver. My suspicion is
    that it involves the historical backward compatibility built in to the DBMS.
    It must still handle several iterations of older applications, speaking obsolete
    versions of the DBMS protocol, and expecting different DBMS behavior!
    Our driver presents itself as a SQL7-level application, and may well be treated
    differently than a newer one. This may include different query processing.
    Because our driver is deprecated, it is unlikely that it will be changed in
    future. We will certainly support you using the MS driver, and if you look
    in the MS JDBC newsgroup, you'll see more answers from BEA folks than
    from MS people!
    Joe
    Mike
    The next test you should do, to isolate the issue, is to try another
    JDBC driver.
    MS provides a type-4 driver now, for free. If it is significantly faster,
    it would be
    interesting. However, it would still not isolate the problem, because
    we still would
    need to know what query plan is created by the DBMS, and why.
    Joe Weinstein at BEA
    PS: I can only tell you that our driver has not changed in its semantic
    function.
    It essentially sends SQL to the DBMS. It doesn't alter it.

  • LR3 "Extra Processing in Develop" Performance Problem

    I have been investigating a specific LR3 performance problem.  It may explain a small subset of the problems people have reported in the "Why is LR3 So Slow?" thread.   I'm starting this thread to focus on this particular problem.  I hope others will confirm/refute/refine my findings.
    The Problem
    In Develop, when I make an adjustment, normally the following happens: The CPU usage (as shown in Activity Monitor's bar graph) jumps to between 50 and 75% for all four cores, the updated image appears, and the CPU usage settles back down.  This all happens in less than half a second.  Note: this is with the image at the Fit size.  However, sometimes I instead get the following after an adjustment: the CPU usage jumps to 50 to 75% for all four cores and the updated image appears as usual, however, instead of settling back down, the CPU usage jumps up to 90 to 100% for all cores and stays there for 3 to 5 seconds before settling down. Thus it appears that LR is doing some kind of "extra processing" since a lot of computation is happening AFTER the updated image has already appeared.  I will refer to this problem as "EP".  Obviously, when you are getting EP, editing in Develop becomes very balky.
    Dependency on ratio between image size and displayed size
    It appears that EP only happens when the displayed size of the image (in Fit zoom level and perhaps also Fill zoom level) is above a certain percentage of the actual image size (as currently cropped).  Evidence: When editing full 21MP 5D2 images, I don't experience EP.  If I crop the 5D2 image fairly significantly, then I can get EP.  When editing 10MP images from my Canon S90, I usually get EP for landscape orientation pictures but not for portrait orientation pictures (since in Fit mode, landscape images display at a higher zoom level than portrait images).  If I am getting EP, I can eliminate it by sufficiently reducing the size that LR is displaying the image by resizing the LR window smaller, opening additional panels (I normally edit with only the right panel open), displaying the toolbar, etc.  It appears that EP is enabled when the displayed image is about 50% or larger w.r.t. the actual image (as currently cropped).  For example, EP becomes enabled when a 3648 pixel wide S90 image is displayed at least 17 and 7/8 inches wide on my 100 ppi monitor (i.e. about 1787 pixels).
    Dependency on HOW an adjustment is invoked
    Even when the displayed image size is large enough w.r.t. the actual image size to enable EP, whether you get it on a given adjustment depends on how you invoke it:
    - If you CLICK (i.e. press the mouse button down and quickly release it) on the track of one of the sliders (a technique I use often to make big jumps), EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, and quickly release the mouse button), EP will happen
    - If you press the mouse button down on a slider handle, drag it to a new position, but continue to hold the mouse button down until the displayed image is updated, EP does NOT happen (either before or after you then release the mouse button).
    - If you highlight the numeric field at the end of a slider and use the arrow keys (possibly along with Shift) to increment or decrement the value, EP does NOT happen.
    - EP will happen if you resize the LR window such that the displayed image size is above the threshold.  (In fact, I determined the threshold by making a series of window width increases until I saw EP indicated by the CPU bar graphs.)
    - EP can happen with local adjustment brush applications, but as with the sliders, it depends on HOW you perform the brush stroke.  Single click and drags with immediate mouse release cause EP, drags with delayed mouse button release don't.
    - Clicking an earlier History state causes EP
    - More exploration could be done.  For example, I haven't looked at Graduated Filter and Spot Removal adjustments.
    My theory of what's happening
    With LR2, my understanding is that in Develop mode when the displayed image is below 1:1 zoom level, after an adjustment is invoked, LR calculates the new version of the image to display using a fast, simplified algorithm that doesn't include the more computationally intensive adjustments like Sharpening and Noise Reduction (and perhaps works on a lower rez version of the image with multiple sensels binned together?).  It appears that in conditions described above, LR3 calculates the initial, fast image update and then goes on to do the full update of the image, including the computationally intensive adjustments.  Evidence:   setting Sharpening Amount and Luminance and Color Noise Reduction to zero eliminates EP (or reduces the amount of time it takes to be barely noticeable).  I'm not sure whether the displayed image is updated with the results of the extra processing.  I think the answer is Yes since when I tried an adjustment of changing Sharpening Amount from 0 to 90, the initial update of the displayed image showed sharpening but after the EP, the displayed image was updated again to show somewhat different sharpening. Perhaps Adobe felt that it would be useful to see the more accurate version of the image when it is at or above 50% zoom.  Maybe the UI is supposed to cancel the EP if you start to make another adjustment before it has completed but the canceling doesn't happen unless you invoke the adjustment in one of the ways described above that doesn't cause EP.  
    Misc
    - EP doesn't seem to happen for Process 2003
    - As others have mentioned, I'm surprised that LR (both version 2 and 3) in 64bit mode doesn't use more available RAM.  I don't think I've seen LR go above 4GB of virtual memory or above 3GB "Real Memory" (as reported by Activity Monitor) even though I have several GB free.
    - It should be obvious from the above that if you experience EP, there are workarounds: reduce the size of the displayed image (e.g. by window resizing), invoke adjustments in ways that don't cause EP, turn off Sharpening and Noise Reduction until the end of editing an image.
    System specs
    First generation Intel Mac Pro with two dual-core CPUs at 2.66 Ghz
    OS 10.5.8
    21GB RAM
    ACR cache on volume striped across 3 internal SATA drives
    LR catalog and RAWs on an internal SATA drive
    30" HP LP3065 monitor (2560 pixels wide)
    NVIDIA GeForce 7300 GT

    I'm impressed by your thorough analysis.
    Clearly, the programmers haven't figured out the best way to do intelligent caching and/or parallel rendering at a reduced size yet.
    In my experience reducing the settings in the "Details" panel doesn't help.
    What really bugs me is that the lag (or increasing lack of interactiveness) depends on the number of adjustments one has made.
    This shouldn't be the case. If a cache is produced then every further adjustment should only cost the effort for that latest adjustment and not include adjustments before it. There are things that stand in the way of straightforward edit applications:
    1. If you work below 1:1 preview, adjustments have to be shown in a reduced form. If you don't have a way to faithfully mimic the adjustments on the reduced size, you have to do them on the original image and then scale down. That's expensive.
    2. To the best of my knowledge LR uses a fixed image pipeline. Hence, independently of the order in which you apply edits, they are always performed in the same fixed order. Say all spot removal operations are done first. If you have a lot of adjustment brush edits and then add a spot removal operation, it means that all the adjustment brush operations have to be replayed each time you do a little adjustment on your spot removal edit.
    I believe what you are seeing is mostly related to 1.
    I also believe that the way LR currently handles a moderate number of edits is unacceptable and incompatible with the notion that it is usable in a commercial setting for more than trivial edits. I suspect there is something else going on. If everyone saw the deterioration in performance after a number of edits that I see, I don't think LR would be as accepted as it is. Having said that, I've read that the problem of repeated applications of the adjustment brush slowing LR down has existed for a long time. I truly hope that this doesn't mean we'll have to live with it for the foreseeable future.
    There are two ways I can see how 2. should be addressed:
    1. Combine the effects of a set of operations into one bitmap operation. Instead of replaying all adjustment brush strokes one after the other (speedwise it feels like this is happening), compute a single bitmap operation that combines all effects.
    2. Give up the idea that there is an image pipeline with a fixed execution order.
    Some might argue that the second point is at odds with the whole idea of parametric editing, but I dispute that. Either edit operations are commutable in which case the order is immaterial, or they are not. If they are not, the user applies the edits in a way as he/she sees fit and will thus compensate for any effect of a changed ordering.
    N.B., currently the doctrine of "fixed ordering of edit applications" results in the effect that even if you convert an image into B&W all your adjustment brush edits that applied colour tints will still show through. Reasoning: The user should be able to locally tint a B&W image. I agree with the latter but this could be achieved by only applying those tinting brush strokes that were created after the B&W conversion. All the ones that happened before should only be used to obtain the correct luminance values for the B&W conversion but obviously they shouldn't cause tinted areas.
    The above example demonstrates to me that users naturally expect operations to occur in the order they have been introduced, not in a fixed predefined order. If that principle were followed, I see no reason why the speed of a single edit should depend on the number of edits that were done to the image before.
    I hope the programmers can (and the management wants to) address the performance issues. While I find LR usable for pretty modest edits, in no way does the performance on my system approach what I would expect from an industrial-strength application.
    P.S.: Your message reminded me of the following: When I experience serious lag with LR showing the strokes I make with an adjustment brush, it helps to pause a moment after the first click before one starts moving. This allows LR to catch up and then one can see the effect of the application pretty much interactively. Otherwise, there is terrible lag and the feedback where you have brushed an effect comes way too late.

  • Performance problems after installing XP Service Pack 3

    Hello,
    After upgrading my client XP machine to Service Pack 3, I have serious performance issues working with ApEx. Firstly, establishing the connection to the ApEx workspace and then logging in are both very slow. Navigating through the workspace works fine; however, when I then open any page for editing and/or try to run it, it is again very slow. As we work with the PL/SQL embedded gateway, I have changed the parameter SHARED_SERVERS to 10, but this did not make any difference. Remotely connecting to this server in other ways, such as ping, connecting to xmlpserver, mapping network drives and connecting to the database, all work fine.
    My colleagues, also working with Service Pack 3, have no performance issues. However, when I performed a rollback of XP Service Pack 3, the performance issue was gone!
    Does anyone know of any issues similar to this problem?
    Regards,
    Ben van Dort

    This issue is probably related to an update of InternetExplorer by Service Pack 3. Working with Firefox instead of IE gives no performance problems whatsoever.
    Version of IE after Service Pack 3: 6.0.2900.5512.xpsp_sp3_gdr.080814-1236CO
    Ben

  • URGENT------MB5B : PERFORMANCE PROBLEM

    Hi,
    We are getting a time-out error while running transaction MB5B. We have posted the issue to SAP global support for further analysis, and SAP replied with Note 1005901 to review.
    The note consists of creating a Z table and some Z programs to execute MB5B without the time-out error, but SAP has not specified what kind of logic has to be written or how this should be addressed.
    Could anyone suggest how we can proceed further?
    The note has been attached for reference.
              Note 1005901 - MB5B: Performance problems
    Note Language: English Version: 3 Validity: Valid from 05.12.2006
    Summary
    Symptom
    o The user starts transaction MB5B, or the respective report
    RM07MLBD, for a very large number of materials or for all materials
    in a plant.
    o The transaction terminates with the ABAP runtime error
    DBIF_RSQL_INVALID_RSQL.
    o The transaction runtime is very long and it terminates with the
    ABAP runtime error TIME_OUT.
    o During the runtime of transaction MB5B, goods movements are posted
    in parallel:
    - The results of transaction MB5B are incorrect.
    - Each run of transaction MB5B returns different results for the
    same combination of "material + plant".
    More Terms
    MB5B, RM07MLBD, runtime, performance, short dump
    Cause and Prerequisites
    The DBIF_RSQL_INVALID_RSQL runtime error may occur if you enter too many
    individual material numbers in the selection screen for the database
    selection.
    The runtime is long because of the way report RM07MLBD works. It reads the
    stocks and values from the material masters first, then the MM documents
    and, in "Valuated Stock" mode, it then reads the respective FI documents.
    If there are many MM and FI documents in the system, the runtimes can be
    very long.
    If goods movements are posted during the runtime of transaction MB5B for
    materials that should also be processed by transaction MB5B, transaction
    MB5B may return incorrect results.
    Example: Transaction MB5B should process 100 materials with 10,000 MM
    documents each. The system takes approximately 1 second to read the
    material master data and it takes approximately 1 hour to read the MM and
    FI documents. A goods movement for a material to be processed is posted
    approximately 10 minutes after you start transaction MB5B. The stock for
    this material before this posting has already been determined. The new MM
    document is also read, however. The stock read before the posting is used
    as the basis for calculating the stocks for the start and end date.
    If you execute transaction MB5B during a time when no goods movements are
    posted, these incorrect results do not occur.
    Solution
    The SAP standard release does not include a solution that allows you to
    process mass data using transaction MB5B. The requirements for transaction
    MB5B are very customer-specific. To allow for these customer-specific
    requirements, we provide the following proposed implementation:
    Implementation proposal:
    o You should call transaction MB5B for only one "material + plant"
    combination at a time.
    o The list outputs for each of these runs are collected and at the
    end of the processing they are prepared for a large list output.
    You need three reports and one database table for this function. You can
    store the lists in the INDX cluster table.
    o Define work database table ZZ_MB5B with the following fields:
    - Material number
    - Plant
    - Valuation area
    - Key field for INDX cluster table
    o The size category of the table should be based on the number of
    entries in material valuation table MBEW.
    Report ZZ_MB5B_PREPARE
    In the first step, this report deletes all existing entries from the
    ZZ_MB5B work table and the INDX cluster table from the last mass data
    processing run of transaction MB5B.
    o The ZZ_MB5B work table is filled in accordance with the selected
    mode of transaction MB5B:
    - Stock type mode = Valuated stock
    - Include one entry in work table ZZ_MB5B for every "material +
    valuation area" combination from table MBEW.
    o Other modes:
    - Include one entry in work table ZZ_MB5B for every "material +
    plant" combination from table MARC
    Furthermore, the new entries in work table ZZ_MB5B are assigned a unique
    22-character string that later serves as a key term for cluster table INDX.
    Report ZZ_MB5B_MONITOR
    This report reads the entries sequentially in work table ZZ_MB5B. Depending
    on the mode of transaction MB5B, a lock is executed as follows:
    o Stock type mode = Valuated stock
    For every "material + valuation area" combination, the system
    determines all "material + plant" combinations. All determined
    "material + plant" combinations are locked.
    o Other modes:
    - Every "material + plant" combination is locked.
    - The entries from the ZZ_MB5B work table can be processed as
    follows only if they have been locked successfully.
    - Start report RM07MLBD for the current "Material + plant"
    combination, or "material + valuation area" combination,
    depending on the required mode.
    - The list created is stored with the generated key term in the
    INDX cluster table.
    - The current entry is deleted from the ZZ_MB5B work table.
    - Database updates are executed with COMMIT WORK AND WAIT.
    - The lock is released.
    - The system reads the next entry in the ZZ_MB5B work table.
    Application
    - The lock ensures that no goods movements can be posted during
    the runtime of the RM07MLBD report for the "material + Plant"
    combination to be processed.
    - You can start several instances of this report at the same
    time. This method ensures that all "material + plant"
    combinations can be processed at the same time.
    - The system takes just a few seconds to process a "material +
    Plant" combination so there is just minimum disruption to
    production operation.
    - This report is started until there are no more entries in the
    ZZ_MB5B work table.
    - If the report terminates or is interrupted, it can be started
    again at any time.
    Report ZZ_MB5B_PRINT
    You can use this report when all combinations of "material + plant", or
    "material + valuation area" from the ZZ_MB5B work table have been
    processed. The report reads the saved lists from the INDX cluster table and
    adds these individual lists to a complete list output.
    Estimated implementation effort
    An experienced ABAP programmer requires an estimated three to five days to
    create the ZZ_MB5B work table and these three reports. You can find a
    similar program as an example in Note 32236: MBMSSQUA.
    If you need support during the implementation, contact your SAP consultant.
    Header Data
    Release Status: Released for Customer
    Released on: 05.12.2006 16:14:11
    Priority: Recommendations/additional info
    Category: Consulting
    Main Component MM-IM-GF-REP IM Reporting (no LIS)
    The note is not release-dependent.     
    Thanks in advance.
    Edited by: Neliea on Jan 9, 2008 10:38 AM
    Edited by: Neliea on Jan 9, 2008 10:39 AM

    Before you try any of this, try working with database hints as described in Notes 921165, 902157 and 918992.

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try and extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (reporting now 3.3GB) but when I dropped into WoW (pretty much the only game I have) the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).  Basically WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennig's worth.  I know, I know it's Euro's now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    PAE or Physical Address Extension will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled.  But since its original function (address extension) no longer exists in the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM switch:  In 32-bit Windows operating systems, every process is limited to 2GB of memory.  The point of the switch is to allow certain applications (or their run-time processes) to occupy a higher amount of system memory than 2GB.  However, the catch is that only applications that have been programmed (or compiled) accordingly are able to use this ability.  A special flag (large memory aware) has to be implemented.  Otherwise, these applications will be restricted to 2GB even though the /MAXMEM switch has been set to extend the 2GB limit to 3GB.  Most 32-bit applications come without the "large memory aware" flag, and that is why setting the switch usually won't change anything.
    In any case, it is unlikely that /PAE (even if it were not castrated) and /MAXMEM would have an impact on your actual issue, because I doubt that it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason that the system chooses to automatically set the memory speed to DDR2-667 even though DDR2-800 modules are installed is that, by design, the memory controller of the Intel 975X Chipset does not natively support DDR2-800 modules; see:
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means, that from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4).  This can work of course, but in practice it does not necessarily do so under all circumstances.
    This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that utilize a wide range of different memory chips (Elpida, ProMOS, Micron, Infineon, Powerchip, Qimonda, Samsung, etc.).  Even though the superficial specifications for these chips appear to be pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way. There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs which may affect the operating behaviour of the memory controller (which, by the way, also includes the PCI-Express interface your video card is hooked up to).
    And again:  If running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore: It is also possible that the memory controller's design limitations and the potential compatibility problems that may be attributed to mixing different modules types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS Releases may come out as reduced system performance when using the latest BIOS Release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask what amount of video memory your card has onboard?
    Fortunately, there is a BIOS version that you could consider trying in this matter.  It is not only the last BIOS release that could be used in order to avoid the corruption issue, but it is (in my opinion) the best BIOS version that was ever released for the 975X Platinum PUE Mainboard:  W7246IMS.716 [v7.1b6].  I have been using this mainboard for almost two years and have tested almost every BIOS release that ever came out, and I always went back to v7.1b6 as "ground zero".
    It will properly support your E6600 (so you don't have to worry about that) and as far as I remember, there are no known compatibility issues with other components.  So maybe, you want to give this a shot.
    The bottom line is that in a worst case scenario, the problem you describe could be caused by all of the above things at the same time.  You cannot really do anything about the 975X Chipset Specifications and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS Release is something you should consider.  It may be the only way to test the BIOS Hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • Performance Problem between Oracle 9i to Oracle 10g using Crystal XI

    We have a Crystal XI Report using ODBC Drivers, 14 tables, and one sub report. If we execute the report on an Oracle 9i database the report will complete in about 12 seconds. If we execute the report on an Oracle 10g database the report will complete in about 35 seconds.
    Our technical Setup:
    Application server: Windows Server 2003, Running Crystal XI SP2 Runtime dlls with Oracle Client 10.01.00.02, .Net Framework 1.1, C# for Crystal Integration, Unmanaged C++ for app server environment calling into C# through a dynamically loaded mixed-mode C++ DLL.
    Database server is Oracle 10g
    What we have concluded:
    Reducing the number of tables to 1 reduces the execution time of the report from 180s to 13s. With 1 table and the sub report we get about 30 seconds.
    We have done some database tracing and see that Crystal Reports issues the following query when verifying the database, and that this query takes longer in 10g than in 9i.
    We have done some profiling in the application code. When we retarget the first table to the target database, it takes 20-30 times longer in 10g than in 9i. Retargeting the other tables takes about twice as long. The export to a PDF file takes about 4-5 times as long in 10g as in 9i.
    Oracle 10g no longer supports the /*+ RULE */ hint.
    Verify DB Query:
    select /*+ RULE */ *
    from ( select /*+ RULE */ null table_qualifier, o1.owner table_owner, o1.object_name table_name,
                  decode(o1.owner,
                         'SYS',    decode(o1.object_type, 'TABLE', 'SYSTEM TABLE', 'VIEW', 'SYSTEM VIEW', o1.object_type),
                         'SYSTEM', decode(o1.object_type, 'TABLE', 'SYSTEM TABLE', 'VIEW', 'SYSTEM VIEW', o1.object_type),
                         o1.object_type) table_type,
                  null remarks
             from all_objects o1
            where o1.object_type in ('TABLE', 'VIEW')
           union
           select /*+ RULE */ null table_qualifier, s.owner table_owner, s.synonym_name table_name,
                  'SYNONYM' table_type, null remarks
             from all_objects o3, all_synonyms s
            where o3.object_type in ('TABLE', 'VIEW')
              and s.table_owner = o3.owner
              and s.table_name = o3.object_name
           union
           select /*+ RULE */ null table_qualifier, s1.owner table_owner, s1.synonym_name table_name,
                  'SYNONYM' table_type, null remarks
             from all_synonyms s1
            where s1.db_link is not null
         ) tables
    WHERE 1=1 AND TABLE_NAME='QCTRL_VESSEL' AND table_owner='QLM'
    ORDER BY 4, 2, 3
    SQL From Main Report:
    SELECT "QCODE_PRODUCT"."PROD_DESCR", "QCTRL_CONTACT"."CONTACT_FIRST_NM", "QCTRL_CONTACT"."CONTACT_LAST_NM", "QCTRL_MEAS_PT"."MP_NM", "QCTRL_ORG"."ORG_NM", "QCTRL_TKT"."SYS_TKT_NO", "QCTRL_TRK_BOL"."START_DT", "QCTRL_TRK_BOL"."END_DT", "QCTRL_TRK_BOL"."DESTINATION", "QCTRL_TRK_BOL"."LOAD_TEMP", "QCTRL_TRK_BOL"."LOAD_PCT", "QCTRL_TRK_BOL"."WEIGHT_OUT", "QCTRL_TRK_BOL"."WEIGHT_IN", "QCTRL_TRK_BOL"."WEIGHT_OUT_UOM_CD", "QCTRL_TRK_BOL"."WEIGHT_IN_UOM_CD", "QCTRL_TRK_BOL"."VAPOR_PRES", "QCTRL_TRK_BOL"."SPECIFIC_GRAV", "QCTRL_TRK_BOL"."PMO_NO", "QCTRL_TRK_BOL"."ODORIZED_VOL", "QARCH_SEC_USER"."SEC_USER_NM", "QCTRL_TKT"."DEM_CTR_NO", "QCTRL_BA_ENTITY"."BA_NM1", "QCTRL_BA_ENTITY_VW"."BA_NM1", "QCTRL_BA_ENTITY"."BA_ID", "QCTRL_TRK_BOL"."VOLUME", "QCTRL_TRK_BOL"."UOM_CD", "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD", "QXREF_BOL_PROD"."BOL_DESCR", "QCTRL_TKT"."VOL", "QCTRL_TKT"."UOM_CD", "QCTRL_PMO"."LINE_UP_BEFORE", "QCTRL_PMO"."LINE_UP_AFTER", "QCODE_UOM"."UOM_DESCR", "QCTRL_ORG_VW"."ORG_NM"
    FROM (((((((((((("QLM"."QCTRL_TRK_BOL" "QCTRL_TRK_BOL" INNER JOIN "QLM"."QCTRL_PMO" "QCTRL_PMO" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_PMO"."PMO_NO") INNER JOIN "QLM"."QCTRL_MEAS_PT" "QCTRL_MEAS_PT" ON "QCTRL_TRK_BOL"."SUP_MP_ID"="QCTRL_MEAS_PT"."MP_ID") INNER JOIN "QLM"."QCTRL_TKT" "QCTRL_TKT" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_TKT"."PMO_NO") INNER JOIN "QLM"."QCTRL_CONTACT" "QCTRL_CONTACT" ON "QCTRL_TRK_BOL"."DRIVER_CONTACT_ID"="QCTRL_CONTACT"."CONTACT_ID") INNER JOIN "QFC_QLM"."QARCH_SEC_USER" "QARCH_SEC_USER" ON "QCTRL_TRK_BOL"."USER_ID"="QARCH_SEC_USER"."SEC_USER_ID") LEFT OUTER JOIN "QLM"."QCODE_UOM" "QCODE_UOM" ON "QCTRL_TRK_BOL"."ODORIZED_VOL_UOM_CD"="QCODE_UOM"."UOM_CD") INNER JOIN "QLM"."QCTRL_ORG_VW" "QCTRL_ORG_VW" ON "QCTRL_MEAS_PT"."ORG_ID"="QCTRL_ORG_VW"."ORG_ID") INNER JOIN "QLM"."QCTRL_BA_ENTITY" "QCTRL_BA_ENTITY" ON "QCTRL_TKT"."DEM_BA_ID"="QCTRL_BA_ENTITY"."BA_ID") INNER JOIN "QLM"."QCTRL_CTR_HDR" "QCTRL_CTR_HDR" ON "QCTRL_PMO"."DEM_CTR_NO"="QCTRL_CTR_HDR"."CTR_NO") INNER JOIN "QLM"."QCODE_PRODUCT" "QCODE_PRODUCT" ON "QCTRL_PMO"."PROD_CD"="QCODE_PRODUCT"."PROD_CD") INNER JOIN "QLM"."QCTRL_BA_ENTITY_VW" "QCTRL_BA_ENTITY_VW" ON "QCTRL_PMO"."VESSEL_BA_ID"="QCTRL_BA_ENTITY_VW"."BA_ID") LEFT OUTER JOIN "QLM"."QXREF_BOL_PROD" "QXREF_BOL_PROD" ON "QCTRL_PMO"."PROD_CD"="QXREF_BOL_PROD"."PURITY_PROD_CD") INNER JOIN "QLM"."QCTRL_ORG" "QCTRL_ORG" ON "QCTRL_CTR_HDR"."BUSINESS_UNIT_ORG_ID"="QCTRL_ORG"."ORG_ID"
    WHERE "QCTRL_TRK_BOL"."PMO_NO"=12345 AND "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD"='TRK'
    SQL From Sub Report:
    SELECT "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
    FROM "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
    WHERE "QXREF_BOL_VESSEL"."PMO_NO"=12345
    Does anyone have any suggestions on how we can improve the report performance with 10g?

    Hi Eric,
    Thanks for your response. The optimizer mode in our 9i database is CHOOSE. We changed the optimizer mode from ALL_ROWS to CHOOSE in 10g but it didn't make a difference.
    While researching Metalink I came across a couple of documents that describe performance problems with certain data-dictionary views in 10g. Apparently, the definitions of ALL_OBJECTS, ALL_ARGUMENTS and ALL_SYNONYMS changed in 10g, resulting in degraded performance when these views are queried. These are the same views that Crystal Reports queries. We'll try the workaround suggested in these documents and see if it resolves the issue.
    Here are the Doc Ids, if you are interested:
    Note 377037.1
    Note 364822.1
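    In case anyone else hits this: a commonly suggested first step when queries against dictionary views such as ALL_OBJECTS and ALL_SYNONYMS regress after an upgrade is to refresh the dictionary and fixed-object statistics. This is only a sketch of that idea and is not necessarily the exact workaround those notes describe:
    -- Sketch only: gather statistics on the data dictionary and the X$ fixed objects
    -- (both DBMS_STATS procedures exist in 10g; run from a suitably privileged account).
    exec dbms_stats.gather_dictionary_stats;
    exec dbms_stats.gather_fixed_objects_stats;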
    Thanks again for your response.
    Venu Boddu.

  • Performance problems on a Oracle 11G with Windows 2008 64bits.

    Hi everyone,
    I have noticed that our database has been getting slower and slower every week. My server has 16GB of RAM, of which 10GB are dedicated to the Oracle database; this is 11.2.0.1 on Windows 2008 R2 SP1 64-bit. Given the values below, I would like to know what you recommend adjusting in the init.ora:
    orcl.__db_cache_size=5402263552
    orcl.__java_pool_size=33554432
    orcl.__large_pool_size=33554432
    orcl.__pga_aggregate_target=3657433088
    orcl.__sga_target=6878658560
    orcl.__shared_io_pool_size=0
    orcl.__shared_pool_size=1308622848
    orcl.__streams_pool_size=33554432
    *.memory_target=10511974400
    *.open_cursors=5000
    *.optimizer_mode='RULE'
    *.processes=300
    Given the memory_target, how should the other values - processes, pga_aggregate_target, etc. - be adjusted?
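    For what it's worth, before touching the individual pools I also plan to check the 11g memory advisor to see whether memory_target itself is sized sensibly; a small sketch (the view is populated when automatic memory management is active, as it is here with memory_target set):
    -- Does shrinking or growing memory_target change the estimated DB time?
    select memory_size, memory_size_factor, estd_db_time, estd_db_time_factor
      from v$memory_target_advice
     order by memory_size_factor;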
    We also have problems related to Bug 9593134 ("Connections to Oracle 11g are slow and can take anywhere from 10 seconds to 2 minutes"). There is a fix on Linux that involves removing the DNS names, but does anyone have experience with this on Windows platforms?
    Thanks for everything, and sorry for my English.
    Regards.
    Arturo.

    Regarding the long connection times, have you tried using network packet capture software (such as Wireshark) to determine what the client computer is doing when a connection attempt is initiated?
    The Oracle Database time model statistics, along with the system wide wait events may help you diagnose the non-connection related performance issues (you should not just look at the statistics, but instead capture the current values, wait a period of time, capture the statistics again, and compare the changes in the statistic values). A statspack report might also help you - but a 10046 trace at level 8 or 12 is more appropriate if you are able to identify a couple of sessions that experience performance problems.
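    To make the capture/compare approach and the 10046 trace concrete, here is a sketch of what they could look like (the SID and SERIAL# below are placeholders that you would look up in V$SESSION for one of the slow sessions):
    -- Snapshot the time model and the wait events; rerun these after a period of
    -- activity and compare the deltas rather than reading a single snapshot.
    select stat_name, value
      from v$sys_time_model
     order by value desc;
    select event, total_waits, time_waited_micro
      from v$system_event
     order by time_waited_micro desc;
    -- Level 8 (waits) 10046 trace for one session; 143 and 7 are placeholder
    -- SID and SERIAL# values taken from V$SESSION.
    exec dbms_monitor.session_trace_enable(session_id => 143, serial_num => 7, waits => TRUE, binds => FALSE);
    -- ... let the session reproduce the slow operation, then:
    exec dbms_monitor.session_trace_disable(session_id => 143, serial_num => 7);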
    I do not suggest just blindly modifying parameters, although I am curious to know:
    * Why the session level parameter OPEN_CURSORS is set to 5000 - do you expect a single session to hold open 5,000 cursors? (A quick query to check actual cursor usage is sketched after these questions.)
    * Why are you using the deprecated RULE based optimizer?
    * Why is the MEMORY_TARGET parameter used when the SGA_TARGET and PGA_AGGREGATE target are specified?
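    As a sketch of the check mentioned above for the OPEN_CURSORS question (standard V$ views only, nothing application specific):
    -- Highest number of cursors any single session currently holds open;
    -- if this is nowhere near 5,000 the OPEN_CURSORS setting is oversized.
    select max(a.value) as highest_open_cursors
      from v$sesstat a, v$statname b
     where a.statistic# = b.statistic#
       and b.name = 'opened cursors current';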
    Charles Hooper
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • 1.1 performance problems related to system configuration?

    It seems like a lot of people are having serious performance problems with Aperture 1.1 in areas where they didn't have any (or at least not as many) problems in the previous 1.0.1 release.
    Most often these problems show up as slow behaviour of the application when switching views (especially into and out of full view), loading images into the viewer or doing image adjustments. In most cases Aperture works normally for some time and then starts to slow down gradually, up to the point where images are no longer refreshed correctly or the whole application crashes. Most of the time simply restarting Aperture doesn't help; one has to restart the OS.
    Most of the time the problems occur in conjunction with CPU usage rates which are much higher than in 1.0.1.
    For some people even other applications seem to be affected, to the point where the whole system has to be restarted to get everything working at full speed again. Shutdown times also seem to increase dramatically after such an Aperture slowdown.
    My intention in this thread is to collect information about the system configurations of users who are experiencing such problems. At the moment it does not look like these problems are related only to special configurations, but maybe we can find a common point if we collect as much information as possible about systems where Aperture 1.1 shows this behaviour.
    Before I continue with my configuration, I would like to point out that this thread is not about general speed issues with Aperture. If you're not able to work smoothly with 16MPix RAW files on G5 systems with Radeon 9650 video cards, or Aperture is generally slow on your iBook 14" system where you installed it with a hack, then this is not the right thread. I fully understand if you want to complain about these general speed issues, but please refrain from doing so in this thread.
    Here I only want to collect information from people who either know that some things worked considerably faster in the previous release or who notice that Aperture 1.1 really slows down after some time of use.
    Enough said, here is my information:
    - Powermac G5 Dualcore 2.0
    - 2.5 GB RAM
    - Nvidia 7800GT (flashed PC version)
    - System disk: Software RAID0 (2 WD 10000rpm 74GB Raptor drives)
    - Aperture library on a hardware RAID0 (2 Maxtor 160GB drives) connected to Highpoint RocketRAID 2320 PCIe adapter
    - Displays: 17" and 15" TFT
    I do not think that we need more information; things like external drives (apart from the ones used for the actual library), SuperDrive types, connected USB stuff like printers, scanners etc. shouldn't make any difference, so there is no need to report that. Also, it is self-evident that Mac OS 10.4.6 is used.
    Of interest might be any internal cards (PCIe/PCI/PCI-X...) built into your system, like my RAID adapter, Decklink cards (wasn't there a report about problems with them?), any other special video or audio cards, or additional graphics cards.
    Again, please only post here if you're experiencing any of the mentioned problems and please try to keep your information as condensed as possible. This thread is about collecting data, there are already enough other threads where the specific problems (or other general speed issues) are discussed.
    Bye,
    Carsten
    BTW: Within the next week I will perform some tests which will include replacing my 7800GT with the original 6600 and removing as much extra stuff from my system as possible to see if that helps.

    Yesterday I had my first decent run in 1.1 and was pleased I avoided a lot of the performance issues that seemed to affect others.
    After I posted, I got hit by a big slow-down in system performance. I tried to quit Aperture but couldn't; it had no tasks in its activity window. However, Activity Monitor showed Aperture as a 30-thread, 1.4GB virtual memory hairball soaking up 80-90% of my 4 CPUs. Given the high CPU activity I suspected the reason was not my 2GB of RAM, although it's obviously better with more. So what caused the sudden decrease in system performance after 6 hours of relatively trouble-free editing/sorting with 1.1?
    This morning I re-created the issue. Before I go further: when I ran 1.1 for the first time I did not migrate my whole library to the new RAW algorithm (it's not called the bleeding edge for nothing). So this morning I selected one project and migrated all its RAW images to 1.1, and after the progress bar completed its work, the CPUs ramped up and the system got bogged down again.
    So Aperture is doing a background task that consumes large amounts of CPU power, shows nothing in its activity window and takes a very long time to complete. My project had 89 RAW images migrated to the 1.1 algorithm and it took 4 minutes to complete those 'background processes' (more reconstituting of images?). I'm not sure what it is doing, but it takes a long time and gives no obvious sign that this is normal. If you leave it to complete its work, the system returns to normal. More of an issue is that the system lets you continue to work while the background processes crank away, compounding the heavy workload.
    Bit of a guess, but is this what is causing people's system problems? As I said, if I left my quad alone for 4 minutes everything returned to normal. It's just that there is no obvious sign it will ever finish, so you keep working and compound the slow-down.
    In the interests of research I migrated another project of 245 8MB RAWs to the 1.1 algorithm and it took 8 minutes. The first 5 minutes consumed 1GB of virtual memory over 20 threads at an average of 250% CPU usage for Aperture alone. The last three minutes saw the CPUs ramp higher to 350% and virtual memory increase to 1.2GB. After the 8 minutes everything returned to normal and the fans slowed down (excellent fan/noise behaviour on these quads).
    Is this what others are seeing?
    When you force quit Aperture during these system slow-downs, what effect does this have on your images? Do the uncompleted background processes restart when you go to view them again?
    If I get time I'll try to compare this to my MBP.

  • 10.6 Performance Problems

    Although I installed Snow Leopard from scratch, I encountered severe performance problems after a while. Copying a file of some 100 MB, for instance, took minutes instead of seconds. Switching between windows took a long time. Processing was interrupted by waiting loops every few seconds. And so on.
    I looked around in various forums for hints on how to solve this problem, but nothing worked. Activity Monitor doesn't show anything unusual; from its point of view everything is fine.
    In the meantime, I have reinstalled Snow Leopard from scratch again. After installing iLife 08, I now have the impression that some Finder operations are getting slower again. This may be a clue to the cause of the performance problems. However, it only affects file copying times, not application performance, so it does not explain the full picture.
    So my question: does anyone else who has performance problems with SL see similar behaviour in combination with iLife 08? Has anyone else had similar performance problems and solved them?
    Regards,
    Hardy

    Sometimes system performance is impacted by permission errors, so I would recommend running Disk Utility and repairing permissions. Also, just in case, check the disk to make sure you don't have any bad sectors. You can also use a system utility to optimize system performance; Onyx is a good utility that is also free - just make sure to download the appropriate version for your system. http://www.titanium.free.fr/pgs2/english/download.html
