OWB performance issue (mapping execution always takes at least 60 sec)

Hi All,
For any OWB mapping we execute in one of our environments, the execution seems to hang for some time at the "Attempting to create native operator 'class.RuntimePlatform.0.NativeExecution.PLSQL'" step. The log file shows a constant gap of 30 seconds before the <map>.main() function is executed. The data volume extracted is very low, something like 10-1000 records.
Actions taken: increased the SGA pool size at the DB level to allow more resources, and changed the JVM settings from -Xms64M -Xmx256M to -Xms335M -Xmx440M.
Neither helped.
An extract from the OWB log follows.
2006/03/15-09:29:09-WST [1E0BF3BF] Initializing execution for auditId= 28339 parentAuditId= null topLevelAuditId=28339 taskName=XXIF_OUT_CSV_TRANS
2006/03/15-09:29:09-WST [1E0BF3BF] Attempting to create adapter 'class.RuntimePlatform.0.NativeExecution'
2006/03/15-09:29:09-WST [1E0BF3BF] Attempting to create native operator 'class.RuntimePlatform.0.NativeExecution.PLSQL'
2006/03/15-09:29:39-WST [1E0D73BF] PLSQL callspec: declare l_env wb_rt_mapaudit.wb_rt_name_values; l_IN_BATCH_ID null........
Kindly note the 30-second gap between creating the native operator and the actual execution of the PL/SQL code.
The same set of mappings works fine (5-15 secs) in our DEV environment but takes additional time (around 1-3 mins) in the TEST environment, and the mapping executions do not run in parallel (is this expected behaviour?). If we run 10 separate executions of the mapping, they take 30 mins to complete in TEST compared to 3 mins in DEV.
The noticeable difference between the two environments is that the mappings were created with the 10.1.0.2.0 client and deployed to a 10.1.0.1.0 repository in DEV, whereas TEST uses a 10.1.0.4.0 repository.
When I check the audit browser I can see the total elapsed time is 61 sec, but the actual mapping execution time is only 1 sec.
Are there any configuration settings which could cause this delay?
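One way to quantify where that time goes is to query the public runtime audit views directly. A minimal sketch follows; the view name and the MAP_NAME/EXECUTION_AUDIT_ID columns are the ones the runtime service itself uses, but the START_TIME, END_TIME and ELAPSE_TIME column names are assumptions that may differ between OWB releases.
SELECT map_name,
       execution_audit_id,
       start_time,    -- assumed column name
       end_time,      -- assumed column name
       elapse_time    -- assumed column name: time the map itself ran
  FROM all_rt_audit_map_runs
 WHERE map_name = 'XXIF_OUT_CSV_TRANS'
 ORDER BY execution_audit_id DESC;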
Any pointers on this will be of great help.
Regards,
njain

I am having exactly the same issue as described here, i.e. my mappings take some time to initialize before they run. Did you or anyone else find a solution to this problem?

Similar Messages

  • OWB Performance Issue

    Hi
    I have a performance issue with OWB.
    OWB 9.0.4.8.21
    Oracle 9.2.0.1.0
    I designed some mappings with business rules/transformations on a Windows system (AMD Athlon, 1 x 1.8 GHz CPU, 1 GB RAM). When I run these mappings, my CPU usage goes to 100% while my RAM usage stays at 70%.
    My mapping loads data from a flat file into tables using external tables. As my CPU usage is 100%, the upload time statistics are not reliable. So I transferred the .mdl from Windows to a higher-end Unix machine (HP-UX 11.00, 6 x 450 MHz CPU, 6 GB RAM, OWB 9.2 (Unix), Oracle 9.2.0.1.0).
    Logically my mappings should run faster on Unix, but they take 3 times longer than on the Windows system. Here too my CPU usage goes to 100% while RAM usage stays at 30%.
    One more observation on the Unix machine is that the Oracle process which runs my mapping uses only 1 CPU while the other CPUs are not utilized.
    Is there a way I can fork/thread the Oracle process which runs the mapping so that it uses all the CPUs instead of only 1?
    Do I need to make some changes in my mapping configuration/properties or in Oracle (init.ora) to improve the performance of my upload?
    Thanks in Advance.
    Manoj

    Manoj,
    If you use a Process Flow that includes several mappings, then use of the FORK activity ensures parallel execution of multiple mappings. See more on this in the OWB 9.2 User Guide, page 10-24 "FORK".
    If you wish to have a single mapping execute on multiple CPUs in parallel, then it all depends on the nature of the mapping and the corresponding configuration options set:
    - You mentioned using external tables. External tables have the configuration options "Parallel Access Mode" and "Parallel Access Drivers" that control parallelism. See more on this in the OWB 9.2 User Guide, page 5-18 "Parallel".
    Other cases:
    - If the mapping is run in row-based mode with the Parallel Row Code option set to 'True'. This takes advantage of the database's table function feature. See more on this in the OWB 9.2 User Guide, page 11-7 "Parallel Row Code".
    - If the mapping inserts into multiple tables using a Splitter operator with the Optimized Code option set to 'True'. This takes advantage of the database's multi-table insert feature. See more on this in the OWB 9.2 User Guide, page 11-8 "Optimized Code".
    The fact that a more powerful Unix server takes much longer to execute the same mappings suggests a problem with the database configuration. Parallelism is usually best achieved with PARALLEL_AUTOMATIC_TUNING set to 'true'. Beyond that, many manual options for configuring the database for parallelism are available. See the "Oracle9i Data Warehousing Guide", Chapter 21 "Using Parallel Execution".
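    As a rough illustration of the database-side settings involved (the table name and degree of 4 are placeholders, and PARALLEL_AUTOMATIC_TUNING is a static parameter, so it only takes effect after an instance restart):
    -- Let the 9i instance size parallel execution resources automatically
    ALTER SYSTEM SET parallel_automatic_tuning = TRUE SCOPE = SPFILE;
    -- Give the source/target table a default degree of parallelism
    ALTER TABLE my_stage_table PARALLEL 4;
    -- Allow parallel DML in the session that runs the mapping
    ALTER SESSION ENABLE PARALLEL DML;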
    Nikolai

  • OWB 11gR2 - Template Mapping execution failed: ORA-00936 (DB2 to Oracle)

    I've created a mapping to load data from a DB2 table (using JDBC) and import into an Oracle table.
    In the "Execution View", I've associated the LCT_SQL_TO_ORACLE code template to the DB2 execution unit, and the ICT_ORACLE_INCR_UPD code template to the Oracle execution unit (following the documentation: http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/sap_km_mappings.htm#insertedID9).
    After that, I validated, generated and deployed the mapping. When executing, I got the error "Job 49 error: java.sql.SQLException: ORA-00936: missing expression".
    Please, how can I fix this problem?
    Here is the complete log message:
    Job     Rows Selected     Rows Inserted     Rows Updated     Rows Deleted     Errors     Warnings     Start Time     Elapsed Time     
    APCLS_MAP                         2          2010-02-10 10:25:09.0     7     
    Job 49 ended                                             
    Job 49 error: java.sql.SQLException: ORA-00936: missing expression
    while executing
    "$g_stmt executeUpdate "$p_ddl_processed""
    invoked from within
    "set g_res [$g_stmt executeUpdate "$p_ddl_processed"]"
    while executing
    "error $errorInfo"
    ("if" then script line 12)
    invoked from within
    "if {[catch {set g_res  [$g_stmt executeUpdate "$p_ddl_processed"]} result]} {
    global errorCode
    global errorInfo
    if {[llength $errorCode] =..."
    (procedure "jdbc" line 12)
    invoked from within
    "jdbc $connection $p_tgtstmt"
    while executing
    "error $jdbc_errorInfo"
    ("if" then script line 2)
    invoked from within
    "if {[info exists jdbc_errorInfo]} {
    error $jdbc_errorInfo
    (procedure "execjdbc" line 31)
    invoked from within
    "execjdbc $tgt_stmt "TGT_AC" "$TGT_LOCATION" "" "true" "false" "false""
    (procedure "10_INSERT_FLOW_INTO_I__TABLE_main" line 43)
    invoked from within
    "10_INSERT_FLOW_INTO_I__TABLE_main "
    while executing
    "error $errorInfo"
    ("if" then script line 4)
    invoked from within
    "if {[catch {
            logTaskStart Started $taskDetail {3} {}
    global _OMBTaskFlowAudit
    set _OMBTaskFlowAudit {TASK_LOADING|10_INSERT_FL..."
    ("if" then script line 2)
    invoked from within
    "if {$INSERT == "true" || $UPDATE == "true" || $FLOW_CONTROL == "true" || $SYNC_JRN_DELETE == "true"} {
    if {[catch {
            logTaskStart Started..."
        (procedure "10_INSERT_FLOW_INTO_I__TABLE" line 39)
        invoked from within
    "10_INSERT_FLOW_INTO_I__TABLE"
        (procedure "ICT_ORACLE_INCR_UPD::ICT_ORACLE_INCR_UPD_main" line 55)
        invoked from within
    "ICT_ORACLE_INCR_UPD::ICT_ORACLE_INCR_UPD_main "jdbc/ORACLE_DATABASE_LOCATION-BB_WPC-BBDW-PUBLIC_PROJECT-BB_STG" "jdbc/ORACLE_DATABASE_LOCATION-BB_WPC-..."
        (in namespace eval "::BB_STG_EU" script line 4)
        invoked from within
    "namespace eval BB_STG_EU {
      logPhaseStart "" {BB_STG_EU} "" {5}
    eval [getScript "CODETEMPLATE-BB_WPC-BBDW-PUBLIC_PROJECT-BUILT_IN_CT-ICT_ORACLE_INC..."
    null                                             
      BBDW_DB2_EU                              1     2010-02-10 10:25:10.0     3     
       BBDW_DB2_EU ended                                             
       1_VALIDATE_KM_OPTIONS                                   2010-02-10 10:25:10.0          
       2_DROP_WORK_TABLE                                   2010-02-10 10:25:10.0          
       3_CREATE_WORK_TABLE                                   2010-02-10 10:25:10.0          
       5_LOAD_DATA                                   2010-02-10 10:25:10.0     3     
       6_ANALYZE_WORK_TABLE                              1     2010-02-10 10:25:13.0          
      BB_STG_EU                         1          2010-02-10 10:25:13.0          
       Job 49 error: java.sql.SQLException: ORA-00936: missing expression
        while executing
    "$g_stmt executeUpdate "$p_ddl_processed""
        invoked from within
    "set g_res  [$g_stmt executeUpdate "$p_ddl_processed"]"
    while executing
    "error $errorInfo"
    ("if" then script line 12)
    invoked from within
    "if {[catch {set g_res  [$g_stmt executeUpdate "$p_ddl_processed"]} result]} {
    global errorCode
    global errorInfo
    if {[llength $errorCode] =..."
    (procedure "jdbc" line 12)
    invoked from within
    "jdbc $connection $p_tgtstmt"
    while executing
    "error $jdbc_errorInfo"
    ("if" then script line 2)
    invoked from within
    "if {[info exists jdbc_errorInfo]} {
    error $jdbc_errorInfo
    (procedure "execjdbc" line 31)
    invoked from within
    "execjdbc $tgt_stmt "TGT_AC" "$TGT_LOCATION" "" "true" "false" "false""
    (procedure "10_INSERT_FLOW_INTO_I__TABLE_main" line 43)
    invoked from within
    "10_INSERT_FLOW_INTO_I__TABLE_main "
    while executing
    "error $errorInfo"
    ("if" then script line 4)
    invoked from within
    "if {[catch {
            logTaskStart Started $taskDetail {3} {}
    global _OMBTaskFlowAudit
    set _OMBTaskFlowAudit {TASK_LOADING|10_INSERT_FL..."
    ("if" then script line 2)
    invoked from within
    "if {$INSERT == "true" || $UPDATE == "true" || $FLOW_CONTROL == "true" || $SYNC_JRN_DELETE == "true"} {
    if {[catch {
            logTaskStart Started..."
        (procedure "10_INSERT_FLOW_INTO_I__TABLE" line 39)
        invoked from within
    "10_INSERT_FLOW_INTO_I__TABLE"
        (procedure "ICT_ORACLE_INCR_UPD::ICT_ORACLE_INCR_UPD_main" line 55)
        invoked from within
    "ICT_ORACLE_INCR_UPD::ICT_ORACLE_INCR_UPD_main "jdbc/ORACLE_DATABASE_LOCATION-BB_WPC-BBDW-PUBLIC_PROJECT-BB_STG" "jdbc/ORACLE_DATABASE_LOCATION-BB_WPC-..."
        (in namespace eval "::BB_STG_EU" script line 4)
        invoked from within
    "namespace eval BB_STG_EU {
      logPhaseStart "" {BB_STG_EU} "" {5}
    eval [getScript "CODETEMPLATE-BB_WPC-BBDW-PUBLIC_PROJECT-BUILT_IN_CT-ICT_ORACLE_INC..."
    null                                             
       1_VALIDATE_KM_OPTIONS                                   2010-02-10 10:25:13.0          
       2_CHECK_RDBMS_VERSION                                   2010-02-10 10:25:13.0          
       3_CREATE_TARGET_TABLE                                   2010-02-10 10:25:13.0          
       4_DROP_FLOW_TABLE                                   2010-02-10 10:25:13.0          
       5_CREATE_FLOW_TABLE_I_                                   2010-02-10 10:25:13.0          
       6_DELETE_TARGET_TABLE                                   2010-02-10 10:25:13.0          
       7_TRUNCATE_TARGET_TABLE                                   2010-02-10 10:25:13.0          
       8_ANALYZE_TARGET_TABLE                                   2010-02-10 10:25:13.0          
       10_INSERT_FLOW_INTO_I__TABLE                         1          2010-02-10 10:25:13.0          

    Hi Danilo
    That's great. This combination is what we spoke about as a hybrid-style mapping: it leverages the flexibility of an open template system for loading and staging, and then OWB's classic operator set for integrating. For Oracle database targets the Oracle Target code template is great, and you can leverage pretty much all the existing transformation capabilities, so use load templates/KMs for getting the data into Oracle and the Oracle Target template for transforming and integrating.
    Cheers
    David

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes; this is urgent.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes; this is urgent.
    I will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Process flow/map performance issues

    We have some issues with our OWB-based application and we're looking to find out if there are different ways we could be using the tool, or features/options we've missed.
    We are trying to maintain a near real time feed of data from a front end system into our warehouse, which was built using OWB 10.2.0.3 over a 10.2.0.4 database. The bulk of the application consists of OWB maps with a few hand-written PL/SQL objects, all executed in a series of hierarchical OWB process flows. Maps/transformations are executed either sequentially or in parallel where the referential integrity of the model allows.
    The problem is that we have around 150 tables in the datamart which could potentially require updating on each refresh cycle, although in practice only a few tables have any activity on a typical refresh cycle. The cycle consists of loading data into a set of staging tables, and from there the data is transformed into the main schema, often with multiple maps per target table.
    On every cycle we run hundreds of maps, the vast majority of which process zero rows. Each map runs quickly and efficiently in its own right but collectively they add up to a 5 - 10 min cycle even if there is no data to process.
    There are 2 avenues which we'd like to explore and would be grateful if anyone could provide any pointers/suggestions :-
    1) It appears that each map opens and closes its own database session when it executes. I presume this was done because a single process flow could be constructed with maps executing in different target schemas, but we know that's not the case for us. We'd like to know if there is any way to configure the database connection at a higher level (e.g. process flow) so it opens a connection once and executes each of the maps (database packages) in that one session.
    Our DBAs are experimenting with 'shared server' settings at a database level which may help to some degree but won't be the whole story.
    2) Another option is simply to run fewer maps, e.g. load the staging area as now, collate stats on which staging tables contain new data, and then apply some logic such that subsequent maps only execute if the relevant staging table(s) contain some new data, otherwise bypass that map.
    We tried experimenting with the 'Pre Mapping Process' operator, but essentially that just generates another function call from the map package, so we still have the overhead of opening a database session for each map to run the package. Minimal gain.
    We thought about adding a function call in the process flow before each map and then branching to either execute or bypass the map as appropriate, but the function call still requires opening/closing of a database session each time so, once again, minimal gain.
    What we really want is some way for a map or process flow to perform this check without logging onto the database repeatedly.
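    For what it's worth, a minimal sketch of the kind of check function option 2 implies, assuming each staging table can be probed cheaply for the presence of new rows (the function name and the use of dynamic SQL are illustrative only):
    CREATE OR REPLACE FUNCTION stage_has_rows (p_table_name IN VARCHAR2)
      RETURN NUMBER
    IS
      l_count NUMBER;
    BEGIN
      -- ROWNUM = 1 keeps the probe cheap even on a large staging table
      EXECUTE IMMEDIATE
        'SELECT COUNT(*) FROM ' || DBMS_ASSERT.simple_sql_name(p_table_name) ||
        ' WHERE ROWNUM = 1'
        INTO l_count;
      RETURN l_count;   -- 1 = run the dependent map(s), 0 = bypass them
    END stage_has_rows;
    /
    The return value could then drive the branching transitions in the process flow, although, as noted above, the call itself still needs a session, so the saving depends on how many maps one check can gate.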
    Any ideas on the above, or other potential solutions anyone could suggest, would be greatly appreciated.

    Hi,
    Please see if these documents help.
    Note: 554635.1 - Create Accounting Process Performs Poorly When 100K + Distributions are Passed for an Event
    Note: 954273.1 - Multiple Create Accounting Requests Result In Poor Performance For Online Accruals
    Note: 763500.1 - R12: Performance Issue with Create Accounting
    Note: 733637.1 - R12:Performance Issue When Running Accounting Program Xlaaccup
    Note: 781311.1 - Create Accounting Process Taking A Long Time To Complete After Appying Critical Patches
    Note: 557869.1 - EBS: R12 Oracle Financials Critical Patches
    Regards,
    Hussein

  • Regarding default select clauses executing during OWB Mapping execution -

    All-
    While observing the statements executed during an OWB (11gR2) mapping execution, by monitoring the session using SQL Developer,
    I am finding the following statements executed multiple times, possibly consuming a significant share of the mapping execution time:
    SELECT MAX(EXECUTION_AUDIT_ID) FROM ALL_RT_AUDIT_MAP_RUNS WHERE MAP_NAME = :B1
    SELECT SYS_CONTEXT('owb_workspace' , 'workspaceID' ) FROM DUAL
    SELECT nvl2(translate(20040101, 'A1234567890','A'), 'F', 'T') FROM DUAL
    SELECT USER FROM SYS.DUAL
    These statements have no relation to the business logic being implemented and seem to be generated by OWB default settings.
    Can anyone please let me know how to reduce the frequency of the above-mentioned statements or, if possible, remove them from the OWB mapping execution,
    so that the mapping runs faster?

    Hi,
    these statements are required to record the runtime audit data. Usually they do not really impact the performance of a mapping, so there is no need to worry about them.
    What causes performance problems is usually the business logic part of the mappings.
    You may purge old runtime metadata manually. Look at the script
    %ORACLE_HOME%\owb\rtp\jrtaudit\owbsys\purge_audit_tables.sql. Documentation is included.
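    Before purging, a quick look at how much map-run history has accumulated can be useful. A minimal sketch against the same audit view the statements above query (plain aggregation, nothing assumed beyond the view and its MAP_NAME column):
    -- Rough measure of accumulated runtime audit rows per map
    SELECT map_name, COUNT(*) AS run_count
      FROM all_rt_audit_map_runs
     GROUP BY map_name
     ORDER BY run_count DESC;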
    Regards,
    Carsten.

  • OWB11gR2: Mapping execution in a process flow not visible in OWB Browser

    When a mapping is executed inside a process flow, execution details are not visible in the OWB Repository Browser (Control Center reports): rows processed, errors, etc. The mapping row is missing from the log, as if the execution never happened (but it did).
    This auditing information is very important for monitoring (for our customers as well), and I just don't understand how this functionality was lost in this version. Another serious bug?

    Hi David,
    I was rather tired and frustrated last evening, so today I noticed some things I didn't yesterday. Your reply gave me a new motivation.
    The conclusion is: a mapping execution in a process flow is logged, but the way activities are displayed in the OWB Browser is now different from previous versions. If I click on 'Execution Job Report' for a process flow, I see all the activities listed except mappings (transformations, assign, file exists, subprocess, etc.). If I want to see the mapping execution row, I must click on a plus (expand) sign.
    This kind of behaviour will make processes with a complex hierarchy (we usually have more than 5 levels of subprocesses) rather cumbersome to monitor. In 10gR2, drilling down was accomplished by opening a new browser tab (Execution Job Report link) for each subprocess/mapping activity. Now everything stays on one huge screen (list) that keeps expanding.
    But if that is the new behaviour, we shall live with it. If our customers don't like it, they will have to get used to it.
    Thank you for your reply!

  • IOS7 iPad mini Stage3D performance issue.

    My friend tells me that he was testing his own game on iOS 6 with Starling and the game showed 60 FPS. After upgrading his iPad mini to iOS 7, he saw 25-30 FPS.
    After updating Adobe AIR to the latest version, AIR 3.9.0.880, the FPS grew to 40, but not to the 60 FPS he had before installing iOS 7. So there seem to be some issues with iOS 7 and the iPad mini on Adobe AIR.
    Does anyone have similar problems?

    Could you please open a new bug report on this over at https://bugbase.adobe.com? When adding the bug, please include some sample code or a sample application so we can quickly test this out internally. If you'd like to keep this private, feel free to email the attachment to me directly ([email protected]).
    Once added, please post back with the URL so that others affected can add their comments and votes. I'll ask the team to take a look.

  • How do you take care of performance issues in your ABAP programs?

    Hi,
    please send me a reply.

    hi,
    This topic has been discussed often in these forums. Check these links:
    Re: Monitoring & Performance Tuning
    Re: Query Enhancement and performance
    Re: performance tuning related issues
    Re: Performance analysis
    Re: BW performance
    In specific check this:
    take a look
    Business Intelligence Performance Tuning [original link is broken]
    and e-learning
    https://www.sdn.sap.com/irj/sdn/developerareas/bi?rid=/webcontent/uuid/fe5b0b5e-0501-0010-cd88-c871915ec3bf [original link is broken]
    'intermediate course'
    also take a look
    Prakash's weblog
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    BW Performance Tuning Knowledge Center - SAP Developer Network (SDN)
    Business Intelligence Performance Tuning [original link is broken]
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    oss note
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'.
    Also take a look at the SDN forum performance center
    Business Intelligence Performance Tuning [original link is broken]
    there are lots of docs
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5ee957e5-0201-0010-9c83-fe14a43cd04a
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    and check oss note
    557870 'FAQ BW Query Performance' and 567746 'Composite note BW 3.x performance Query and Web'
    417307, 409641 etc (search with 'bw performance')
    Regards,
    Jayant

  • Logging OWB mapping execution in Shell script

    Hi,
    I am executing an OWB mapping from a shell script like this:
    $OWB_SQLPLUS MY_WAREHOUSE plsql MY_MAPPING "," ","
    I want to log this mapping execution process into a file.
    Please let me know if this will work:
    $OWB_SQLPLUS MY_WAREHOUSE plsql MY_MAPPING "," "," >> LOGFIL.log
    I will just be using this log file to track all the executions and use it for logging purposes.
    If this won't work, please tell me the proper way to do it...
    Thanks.

    Avatar,
    ">>" is the Unix operator that will redirect output and append to a particular file, so what you have should work if you're executing it from the shell prompt. Although I don't know specifically what OWB_SQLPLUS and MY_WAREHOUSE are.
    In my company, we have the call to the owb script inside another script. For example, file x contains the following line:
    sqlplus repository_user/pwd@database @sqlplus_exec_template.sql repository_owner location task_type task_name custom_params system_params
    Then at the prompt, we enter:
    nohup x > x.log &
    And the mapping or workflow executes.
    Jakdwh,
    Are you redirecting your output to a file so you can see why it's returning a '3'? The log file will usually tell you where the error occurred. I don't know what the input parameters for your mapping are, but the script is pretty picky about the date format. Also, even if you don't have any input parameters, the "," still has to be passed into the script.
    Hope this helps,
    Heather

  • Performance Issue using min() and max() in one SQL statement

    I have a simple query that selects min() and max() from one column of a table in a single SQL statement.
    The table has about 9 million rows and the selected column has a non-unique index. The query takes 10 secs. When I select min() and max() in separate statements, each takes only 10 msecs:
    This statement takes 10 secs:
    select min(date_key) , max(date_key)
    from CAPS_KPIC_BG_Fact_0_A
    where date_key != TO_DATE(('1900-1-1' ),( 'YYYY-MM-DD'))
    This statement takes 10 msecs:
    select min(date_key)
    from MYTABLE
    where date_key != TO_DATE(('1900-1-1' ),( 'YYYY-MM-DD'))
    union all
    select max(date_key) from MYTABLE
    Because the first statement is part of the automatically generated SQL of an application, I can't change it and have to optimize the data model instead. How can I speed up the first statement?

    I've run a similar query on a table that has 10 million rows, with an index on the date column.
    This is what I have found:
    SQL> set timing on
      1  SELECT MIN(ID_DATE) MIN_DATE, MAX(ID_DATE) MAX_DATE
      2      FROM MY_DATE
      3*     WHERE ID_DATE != TO_DATE(('1900-1-1' ),( 'YYYY-MM-DD'))
    SQL> /
    MIN_DATE  MAX_DATE
    03-APR-76 06-JAN-02
    real: 43383
    SQL> SELECT MIN(ID_DATE) MIN_DATE FROM MY_DATE
      2   WHERE ID_DATE != TO_DATE(('1900-1-1' ),( 'YYYY-MM-DD'))
      3  UNION ALL
      4  SELECT MAX(ID_DATE) MAX_DATE FROM MY_DATE
      5   WHERE ID_DATE != TO_DATE(('1900-1-1' ),( 'YYYY-MM-DD'))
      6  /
    MIN_DATE
    03-APR-76
    06-JAN-02
    real: 20
    SQL> SELECT MIN_DATE, MAX_DATE FROM
      2  (SELECT MAX(ID_DATE) MAX_DATE FROM MY_DATE
      3  WHERE ID_DATE != TO_DATE(('1900-1-1' ),( 'YYYY-MM-DD')) ) A,
      4  (SELECT MIN(ID_DATE) MIN_DATE FROM MY_DATE
      5  WHERE ID_DATE != TO_DATE(('1900-1-1' ),( 'YYYY-MM-DD')) ) B
      6  /
    MIN_DATE  MAX_DATE
    03-APR-76 06-JAN-02
    real: 10
    SQL>
    My conclusion: there is nothing you can do to the tables that will improve that particular statement.
    Why can't you modify the application?

  • Performance issues executing process flows after upgrading db to 10G

    We have installed OWF 2.6.2, and initially our database was at 9.2. Last week we updated our database to 10g, and process flow executions are taking a lot longer, from 1 minute to 15 minutes.
    Any ideas anyone what could be the cause of this performance issue?
    Thanks,
    Yanet

    Hi,
    The Oracle 10g database behaves differently with respect to the statistics of tables and indexes. So check these, and check whether the mappings are updating these statistics at the right moments with respect to the ETL process and at the right interval.
    Also, check your generated sources for how statistics are gathered (dbms_stats.gather...). Does the index that might play a vital role in Oracle 9i get new statistics, or only the table? Or only the table whose row count was doubled by this mapping?
    You can always take matters into your own hands by letting OWB NOT generate the statistics-gathering source, and calling your own procedure in a post-mapping process.
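    A minimal sketch of such a post-mapping procedure (the procedure and parameter names are illustrative; CASCADE => TRUE is the part that refreshes the index statistics, which is often what the generated code misses):
    CREATE OR REPLACE PROCEDURE gather_target_stats (
      p_owner IN VARCHAR2,
      p_table IN VARCHAR2)
    IS
    BEGIN
      DBMS_STATS.gather_table_stats(
        ownname          => p_owner,
        tabname          => p_table,
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        cascade          => TRUE);   -- refresh index statistics as well
    END gather_target_stats;
    /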
    Regards,
    André

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if the mapping runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc. are being used OK. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we know will happen is that, after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables (see the sketch after this list).
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
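    As a starting point for the mapping-tracing suggestion above, a pair of pre-/post-mapping procedures could switch extended SQL trace on and off around each mapping. A minimal sketch, assuming the runtime user can issue ALTER SESSION and that tagging the trace file makes it easier to find in user_dump_dest (names are illustrative):
    CREATE OR REPLACE PROCEDURE start_map_trace (p_map_name IN VARCHAR2)
    IS
    BEGIN
      -- Tag the trace file so it is easy to locate afterwards
      EXECUTE IMMEDIATE
        'ALTER SESSION SET tracefile_identifier = ''' || p_map_name || '''';
      -- Level 12 = SQL statements plus bind values plus wait events
      EXECUTE IMMEDIATE
        'ALTER SESSION SET events ''10046 trace name context forever, level 12''';
    END start_map_trace;
    /
    CREATE OR REPLACE PROCEDURE stop_map_trace
    IS
    BEGIN
      EXECUTE IMMEDIATE
        'ALTER SESSION SET events ''10046 trace name context off''';
    END stop_map_trace;
    /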
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, do you have existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
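    For anyone who wants to try the SQL trace route, the post-processing step is just TKPROF over the trace file left in user_dump_dest; the trace and output file names below are placeholders:
        -- locate the trace directory first (SQL), then format the file at the OS prompt:
        SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
        -- $ tkprof orcl_ora_12345_OWB_MAP_TEST.trc owb_map_test.prf sort=prsela,exeela,fchela sys=no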
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are not more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes, I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my dictat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!). (OK, I'll accept MS Project)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (stuff like recovery/restart, late-arriving data, and so on)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it back then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Control Center is very slow after mapping execution

    Hi,
    I'm using OWB 10gR2 (10.2.0.1) on DB 10.2. When I execute different mappings and try to monitor the execution results in the Control Center I'm facing a problem. The execution time of the mapping itself is quite normal, but after successful execution I have to wait about one minute for the execution results (row activity). Furthermore, opening and closing the job detail window takes about one minute before the Control Center reacts again.
    I've worked with the same OWB 10gR2 on DB 9.2 in the past and everything worked fine. But my DBA upgraded the Oracle database and I migrated to this new repository. The location that I use is set to 10.2.
    Do you know this issue? Any advice on "tuning" my repository would be very welcome.
    Steffen

    Hi,
    I don't think this is related to a database issue, and there is not really any specific tuning for it.
    I would suggest upgrading your RAM, and always try to work with OWB under minimum CPU load - for example, if there are any other databases running on the machine besides the OWB one, stop their services.
    Take care of all these things.
    And always deploy your mappings in the Control Center.
    Regards
    Tarang Jain
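    One more thing that might be worth trying after a database upgrade (this is an assumption on my part, not something confirmed in this thread): the Control Center screens query the runtime repository's audit tables, and stale or missing optimizer statistics on that schema can make those queries crawl. A hedged sketch, where 'OWB_RT_OWNER' is a placeholder for whichever schema actually owns your runtime repository:
        -- Gather fresh optimizer statistics on the runtime repository schema
        -- ('OWB_RT_OWNER' is a placeholder, not a standard OWB schema name).
        BEGIN
          DBMS_STATS.GATHER_SCHEMA_STATS(
            ownname          => 'OWB_RT_OWNER',
            cascade          => TRUE,
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
        END;
        /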

  • Performance Issues with Photoshop CS6 64-Bit

    Hello -
    Issue at hand: over the course of the last few weeks, I have noticed significant performance issues since the last update to PS CS6 via the Adobe Application Manager, ranging from unexpected shutdowns to bringing my workstation to a crawl (literally, my cursor seems to crawl across my displays). I'm curious whether anyone else is experiencing these issues, or if there is a solution I have not yet tried. Here is a list of actions that result in these performance issues - there are likely more that I have either not experienced due to my frustration, or have not documented as occurring multiple times:
    Opening files - results in hanging process, takes 3-10 seconds to resolve
    Pasting from clipboard - results in hanging process, takes 3-10 seconds to resolve
    Saving files - takes 3-10 seconds to open the dialog, another 3-10 seconds to return to normal window (saving a compressed PNG)
    Eyedropper tool - will either crash Photoshop to desktop, or take 5-15 seconds to load
    Attempting to navigate any menu - will either crash Photoshop to desktop, or take 5-15 seconds to load
    Attempts I've taken to resolve this matter, which have failed:
    Uninstalled all fonts that I have added since the last update (this was a pain in the ***, thank you Windows explorer for being glitchy)
    Uninstalled and reinstalled the application
    Used the 32-bit edition
    Changed process priority to Above Normal
    Confirmed process affinity to all available CPU cores
    Changed the configuration of Photoshop performance options:
    61% of memory is available to Photoshop to use (8969 MB)
    History states: 20; Cache levels: 6; Cache tile size: 1024K
    Scratch disks: active on production SSD, ~10GB space available
    Dedicated graphics processor is selected (2x nVidia cards in SLI)
    System Information:
    Intel i7 2600K @ 3.40GHz
    16GB DDR3, Dual Channel RAM
    2x nVidia GeForce GTS 450 cards, 1GB each
    Windows 7 Professional 64-bit
    Adobe Creative Cloud
    This issue is costing me time I could be working every day, and I'm about ready to begin searching for alternatives and cancel my membership if I can't get this resolved.

    Adobe Photoshop Version: 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00) x64
    Operating System: Windows 7 64-bit
    Version: 6.1 Service Pack 1
    System architecture: Intel CPU Family:6, Model:10, Stepping:7 with MMX, SSE Integer, SSE FP, SSE2, SSE3, SSE4.1, SSE4.2, HyperThreading
    Physical processor count: 4
    Logical processor count: 8
    Processor speed: 3392 MHz
    Built-in memory: 16350 MB
    Free memory: 12070 MB
    Memory available to Photoshop: 14688 MB
    Memory used by Photoshop: 61 %
    Image tile size: 1024K
    Image cache levels: 6
    OpenGL Drawing: Enabled.
    OpenGL Drawing Mode: Basic
    OpenGL Allow Normal Mode: True.
    OpenGL Allow Advanced Mode: True.
    OpenGL Allow Old GPUs: Not Detected.
    OpenCL Version: 1.1 CUDA 4.2.1
    OpenGL Version: 3.0
    Video Rect Texture Size: 16384
    OpenGL Memory: 1024 MB
    Video Card Vendor: NVIDIA Corporation
    Video Card Renderer: GeForce GTS 450/PCIe/SSE2
    Display: 2
    Display Bounds: top=0, left=1920, bottom=1080, right=3840
    Display: 1
    Display Bounds: top=0, left=0, bottom=1080, right=1920
    Video Card Number: 3
    Video Card: NVIDIA GeForce GTS 450
    Driver Version: 9.18.13.1106
    Driver Date: 20130118000000.000000-000
    Video Card Driver: nvd3dumx.dll,nvwgf2umx.dll,nvwgf2umx.dll,nvd3dum,nvwgf2um,nvwgf2um
    Video Mode:
    Video Card Caption: NVIDIA GeForce GTS 450
    Video Card Memory: 1024 MB
    Video Card Number: 2
    Video Card: LogMeIn Mirror Driver
    Driver Version: 7.1.542.0
    Driver Date: 20060522000000.000000-000
    Video Card Driver:
    Video Mode: 1920 x 1080 x 4294967296 colors
    Video Card Caption: LogMeIn Mirror Driver
    Video Card Memory: 0 MB
    Video Card Number: 1
    Video Card: NVIDIA GeForce GTS 450
    Driver Version: 9.18.13.1106
    Driver Date: 20130118000000.000000-000
    Video Card Driver: nvd3dumx.dll,nvwgf2umx.dll,nvwgf2umx.dll,nvd3dum,nvwgf2um,nvwgf2um
    Video Mode: 1920 x 1080 x 4294967296 colors
    Video Card Caption: NVIDIA GeForce GTS 450
    Video Card Memory: 1024 MB
    Serial number: 90970233273769828003
    Application folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\
    Temporary file path: C:\Users\ANDREW~1\AppData\Local\Temp\
    Photoshop scratch has async I/O enabled
    Scratch volume(s):
      C:\, 111.8G, 7.68G free
    Required Plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\Required\
    Primary Plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\Plug-ins\
    Additional Plug-ins folder: not set
    Installed components:
       ACE.dll   ACE 2012/06/05-15:16:32   66.507768   66.507768
       adbeape.dll   Adobe APE 2012/01/25-10:04:55   66.1025012   66.1025012
       AdobeLinguistic.dll   Adobe Linguisitc Library   6.0.0  
       AdobeOwl.dll   Adobe Owl 2012/09/10-12:31:21   5.0.4   79.517869
       AdobePDFL.dll   PDFL 2011/12/12-16:12:37   66.419471   66.419471
       AdobePIP.dll   Adobe Product Improvement Program   7.0.0.1686  
       AdobeXMP.dll   Adobe XMP Core 2012/02/06-14:56:27   66.145661   66.145661
       AdobeXMPFiles.dll   Adobe XMP Files 2012/02/06-14:56:27   66.145661   66.145661
       AdobeXMPScript.dll   Adobe XMP Script 2012/02/06-14:56:27   66.145661   66.145661
       adobe_caps.dll   Adobe CAPS   6,0,29,0  
       AGM.dll   AGM 2012/06/05-15:16:32   66.507768   66.507768
       ahclient.dll    AdobeHelp Dynamic Link Library   1,7,0,56  
       aif_core.dll   AIF   3.0   62.490293
       aif_ocl.dll   AIF   3.0   62.490293
       aif_ogl.dll   AIF   3.0   62.490293
       amtlib.dll   AMTLib (64 Bit)   6.0.0.75 (BuildVersion: 6.0; BuildDate: Mon Jan 16 2012 18:00:00)   1.000000
       ARE.dll   ARE 2012/06/05-15:16:32   66.507768   66.507768
       AXE8SharedExpat.dll   AXE8SharedExpat 2011/12/16-15:10:49   66.26830   66.26830
       AXEDOMCore.dll   AXEDOMCore 2011/12/16-15:10:49   66.26830   66.26830
       Bib.dll   BIB 2012/06/05-15:16:32   66.507768   66.507768
       BIBUtils.dll   BIBUtils 2012/06/05-15:16:32   66.507768   66.507768
       boost_date_time.dll   DVA Product   6.0.0  
       boost_signals.dll   DVA Product   6.0.0  
       boost_system.dll   DVA Product   6.0.0  
       boost_threads.dll   DVA Product   6.0.0  
       cg.dll   NVIDIA Cg Runtime   3.0.00007  
       cgGL.dll   NVIDIA Cg Runtime   3.0.00007  
       CIT.dll   Adobe CIT   2.1.0.20577   2.1.0.20577
       CoolType.dll   CoolType 2012/06/05-15:16:32   66.507768   66.507768
       data_flow.dll   AIF   3.0   62.490293
       dvaaudiodevice.dll   DVA Product   6.0.0  
       dvacore.dll   DVA Product   6.0.0  
       dvamarshal.dll   DVA Product   6.0.0  
       dvamediatypes.dll   DVA Product   6.0.0  
       dvaplayer.dll   DVA Product   6.0.0  
       dvatransport.dll   DVA Product   6.0.0  
       dvaunittesting.dll   DVA Product   6.0.0  
       dynamiclink.dll   DVA Product   6.0.0  
       ExtendScript.dll   ExtendScript 2011/12/14-15:08:46   66.490082   66.490082
       FileInfo.dll   Adobe XMP FileInfo 2012/01/17-15:11:19   66.145433   66.145433
       filter_graph.dll   AIF   3.0   62.490293
       hydra_filters.dll   AIF   3.0   62.490293
       icucnv40.dll   International Components for Unicode 2011/11/15-16:30:22    Build gtlib_3.0.16615  
       icudt40.dll   International Components for Unicode 2011/11/15-16:30:22    Build gtlib_3.0.16615  
       image_compiler.dll   AIF   3.0   62.490293
       image_flow.dll   AIF   3.0   62.490293
       image_runtime.dll   AIF   3.0   62.490293
       JP2KLib.dll   JP2KLib 2011/12/12-16:12:37   66.236923   66.236923
       libifcoremd.dll   Intel(r) Visual Fortran Compiler   10.0 (Update A)  
       libmmd.dll   Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler   12.0  
       LogSession.dll   LogSession   2.1.2.1681  
       mediacoreif.dll   DVA Product   6.0.0  
       MPS.dll   MPS 2012/02/03-10:33:13   66.495174   66.495174
       msvcm80.dll   Microsoft® Visual Studio® 2005   8.00.50727.6195  
       msvcm90.dll   Microsoft® Visual Studio® 2008   9.00.30729.1  
       msvcp100.dll   Microsoft® Visual Studio® 2010   10.00.40219.1  
       msvcp80.dll   Microsoft® Visual Studio® 2005   8.00.50727.6195  
       msvcp90.dll   Microsoft® Visual Studio® 2008   9.00.30729.1  
       msvcr100.dll   Microsoft® Visual Studio® 2010   10.00.40219.1  
       msvcr80.dll   Microsoft® Visual Studio® 2005   8.00.50727.6195  
       msvcr90.dll   Microsoft® Visual Studio® 2008   9.00.30729.1  
       pdfsettings.dll   Adobe PDFSettings   1.04  
       Photoshop.dll   Adobe Photoshop CS6   CS6  
       Plugin.dll   Adobe Photoshop CS6   CS6  
       PlugPlug.dll   Adobe(R) CSXS PlugPlug Standard Dll (64 bit)   3.0.0.383  
       PSArt.dll   Adobe Photoshop CS6   CS6  
       PSViews.dll   Adobe Photoshop CS6   CS6  
       SCCore.dll   ScCore 2011/12/14-15:08:46   66.490082   66.490082
       ScriptUIFlex.dll   ScriptUIFlex 2011/12/14-15:08:46   66.490082   66.490082
       svml_dispmd.dll   Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler   12.0  
       tbb.dll   Intel(R) Threading Building Blocks for Windows   3, 0, 2010, 0406  
       tbbmalloc.dll   Intel(R) Threading Building Blocks for Windows   3, 0, 2010, 0406  
       updaternotifications.dll   Adobe Updater Notifications Library   6.0.0.24 (BuildVersion: 1.0; BuildDate: BUILDDATETIME)   6.0.0.24
       WRServices.dll   WRServices Friday January 27 2012 13:22:12   Build 0.17112   0.17112
    Required plug-ins:
       3D Studio 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Accented Edges 13.0
       Adaptive Wide Angle 13.0
       Angled Strokes 13.0
       Average 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Bas Relief 13.0
       BMP 13.0
       Camera Raw 8.1
       Camera Raw Filter 8.1
       Chalk & Charcoal 13.0
       Charcoal 13.0
       Chrome 13.0
       Cineon 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Clouds 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Collada 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Color Halftone 13.0
       Colored Pencil 13.0
       CompuServe GIF 13.0
       Conté Crayon 13.0
       Craquelure 13.0
       Crop and Straighten Photos 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Crop and Straighten Photos Filter 13.0
       Crosshatch 13.0
       Crystallize 13.0
       Cutout 13.0
       Dark Strokes 13.0
       De-Interlace 13.0
       Dicom 13.0
       Difference Clouds 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Diffuse Glow 13.0
       Displace 13.0
       Dry Brush 13.0
       Eazel Acquire 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Embed Watermark 4.0
       Entropy 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Extrude 13.0
       FastCore Routines 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Fibers 13.0
       Film Grain 13.0
       Filter Gallery 13.0
       Flash 3D 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Fresco 13.0
       Glass 13.0
       Glowing Edges 13.0
       Google Earth 4 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Grain 13.0
       Graphic Pen 13.0
       Halftone Pattern 13.0
       HDRMergeUI 13.0
       IFF Format 13.0
       Ink Outlines 13.0
       JPEG 2000 13.0
       Kurtosis 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Lens Blur 13.0
       Lens Correction 13.0
       Lens Flare 13.0
       Liquify 13.0
       Matlab Operation 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Maximum 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Mean 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Measurement Core 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Median 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Mezzotint 13.0
       Minimum 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       MMXCore Routines 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Mosaic Tiles 13.0
       Multiprocessor Support 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Neon Glow 13.0
       Note Paper 13.0
       NTSC Colors 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Ocean Ripple 13.0
       Oil Paint 13.0
       OpenEXR 13.0
       Paint Daubs 13.0
       Palette Knife 13.0
       Patchwork 13.0
       Paths to Illustrator 13.0
       PCX 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Photocopy 13.0
       Photoshop 3D Engine 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Picture Package Filter 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Pinch 13.0
       Pixar 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Plaster 13.0
       Plastic Wrap 13.0
       PNG 13.0
       Pointillize 13.0
       Polar Coordinates 13.0
       Portable Bit Map 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Poster Edges 13.0
       Radial Blur 13.0
       Radiance 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Range 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Read Watermark 4.0
       Reticulation 13.0
       Ripple 13.0
       Rough Pastels 13.0
       Save for Web 13.0
       ScriptingSupport 13.1.2
       Shear 13.0
       Skewness 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Smart Blur 13.0
       Smudge Stick 13.0
       Solarize 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Spatter 13.0
       Spherize 13.0
       Sponge 13.0
       Sprayed Strokes 13.0
       Stained Glass 13.0
       Stamp 13.0
       Standard Deviation 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       STL 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Sumi-e 13.0
       Summation 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Targa 13.0
       Texturizer 13.0
       Tiles 13.0
       Torn Edges 13.0
       Twirl 13.0
       Underpainting 13.0
       Vanishing Point 13.0
       Variance 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Variations 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Water Paper 13.0
       Watercolor 13.0
       Wave 13.0
       Wavefront|OBJ 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       WIA Support 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       Wind 13.0
       Wireless Bitmap 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
       ZigZag 13.0
    Optional and third party plug-ins: NONE
    Plug-ins that failed to load: NONE
    Flash:
       Mini Bridge
       Kuler
    Installed TWAIN devices: NONE
