BI Statistics Highly Aggregated cube 0TCT_CA1 poor load performance

The load from DataSource 0TCT_DSA1 into the BI Statistics Highly Aggregated cube 0TCT_CA1 has very poor performance.
In our DEV BW it ran 8 minutes for 12,000 records, and performance is even worse in our test box.
Initial loads therefore run very long, since DataSource 0TCT_DSA1 does not allow us to load by calendar month.
If you have seen this issue, please let me know.
Jay Roble
General Mills

Compressing the cube would not help, since the cube is empty and we are trying to load 90 days of history.
The source table has an index on the timestamp field that the extractor uses in its delta loads.
The loads run very slowly, even with the index dropped and no PSA.
We know that in production there will be approximately 400,000 rows loaded but only 14,400 added with each daily delta load, due to aggregation.
So we are seeing slow delta loads in our QA testing.
Not sure why the extractor can't just deliver the 14K aggregated rows instead of 400K.
Note: DataSource 0TCT_DSA1 offers no selection criteria when doing initial full loads.

Similar Messages

  • Cube to Cube vs ODS to Cube good for loading performance

    BW Experts,
    I have the option of loading from Cube to Cube or from ODS to Cube. The source holds about 20 million records.
    From a loading-performance perspective, which one is better?
    Thanks in advance,
    BWer

    Hi,
    Of course it depends (though you didn't mention whether the level of detail is different) on whether there is a lot more data in the DSO than in the cube.
    A DSO is a single table, which makes it easier to extract data from and to build additional indexes on to speed up load time.
    The InfoCube, as you know, is more complex, being built on the multidimensional model.
    Kind regards
    /martin

  • Extremely poor load performance into RDF store

    We are seeing unacceptably poor performance loading our RDF models. We have tried loading with the Jena adaptor as well as loading using the BatchLoader class. Either way, the absolute best rate we have seen is about 20,000 triples/minute. Here are the specs:
    Loading a triples file from disk which contains 5.08 million triples
    Oracle 11gr2 running on Enterprise Linux64 with 16 GB RAM
    Oracle 11gr2 running on Solaris 10 with all the latest patches and packages...64 bit with 8GB RAM
    The Sun box is a bit slower than the Linux box, but both are quite poor. I can load the identical triples file into Allegro running on my desktop PC and consistently get load rates of 1 million per minute.
    Can anyone suggest what could be causing this performance issue? Oracle is a very recent install on both the Sun and Linux boxes, there is nothing whatsoever running on either one other than Oracle, and I am the only Oracle user. All install and parameter settings are Oracle defaults.

    An update on the stats: I changed the way models are created, creating them manually from SQL*Plus first (vs. letting the Jena code create the model on the fly), and that made things run about three times faster, but I am still not seeing triple load rates better than 79K/minute.
    Here is the Java that runs the load:
    String model = "GMI";
    // oracleTools is the poster's helper that creates the Oracle-backed graph
    oracleGraph = oracleTools.makeGraph(model);
    oracleModel = new ModelOracleSem(oracleGraph);
    // open the N-Triples file named by the <model>.NTRIPLES_IND_FILE property
    InputStream is = FileManager.get().open(props.getProperty(model + ".NTRIPLES_IND_FILE"));
    StopWatch timer = new StopWatch();
    timer.start();
    // read file contents into oracleModel -- this is the step being timed
    oracleModel.read(is, "", "N-TRIPLE");
    timer.stop();
    is.close();
    System.out.println("\nLoad time into Oracle: " + timer.getElapsedTimeSecs());
    Here is most of the show parameters output:
    O7_DICTIONARY_ACCESSIBILITY boolean FALSE
    active_instance_count integer
    aq_tm_processes integer 0
    archive_lag_target integer 0
    asm_diskgroups string
    asm_diskstring string
    asm_power_limit integer 1
    asm_preferred_read_failure_groups string
    audit_file_dest string /opt/oracle/
    p
    audit_sys_operations boolean FALSE
    audit_syslog_level string
    audit_trail string DB
    background_core_dump string partial
    background_dump_dest string /opt/oracle/
    /jtvOrcl/tra
    backup_tape_io_slaves boolean FALSE
    bitmap_merge_area_size integer 1048576
    blank_trimming boolean FALSE
    buffer_pool_keep string
    buffer_pool_recycle string
    cell_offload_compaction string ADAPTIVE
    cell_offload_decryption boolean TRUE
    cell_offload_parameters string
    cell_offload_plan_display string AUTO
    cell_offload_processing boolean TRUE
    cell_partition_large_extents string DEFAULT
    circuits integer
    client_result_cache_lag big integer 3000
    client_result_cache_size big integer 0
    cluster_database boolean FALSE
    cluster_database_instances integer 1
    cluster_interconnects string
    commit_logging string
    commit_point_strength integer 1
    commit_wait string
    commit_write string
    compatible string 11.2.0.0.0
    control_file_record_keep_time integer 7
    control_files string /opt/oracle/
    ntrol01.ctl,
    ta/jtvOrcl/c
    control_management_pack_access string DIAGNOSTIC+TUNING
    core_dump_dest string /opt/oracle/
    /jtvOrcl/cdu
    cpu_count integer 4
    create_bitmap_area_size integer 8388608
    create_stored_outlines string
    cursor_sharing string EXACT
    cursor_space_for_time boolean FALSE
    db_16k_cache_size big integer 0
    db_2k_cache_size big integer 0
    db_32k_cache_size big integer 0
    db_4k_cache_size big integer 0
    db_8k_cache_size big integer 0
    db_block_buffers integer 0
    db_block_checking string FALSE
    db_block_checksum string TYPICAL
    db_block_size integer 8192
    db_cache_advice string ON
    db_cache_size big integer 0
    db_create_file_dest string
    db_create_online_log_dest_1 string
    db_create_online_log_dest_2 string
    db_create_online_log_dest_3 string
    db_create_online_log_dest_4 string
    db_create_online_log_dest_5 string
    db_domain string domain
    db_file_multiblock_read_count integer 75
    db_file_name_convert string
    db_files integer 200
    db_flash_cache_file string
    db_flash_cache_size big integer 0
    db_flashback_retention_target integer 1440
    db_keep_cache_size big integer 0
    db_lost_write_protect string NONE
    db_name string jtvOrcl
    db_recovery_file_dest string
    db_recovery_file_dest_size big integer 0
    db_recycle_cache_size big integer 0
    db_securefile string PERMITTED
    db_ultra_safe string OFF
    db_unique_name string jtvOrcl
    db_writer_processes integer 1
    dbwr_io_slaves integer 0
    ddl_lock_timeout integer 0
    deferred_segment_creation boolean TRUE
    dg_broker_config_file1 string /opt/oracle/
    1/dbs/dr1jtv
    dg_broker_config_file2 string /opt/oracle/
    1/dbs/dr2jtv
    dg_broker_start boolean FALSE
    diagnostic_dest string /opt/oracle
    disk_asynch_io boolean TRUE
    dispatchers string (PROTOCOL=TC
    lXDB)
    distributed_lock_timeout integer 60
    dml_locks integer 1088
    dst_upgrade_insert_conv boolean TRUE
    enable_ddl_logging boolean FALSE
    event string
    fal_client string
    fal_server string
    fast_start_io_target integer 0
    fast_start_mttr_target integer 0
    fast_start_parallel_rollback string LOW
    file_mapping boolean FALSE
    fileio_network_adapters string
    filesystemio_options string none
    fixed_date string
    gcs_server_processes integer 0
    global_context_pool_size string
    global_names boolean FALSE
    global_txn_processes integer 1
    hash_area_size integer 131072
    hi_shared_memory_address integer 0
    hs_autoregister boolean TRUE
    ifile file
    instance_groups string
    instance_name string jtvOrcl
    instance_number integer 0
    instance_type string RDBMS
    java_jit_enabled boolean TRUE
    java_max_sessionspace_size integer 0
    java_pool_size big integer 0
    java_soft_sessionspace_limit integer 0
    job_queue_processes integer 1000
    large_pool_size big integer 0
    ldap_directory_access string NONE
    ldap_directory_sysauth string no
    license_max_sessions integer 0
    license_max_users integer 0
    license_sessions_warning integer 0
    listener_networks string
    local_listener string
    lock_name_space string
    lock_sga boolean FALSE
    log_archive_config string
    max_dispatchers integer
    max_dump_file_size string unlimited
    max_enabled_roles integer 150
    max_shared_servers integer
    memory_max_target big integer 0
    memory_target big integer 0
    nls_calendar string
    nls_comp string BINARY
    nls_currency string
    nls_date_format string
    nls_date_language string
    nls_dual_currency string
    nls_iso_currency string
    nls_language string AMERICAN
    nls_length_semantics string BYTE
    nls_nchar_conv_excp string FALSE
    nls_numeric_characters string
    nls_sort string
    nls_territory string AMERICA
    nls_time_format string
    nls_time_tz_format string
    nls_timestamp_format string
    nls_timestamp_tz_format string
    object_cache_max_size_percent integer 10
    object_cache_optimal_size integer 102400
    olap_page_pool_size big integer 0
    open_cursors integer 300
    open_links integer 4
    open_links_per_instance integer 4
    optimizer_capture_sql_plan_baselines boolean FALSE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 11.2.0.1
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    optimizer_use_invisible_indexes boolean FALSE
    optimizer_use_pending_statistics boolean FALSE
    optimizer_use_sql_plan_baselines boolean TRUE
    os_authent_prefix string ops$
    os_roles boolean FALSE
    parallel_adaptive_multi_user boolean TRUE
    parallel_automatic_tuning boolean FALSE
    parallel_degree_limit string CPU
    parallel_degree_policy string MANUAL
    parallel_execution_message_size integer 16384
    parallel_force_local boolean FALSE
    parallel_instance_group string
    parallel_io_cap_enabled boolean FALSE
    parallel_max_servers integer 80
    parallel_min_percent integer 0
    parallel_min_servers integer 0
    parallel_min_time_threshold string AUTO
    parallel_server boolean FALSE
    parallel_server_instances integer 1
    parallel_servers_target integer 32
    parallel_threads_per_cpu integer 2
    permit_92_wrap_format boolean TRUE
    pga_aggregate_target big integer 5905M
    plscope_settings string IDENTIFIERS:NONE
    plsql_ccflags string
    plsql_code_type string INTERPRETED
    plsql_debug boolean FALSE
    plsql_optimize_level integer 2
    plsql_v2_compatibility boolean FALSE
    plsql_warnings string DISABLE:ALL
    pre_page_sga boolean FALSE
    processes integer 150
    query_rewrite_enabled string TRUE
    query_rewrite_integrity string enforced
    rdbms_server_dn string
    read_only_open_delayed boolean FALSE
    recovery_parallelism integer 0
    recyclebin string on
    redo_transport_user string
    remote_dependencies_mode string TIMESTAMP
    remote_listener string
    remote_login_passwordfile string EXCLUSIVE
    remote_os_authent boolean FALSE
    remote_os_roles boolean FALSE
    replication_dependency_tracking boolean TRUE
    resource_limit boolean FALSE
    resource_manager_cpu_allocation integer 4
    resource_manager_plan string
    result_cache_max_result integer 5
    result_cache_max_size big integer 2624K
    result_cache_mode string MANUAL
    result_cache_remote_expiration integer 0
    resumable_timeout integer 0
    rollback_segments string
    sec_case_sensitive_logon boolean TRUE
    sec_max_failed_login_attempts integer 10
    sec_protocol_error_further_action string CONTINUE
    sec_protocol_error_trace_action string TRACE
    sec_return_server_release_banner boolean FALSE
    serial_reuse string disable
    service_names string jtvOrcl.domain
    session_cached_cursors integer 50
    session_max_open_files integer 10
    sessions integer 248
    sga_max_size big integer 512M
    sga_target big integer 512M
    shadow_core_dump string partial
    shared_memory_address integer 0
    shared_pool_reserved_size big integer 16148070
    shared_pool_size big integer 0
    shared_server_sessions integer
    shared_servers integer 1
    skip_unusable_indexes boolean TRUE
    smtp_out_server string
    sort_area_retained_size integer 0
    sort_area_size integer 65536
    spfile string /opt/oracle/product/11.2.0/db_1/dbs/spfilejtvOrcl.ora
    sql92_security boolean FALSE
    sql_trace boolean FALSE
    sqltune_category string DEFAULT
    standby_archive_dest string ?/dbs/arch
    standby_file_management string MANUAL
    star_transformation_enabled string FALSE
    statistics_level string TYPICAL
    streams_pool_size big integer 0
    tape_asynch_io boolean TRUE
    thread integer 0
    timed_os_statistics integer 0
    timed_statistics boolean TRUE
    trace_enabled boolean TRUE
    tracefile_identifier string
    transactions integer 272
    transactions_per_rollback_segment integer 5
    undo_management string AUTO
    undo_retention integer 900
    undo_tablespace string UNDOTBS1
    use_indirect_data_buffers boolean FALSE
    user_dump_dest string /opt/oracle/diag/rdbms/jtvorcl/jtvOrcl/trace
    utl_file_dir string
    workarea_size_policy string AUTO
    xml_db_events string enable
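    One way to narrow this down is to time the same N-Triples read against a plain in-memory Jena model, which removes all database writes from the picture. A minimal sketch, assuming a Jena 2.x jar on the classpath (the class name and the use of System.currentTimeMillis are illustrative, not from the original post):
        import java.io.FileInputStream;
        import java.io.InputStream;
        import com.hp.hpl.jena.rdf.model.Model;
        import com.hp.hpl.jena.rdf.model.ModelFactory;

        public class InMemoryLoadTimer {
            public static void main(String[] args) throws Exception {
                // args[0]: path to the same N-Triples file used for the Oracle load
                InputStream is = new FileInputStream(args[0]);
                Model mem = ModelFactory.createDefaultModel(); // in-memory model, no DB round trips
                long t0 = System.currentTimeMillis();
                mem.read(is, "", "N-TRIPLE"); // same parser entry point as the Oracle-backed load
                long t1 = System.currentTimeMillis();
                is.close();
                System.out.println("Parsed " + mem.size() + " triples in " + ((t1 - t0) / 1000.0) + " s");
            }
        }
    If the in-memory rate is far above the observed 79K/minute, the parser is not the bottleneck and the time is going into the Oracle-side writes (commit frequency, indexes, redo), which is where tuning effort belongs.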

  • Poor load performance

    Hi Gurus,
    Every day I am loading 16 million records to a cube via PSA. It usually takes 3 hours and 30 minutes, but last week it took 7 hours. Could somebody tell me the reasons for this poor load performance?
    Regards,
    Neeraj

    Hi,
    Did you create indexes in between? Or did you forget to drop them before your load?
    Did someone create an index on your PSA table?
    Can you identify from the monitor where the load is spending the extra time? Is it extraction, transfer rules, update rules, or just the posting into the fact tables?
    Is your cube collapsed?
    There can be many things...
    hope this helps,
    Olivier.

  • How does compression of an InfoCube increase the load performance?

    Hi all
    I see that compression of an InfoCube is listed as one of the ways to improve load performance. If I am not wrong, can someone please explain how compressing a cube improves load performance?
    Thanks in advance
    Rishi

    Hi,
    As per my information, compression improves query performance, not loading performance.
    When you do compression, records that have the same characteristic values are merged and moved to the E fact table.
    Example:
      Custno   MatNo   Qty   Value
      C101     M101    10    100
      C101     M101    20    200
    When you compress, the records are combined as below:
      C101     M101    30    300
    At query execution, instead of reading two records and aggregating them while producing the report output, the query fetches one record directly from the E table; that is how query performance improves.
    Not loading performance.
    Thanks & regards,
    sathish
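    To make the mechanics concrete, here is a minimal sketch of the logical effect of compression: fact rows are grouped by their characteristic combination and the key figures are summed into a single E-table row. The class, the in-memory tables, and the field layout are illustrative only, not the actual BW implementation:
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class CompressionDemo {
            public static void main(String[] args) {
                // Two F-table rows with identical characteristics (Custno, MatNo)
                String[][] fTable = { {"C101", "M101", "10", "100"},
                                      {"C101", "M101", "20", "200"} };
                Map<String, long[]> eTable = new LinkedHashMap<>();
                for (String[] row : fTable) {
                    String key = row[0] + "|" + row[1];              // characteristic combination
                    long[] kf = eTable.computeIfAbsent(key, k -> new long[2]);
                    kf[0] += Long.parseLong(row[2]);                 // sum Qty
                    kf[1] += Long.parseLong(row[3]);                 // sum Value
                }
                // Prints one compressed row: C101|M101 30 300
                eTable.forEach((k, v) -> System.out.println(k + " " + v[0] + " " + v[1]));
            }
        }
    A query then reads the single pre-summed row instead of aggregating two rows at runtime, which is exactly why the gain shows up at query time rather than load time.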

  • Once the cube is aggregated, how to run the query

    Hi,
    I have a cube with a lot of data, so I used aggregation.
    After that, how do I run a query against the aggregated cube?
    Whenever I go to RRMX, it shows the cube as not aggregated.
    Once the cube is aggregated, where is the aggregate stored?
    Please let me know.

    InfoCube aggregates are separate database tables.
    Aggregates are more summarized versions of the base InfoCube. There is an aggregate fact table, e.g. /BIC/E1##### ===> /BIC/E100027. If you don't automatically compress your aggregates, there would also be an F fact table, /BIC/F100027.
    There are aggregate dimension tables that are also created, e.g. /BIC/D1000271.  If a dimension for the aggregate is the same as the base InfoCube, then there is no aggregate dimension table for that dimension and the queries will use that dimension table from the base cube.
    As long as the aggregate is active, BW will automatically use it instead of the base cube whenever the aggregate contains all the characteristics necessary to satisfy the query.
    You can verify the aggregate's usage by looking at info in table RSDDSTAT - it will show the Aggregate number if used (will not show aggregate usage for queries on a MultiProvider if you are on a more recent Svc Pack).
    You can also run the query thru RSRT, using the Exec & Debug option - check the "Display aggregate found" option and it will display what aggregate(s) it found and which one(s) it used.

  • ERROR - 1270027 - Cannot proceed while the cube is being loaded.

    Hello,
    I am getting the following error messages when loading to an ASO cube from MaxL:
    ERROR - 1270027 - Cannot proceed while the cube is being loaded.
    ERROR - 1241101 - Unexpected Essbase error 1270027.
    Any help would be appreciated.
    Thanks
    SJ

    If you initialized a load buffer and your process did not complete, or you did not load the buffer or destroy it, then you would get this error. Try destroying the load buffer and loading again.

  • BI statistic cube 0TCT_CA1

    Hi,
    I am using a query based on the BI statistics cube 0TCT_CA1 to get the workbook execution times, and it is working fine.
    But I have two queries.
    1) Some workbooks are stored in favourites. Is it possible to show them in a separate category or have them displayed separately?
    2) I do find the calendar year/month characteristic in this cube, but it does not seem to be populated. In 0TCT_C01, by contrast, the calendar year/month is populated correctly.
    Many Thanks
    Jonathan

    Hi,
    In addition, I found that the number of BI workbook executions differs between cubes 0TCT_CA1 and 0TCT_C01 in some cases.
    Can anyone help?

  • Refreshing statistics in Basis Cube

    Hi Experts,
    Why do we refresh statistics in Basis Cubes? What are the implications if we do not refresh statistics?
    Thanks in advance.
    Sheeja

    Hi
    It refreshes the statistics for all the interrelated tables in case they were not updated after changes happened in the related tables.
    Regards
    N Ganesh

  • Cube.obj won't load

    cube.obj won't load. I installed Java 3D, and all other Java 3D tutorials work except the ones with Wavefront .obj models. My code is:
    //Comment out the following package statement to compile separately.
    package com.javaworld.media.j3d;
    import java.awt.*;
    import java.awt.event.*;
    import java.io.*;
    import javax.media.j3d.*;
    import javax.vecmath.*;
    import com.sun.j3d.loaders.*;
    import com.sun.j3d.loaders.objectfile.*;
    import com.sun.j3d.utils.geometry.*;
    import com.sun.j3d.utils.universe.*;
    /**
     * Example06 uses Sun's ObjectFile to load in Wavefront OBJ
     * format 3D content.
     * <P>
     * Note that using the default ObjectFile settings does read
     * in the data, but does not give the Java 3D runtime enough
     * information to display the file correctly after parsing
     * it. The world looks empty, though the object has been
     * loaded.
     * <P>
     * To display and interact with OBJ content in Java 3D,
     * please use Sun's more robust demo application, ObjLoad.
     * (Available in the Java 3D examples download from Sun's site.)
     * <P>
     * This version is compliant with Java 1.2 and
     * Java 3D 1.1 Beta 2, Nov 1998. Please refer to: <BR>
     * http://www.javaworld.com/javaworld/jw-01-1999/jw-01-media.html
     * <P>
     * @author Bill Day <[email protected]>
     * @version 1.0
     * @see com.javaworld.media.j3d.Example04
     * @see com.sun.j3d.loaders.Scene
     * @see com.sun.j3d.loaders.objectfile.ObjectFile
     */
    public class Example06 extends Frame {
      /** Instantiates an Example06 object. */
      public static void main(String args[]) {
        new Example06();
      }
      /**
       * The Example06 constructor sets the frame's size, adds the
       * visual components, and then makes them visible to the user.
       * <P>
       * We place a Canvas3D object into the Frame so that Java 3D
       * has the heavyweight component it needs to render 3D
       * graphics into. We then call methods to construct the
       * View and Content branches of our scene graph.
       */
      public Example06() {
        //Title our frame and set its size.
        super("Java 3D Example06");
        setSize(800,600);
        //Here is our first Java 3D-specific code. We add a
        //Canvas3D to our Frame so that we can render our 3D
        //graphics. Java 3D requires a heavyweight component
        //Canvas3D into which to render.
        Canvas3D myCanvas3D = new Canvas3D(null);
        add(myCanvas3D,BorderLayout.CENTER);
        //Turn on the visibility of our frame.
        setVisible(true);
        //We want to be sure we properly dispose of resources
        //this frame is using when the window is closed. We use
        //an anonymous inner class adapter for this.
        addWindowListener(new WindowAdapter() {
          public void windowClosing(WindowEvent e) {
            dispose();
            System.exit(0);
          }
        });
        //Use SimpleUniverse. Move viewing position so we can
        //see everything in our scene.
        SimpleUniverse myUniverse = new SimpleUniverse(myCanvas3D);
        BranchGroup contentBranchGroup = constructContentBranch();
        myUniverse.addBranchGraph(contentBranchGroup);
        ViewingPlatform myView = myUniverse.getViewingPlatform();
        TransformGroup myViewTransformGroup = myView.getViewPlatformTransform();
        Transform3D myViewTransform = new Transform3D();
        myViewTransform.setTranslation(new Vector3f(2.0f,0.0f,11.0f));
        myViewTransformGroup.setTransform(myViewTransform);
      }
      /**
       * constructContentBranch() is where we specify the 3D graphics
       * content to be rendered. Here we read in a square using
       * Sun's OBJ loader, then return this to be rendered. This
       * square could be replaced with more complicated content exported
       * from 3D modeling programs supporting the OBJ format.
       */
      private BranchGroup constructContentBranch() {
        ObjectFile myOBJ = new ObjectFile();
        Scene myOBJScene = null;
        //Attempt to load in the OBJ content using ObjectFile.
        try {
          myOBJScene = myOBJ.load("cube.obj");
        } catch (FileNotFoundException e) {
          System.out.println("Could not open OBJ file...exiting");
          System.exit(1);
        }
        //Construct and return branch group containing our OBJ scene.
        BranchGroup contentBranchGroup = new BranchGroup();
        contentBranchGroup.addChild(myOBJScene.getSceneGroup());
        return(contentBranchGroup);
      }
    }

    Most probably the .obj file is not in the right directory; since the code loads it by the relative name "cube.obj", it must be in the working directory.
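    A minimal way to verify that assumption (the class name is illustrative): print the JVM's working directory and check whether cube.obj is visible from it before ObjectFile.load() is called.
        import java.io.File;

        public class ObjPathCheck {
            public static void main(String[] args) {
                // Relative paths like "cube.obj" are resolved against user.dir.
                System.out.println("Working directory: " + System.getProperty("user.dir"));
                System.out.println("cube.obj found there: " + new File("cube.obj").exists());
            }
        }
    If the second line prints false, move cube.obj into the directory printed on the first line, or pass an absolute path to load().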

  • BW Statistics - WHM (0BWTC_C05) Cube QM status problem

    Hi,
    I am loading data into the cube 0BWTC_C05. After loading the data, the QM status stays yellow, and I am unable to find any reason why. Only when I convert the QM status manually to green do I get the "Request available for reporting" symbol.
    Please let me know why the QM status is yellow. I checked the Details & Status tabs and everything looks fine.
    It is an urgent task; I will be waiting for your reply.
    Thanks.
    kris

    Hi Jans-Beken,
    Thanks a lot, your reply helped me very much. I had been unable to find the reason for the last 2 days.
    Thanks a lot once again.

  • Is there a documented correlation between high CPU/RAM usage and poor system performance?

    I realize this subject is not black and white and has quite a bit of depth to it as high usage of either the CPU or RAM does not necessarily mean that a computer is running slowly.
    However, is there any documentation, scientific research, or academic journal that makes a correlation between poor system performance and the consumption of system resources?
    It goes without saying that if you use all your RAM then there won't be any available for additional programs, but can this be substantiated with metrical data?

    Check this:
    http://superuser.com/questions/78362/what-is-the-relationship-between-cpu-usage-and-ram
    http://www.computermemoryupgrade.net/memory-influence-on-performance.html
    Fouad Roumieh

  • APO info cube creation and loading

    Hi,
    Can somebody tell me the InfoCube creation and loading cycle, step by step?
    Regards

    Hi R S D,
    The following link will guide you through InfoCube creation as you require.
    http://help.sap.com/saphelp_scm70/helpdata/EN/23/054e3ce0f9fe3fe10000000a114084/frameset.htm
    The menu path is Modelling > InfoProviders > InfoCubes > Dimension > Creating InfoCubes.
    Hope you got your solution.
    Please confirm
    Regards
    R. Senthil Mareeswaran.

  • How to improve query & loading performance.

    Hi All,
    How to improve query & loading performance.
    Thanks in advance.
    Rgrds
    shoba

    Hi Shoba
    There are a lot of things that can improve query and loading performance.
    Please refer to OSS Note 557870: Frequently asked questions on query performance.
    Also refer to
    weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    This is the oss notes of FAQ on query performance
    1. What kind of tools are available to monitor the overall Query Performance?
    1. BW Statistics
    2. BW Workload Analysis in ST03N (Use Export Mode!)
    3. Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
    RSA1, choose Tools -> BW statistics for InfoCubes
    (Choose OLAP and WHM for your relevant Cubes)
    3. What kind of tools is available to analyze a specific query in detail?
    1. Transaction RSRT
    2. Transaction RSRTRACE
    4. Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the number given in the table 'Reporting - InfoCubes: Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values.
    5. What can I do if the database proportion is high for all queries?
    Check:
    1. If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
    2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
    3. If Buffers, I/O, CPU, memory on the database server are exhausted?
    4. If Cube compression is used regularly
    5. If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1. If the CPUs on the application server are exhausted
    2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1. Again you can use ST03N -> BW System Load
    2. Depending on the time frame you select, you get historical data or current data.
    3. To get to a specific query you need to drill down using the InfoCube name
    4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N
    values for a specific query?
    (Use Details to get the runtime segments)
    1. High Database Runtime
    2. High OLAP Runtime
    3. High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
    2. Check if database statistics are up to date for the cube/aggregate; use TX RSRV output (use the database check for statistics and indexes)
    3. Check if the read mode of the query is unfavourable - Recommended (H)
    11. What can I do if a query has a high OLAP runtime?
    1. Check if a high number of cells is transferred to the OLAP engine (use "All data" to get the value "No. of Cells")
    2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
    3. Check if a user exit is involved in the OLAP runtime
    4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    5. Check if a proper index on the inclusion table exists
    12. What can I do if a query has a high frontend runtime?
    1. Check if a very high number of cells and formatting are transferred to the Frontend (use "All data" to get value "No. of Cells") which cause high network and frontend (processing) runtime.
    2. Check if the frontend PCs are within the recommendations (RAM, CPU MHz)
    3. Check if the bandwidth for WAN connection is sufficient
    And some threads:
    how can i increse query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    may be helpful
    Regards
    C.S.Ramesh
    [email protected]

  • Loading Performance Scorecard measure with DIM

    Hi,
    I'm having a bit of a problem loading measures into Hyperion Performance Scorecard 9.3.1 with the DIM adapter. I have managed to load the measures and set some of the properties; however, there seems to be a problem setting the High/Low result (HigherIsBetter port) and the result frequency. I tried passing in T and F for HigherIsBetter and 0 to 8 for ResultFrequency, but nothing happened. Has anybody had this problem before? I have applied the 9.3.0.1.1 patch to the DIM 9.3.0.1 adapter.
    Regards,
    Gerd

    After compression the request IDs are deleted, but where do the deleted request IDs go? Is there any chance to get them back?
    After compression the request ID is replaced by 0; you can't find it anymore.
    What about loading a delta to an already compressed cube? How does compression impact the cube after a delta?
    This is normal behavior: we can load delta data into a compressed cube with no issues.
    Compression won't impact the delta.
    Compression just moves data from the F fact table to the E fact table. Your loaded (uncompressed) delta records will be in the F fact table.
    Compression is always done on old requests, not on the latest requests.
    Aggregates on a cube will improve query performance. Apart from the query response time, how do you know query performance increased?
    First check how much time the query takes without aggregates and note it down.
    Later create the aggregates and check the query time again.
    You can check query statistics in RSRT --> use cache or aggregates, Execute + Debug.
    About the DSO requirement:
    My guess is we never get such a requirement: loading 10 objects from one source and the remaining 4 objects from another source to fill the 14 objects of a cube.
    You can load it, but through one DSO you will get data for 10 objects while the other 4 objects stay blank.
    The second DSO is the same: it fills the 4 objects while the other 10 stay blank.
    Routines: we can't say; it depends purely on your requirement.
    BI Content: if you want a data flow, you need objects; based on your DataSource or cube you will find the required InfoObjects by using BI Content.
    Thanks
