GoldenGate + Auditing = Performance issue?

Hello people,
we are running GoldenGate (GG) on Solaris for the replication part. Since we set AUDIT_TRAIL = TRUE, CPU usage has increased from 5% to 30%.
Is it possible that the combination of GG + auditing is responsible for this increase? Please note that I only changed the parameter; no auditing on objects is enabled.
I would appreciate your opinions.
Thank you in advance.
Edited by: drbiloukos on 26 Jun 2012 3:13 PM

drbiloukos wrote:
CPU increased immediately after setting AUDIT_TRAIL and remained at that level.
I am not saying that the combination of GG and auditing is definitely the one to blame. I just need to know whether there is a known issue, and your opinion on my suspicion.
It might just be an effect of auditing alone, although Oracle claims approximately 5% CPU overhead.
I will disable auditing at the first opportunity.
The CPU change depends on what is being audited, and on where it is being audited to (database, OS file). AFAIK, setting it to "true" isn't a valid value? Don't you have to specify what is to be audited (e.g., db vs. extended) and where it goes (e.g., db, OS file, XML, ...)? By default it could already have been enabled (audit_trail=db), depending on your version of Oracle and how the DB was created (e.g., DBCA). I'm not sure what the result of setting it to "true" would be...?
In theory, if auditing is enabled on the target DB, then "replicat" -- as just a normal SQL client -- will increase load on the target when the volume of replication is high (just like any other SQL client). If auditing is enabled on the source, you have exactly the same issue as on the target (additional SQL clients increase load) -- plus, if your audit info is written to the database, at the very least "extract" has to parse through and ignore the audit info in the logs. (The worst case would be if your "extract" inadvertently picked up these records - but I don't think that will happen if they are logged to a "sys" table.)
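For reference, you can check what the parameter actually ended up as; in recent Oracle versions TRUE is documented as a synonym for DB (audit records go to SYS.AUD$ in the database). A minimal sketch; AUDIT_TRAIL is a static parameter, so changing it assumes an spfile and a restart:

SQL> SHOW PARAMETER audit_trail

-- route standard auditing to the database audit trail
SQL> ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;

-- or switch standard auditing back off
SQL> ALTER SYSTEM SET audit_trail = NONE SCOPE = SPFILE;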

Similar Messages

  • Performance issue of Security Audit log

    Hello,
    My client would like to activate the Security Audit Log on his system. However, he would like to know whether there could be any performance issues when activating it. Since I do not have any prior experience, could you please give me your general feedback on this subject? Have any of you experienced performance issues when implementing the Security Audit Log, and what can be done to minimize its effect?

    Hi,
    Activating the Security Audit Log will not noticeably affect the performance of your SAP system. SAP systems maintain their audit logs on a daily basis; the system does not delete or overwrite audit files from previous days, but keeps them until you manually delete them. Because of the amount of information that may accumulate, you should archive these files on a regular basis and then delete the originals from the application server. This is the main thing you need to take care of, since the files may fill up the disk space if you don't archive or delete them regularly. Also, since the data is very sensitive, you should take extra care to protect it.
    Please follow the links below for more details:
    http://help.sap.com/saphelp_nw04/helpdata/EN/95/d2a8e36d6611d1a5700000e835363f/frameset.htm
    http://www.saptechies.com/faq-answers-to-questions-about-the-security-audit-log/
    Regards,
    Yoganand.V

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jmin@brio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance issues w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what it is doing.
    It is doing OPEN CURSOR, but it is not clear exactly what statement the cursor is trying to open.
    When we run the query directly against Sybase, not using Forte, it clearly does not open any cursor.
    And running it directly against Sybase many times, the response is always consistently fast.
    It is only when the query runs from Forte to Sybase that it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit every statement coming to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.min@sun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html
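    For anyone following along, question 1 above amounts to something like this in an ISQL session (a sketch only; the table and column names are made up for illustration):

    declare big_cur cursor for
        select cust_name from cust_table where cust_name like 'SMITH%'
    go
    open big_cur
    fetch big_cur
    close big_cur
    deallocate cursor big_cur
    go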

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on, then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
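    As a minimal sketch of what those links describe, an extended SQL trace with wait events can be switched on for your own session like this (level 8 includes wait events; the tracefile identifier just makes the file easy to find in user_dump_dest):

    ALTER SESSION SET tracefile_identifier = 'pipeline_test';
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- run the with_pipeline and no_pipeline procedures here
    ALTER SESSION SET events '10046 trace name context off';
    -- then format the resulting trace file with tkprof to see where the time went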

  • Oracle Performance Issue

    Hardware Configuration:
    Regarding Oracle Performance Issue.
    Configuration 1
    ================
    SunV880 - Sunfire
    32 GB RAM
    14 x 36GB hard disks
    8 CPUs
    CPU speed 750MHz.
    Software Configuration:
    Oracle 8i
    OS version - Solaris 8
    Customized our own application - Namex
    Configuration 2
    ================
    Intel PIII - 750 MHz
    2 GB RAM
    2 CPUs
    Software configuration
    Oracle 8i
    OS version linux 6.2
    Customized our own application - Namex (multi threaded application)
    We spread the Oracle installation across all the hard disks, with the tables split across separate disks:
    OS installed on 1 hard disk.
    namex application installed on 1 hard disk.
    Oracle installed on 1 hard disk.
    All tables split across the other hard disks.
    We are trying to insert user records into an Oracle table. We achieved up to 150 records/second on the Sun server, but on the lower configuration (configuration 2) our application inserts up to 100 records/second.
    We want to improve our insert rate on the Sun server.
    How should we tune the Oracle parameter values in the init.ora file? Our application needs to insert up to 500 records per second, but I can't achieve this value.
    init.ora file
    =============
    db_name = "namex"
    instance_name = namex64
    service_names = namex64
    control_files = ("/disk1/oracle64/OraHome1/oradata/Namex64/control01.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control02.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control03.ctl")
    open_cursors = 300
    max_enabled_roles = 145
    #db_block_buffers = 20480
    db_block_buffers = 604800
    #shared_pool_size = 419430400
    shared_pool_size = 8000000000
    #log_buffer = 163840000
    log_buffer = 2147467264
    #large_pool_size = 614400
    java_pool_size = 0
    log_checkpoint_interval = 10000
    log_checkpoint_timeout = 1800
    processes = 1014
    # audit_trail = false # if you want auditing
    # timed_statistics = false # if you want timed statistics
    timed_statistics = true # if you want timed statistics
    # max_dump_file_size = 10000 # limit trace file size to 5M each
    # Uncommenting the lines below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest_1 = "location=/disk1/oracle64/OraHome1/admin/namex64/arch"
    # log_archive_format = arch_%t_%s.arc
    #DBCA uses the default database value (30) for max_rollback_segments
    #100 rollback segments (or more) may be required in the future
    #Uncomment the following entry when additional rollback segments are created and made online
    #max_rollback_segments = 500
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    #rollback_segments = ( RBS0, RBS1, RBS2, RBS3, RBS4, RBS5, RBS6, RBS7, RBS8, RBS9, RBS10, RBS11, RBS12, RBS13, RBS14, RBS15, RBS16, RBS17, RBS18, RBS19, RBS20, RBS21, RBS22, RBS23, RBS24, RBS25, RBS26, RBS27, RBS28 )
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    # global_names = false
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = true
    # define directories to store trace and alert files
    background_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/bdump
    core_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/cdump
    #Uncomment this parameter to enable resource management for your database.
    #The SYSTEM_PLAN is provided by default with the database.
    #Change the plan name if you have created your own resource plan.# resource_manager_plan = system_plan
    user_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/udump
    db_block_size = 16384
    remote_login_passwordfile = exclusive
    os_authent_prefix = ""
    compatible = "8.0.5"
    #sort_area_size = 65536
    sort_area_size = 1024000000
    sort_area_retained_size = 65536
    DB_WRITER_PROCESSES=4
    How can I improve performance on the Oracle server?
    Please guide me regarding this issue.
    If anyone needs more info, please let me know.
    Best regards,
    Senthilkumar

    Are you sure that it is not an application constraint? I.e., that the application can't handle that much data per second (application locks, threads)?
    Have you tried to write a simple test program which inserts predefined data (the same data your application inserts, only changing the keys)?
    Then compare the values from the 1st and the 2nd configuration.
    Did you check the way your application communicates with Oracle? If it is TCP/IP (even on the local machine), then this is your main problem.
    And one more thing: do you know whether your application is able to run the load (the inserts) on different threads (i.e., in parallel)? If it is not, you won't be able to push the speed higher, because your constraint is the speed of a single CPU. Consider running several processes which load the data.
    We had the same problem on AIX machines with 4 CPUs. Monitoring the machine, we found that only 25% (1 CPU) was in use. We had to run 4 processes to push the speed up. Check your system's overall load while running the 'load' (the inserts).
    log_checkpoint_interval = 10000
    Check whether this value is appropriate. Maybe you should set it to 0 (infinite). This disables checkpoints based on the amount of redo written; checkpoints will then occur only on log switch.
    How many redo files per redo group do you have? What is their size? Are they on different disks? How much redo data is generated by a single inserted record? (The queries below may help you check.)
    Hope I helped at least a little.
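    To answer the redo questions above on your own system, queries like these against the standard v$ views (a sketch; adjust for your environment) show the group count, sizes, member placement, and the redo cost of your inserts:

    SELECT group#, members, bytes/1024/1024 AS size_mb, status
    FROM   v$log;

    SELECT group#, member
    FROM   v$logfile
    ORDER  BY group#;

    -- redo generated by the current session; divide by the number of rows
    -- inserted in this session for a rough per-record cost
    SELECT s.value AS redo_bytes
    FROM   v$mystat s, v$statname n
    WHERE  n.statistic# = s.statistic#
    AND    n.name = 'redo size';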

  • Performance issue on the sys.dba_audit_session

    I have the following query, which is taking a long time and has a performance issue.
    SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS
    failed_count
    FROM sys.dba_audit_session
    WHERE returncode != 0
    AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')
    call       count     cpu   elapsed       disk      query   current   rows
    Parse          1    0.01      0.04          0          0         0      0
    Execute        1    0.00      0.00          0          0         0      0
    Fetch          2   68.42    216.08    3943789    3960058         0      1
    total          4   68.43    216.13    3943789    3960058         0      1
    The view dba_audit_session is a select from the view dba_audit_trail. If you look at the definition of dba_audit_trail, it does a CAST on the ntimestamp# column, thereby disabling index access, because there is no function-based index on ntimestamp#. I am not even sure a function-based index would work to match what the view does:
    cast ( /* TIMESTAMP */
    (from_tz(ntimestamp#,'00:00') at local) as date),
    To get index access, the metric would have to avoid the use of the view. I have changed the query like this:
    SELECT /*+ INDEX(a I_AUD3) */ TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD
    HH24:MI:SS TZD') AS curr_timestamp, COUNT(userid) AS failed_count
    FROM sys.aud$ a
    WHERE returncode != 0
    and action# between 100 and 102
    AND ntimestamp# >= systimestamp at time zone 'GMT' - 30/1440
    Is this the correct way to do it?
    Could you comment on this?

    The query is run by grid Control (or DBConsole) to count the metric related to audit sessions which is ON by default in 11g. To decrease the impact of this query you should purge the aud$ table regularly.
    Best way is to use DBMS_AUDIT_MGMT to periodically purge the data older than "whatever date". If you don't need the audit info, you can simply truncate aud$.
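    As a minimal sketch of that approach (11g DBMS_AUDIT_MGMT; the 30-day cutoff is only an example, pick whatever retention you need):

    BEGIN
      -- one-time setup per audit trail type
      IF NOT DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD) THEN
        DBMS_AUDIT_MGMT.INIT_CLEANUP(
          audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
          default_cleanup_interval => 24);
      END IF;
      -- mark everything older than 30 days as purgeable
      DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
        audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        last_archive_time => SYSTIMESTAMP - INTERVAL '30' DAY);
      -- delete AUD$ records up to that timestamp
      DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        use_last_arch_timestamp => TRUE);
    END;
    /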

  • Oracle BPM 11.1.1.5 Performance issues

    Hi,
    I have a 2-node cluster of Oracle SOA 11.1.1.5 installed, with separate clusters for SOA/BPM, BAM, OSB, and WSM. Here are the OS and Java versions:
    Java Vendor: HP
    Java Version: 1.6.0.14
    OS: HP-UX
    OS Version: B.11.31
    I am running into an issue where Oracle BPM performs very slowly: task forms take forever to come up, performing any action takes too long to proceed, and sometimes it times out. I have over 100 SOA processes running plus some BPM processes, but the BPM Workspace application runs too slowly, as we have customized task forms. The same setup works a bit better in another instance which is non-clustered. Any idea what can be done at the server level to get around the performance issues? So far I have modified the audit levels, reduced the soa-infra BPM logging, and set the following in setDomainEnv.sh: USER_MEM_ARGS=-Xms2000m -Xmx6000m -XX:PermSize=1000m
    But that did not help. Any idea what else to look into to get around these BPM performance issues? Here is what I have in setSOADomainEnv.sh:
    # 8395254: add -da:org.apache.xmlbeans... in EXTRA_JAVA_PROPERTIES
    EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -da:org.apache.xmlbeans..."
    XENGINE_DIR="${SOA_ORACLE_HOME}/soa/thirdparty/edifecs/XEngine"
    DEFAULT_MEM_ARGS="-Xms512m -Xmx1024m"
    PORT_MEM_ARGS="-Xms768m -Xmx1536m"
    if [ "${JAVA_VENDOR}" != "Oracle" ] ; then
      DEFAULT_MEM_ARGS="${DEFAULT_MEM_ARGS} -XX:PermSize=128m -XX:MaxPermSize=512m"
      PORT_MEM_ARGS="${PORT_MEM_ARGS} -XX:PermSize=256m -XX:MaxPermSize=512m"
    fi
    #========================================================
    # setup LD_LIBRARY_PATH if directory is present...
    #========================================================
    if [ -d ${XENGINE_DIR}/bin ]; then
       LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${XENGINE_DIR}/bin"
       export LD_LIBRARY_PATH
    fi
    #========================================================
    # setup platform specific environment variables
    #========================================================
    case ${PLATFORM_TYPE} in
      # AIX
      AIX)
        if [ -d ${XENGINE_DIR}/bin ]; then
           LIBPATH="${LIBPATH}:${XENGINE_DIR}/bin"
           export LIBPATH
        fi
        USER_MEM_ARGS=${PORT_MEM_ARGS}
        export USER_MEM_ARGS
        # Fix for 7828060
        POST_CLASSPATH=${POST_CLASSPATH}:${SOA_ORACLE_HOME}/soa/modules/soa-ibm-addon.jar
        # Fix for 7520915 and 8264518 and 8305217
        EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Djavax.xml.datatype.DatatypeFactory=org.apache.xerces.jaxp.datatype.DatatypeFactoryImpl -Djava.endorsed.dirs=${SOA_ORACLE_HOME}/bam/modules/org.apache.xalan_2.7.1"
        export EXTRA_JAVA_PROPERTIES
        ;;
      # HPUX
      HP-UX)
        if [ -d ${XENGINE_DIR}/bin ]; then
           SHLIB_PATH="${SHLIB_PATH}:${XENGINE_DIR}/bin"
           export SHLIB_PATH
        fi
        LD_LIBRARY_PATH="${XENGINE_DIR}/bin:${LD_LIBRARY_PATH}"
        export LD_LIBRARY_PATH
        USER_MEM_ARGS="-d64 ${PORT_MEM_ARGS}"
        export USER_MEM_ARGS
        ;;
    And here is what I have in setDomainEnv.sh:
    XMS_SUN_64BIT="256"
    export XMS_SUN_64BIT
    XMS_SUN_32BIT="256"
    export XMS_SUN_32BIT
    XMX_SUN_64BIT="512"
    export XMX_SUN_64BIT
    XMX_SUN_32BIT="512"
    export XMX_SUN_32BIT
    XMS_JROCKIT_64BIT="256"
    export XMS_JROCKIT_64BIT
    XMS_JROCKIT_32BIT="256"
    export XMS_JROCKIT_32BIT
    XMX_JROCKIT_64BIT="512"
    export XMX_JROCKIT_64BIT
    XMX_JROCKIT_32BIT="512"
    export XMX_JROCKIT_32BIT
    if [ "${JAVA_VENDOR}" = "Sun" ] ; then
         WLS_MEM_ARGS_64BIT="-Xms256m -Xmx512m"
         export WLS_MEM_ARGS_64BIT
         WLS_MEM_ARGS_32BIT="-Xms256m -Xmx512m"
         export WLS_MEM_ARGS_32BIT
    else
         WLS_MEM_ARGS_64BIT="-Xms512m -Xmx512m"
         export WLS_MEM_ARGS_64BIT
         WLS_MEM_ARGS_32BIT="-Xms512m -Xmx512m"
         export WLS_MEM_ARGS_32BIT
    fi
    if [ "${JAVA_VENDOR}" = "Oracle" ] ; then
         CUSTOM_MEM_ARGS_64BIT="-Xms${XMS_JROCKIT_64BIT}m -Xmx${XMX_JROCKIT_64BIT}m"
         export CUSTOM_MEM_ARGS_64BIT
         CUSTOM_MEM_ARGS_32BIT="-Xms${XMS_JROCKIT_32BIT}m -Xmx${XMX_JROCKIT_32BIT}m"
         export CUSTOM_MEM_ARGS_32BIT
    else
         CUSTOM_MEM_ARGS_64BIT="-Xms${XMS_SUN_64BIT}m -Xmx${XMX_SUN_64BIT}m"
         export CUSTOM_MEM_ARGS_64BIT
         CUSTOM_MEM_ARGS_32BIT="-Xms${XMS_SUN_32BIT}m -Xmx${XMX_SUN_32BIT}m"
         export CUSTOM_MEM_ARGS_32BIT
    fi
    MEM_ARGS_64BIT="${CUSTOM_MEM_ARGS_64BIT}"
    export MEM_ARGS_64BIT
    MEM_ARGS_32BIT="${CUSTOM_MEM_ARGS_32BIT}"
    export MEM_ARGS_32BIT
    if [ "${JAVA_USE_64BIT}" = "true" ] ; then
         MEM_ARGS="${MEM_ARGS_64BIT}"
         export MEM_ARGS
    else
         MEM_ARGS="${MEM_ARGS_32BIT}"
         export MEM_ARGS
    fi
    MEM_PERM_SIZE_64BIT="-XX:PermSize=128m"
    export MEM_PERM_SIZE_64BIT
    MEM_PERM_SIZE_32BIT="-XX:PermSize=128m"
    export MEM_PERM_SIZE_32BIT
    if [ "${JAVA_USE_64BIT}" = "true" ] ; then
         MEM_PERM_SIZE="${MEM_PERM_SIZE_64BIT}"
         export MEM_PERM_SIZE
    else
         MEM_PERM_SIZE="${MEM_PERM_SIZE_32BIT}"
         export MEM_PERM_SIZE
    fi
    MEM_MAX_PERM_SIZE_64BIT="-XX:MaxPermSize=512m"
    export MEM_MAX_PERM_SIZE_64BIT
    MEM_MAX_PERM_SIZE_32BIT="-XX:MaxPermSize=512m"
    export MEM_MAX_PERM_SIZE_32BIT
    if [ "${JAVA_USE_64BIT}" = "true" ] ; then
         MEM_MAX_PERM_SIZE="${MEM_MAX_PERM_SIZE_64BIT}"
         export MEM_MAX_PERM_SIZE
    else
         MEM_MAX_PERM_SIZE="${MEM_MAX_PERM_SIZE_32BIT}"
         export MEM_MAX_PERM_SIZE
    fi
    if [ "${JAVA_VENDOR}" = "Sun" ] ; then
         if [ "${PRODUCTION_MODE}" = "" ] ; then
              MEM_DEV_ARGS="-XX:CompileThreshold=8000 ${MEM_PERM_SIZE} "
              export MEM_DEV_ARGS
         fi
    fi
    # Had to have a separate test here BECAUSE of immediate variable expansion on windows
    if [ "${JAVA_VENDOR}" = "Sun" ] ; then
         MEM_ARGS="${MEM_ARGS} ${MEM_DEV_ARGS} ${MEM_MAX_PERM_SIZE}"
         export MEM_ARGS
    fi
    if [ "${JAVA_VENDOR}" = "HP" ] ; then
         MEM_ARGS="${MEM_ARGS} ${MEM_MAX_PERM_SIZE}"
         export MEM_ARGS
    fi
    if [ "${JAVA_VENDOR}" = "Apple" ] ; then
         MEM_ARGS="${MEM_ARGS} ${MEM_MAX_PERM_SIZE}"
         export MEM_ARGS
    fi
    if [ "${debugFlag}" = "true" ] ; then
         JAVA_OPTIONS="${JAVA_OPTIONS} -da:org.apache.xmlbeans... "
         export JAVA_OPTIONS
    fi
    export USER_MEM_ARGS="-Xms4g -Xmx6g -XX:PermSize=2g -XX:+UseParallelGC -XX:+UseParallelOldGC"Here is the output of the top command
    Load averages: 0.08, 0.06, 0.06
    315 processes: 205 sleeping, 110 running
    Cpu states:
    CPU   LOAD   USER   NICE    SYS   IDLE  BLOCK  SWAIT   INTR   SSYS
    0    0.06   4.0%   0.2%   1.0%  94.8%   0.0%   0.0%   0.0%   0.0%
    2    0.07   3.6%   5.0%   0.2%  91.2%   0.0%   0.0%   0.0%   0.0%
    4    0.07   2.0%   0.2%   0.0%  97.8%   0.0%   0.0%   0.0%   0.0%
    6    0.07   2.4%   0.2%   0.8%  96.6%   0.0%   0.0%   0.0%   0.0%
    8    0.09   1.2%   0.2%  10.9%  87.7%   0.0%   0.0%   0.0%   0.0%
    10    0.12   3.0%   0.0%  11.1%  85.9%   0.0%   0.0%   0.0%   0.0%
    12    0.08   3.0%   0.2%   6.6%  90.3%   0.0%   0.0%   0.0%   0.0%
    14    0.09   4.2%   1.2%   0.8%  93.8%   0.0%   0.0%   0.0%   0.0%
    avg   0.08   3.0%   1.0%   3.8%  92.2%   0.0%   0.0%   0.0%   0.0%
    System Page Size: 4Kbytes
    Memory: 38663044K (38379156K) real, 149420048K (148978096K) virtual, 26349848K free
    CPU TTY  PID USERNAME PRI NI   SIZE    RES STATE    TIME %WCPU  %CPU COMMAND
    4   ?  3926 root     152 20   213M 70040K run    694:47 10.05 10.04 cimprovagt
    4   ?  6855 user1  152 20  7704M  1125M run      4:36  9.31  9.29 java
    0   ?  6126 user2  152 20  2790M  1863M run     22:57  4.16  4.15 java
    Here is the memory on the box:
    Memory: 98132 MB (95.83 GB)
    Thanks

    After changing the JVM settings for the SOA cluster it's a bit better, but still slow, so I'm wondering what other tweaks can be done on the JVM side. Here is what is set for the SOA cluster:
    USER_MEM_ARGS="-server -Xms12928m -Xmx12928m -XX:PermSize=3072m -Xmn3232m -XX:+SXTElimination -XX:+UseParallelGC -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent -XX:-TraceClassLoading -XX:-TraceClassUnloading"
    We are running multiple instances on the same boxes; the total RAM is 95 GB on each box, shared across 4 cluster environments. For now the other 3 cluster environments have the following SOA cluster JVM settings:
    USER_MEM_ARGS="-server -Xms4096m -Xmx4096m -XX:PermSize=1024m -Xmn1152m -XX:+SXTElimination -XX:+UseParallelGC -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent -XX:-TraceClassLoading -XX:-TraceClassUnloading"
    Any help on what else I can tweak or set in the JVM to get better performance would be appreciated.
    Thanks

  • Premiere Pro 2014 8.1 Poor Performance Issues

    I'm having the following issues across Premiere, Audition, and Media Encoder:
    1.  Slow, choppy, unpredictable playback in the timeline
         -I'm using a Blackmagic UltraStudio 4K for monitor output, and that plays out smoothly in general
    2.  Overall UI lag
         -Clicking between files
         -Clicking between Sequence tabs
         -Clicking on clips in Icon view and scrubbing is slow
    3.  Export and Media Encoder Presets buggy
         -Cannot change the settings for codecs (button is non-selectable)
         -Media Encoder crashes when exporting certain presets (h.264, etc.)
    4.  Specifically, I've tried transcoding a ProRes LT 1080p 23.98 Sequence > ProRes LT 720p 59.94 QuickTime for playback on an Atomos Samurai Blade. This workflow worked fine until I updated to 8.1.
    The only solutions I've seen thus far have been to trash prefs and clean the cache, but I honestly think there may need to be a major cleanup on Adobe's end.
    Anyone else having issues like these?
    -E

    Hi E,
    I have a list, which I'll amend according to your system.
    Try the following:
    Sign out from Creative Cloud, restart Premiere Pro, then sign in
    Update any GPU drivers. You have AMD, so you can't do this with Mac OS X.
    Trash preferences
    Ensure Adobe preference files are set to read/write
    This one causes a lot of performance issues in Mavericks and Yosemite. Important: pay attention to the step about “Apply to enclosed items…”
    You can also rename the folders recommended in the blog post
    Delete media cache
    Remove plug-ins
    If you have AMD GPUs, make sure CUDA is not installed. This was a bug fixed in 8.0.
    Repair permissions
    In Sequence Settings > Video Previews, change the codec to one that matches your footage
    Disconnect any third-party hardware. Disconnect any BMD or external drives (if you can).
    See if changing the setting for the Mercury Playback Engine to Software only helps
    Do a test with a brand new project and non-QuickTime/ProRes source and see if you have those issues. Projects updated from 8.0 and 8.0.1 might have issues updating to 8.1.
    Make sure all other applications are closed, including web browsers and other Adobe applications.
    Disable App Nap
    Reboot
    Report back if anything here worked for you.
    Thanks,
    Kevin

  • Performance issues - v7.0.112 microsoft

    Hi,
    I have been experiencing severe performance issues with BPC, whereby response times are extremely slow and BPC for Excel often hangs with the following message:
    "microsoft office excel is waiting for another application to complete an ole action"
    Other issues have taken the form of a modify/process of an application taking over 20 minutes (instead of around 4) to complete the "Make OLAP database and Journal/Audit reports" step, or not completing at all, giving the message:
    "An unhandled exception has occurred in your application........ Object reference not set to an instance of an object"
    Is there a top 5 of things to investigate, given that I have access to the application and database servers?
    I am not familiar with the BPC install process and architecture, as I primarily work with the front end, so forgive me if I am missing something obvious.
    The questions I am trying to answer are:
    - Is it the database server?
    - Is it the application server?
    - Is it the application configuration?
    - Is it the design of the application and input schedules?
    - Is it the network, i.e., communication between the app and the SQL server?
    Any help would be greatly appreciated
    Phil

    Hi all ,
    You have to make the following changes:
    1. In the AppServer database:
    - in tblappsetinfo, for your appsets, add pooling=false; to the connection string.
    It should look like:
    Initial Catalog=ApShell;Data Source="yourserver";Connect Timeout=90;integrated security=SSPI;pooling=false;
    - in tblserverdefaults, add pooling=false; at the end of the connection string with keyid "Constr".
    It should look something like:
    Initial Catalog=%DBNAME%;Data Source=%SQLServer%;Connect Timeout=90;integrated security=SSPI;pooling=false;
    2. In Outlooksoft.config in the folder ...BPC\Websrvr\bin, change:
    <add key="Database_AppServerDBConn" value="Server=IWDF0119;Database=AppServer;Trusted_Connection=True;"/>
    with
    <add key="Database_AppServerDBConn" value="Server=IWDF0119;Database=AppServer;Trusted_Connection=True;pooling=false"/>
    I think there is also a note specifying this, but for the moment I haven't found the number.
    Regards
    Sorin Radulescu

  • Performance Issue with sql query

    Hi,
    My DB is 10.2.0.5 with RAC on ASM, Clusterware version 10.2.0.5.
    With bsoa table as
    SQL> desc bsoa;
    Name                                      Null?    Type
    ID                                        NOT NULL NUMBER
    LOGIN_TIME                                         DATE
    LOGOUT_TIME                                        DATE
    SUCCESSFUL_IND                                     VARCHAR2(1)
    WORK_STATION_NAME                                  VARCHAR2(80)
    OS_USER                                            VARCHAR2(30)
    USER_NAME                                 NOT NULL VARCHAR2(30)
    FORM_ID                                            NUMBER
    AUDIT_TRAIL_NO                                     NUMBER
    CREATED_BY                                         VARCHAR2(30)
    CREATION_DATE                                      DATE
    LAST_UPDATED_BY                                    VARCHAR2(30)
    LAST_UPDATE_DATE                                   DATE
    SITE_NO                                            NUMBER
    SESSION_ID                                         NUMBER(8)
    The query
    UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID
    is taking a lot of time to execute, and in the AWR reports it is also at the top in:
    1. SQL ordered by elapsed time
    2. SQL ordered by reads
    3. SQL ordered by gets
    So I am trying to solve the performance issue, as the application is slow especially around login and logout time.
    I understand that the function in the WHERE condition causes a full table scan, but I cannot think what other parts to look at.
    Also:
    SQL> SELECT COUNT(1) FROM BSOA;
      COUNT(1)
       7800373
    The explain plan for  "UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID" is
    PLAN_TABLE_OUTPUT
    Plan hash value: 1184960901
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT   |      |     1 |    26 | 18748   (3)| 00:03:45 |
    |   1 |  UPDATE            | BSOA |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL| BSOA |     1 |    26 | 18748   (3)| 00:03:45 |
    Predicate Information (identified by operation id):
       2 - filter("SESSION_ID"=TO_NUMBER(SYS_CONTEXT('USERENV','SESSIONID')))

    Hi,
    There are also BEFORE UPDATE triggers and audits on this table.
    CREATE OR REPLACE TRIGGER B2.TRIGGER1
    BEFORE UPDATE
    ON B2.BSOA  REFERENCING OLD AS OLD NEW AS NEW
    FOR EACH ROW
    BEGIN
      :NEW.LAST_UPDATED_BY   := USER    ;
      :NEW.LAST_UPDATE_DATE  := SYSDATE ;
    END;
    CREATE OR REPLACE TRIGGER B2.TRIGGER2
    BEFORE INSERT
    ON B2.BSOA  REFERENCING OLD AS OLD NEW AS NEW
    FOR EACH ROW
    BEGIN
      :NEW.CREATED_BY        := USER ;
      :NEW.CREATION_DATE     := SYSDATE ;
      :NEW.LAST_UPDATED_BY   := USER    ;
      :NEW.LAST_UPDATE_DATE  := SYSDATE ;
    END;
    And also there is an audit on this table
    AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER SUCCESSFUL;
    AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER NOT SUCCESSFUL;
    And the SESSION_ID column in BSOA has a height-balanced histogram.
    When I create an index I get the following error. As I am on 10g I can't use DDL_LOCK_TIMEOUT, so I may have to wait for the next downtime.
    SQL> CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS;
    CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS
    ERROR at line 1:
    ORA-00054: resource busy and acquire with NOWAIT specified
    Thanks
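    One possible way around the ORA-00054 on a busy table is an ONLINE build (a sketch in 10g syntax), which lets DML continue during the build; it may still have to wait briefly for in-flight transactions at the start and end:

    CREATE INDEX b2.bsoa_sessid_i ON b2.bsoa(session_id)
      TABLESPACE b2 ONLINE COMPUTE STATISTICS;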

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report using the transactional database (online real-time objects).
    The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user id).
    In order to fulfill that requirement I've created 2 reports:
    1) All activities related to a contact -- Report A
    pulls in Activity ID, Activity Type, Status, Contact ID
    2) All activities not related to a contact UNION all activities related to a contact (base report) -- Report B
    To get the list of activities not related to a contact I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered performance issues due to the advanced filter in Analytics before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    A UNION is always the last option. If you can get all records in one report, do not use a UNION.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
    if contact id is null (or = 'Unspecified') then owner name else contact name
    Hope this helps.
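    In Answers that logic would be a CASE expression column; the column names below are placeholders for whatever your subject area actually exposes:

    CASE WHEN "Contact"."Contact ID" IS NULL
         THEN "Activity"."Owner Name"
         ELSE "Contact"."Contact Name"
    END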

  • Report performance Issue in BI Answers

    Hi All,
    We have a performance issue with reports: a report runs for more than 10 minutes. We took the query from the session log and ran it in the database, where it took no more than 2 minutes. We have verified that there are proper indexes on the WHERE clause columns.
    Could anyone suggest how to improve the performance in BI Answers?
    Thanks in advance,

    I hope you don't have many CASE statements and complex calculations in Answers.
    The next thing you need to monitor is how many rows of data you are trying to retrieve from the query. If the volume is huge, then it takes time to do the formatting in Answers, as you are dumping huge volumes of data. A database (like Teradata) initially returns something like 1-2000 records; if you hit "show all records", even the DB is going to take a fair amount of time if you are returning many records.
    Hope it helps.
    Thanks,
    Prash

  • BW BCS cube(0bcs_vc10 ) Report huge performance issue

    Hi Masters,
    I am working on a solution for a BW report developed on the 0bcs_vc10 virtual cube.
    Some of the queries are taking more than 15 to 20 minutes to execute.
    This is a huge performance issue. We are using BW 3.5, and the report was developed in BEx and published through the portal. Has anyone faced a similar problem? Please advise how you tackled this issue, with the detailed analysis approach you used to resolve it.
    The current service pack levels we are using are:
    SAP_BW 350 0016 SAPKW35016
    FINBASIS 300 0012 SAPK-30012INFINBASIS
    BI_CONT 353 0008 SAPKIBIFP8
    SEM-BW 400 0012 SAPKGS4012
    Best of Luck
    Chris

    Ravi,
    I already did that; it is not helping much with performance. Reports are taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
    Regards,
    Chris

  • Interested in performance issues? Read this! If you can explain it, you're a master Jedi!

    This is the question we will try to answer...
    What is the bottleneck (hardware) of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, a description of my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
    I have other RAID arrays installed, but they are not relevant to this post...
    PSU
    Corsair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all the results precisely in a graph (see picture 1).
    Intro: I tested my Xeon E5-2687W (8 cores hyperthreaded - 16 threads) to see whether programs can make full use of it. I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
    Comment: I put the 3 IO counters (CPU, disk, RAM) for my computer on the graph during the test...
    (picture 1)
    Disk Introduction
    I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all the results precisely in a graph (see picture 2).
    Intro: I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks, with FastPath chip installed) to see whether I can reach maximum disk usage (0% idle time).
    The result: As you can see in picture 2, yes, I can max out my drives at ~1.2 GB/sec read/write, steady!
    Comment: I put the 3 IO counters (CPU, disk, RAM) for my computer on the graph during the test to see the impact of transferring many GB of data over ~10 sec...
    (picture 2)
    Now I know my limits! It's time to go deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate looks like a wave (up and down), probably caused by (encode time.... write.... encode time.... write...)  // It's OK, ~5MB/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 39 GB of RAM free after the test!  // Excellent
    ~65 threads opened by Adobe Media Encoder (good; threads are a sign that the program tries to use many cores!)
    GPU load on the card also looks like a wave (up and down): ~40% GPU usage during the encoding process.
    GPU RAM usage is 1.2 GB (but no problem with the GTX 680, and no problem with the Quadro 6000 and its 6 GB of RAM!)
    Comment/question: The CPU is free (50%), the disks are free (99%), the GPU is free (60%), the RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    Other: The Quadro 6000 & GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result:
    My CPU is not used at 100%.
    My disk waves and waves again, but is far, far from the limit!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves and waves again (up and down), probably caused by (buffering time.... write.... buffering time.... write...)  // It's OK, ~375MB/sec peak transfer rate!  Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 40.5 GB of RAM free after the test!  // Excellent
    ~48 threads opened by Adobe Media Encoder (good; threads are a sign that the program tries to use many cores!)
    GPU load on the card = 0 (this kind of encoding is GPU-irrelevant)
    GPU RAM usage is 400MB (no usage for encoding)
    Comment/question: The CPU is free (65%), the disks are free (60%), the GPU is free (100%), the RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly in my RAM drive.
    Comment/question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady), and I don't go under 30% of disk usage. The CPU is idle (70%), the disk is idle (100%), the GPU is idle (100%) and the RAM is free (63%).  // These kinds of results leave me REALLY confused. It smells like a bug and a big problem with hardware and IO usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves and waves again (up and down), probably caused by (encoding time.... write.... encoding time.... write...)  // It's OK, ~2MB/sec transfer rate!  A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough! 40 GB of RAM free after the test!  // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's OK in multi-threaded apps!)
    GPU load on the card = 100 (this uses the maximum of my GPU)
    GPU RAM usage is 1GB
    Comment/question: The CPU is free (70%), the disks are free (98%), the GPU is loaded (MAX), the RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. So for this kind of encoding, the speed limit is set by the slowest IO (the video card's GPU).
    Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding single clip FULL HD AVCHD to H.264 Result (Premiere Pro CS6)
    You can see the result in the picture.
    Comment/question: The CPU is free (55%), the disks are free (99%), the GPU is free (90%), the RAM is free (65%); my computer is not pushed to its limit during the encoding process. Why???? Adobe Premiere seems to have some bug with thread management. My hardware is idle! I understand AVCHD can be very difficult to decode, but where is the waste? My computer is willing, but the software is not!
    (picture 7)
    Render composition using 3D Raytracer in After Effects CS6
    You can see the result in the picture.
    Comment: The GPU seems to be the bottleneck when using After Effects. The CPU is free (99%), the disks are free (98%), memory is free (60%), and it depends on the settings and the type of project.
    Other: The Quadro 6000 & GTX 680 give the same rendering time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance at the moment. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance from it). I have not used a Tesla card with my Quadro, but at the moment neither Premiere Pro nor After Effects uses multiple GPUs. I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless! Premiere Pro is not able to get the maximum performance out of my computer. Not just 10% or 20% short, but 60% on average. I'm a programmer; multi-threaded apps are difficult to manage, and I can understand Adobe's programmers. But if anybody has comments about this post, tricks, or any kind of solution, please comment. It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB, has increased threefold: an effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • Performance Issue for BI system

    Hello,
    We are facing performance issues with our BI system. It is a preproductive system and its performance is degrading badly every day. Checking the system, I found that the program buffer is swapping heavily and the swaps are increasing every day, so the parameter abap/buffersize was changed from 300MB to 500MB. But still no major improvement is seen in the system.
    There is 16GB of RAM available; the server is HP-UX, with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
    The main problem is that running a report or creating a query takes far too long.
    Kindly help me.

    Hello Siva,
    Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and they look normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system currently, but in ST02 I can see the swaps are in red.
    Buffer               HitRatio %   Alloc. KB   Freesp. KB   % Free Sp.   Dir. Size   FreeDirEnt   % Free Dir      Swaps     DB Accs
    Nametab (NTAB)
      Table definition        99,60       6.798                                 20.000                                29.532     153.221
      Field definition        99,82      31.562          784         2,61       20.000        6.222        31,11      17.246      41.248
      Short NTAB              99,94       3.625        2.446        81,53        5.000        2.801        56,02           0       2.254
      Initial records         73,95       6.625          998        16,63        5.000          690        13,80      40.069      49.528
    Program                   97,66     300.000        1.074         0,38       75.000       67.177        89,57     219.665     725.703
    CUA                       99,75       3.000          875        36,29        1.500        1.401        93,40      55.277       2.497
    Screen                    99,80       4.297        1.365        33,35        2.000        1.811        90,55         119       3.214
    Calendar                 100,00         488          361        75,52          200           42        21,00           0         158
    OTR                      100,00       4.096        3.313       100,00        2.000        2.000       100,00           0
    Tables
      Generic Key             99,17      29.297        1.450         5,23        5.000          350         7,00       2.219   3.085.633
      Single record           99,43      10.000        1.907        19,41          500          344        68,80          39     467.978
    Export/import             82,75       4.096           43         1,30        2.000          662        33,10     137.208
    Exp./ Imp. SHM            89,83       4.096          438        13,22        2.000        1.482        74,10           0

    SAP Memory         Curr.Use %   CurUse[KB]   MaxUse[KB]   In Mem[KB]   OnDisk[KB]   SAPCurCach   HitRatio %
    Roll area                2,22        5.832       22.856      131.072      131.072   IDs               96,61
    Page area                1,08        2.832       24.144       65.536      196.608   Statement         79,00
    Extended memory         22,90      958.464    1.929.216    4.186.112            0                      0,00
    Heap memory                              0            0    1.473.767            0                      0,00

    Call Stati       HitRatio %   ABAP/4 Req   ABAP Fails   DBTotCalls   AvTime[ms]   DBRowsAff.
    Select single         88,59   63.073.369    5.817.659    4.322.263            0   57.255.710
    Select                72,68  284.080.387            0   13.718.442            0   32.199.124
    Insert                 0,00      151.955        5.458      166.159            0      323.725
    Update                 0,00      378.161       97.884      395.814            0      486.880
    Delete                 0,00      389.398      332.619      415.562            0      244.495
    Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM

  • Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Thanks Kelly,
    The answers would be the following:
    1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
    Assuming all will be populated; this would apply to all final material specs in the system, which could be ~25% of all material specs.
    The cells will be numeric, free text, drop-downs, and some calculated numeric.
    Are we reaching the limits for UI performance?
    Thanks
