Performance impacts from uneven subpartitions

I am currently working on a project with a large amount of metric data in a table that is partitioned by week, and then subpartitioned by one of 26 different machine types. For the most part, we get the same amount of data each week, so the partitions are about even in size. The subpartitions, on the other hand, have vastly different sizes (from hundreds of rows up to tens of millions of rows), and in most cases queries span the subpartitions.
Would these uneven subpartitions degrade my performance when a query touches multiple subpartitions? If so, is there a best practice for how evenly distributed the data needs to be in order to get any benefit out of subpartitions?
Thanks,
Jon
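
For reference, a minimal sketch of the kind of composite range-list layout described above - the table, column, and subpartition names here are hypothetical, as are the weekly boundaries:

create table metric_data (
    day_id        date         not null,
    machine_type  varchar2(10) not null,
    metric_value  number
)
partition by range (day_id)
subpartition by list (machine_type)
subpartition template (
    subpartition sp_type_a values ('A'),
    subpartition sp_type_b values ('B'),
    -- ...one list subpartition per machine type, 26 in all...
    subpartition sp_other  values (default)
)
(
    partition p_wk_2009_13 values less than (to_date('2009-03-30','YYYY-MM-DD')),
    partition p_wk_2009_14 values less than (to_date('2009-04-06','YYYY-MM-DD'))
);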

explain plan for
SELECT ATTR_XML_NAME AS "attributeXmlName",
       sum(IN_SCANS_CNT) AS "inScansCount",
       sum(OUT_SCANS_CNT) AS "outScansCount",
       sum(FILE_CNT) AS "fileCount",
       sum(MSGS_CNT) AS "messageCount",
       sum(ATTR_IN_SCANS_CNT) AS "attrInScansCount",
       sum(ATTR_OUT_SCANS_CNT) AS "attrOutScansCount",
       sum(ATTR_FILE_CNT) AS "attrFileCount",
       sum(ATTR_MSGS_CNT) AS "attrMessageCount",
       sum(SEED_SCANS_CNT) AS "seededScansCount",
       sum(SEED_FILE_CNT) AS "seededFileCount",
       sum(SEED_ATTR_SCANS_CNT) AS "seededAttrScansCount",
       sum(SEED_ATTR_FILE_CNT) AS "seededAttrFileCount"
FROM ATTR_SMRY_BY_DAY
WHERE (DAY_ID BETWEEN ? AND ?)
  AND FILE_TYPE = ? AND APPL_ID = ? AND SEED_GRP_ID = ?
GROUP BY ATTR_XML_NAME
ORDER BY upper(ATTR_XML_NAME) asc
PLAN_TABLE_OUTPUT
| Id  | Operation                              |  Name                           | Rows  | Bytes | Cost  | Pstart| Pstop |
|   0 | SELECT STATEMENT                       |                                 |   119 |  6902 | 70018 |       |       |
|   1 |  SORT ORDER BY                         |                                 |   119 |  6902 | 70018 |       |       |
|   2 |   SORT GROUP BY                        |                                 |   119 |  6902 | 70018 |       |       |
|*  3 |    FILTER                              |                                 |       |       |       |       |       |
|   4 |     PARTITION RANGE ITERATOR           |                                 |       |       |       |   KEY |   KEY |
|   5 |      PARTITION LIST ALL                |                                 |       |       |       |     1 |    24 |
|*  6 |       TABLE ACCESS BY LOCAL INDEX ROWID| ATTR_SMRY_BY_DAY                | 10455 |   592K| 69990 |   KEY |   KEY |
|*  7 |        INDEX RANGE SCAN                | ATTR_SMRY_BY_DAY_ATTRNAME_IDX2  | 56460 |       | 40314 |   KEY |   KEY |
Predicate Information (identified by operation id):
   3 - filter(TO_DATE(:Z)<=TO_DATE(:Z))
   6 - filter("ATTR_SMRY_BY_DAY"."FILE_TYPE"=:Z)
   7 - access("ATTR_SMRY_BY_DAY"."DAY_ID">=:Z AND "ATTR_SMRY_BY_DAY"."SEED_GRP_ID"=:Z AND "ATTR_SMRY_BY_DAY"."APPL_ID"=:Z
              AND "ATTR_SMRY_BY_DAY"."DAY_ID"<=:Z)
       filter("ATTR_SMRY_BY_DAY"."APPL_ID"=:Z AND "ATTR_SMRY_BY_DAY"."SEED_GRP_ID"=:Z)
Note: cpu costing is off
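
Before tuning the plan above, it is worth quantifying how skewed the subpartitions really are. A minimal sketch against the data dictionary, assuming statistics have been gathered recently (NUM_ROWS is only as fresh as the last stats collection):

select partition_name, subpartition_name, num_rows
from   user_tab_subpartitions
where  table_name = 'ATTR_SMRY_BY_DAY'
order  by partition_name, num_rows desc;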

Similar Messages

  • Performance impact after changing the AWR snapshot interval from 1 hour to 15 minutes

    I want to know the performance impact of changing the AWR snapshot interval from 1 hour to 15 minutes.

    Hi,
    1) Typically, the performance impact is negligible.
    2) We have no way of knowing whether your system fits the definition of "typical".
    3) The best way would be to do it on a test system and measure the impact.
    4) I would be more concerned about SYSAUX growth than about the performance impact -- you need to make sure that you won't run out of space because of 4x more frequent snapshots.
    Best regards,
      Nikolay
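
    For reference, the snapshot interval is changed with DBMS_WORKLOAD_REPOSITORY; a minimal sketch, where both parameters are in minutes and the retention value is only an example:

    begin
      dbms_workload_repository.modify_snapshot_settings(
        interval  => 15,          -- snapshot every 15 minutes
        retention => 30*24*60);   -- example value: keep 30 days of snapshots
    end;
    /
    -- verify the current settings
    select snap_interval, retention from dba_hist_wr_control;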

  • Performance impact of using Web Services?

    As BEA and other vendors continue to add Web Services support
    to their enterprise software, what is your plan for
    quantifying the performance impact and the functional
    correctness of using web services before going live with the
    final application?
    Empirix is hosting a free one hour web event discussion on
    web services testing and automated web services testing
    solutions on Thursday, January 17, 2-3pm Eastern time.
    To sign up for this web event or learn about other web
    events being offered by Empirix this month, go to:
    http://webevents.empirix.com
    For your convenience, here is the complete abstract:
    The advent of web services has brought the promise of
    integrating multiple software applications from
    heterogeneous networks and of exchanging information
    from vendor-to-vendor or vendor-to-consumer in a
    standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be
    critical that web services undergo performance testing.
    As with any enterprise software project, the adoption of
    proper test methodologies and use of testing tools will
    play a key part in the overall success or failure of
    projects utilizing web services. In a compressed
    software project schedule, an organization must
    quickly determine if its web services will operate
    successfully under a variety of load conditions. Like other
    web-based technologies, successful web services will need
    to respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing
    challenges created by this emerging technology, along with
    the variety of testing solutions available. Automated
    web service testing will be discussed and demonstrated
    using FirstACT, the first web services performance testing solution available
    on the market. Using a sample web
    service, automatic test case creation, scalability testing,
    and results analysis will be explored.
    If you wish to download FirstACT prior to the web event, you can do so at:
    http://www.empirix.com/downloads/FirstACT

  • Performance impact of Web Services

    As WebLogic adds support for Web Services to its platform, what is
    your plan for quantifying the performance impact and the functional
    correctness of using web services before going live with the final
    application.
    Empirix is hosting a free one hour web event discussion on web
    services testing and automated web services testing solutions on
    Thursday, January 17, 2-3pm Eastern time.
    To register for this web event or learn about other web events being
    offered by Empirix this month, go to:
    http://webevents.empirix.com
    The complete abstract is below:
    The advent of web services has brought the promise of integrating
    multiple software applications from heterogeneous networks and of
    exchanging information from vendor-to-vendor or vendor-to-consumer in
    a standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be critical that
    web services undergo performance testing. As with any enterprise
    software project, the adoption of proper test methodologies and use of
    testing tools will play a key part in the overall success or failure
    of projects utilizing web services. In a compressed software project
    schedule, an organization must quickly determine if its web services
    will operate successfully under a variety of load conditions. Like
    other web-based technologies, successful web services will need to
    respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing challenges
    created by this emerging technology, along with the variety of testing
    solutions available. Automated web service testing will be discussed
    and demonstrated using FirstACT, the first web services performance
    testing solution available on the market. Using a sample web service,
    automatic test case creation, scalability testing, and results
    analysis will be explored.

    Hi,
    We tested several frameworks and found that JAXB 2.0 usually performs better than XMLBeans, but that is not a strict rule.
    Regards,
    LG

  • Regarding performance impact if I do DB accessing coding in comp Controller

    Hi ,
    This is my project requirement: I have to use a COM component which in turn fetches data from the database. I am using a Java-COM bridge tool to do this. This tool generates the Java proxy classes for the VB COM component.
    I am using the Java proxy classes (these class files use JNI to connect to the VB COM component and fetch the data from the DB) in my Web Dynpro component controller.
    The architecture is as below:
    WEBDYNPRO >> Java class objects (generated by the Java-COM bridge tool) >> Java-COM bridge tool >> VB COM+ component >> SQL Server.
    The issue:
    Performance - the first time it is OK, but on consecutive calls the application degrades very visibly, and after 4 iterations it hangs. When I look at the log I am getting this:
    Message : Exception occured during processing of Web Dynpro application com/oreqsrch/com.oreqsrchapp.OReqSrchApp.
    The causing exception is nested.
    [EXCEPTION]
    com.sap.tc.webdynpro.services.session.LockException: Thread SAPEngine_Application_Thread[impl:3]_36 failed to acquire exclusive lock on client session ClientSession(id=(J2EE9536400)ID1120562150DB11245826542790956137End_1159630423). Existing locks: LockingManager(ThreadName:SAPEngine_Application_Thread[impl:3]_36, exclusive client session lock:
    ClientSessionLock(SAPEngine_Application_Thread[impl:3]_9), shared client session locks: ClientSessionSharedLockManager([]), app session locks: ApplicationSessionLockManager([]), current request: com/oreqsrch/com.oreqsrchapp.OReqSrchApp).
    Hint: Take a thread dump of the server node to find the blocking thread that causes the problem.
    Is this issue because I have written the data access code in the component controller rather than in some beans?
    My question regarding performance:
    What would the performance impact be if I write the DB access code in the Web Dynpro component controller rather than in a bean or an EJB? (I know that ideally DB access code has to be written in a bean or EJB.)
    Please address this from a performance point of view.
    thanks
    pkiran

    Hi Both,
    Thanks for the reply.
    Yes, they are closed and set to null.
    Connection max and min properties are controlled at the COM+ components in VB.
    Since I am using the COM-Java bridge, I am just invoking the methods defined in the VB code through the bridge tool. All the objects which retrieve the data are closed and nullified.
    My question is
    If I write the DB access code in the component controller instead of in an EJB or Java bean, will there be any performance issue?
    regards
    pkiran

  • EBS performance impact using it as a Data Source

    I have a quick question on EBS performance. If I set up the EBS database as a data source for SSRS (SQL Server Reporting Services), would there be a performance impact on EBS due to SSRS accessing EBS data for report generation? Now, I know there'll always be a hit depending on the volume of data being accessed. But my question is: will it be significantly higher using an external reporting tool over an ODBC connection rather than native XML Publisher?

    Hi,
    Tough to answer without looking at data; my suggestion would be to set up a test EBS environment, get permission from the vendors to run performance tests without buying a license, compare AWRs from both scenarios, and then decide.
    Generally speaking, native XML Publisher (BI Publisher) has less of a database performance hit than external reporting tools using ODBC.
    Hope this helps.
    Regards,
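
    A sketch of that AWR comparison workflow - bracket each test run with explicit snapshots, then generate the standard report for each snapshot pair:

    -- snapshot, run the SSRS-over-ODBC (or XML Publisher) workload, snapshot again
    exec dbms_workload_repository.create_snapshot;
    -- ... run the reporting workload here ...
    exec dbms_workload_repository.create_snapshot;
    -- then generate a report for each snapshot pair and compare
    @?/rdbms/admin/awrrpt.sql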

  • Performance impact in Oracle 8i - BLOB vs BFILE

    Hi Guys,
    We are evaluating interMedia to store multimedia objects.
    Does anyone know if storing and retrieving documents in an Oracle database has an impact on standard data stored in the database?
    Is it worth having a separate database instance for storing tables with interMedia objects?
    Pal

    Part 2:
    Example 1: Let us estimate the storage requirements for a data set consisting of 500 video clips comprising a total size of 250MB (average size 512K bytes). Assume a LOB chunk size of 32768 bytes. Our model estimates that we need (8000 * 32) bytes or 250 k bytes for the index and 266 MB to hold the media data. Since the original media size is 250 MB, this represents about a 6.5% storage overhead for storing the media data in the database. The following table definition could be used to store this amount of data.
    create table video_items (
    video_id number,
    video_clip ordsys.ordvideo
    )
    -- storage parameters for table in general
    tablespace video1 storage (initial 1M next 10M)
    -- special storage parameters for the video content
    lob (video_clip.source.localdata) store as
    (tablespace video2 storage (initial 260k next 270M)
    disable storage in row nocache nologging chunk 32768);
    Example 2: Let us estimate the storage requirements for a data set consisting of 5000 images with an average size of 56K bytes. The total amount of media data is 274 MB. Since the average image size is smaller, it is more space efficient to choose a smaller chunk size, say 8K, to store the data in the lob. Our model estimates that we will need about 313 MB to store the data and a little over 1 MB to store the index. In this case the 40 MB of storage required beyond the raw media content size represents a 15% overhead.
    Estimating retrieval costs
    Performance testing has shown that Oracle can achieve equivalent and even higher throughput performance for media content retrieval than a file system. The test was configured to retrieve media data from a server system to a requesting client system. In the database case, simple C client programs used OCI with LOB read callbacks to retrieve the data from the database. For the file system case, the client program used the standard C library functions to read data from the file system. Note that in this client server configuration, files are served remotely by the file server system. In essence, we are comparing distributed file system performance with Oracle database and SQLNet performance. These tests were performed on Windows NT 4 SP5.
    Although Oracle achieved higher absolute performance, the relative CPU cost per unit of throughput ranged from 1.7 to 3 times the file system cost. (For these tests, database performance ranged from 3.4 million to 9 million bytes/sec while file system performance ranged from 2.6 million bytes/sec to 7 million bytes/sec as the number of clients ranged from 1 to 5) One reason for the very high relative CPU cost at the higher end of performance is that as the 100 Mbs network approaches saturation, the system used more CPU to achieve the next increment of throughput. If we restrict ourselves to not exceeding 70% of network utilization, then the database can use up to 2.5 times as much CPU as the file system per unit of throughput.
    NOTE WELL: The extra CPU cost factors pertain only to media retrieval aspect of the workload. They do not apply to the entire system workload. See example.
    Example: A file based media asset system uses 10% of a single CPU simply to serve media data to requesting clients. If we were to store the media in an Oracle database and retrieve content from the database then we could expect to need 20-25% of a single CPU to serve content at the same throughput rate.

  • Performance Impact with OR concatenation / Inlist Iterator

    Hello guys,
    is there any performance impact with using OR concatenations or some IN-Lists?
    The function of both is the "same":
    1) Concatenation (OR-processing)
    SELECT * FROM emp WHERE mgr# = 1 OR job = 'YOURS';
    - Similar to a query rewrite into 2 separate queries
    - Which are then 'concatenated'
    2) Inlist Iterator
    SELECT * FROM dept WHERE d# in (10,20,30);
    - Iteration over an enumerated value list
    - Every value executed separately
    - Same as a concatenation of 3 'OR-ed' values
    So I want to know if there is any performance impact when using IN-lists instead of OR concatenations.
    Thanks and Regards
    Stefan

    The note is very misleading and far from complete, but there is one critical point of difference that you need to observe. It's talking about using a tablescan to deal with an IN-list (and that's NOT "in-list iteration"); my comments start by saying "if there is a suitable indexed access path."
    The note, by the way, describes a transformation to a UNION ALL - clearly that would be inefficient if there were no indexed access path. (Given the choice between one tablescan and several consecutive tablescans, which option would you choose ?).
    The note, in effect, is just about a slightly more subtle version of "why isn't oracle using my index". For "shorter" lists you might get an indexed iteration, for "longer" lists you might get a tablescan.
    Remember, Metalink is not perfect; most of it is just written by ordinary people who learned about Oracle in the normal fashion.
    Quick example to demonstrate the difference between concatenation and iteration:
    drop table t1;
    create table t1 as
    select
         rownum     id,
         rownum     n1,
         rpad('x',100)     padding
    from
         all_objects
    where
         rownum <= 10000
    ;
    create index t1_i1 on t1(id);
    execute dbms_stats.gather_table_stats(user,'t1')
    set autotrace traceonly explain
    select
         /*+ use_concat(t1) */
         n1
    from
         t1
    where
         id in (10,20,30,40,50,60,70,80,90,100);
    set autotrace off

    The execution plan I got from 8.1.7.4 was as follows - showing the transformation to a UNION ALL - this is concatenation and required 10 query block optimisations (which were all done three times):
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=20 Card=10 Bytes=80)
       1    0   CONCATENATION
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       5    4       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       6    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       7    6       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       8    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       9    8       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      10    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      11   10       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      12    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      13   12       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      14    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      15   14       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      16    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      17   16       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      18    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      19   18       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      20    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      21   20       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)

    This is the execution plan I got from 9.2.0.8, which doesn't transform to the UNION ALL, and only needs to optimise one query block.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=80)
       1    0   INLIST ITERATOR
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=3 Card=10 Bytes=80)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=2 Card=10)

    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Performance Impact When Using SNC Communication

    Hello,
    Does anybody know if and how much performance impact there is if we use SNC for communication between the SAP Server and SAPGUI?
    I think there are two areas that may be impacted; Network and server CPU.
    For network load, I did find a part in "Front-End Network Requirements for SAP Business Solutions" document saying "overhead of roughly 350 bytes per user interaction step" but it does not specify the type of encryption.  I wonder if there is any other info on this?
    For CPU impact, how much overhead should I consider for sapgui access?
    I see no field for this in the quicksizer and I can't seem to find any white papers on this subject.
    Thank you in advance.

    >
    Peter Adams wrote:
    > Ken,
    >
    > if you plan to use SAPcryptlib for SNC between SAP servers, then you should use a SAPcryptolib-compatible solution for the SNC communication between SAPGUI and SAP server, and there is only one vendor who can provide this. Let me know, if you need help finding it. My contact information is in my SDN business card.
    Just so Kan is clear - it is not legal to use the SAP cryptolib provided by SAP for SNC between SAP GUI and SAP servers, so if X.509 is the desired mechanism you need to purchase additional software from the company Peter works for to provide SAP GUI SNC-based SSO. I think instead, Kan might be using the free SAP-supplied SNC Kerberos library, which is why I asked him to confirm this in my last post. I doubt he is interested in buying any third-party software.
    > As to the performance discussion: first of all, yes, there will be a small performance impact if SNC is used (no matter which type or implementation), but from our experience with many actual SNC implementations, I can state that this is practically not relevant. It is not noticeable by users. There were never any performance discussions with customers. See also SAP Note 1043694.
    I agree with this - the performance impact is not noticed by users, but the system managers who look after the servers where SAP is installed, and the team responsible for the network need to be aware of any differences (if any) when SNC is turned on and when SNC is turned off. I think this is why Kan is asking these questions, not because he is concerned about users noticing any difference when they logon to SAP.
    > Just a first quick comment on certain statements above: Tim's arguments for proving his overall statement are not conclusive from my perspective. Nor do I think his overall statement itself is correct.
    The facts I mentioned are well-known facts, e.g. symmetric crypto is far better from a performance point of view than asymmetric. I know the examples I showed, which I found with a quick Google search, were not conclusive, but they were initial examples, not necessarily the best ones. This is why I specifically mentioned that if you search Google yourself you will see many more references where comparisons are made between Kerberos (symmetric) and PKI (asymmetric).
    > First of all, he only selects one aspect of performance - CPU impact of encryption algorithms.
    No, I didn't. Some of the examples I referred to also discuss other differences. I also mentioned other differences, such as memory and the protection level used when configuring SNC.
    > But for a true comparison, you'd have to look at all relevant aspects (latency, network overhead, ...).
    Yes, I agree. No doubts here.
    >Network performance overhead is usually worse with Kerberos than with PKI.
    This is not true. When SAP is using SNC, the GSS-API standard is used, so the only network communication involves SAP software sending a standard GSS token from the workstation to the SAP server, and this GSS token is often about the same size regardless of which mechanism is used. So any network performance differences are not related to the mechanism, but rather to the complexity of the cryptography used on each end (mostly on the server side).
    >Second, you need to look at the specific usage scenario. For example, the first report referenced by Tim is an analysis of different Token Profile mechanisms for WS-Security, for one specific implementation. This does not allow one to draw any conclusions for the SNC use case in general, and certainly not for a specific implementation. It does not take the overhead for the encryption of the message content into account. Third, Tim associates PKI exclusively with asymmetric encryption. Yes, it is well known that asymmetric algorithms are slower than symmetric ones, but it is also well known that the encryption of the message content (by far the majority of the data) happens with symmetric encryption algorithms in the PKI scenario. With PKI-based SNC, you can even select a symmetric algorithm and use a more performant one than the ones that Kerberos prescribes.
    Kerberos works with many different symmetric algorithms as well, so mentioning that the algorithm is selectable is not relevant to any comparison.
    > To summarize, I will try and collect facts that will support the opposite point of view. From our practical experience, the performance overhead is not relevant, and criteria like consistency with SAPcryptolib, strength of security, ease of administration, choice of authentication and encryption mechanism, etc. are much more important.
    >
    > Peter

  • Performance impact on oracle 11g database by audit enable

    Hi All,
    Shall we enable auditing on some Siebel DB tables like S_PARTY, S_CONTACTS, S_ORDER, S_QUOTE, S_ORG_EXT?
    We need to see who deleted account records from the Oracle tables manually, since auditing is not currently enabled.
    We have given the delete privilege to all users, as required by the Siebel application.
    So is it a good idea to enable auditing on these selected tables, especially in Siebel, or is there a performance impact on the database?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for HPUX: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    Hello,
    OK, do it, and generate AWR reports to see how performance is impacted. Remember, auditing just some tables is not a big deal, but auditing everything is a problem; that is why fine-grained auditing exists. Please also remember to purge the audit records regularly, because otherwise auditing becomes a space problem if you have many deletes, which should not happen in your case.
    Kind regards
    Mohamed
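
    A minimal sketch of standard auditing on just those tables - the SIEBEL schema owner is an assumption, and AUDIT_TRAIL is a static parameter, so enabling it requires a restart:

    -- requires e.g. AUDIT_TRAIL=DB (static parameter, restart needed)
    audit delete on siebel.s_party    by access;
    audit delete on siebel.s_contacts by access;
    audit delete on siebel.s_order    by access;
    audit delete on siebel.s_quote    by access;
    audit delete on siebel.s_org_ext  by access;
    -- deletes then show up in the audit trail:
    select username, obj_name, action_name, timestamp
    from   dba_audit_trail
    where  action_name = 'DELETE';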

  • Queries Performance impact

    Hi Team,
    We have a few queries which were running fine until last week, but for the past 3 days these queries have been facing severe performance issues and timeout dumps in the back-end.
    For some selections they run long, for some they execute quickly, and for some they time out.
    We did a complete data rebuild for the queries' connected data targets (data rebuild from the source) 3 days ago, after which the query performance issues started.
    No changes were made to the queries or objects in the last 2 months.
    Data Flow - Query -> MultiProvider -> InfoSet -> InfoCube -> DSO -> DataSource (DB Connect).
    Note:
    In the query we have nested aggregation to handle the result rows, but again, no changes to it in the past 2 months.
    We have loaded the data in one single request at the InfoCube level.
    I mean some 2 million records with different plants in one single request - does that have a performance impact when reading the data?
    Can anyone please throw light on the possible cause for the performance issue?
    Thanks
    Regards
    San

    Hi San,
    As you said that your performance issues started only after you completely reloaded the data, can you please tell us whether you are using a BIA for reporting?
    If not, can you please delete the DB statistics for those InfoCubes and then re-create the DB statistics for the same.
    Also, since you completely rebuilt the data (which means drop and reload), your PSA temp space or your temporary file space might be completely full. Ask your Basis team to check the space in the tables.
    Regards,
    Rajesh
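
    Assuming the underlying BW database is Oracle, Rajesh's statistics suggestion corresponds at the database level to a DBMS_STATS delete-and-regather on the cube's fact table; a sketch with a made-up schema owner and fact table name:

    begin
      dbms_stats.delete_table_stats(ownname => 'SAPR3', tabname => '/BIC/FZSALES');
      dbms_stats.gather_table_stats(ownname => 'SAPR3', tabname => '/BIC/FZSALES',
                                    cascade => true);  -- include the cube's indexes
    end;
    /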

  • How to handle Integrated Configuration performance impact on AAE/Java AS

    Hi there,
    Recently I moved a configuration scenario from the standard flow involving both the ABAP and Java stacks to Integrated Configuration. Undoubtedly, this will increase the load on the AAE/Java stack. However, do you have a link to some clear (official, even better) guidelines on what configuration changes should be made on the Java side in order to handle the performance impact of such a transition?
    Best Regards,
    Lalo

    Hi Lalo,
    In fact, using AAE generates no traffic in the ABAP stack at all (it is omitted when processing a message), while the traffic in the Java stack should be lower than for a normal scenario. The performance should be noticeably better, thanks to the smaller number of persistence steps and no costly HTTP connections between the stacks. For more details, please refer to this document:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7
    Important quotation from this document:
    Since the Integration Engine is bypassed for local message processing in the AAE, the resource consumption both in memory and CPU is lower. This leads to higher message throughput, and faster response times which especially is important for synchronous scenarios.
    Moreover, have a look at this document, especially its beginning, for details about the architecture of AAE processing:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/70066f78-7794-2c10-2e8c-cb967cef407b
    Hope this helps,
    Greg

  • KMS licensing and performance impacts of Office 2010 OSPPSVC in a Remote Desktop Services Environment?

    I have been directed from the Office Install/Upgrade/Activate forum to ask my question in this forum:
    We have Office 2010 deployed in our 2008 R2 RDS environment across hundreds of servers. We have multiple (failover) KMS servers, and there have been no issues with Office licensing. We have noticed that Event ID 1003 "The Software Protection service has completed licensing status check" comes up repeatedly in the application logs on the RDS servers, presumably after the end user launches an Office application. I understand this is considered normal, from what I have read and researched so far.
    However, we are attempting to understand how this activity affects performance in an RDS environment, where multiple users may be launching Office applications at any given time on the server. I have seen posts suggesting that the OSPP service be stopped when inactive, but that would most likely never happen on an active RDS server with users launching Outlook and other Office apps throughout the day.
    Can someone give any insight into the need to mitigate any of this activity involved with Office KMS "licensing status checking", specifically in an RDS environment? If we're missing any performance gains by dialing this service back, then I would like to address it. If this behavior is by design, and in most cases benign (non-performance-impacting), then I would like to know that also.

    Haven't heard from anyone on this.    ??

  • Isolation level and performance impact?

    Hi
    I'm new to BDB JE and building some prototypes to evaluate it.
    Given a simple use case of storing the key/value pair <String,List<Event>>, mapping a user to his/her list of events, in the DB. New events are added for the user; this happens (although fairly rarely) concurrently.
    Using Serializable isolation will prevent any corruption of the list of events, since the events are effectively added serially for the user. I was wondering:
    1. if there are any lesser levels of isolation that would still be adequate
    2. using Serializable isolation, is there a performance impact on updating users non concurrently (ie there's no lock contention since for the majority of cases concurrent updates won't happen) vs the default isolation level?
    3. building on 2. is there performance impact (other than obtaining and releasing locks) on using transactions with X isolation during updates of existing entries if there are no lock contention (ie, no concurrent updates) vs not using transactions at all?
    Thanks!
    Peter

    Have you seen this section of the Getting Started Guide on isolation levels in JE? http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html
    Our default is Repeatable Read, and that could be sufficient for your application depending on your access patterns, and the semantic sense of the items in your list. I think you're saying that the data portion of a record is the list of events itself. With RepeatableRead, you'll always see only committed data, and retrieving that record from a JE database will always return a consistent view of a given list. See http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html#serializable for an explanation of what additional guarantee you get with Serializable.
    2. using Serializable isolation, is there a performance impact on updating users non concurrently (ie there's no lock contention since for the majority of cases concurrent updates won't happen) vs the default isolation level?

    Yes, there is an additional cost. When using Serializable isolation, additional locks are taken on adjacent data records. In addition to the cost of acquiring the lock (which would be low in a non-contention case), there may be additional I/O needed to fetch adjacent data records.
    3. building on 2. is there performance impact (other than obtaining and releasing locks) on using transactions with X isolation during updates of existing entries if there are no lock contention (ie, no concurrent updates) vs not using transactions at all?

    In (2) we compared the cost of Serializable to Repeatable Read. In (3), we're comparing the cost of non-transactional access to the default Repeatable Read transaction.
    Non-transactional is always a bit cheaper, even if there is no lock contention. On top of the cost of acquiring the locks, transactional operations use more memory and disk space, and execute some transaction setup and teardown code. If there are concurrent operations, even if there is no contention on a given lock, there can be some stress on the lock table latches and transaction tables. That said, if your application is I/O bound, the CPU differences between non-txnal and txnal operations become more of a secondary factor. If you're I/O bound, the memory and disk space overhead does matter, because the cache is used more efficiently with non-txnal operations.
    Regards,
    Linda

  • CCMS implementation performance impact

    Dears
    I'm looking for information on the performance impact of a CCMS implementation:
    the impact on Solution Manager, and the impact on the managed SAP systems. I opened a customer message, but SAP came back saying there is no sizing documentation for CCMS.
    The purpose is to add a lot of managed SAP systems into SAP Solution Manager and enable CCMS + IT Performance Reporting. I would like to know how much impact the RFC access causes.
    If anyone has done a study on this or has information on this please share.
    Kind regards
    Tom

    @Mauricio Yes, I do understand the principle behind it. From a logical point of view it should improve performance, but then again tests should have been run to prove the point.
    For example: I'm involved in the pilot program for SAP JVM 4.1, and while SAP says the performance should be the same or better, we all know SAP doesn't have every combination of OS/DB/SAP system running. It's the same principle: from a logical point of view it should be the same or better, but one should do some performance tests to verify whether that's the case.
    @Augusto No problem on the misunderstanding; it shows my question wasn't that great and I should have provided more information.
    The point is that we have a lot of managed SAP systems to add, and apparently SAP doesn't have that many customers with a big number of managed SAP systems in Solution Manager. The performance impact of a CCMS implementation is negligible if you only have a few managed SAP systems. What is a 3% CPU rise in that case? It shouldn't be a problem, but now imagine you put in thirty times as many SAP systems; then it can become a problem if you don't foresee additional server resources.
    A lot of those scenarios seem to have been tested with only a few managed SAP systems; doing a full landscape fetch through transaction SMSY takes a long time, for example. I do know that in Solution Manager 7.1 there is another component between the SLD and SMSY which can synchronize content faster to avoid long runtimes.
    Another example was the Landscape Verification tool 1.0, which I reviewed in a blog. It didn't have a possibility to refresh single systems (I haven't checked the latest version), but that is a problem if you have many managed SAP systems and you want to check just one. In the video demos it flashes and is very fast because there are only five managed SAP systems.
    I'm looking forward to Solution Manager 7.1, but these kinds of performance impacts should really be known. Doing a small implementation is no problem, but customers are really interested in application lifecycle management and so on, and are starting to generate demand for a lot of different scenarios. Sizing is not that hard in a small environment, but once the environments become larger and a lot of managed SAP systems are involved, the issue I see with CCMS, for example, arises and it becomes very hard to size properly. Perhaps it could be integrated into the Quick Sizer (number of managed SAP systems and enablement/use of certain scenarios).
    I do know for some scenarios there is sizing documentation available, diagnostics being one of those.
