Any performance impact on satellite systems?

Hello,
Our productive Solution Manager is set up to connect to all the ECC, XI & R/3 systems as the central monitoring system.
So far we only use Solman for monitoring & EWA; not even BPM is configured yet.
In SMSY, it is configured to use our XI SLD to automatically collect all the database, server & system data for the Solman system landscape.
My question is: if there is downtime on the Solman system, will it have any performance impact (slow transactions) on the satellite systems, especially on my XI system (as Solman accesses the XI SLD)?
The reason I post this question here is that we recently had downtime on Solution Manager due to an SP upgrade. During the downtime, we noticed that thousands of IDoc processing messages to the R/3 system were stuck in XI for one of our applications. Users started to complain, and we tried to find the root cause in XI and in R/3, involving our Basis guy and also the XI admins, but no one was able to find out why.
By the time PSM (our productive Solution Manager) was back up (after 2-3 hours), ALL the stuck messages in XI cleared/processed at once. And until now, no one has been able to explain why. They suspect that the Solution Manager downtime had an impact on those satellite systems.
Solman is set up to use PXI's SLD.
Does anyone have feedback on this: is it just a coincidence, or not?
Please share if you have knowledge on this topic.
Thank you

Hi,
As per your description, I would say it is pure coincidence.
When you configure Solution Manager for system monitoring and/or EWA, all Solution Manager does is collect the relevant data from the satellite systems. Just this.
So, when Solution Manager is not available, there should be no reason for any impact on the satellite systems' side.
Also, from your description, I would say your XI is getting SLD information from Solution Manager. This is the only reason I could think of for IDocs getting stuck in XI.
Is it possible to ask your XI guys to recheck which SLD XI is using?
Regards
Valdecir

Similar Messages

  • DBCC SHRINKFILE with NOTRUNCATE has any performance impact in log shipping?

    Hi All,
    To procure space, I was advised to use the below command on the primary database in log shipping. I just want
    to clarify whether it has any performance impact on the primary database in log shipping, and also whether it is a recommended practice to run the below command
    at regular intervals in case the log is using much of the drive's space. Please suggest on this. Thank you.
    "DBCC SHRINKFILE ('CommonDB_LoadTest_log', 2048, NOTRUNCATE)"
    Regards,
    Kalyan
    ----Learners Curiosity Never Ends----

    Hi Kalyan \ Shanky
    I was not clear in the linked conversation, so I'm adding something:
    As per http://msdn.microsoft.com/en-us//library/ms189493.aspx
    -----> TRUNCATEONLY is applicable only to data files.
    BUT
    As per http://technet.microsoft.com/en-us/library/ms190488.aspx
    TRUNCATEONLY affects the log file.
    I also tried it, and it does work.
    Now, TRUNCATEONLY: releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The file is shrunk only to the last allocated extent. target_percent is ignored if specified with TRUNCATEONLY.
    So:
    1. If I am removing unused space, it will not affect log shipping and no log chain will be broken.
    2. If you clear unused space, it will not touch existing data; no performance issue.
    3. If you clear space and then, due to other operations, the log file auto-grows, it will put unnecessary pressure on the database to allocate disk every time. So once you find the max growth of the log file, let it be, as it will anyhow grow to the same size again.
    4. Shrinking the log file is not recommended if it keeps reaching the same size again and again, unless you have a space crunch.
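    For reference, a minimal sketch of the TRUNCATEONLY variant discussed above (the logical file name is from the question; the database name is inferred from it, so adjust both; check free space before and after):
    -- check current log usage per database
    DBCC SQLPERF(LOGSPACE);
    USE CommonDB_LoadTest;
    -- release only the unused space at the end of the log file; no page movement
    DBCC SHRINKFILE ('CommonDB_LoadTest_log', TRUNCATEONLY);
    DBCC SQLPERF(LOGSPACE);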
    Thanks Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/
    Please click the Mark as answer button and vote as helpful
    if this reply solves your problem

  • Any negative impact on Customer system after switching on Delta...

    Hi
    Could you please let me know:
    1) In general, do clients switch on delta for data extraction or not?
    2) If they switch on delta, is there any negative impact on the customer system?
    Thanks...

    Hi Harpal,
    First of all, if you switch on delta there is no negative impact. The only thing is that you need to follow some precautions before you switch on delta.
    1) It is always a good idea to have a DSO before you load data to a cube in case of delta.
    2) When you are loading delta for the first time, you need to be careful with the delta initialization, because whatever initialization you do will remain constant for your future deltas.
    And the reason behind the delta mechanism is that as your system gets older, it becomes very difficult to extract full/all data on a daily basis. The extraction time will also increase and there are more chances of load failures.
    Regards,
    Durgesh.

  • Central Performance History from satellite systems

    Hi,
    I'm investigating the Central Performance History at the moment, but am having problems getting the statistics from the satellite systems into Solution Manager.
    I have our Solution Manager system set up as the CCMS CEN. I monitor about 10 SAP instances using CCMS and a number of non-SAP systems. CCMS works okay and issues alerts. I believe this means I have the necessary RFCs in place for communication between Solution Manager and the satellite systems to collect performance data.
    When I use RZ23N to review the data being collected, it only shows the Solution Manager system; it doesn't show any of the other SAP systems.
    How can I configure the other systems to send their performance stats to Solution Manager so I can report centrally?
    SolMan - NW 7.01SP4 on Windows 2003
    Satellite systems - NW 7.0
    I've read through the help pages, but they only seem to reference one system, rather than collecting from satellite systems.
    Thanks,
    Gareth

    Hi David,
    Thanks, I hadn't looked at the note.
    Section 2 - history of all connected systems - showed me what to do to collect the data. I had seen that in the setup but had not fully understood its implications. The note explained it much more clearly than the SAP help.
    Thanks again,
    Gareth

  • Will there be any performance impact

    Hi All,
    Currently I have a table employee with 1 million records (emp ID is the primary key). In a process, I want to insert a new employee ID, use it for the program, and finally delete it (simplifying changes in the current program). Every day this will amount to 100K transactions.
    I'm planning to commit only after the delete (i.e. insert -> make some updates -> delete the same row -> commit).
    Will these emp IDs be added to the index and cause a performance impact, even though I'm committing the transaction only after deleting the rows?
    database : oracle 10g.
    Thanks!!!

    If I understand you correctly, this sounds like a use case for a global temporary table (with the same structure as your employee table).
    As you insert, update and delete the same row within one single transaction (for the convenience of your code, I assume), those rows will only ever be visible to the session that (temporarily) inserts them into the table.
    The design you are suggesting has (at least) the following performance impact:
    1) it will inhibit concurrency
         - other sessions reading the table while transient rows are inserted and are being updated may have to clone some data buffers and apply UNDO to get read consistent clones of the buffers being modified.
         - you may cause buffer busy wait events as you modify the blocks belonging to your employee table while other sessions want to read the blocks affected by these modifications (the severity of this depends on how your 100K transactions are spread throughout the day and what activity runs on the database in parallel).
         - you will increase activity on the hash chain latches protecting the buffers of your employee table (the same applies to the severity as for the previous point).
    2) You increase the amount of REDO generated by your code. Using a global temporary table your 100K transactions will also generate some REDO, but significantly less.
    3) Using the global temporary table approach you don't need to delete the rows once you are done with your processing - you simply define your global temporary table as "ON COMMIT DELETE ROWS".
    4) You'll have to do all the work associated with the index maintenance to insert and delete the corresponding index entry (see my post from  Jun 24, 2013 8:16 PM)
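    For illustration, a minimal sketch of the global temporary table approach suggested above (the table and column names are assumptions; mirror the real structure of your employee table):
    CREATE GLOBAL TEMPORARY TABLE employee_gtt (
        emp_id   NUMBER PRIMARY KEY,
        emp_name VARCHAR2(100)   -- ...remaining columns as in employee
    ) ON COMMIT DELETE ROWS;

    INSERT INTO employee_gtt (emp_id, emp_name) VALUES (1000001, 'TEMP');
    UPDATE employee_gtt SET emp_name = 'TEMP2' WHERE emp_id = 1000001;
    COMMIT;   -- rows are removed automatically; no explicit DELETE needed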

  • CIF variants exist in system for last 4 years - Any negative impact ??

    Hi Experts,
    I observed that we have variants of CCR existing in the system for the last 4 years!!! Will they have any negative impact on the system in terms of performance?
    If yes, then do you suggest deleting the unnecessary logs?
    Thanks.
    Regards,
    Chandan

    Hi,
    We have not observed any measurable performance deterioration due to the number of variants over a period.
    Variants are stored in VARID database table.
    Please check how many entries are there in this table for the CIF-related programs.
    You may check with the Basis team if the table size is okay, and also find out the rate of increase.
    This can be a good housekeeping activity: delete obsolete variants older than, say, a few years.
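    As a quick gauge of the volume per report, something like this against the VARID table could help (plain SQL; the LIKE pattern is an assumption - adjust it to your CIF report names):
    SELECT report, COUNT(*) AS variant_count
    FROM   varid
    WHERE  report LIKE '/SAPAPO/%'
    GROUP  BY report
    ORDER  BY variant_count DESC;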
    Regards
    Datta

  • Index creation online - performance impact on database

    hi,
    I have oracle 11.1.0.7 database running on Linux as 3 node RAC.
    I have a huge table which has more than 255 columns and is about 400GB in size which is also highly fragmented because of constant DML activities.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database when creating an index online on a single column of table 'TBL' while applications are active against the same table? So basically my question is: will index creation on an object during DML operations on the same object have a performance impact on the database? And is there a major difference in database impact between creating the index online and offline?
    2. I tried to build an index on a column which has NULL value on this same table 'TBL' which has more than 255 columns and is about 400GB in size highly fragmented and has about 140 million rows.
    I requested the applications to be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a Pre-Prod database which has the exported and imported copy of the Prod data. So the pre-Prod is a highly de-fragmented copy of the Prod.
    When I created the same index on the same column (with NULLs) there, it only took 15 minutes to complete.
    I'm not sure why it took more than 6 hours on the highly fragmented Prod copy, compared to only 15 minutes on the highly defragmented Pre-Prod copy.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
    Regards
    Jonathan Lewis
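    For reference, the kind of online parallel build discussed above might look like this (a sketch; the table name is from the question, the column and index names are hypothetical):
    CREATE INDEX tbl_col1_idx ON tbl (col1)
        ONLINE        -- allows concurrent DML, with brief locks at the start and end of the build
        PARALLEL 4;   -- the degree the poster used
    ALTER INDEX tbl_col1_idx NOPARALLEL;   -- reset the degree once the build is done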

  • Performance Impact for the Application when using ADFLogger

    Hi All,
    I am very new to ADFLogger and I am going to implement it in my application. I went through Duncan Mills' articles and got a basic understanding.
    I have some questions to be clear.
    Is there any performance impact when using ADFLogger that slows the application down?
    Are there any best practices to follow with ADFLogger to minimize the negative impact (if it exists)?
    Thanks
    Dk

    Well, a call to a logger is a method call. So if you add a log message for every line of code, you'll see an impact.
    You can implement it in a way that you only write the log messages if the log level is set to a level which your logger writes (or lower). In this case the impact is like having an if statement, plus a method call if the if statement returns true.
    After this theory, here is my personal finding, as I use ADFLogger quite a lot. In production systems you turn the log level to WARNING or higher, so you will not see many log messages in the log. Only when a problem is reported do you set the log level to a lower value to get more output.
    I normally use the 'check log level before logging a message' and the 'just print the message' combined. When I know that a message is printed very often, I first check the level. If I assume or know that a message is only logged seldom, I just log it.
    I personally have not seen a negative impact this way.
    Timo

  • Goods Receipts & Issues without any financial impact - (No accounting Docs)

    Hi Experts,
    We have this new scenario to manage some goods which we do not own or manufacture in any of our plants, but only store in our warehouse for another company and release upon a shipment notification from them to one of our common customers. Furthermore, our own products against this customer's orders are combined with the non-owned products and shipped as a strategy to cut freight & extra transit cost.
    We intend to have visibility over these non-owned products, but do not want them to have any financial impact in our system as we receive and issue them in and out of inventory.
    I have been thinking of consignment, subcontracting, etc., but would like to get more ideas on how to do this.
    Your ideas will be appreciated.
    Thx,
    LAN.

    There is no corresponding goods issue movement for a 501 receipt.
    You can issue with any suitable goods issue movement, like goods issue to cost center (201), goods issue to scrap (551), or goods issue to outbound delivery (601).
    Hi Jurgen,
    As there is no direct matching GI for 501 receipts, other relevant GI movement types could be used, but in my case I get errors when trying 201 (Update control of Movement Type is incorrect (entry 201_X_L), msg. # M7226). I have tried to create a new movement type by copying from 201, but the system defaults the Movement Indicator field and won't let me add an L on the new custom type.
    When I tried 601, it also referenced the cost & profit centers associated with the plant in the material documents, and we do not want any financial relevancy here. Could you suggest a movement type that could be ideal here?
    Thanks,
    LN.

  • Performance impact using nested tables and object

    Hi,
    I am using Oracle 11g.
    While creating a package, I am using a lot of nested tables created based on objects, which will be passed between multiple functions in the package.
    Will it have any performance impact, since all the data is stored in memory?
    How can I measure the performance impact when the data grows?
    Regards,
    Oracle User
    Edited by: user9080289 on Jun 30, 2011 6:07 AM
    Edited by: user9080289 on Jun 30, 2011 6:42 AM

    user9080289 wrote:
    While creating a package, iam using lot of nested tables created based on objects which will be passed between multiple functions in the package..
    Not the best of ideas in general, in PL/SQL. This is not client code that can lay sole claim to most of the memory. It is server code, and one of many server processes that need to share the available resources. So capitalism is fine on a client, but you need socialism on the server? {noformat} ;-) {noformat}
    Will it have any performance impact since all the data is stored in the memory.
    Interestingly, yes. Usually crunching data in memory is better. In this case it may not be so. The memory used is the most expensive memory Oracle can use - the PGA. Private process memory. This means each process copy running that code will need lots of memory.
    If you're not passing the data structures by reference, it means even bigger demands on memory, as the data structure needs to be copied into the call stack and duplicated.
    The worst-case scenario is that such code consumes so much free server memory, and makes such huge demands on having that in physical memory, that it trashes memory management, as the swap daemons are unable to keep up with the demand of swapping virtual memory pages into and out of memory. Most CPU time is spent by the swap daemons.
    I have seen servers crash due to this. I have seen a single PL/SQL process causing this.
    How can i measure the performance impact when the data grows ?
    Well, you need to look at the impact of your code on PGA memory. It is not SQL performance or I/O performance that is a factor - just how much private process memory your code needs in order to execute.
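    For instance, a simple way to watch your own session's private memory while testing such code (plain SQL against the v$ views; requires SELECT access to them):
    SELECT n.name, ROUND(s.value / 1024 / 1024, 1) AS mb
    FROM   v$mystat s
           JOIN v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name IN ('session pga memory', 'session pga memory max');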

  • Performance Impact with OR concatenation / Inlist Iterator

    Hello guys,
    is there any performance impact with using OR concatenations or IN-lists?
    The function of both is the "same":
    1) Concatenation (OR-processing)
    SELECT * FROM emp WHERE mgr# = 1 OR job = 'YOURS';
    - Similar to a query rewrite into 2 separate queries
    - Which are then 'concatenated'
    2) Inlist Iterator
    SELECT * FROM dept WHERE d# IN (10, 20, 30);
    - Iteration over an enumerated value list
    - Every value executed separately
    - Same as a concatenation of 3 "OR-ed" values
    So I want to know if there is any performance impact when using IN-lists instead of OR concatenations.
    Thanks and Regards
    Stefan

    The note is very misleading and far from complete, but there is one critical point of difference that you need to observe. It's talking about using a tablescan to deal with an IN-list (and that's NOT "in-list iteration"); my comments start by saying "if there is a suitable indexed access path."
    The note, by the way, describes a transformation to a UNION ALL - clearly that would be inefficient if there were no indexed access path. (Given the choice between one tablescan and several consecutive tablescans, which option would you choose ?).
    The note, in effect, is just about a slightly more subtle version of "why isn't oracle using my index". For "shorter" lists you might get an indexed iteration, for "longer" lists you might get a tablescan.
    Remember, Metalink is not perfect; most of it is just written by ordinary people who learned about Oracle in the normal fashion.
    Quick example to demonstrate the difference between concatenation and iteration:
    drop table t1;
    create table t1 as
    select
         rownum     id,
         rownum     n1,
         rpad('x',100)     padding
    from
         all_objects
    where
         rownum <= 10000;
    create index t1_i1 on t1(id);
    execute dbms_stats.gather_table_stats(user,'t1')
    set autotrace traceonly explain
    select
         /*+ use_concat(t1) */
         n1
    from
         t1
    where
         id in (10,20,30,40,50,60,70,80,90,100)
    set autotrace off
    The execution plan I got from 8.1.7.4 was as follows - showing the transformation to a UNION ALL - this is concatenation, and it required 10 query block optimisations (which were all done three times):
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=20 Card=10 Bytes=80)
       1    0   CONCATENATION
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       5    4       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       6    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       7    6       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       8    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       9    8       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      10    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      11   10       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      12    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      13   12       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      14    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      15   14       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      16    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      17   16       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      18    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      19   18       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      20    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      21   20       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
    This is the execution plan I got from 9.2.0.8, which doesn't transform to the UNION ALL, and only needs to optimise one query block.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=80)
       1    0   INLIST ITERATOR
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=3 Card=10 Bytes=80)
        3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=2 Card=10)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
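    For completeness, the opposite effect to the use_concat hint shown above can be requested with the no_expand hint, which discourages the rewrite to concatenation and leaves the IN-list to the iterator (a sketch against the same t1 table):
    select /*+ no_expand */ n1 from t1 where id in (10, 20, 30);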

  • Performance impact on oracle 11g database by audit enable

    Hi All,
    Shall we enable auditing on some Siebel DB tables like S_PARTY, S_CONTACTS, S_ORDER, S_QUOTE and S_ORG_EXT?
    We need to see who deleted account records from the Oracle tables manually, since auditing is not enabled.
    We have given the delete privilege to all users, as required by the Siebel application.
    So is it a good idea to enable auditing on these selected tables, especially in Siebel, or is there any performance impact on the database?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for HPUX: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    Hello,
    OK, do it and generate an AWR report to see how performance is impacted. Remember, auditing just some tables is not a big matter; auditing everything is the problem, which is why fine-grained auditing exists. Please also remember to clean the audit records regularly, because otherwise auditing will just become a space problem in case you have many deletes - which should not happen in your case.
    Kind regards
    Mohamed
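    For reference, a minimal sketch of standard auditing on one of the tables mentioned (the schema name is an assumption, and the AUDIT_TRAIL parameter must already be set, e.g. to DB,EXTENDED):
    AUDIT DELETE ON siebel.s_party BY ACCESS;   -- repeat for the other tables
    -- later, review who deleted what:
    SELECT username, timestamp, action_name, obj_name
    FROM   dba_audit_trail
    WHERE  obj_name = 'S_PARTY' AND action_name = 'DELETE';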

  • Performance impact by turning on Document attribute

    Hello folks,
    I have a couple of questions on turning on the document attribute for InfoObjects and its performance impact. It would be great if you could answer them.
    i) Is there any performance impact from turning on the document attribute and letting 250 users enter comments/annotations? From my understanding, these comments are stored in the BDSPHI010 table and do not increase/decrease the data-target size. But during query execution, does it read the comments and cache them, or are they read only when users click on the comments icon?
    ii) If there is a performance impact, can you provide some insights?
    iii) As far as authorizations are concerned, users should have access to S_BDS_D and S_BDS_DS to enter/delete comments - is this true?
    Thanks in advance for your insights.


  • Any standard RFC there to communicate between satellite system and Solman?

    Hi Gurus,
    Is there any standard RFC to communicate between a satellite system and Solman?
    Thanks in advance.
    Regs,
    BBR.

    The following four RFC destinations are created to initiate the communication between the satellite system and the Solman system:
    _READ, _TRUSTED, _BACK, _TMW
    But you have to read the links and the notes too, to understand their names and their meaning:
    http://help.sap.com/saphelp_sm310/helpdata/en/48/647e3ddf01910fe10000000a114084/content.htm
    http://help.sap.com/saphelp_sm310/helpdata/en/b3/dd773dd1210968e10000000a114084/frameset.htm
    http://www.slideshare.net/wlacaze/sap-solman-instguide-initial-customizing-presentation

  • Solution Manager EWA - cannot create session in satellite system

    "Hi,
    I want to configure EWA self service using Solman 4.0. I succesfully did the following :
    a. Maintained SMSY and create the required trusted RFC connections from SM to satellite system. All connection and authorization passed in SM59. I used SAP_ALL/SDCCN_ALL role and assigned objects S_RFC*.
    b. Assigned the system to a logical system.
    c. Created the a new solution and activate "Solutions Monitoring > Earlywatch Alert"
    d. Activated and maintained required RFC in SDCCN in satallite system.
    My problem is that the create EWA alert request (Red Flag with a specific session number ) coulnd be pass to the satelitte system even though all the trusted RFC and authorization is set. When i execute the SESSION_REFRESH in the satellite system, the session is not created.
    Did I miss out any steps. Can any one share any help ?
    FYI, there is no connection to SAPOSS yet, so i did not manage to run RTCCTOOL completed, but i doubt this is required for EWA self service."
    I  have the same problem as Solution Manager EWA - cannot create session in satellite system
    tried all of solutions,  but it does not help... Created the CM (high) for SAP, but get 1 response for 1 week from them:(

    Dear Sapbcer,
    Have you tried the following option:
    Execute SMSY and, from the server entry, execute the "Read System Data Remote" option in change mode.
    Save the captured data and then try the Refresh Session task from SDCCN in the satellite system. Do select the RFC for Solution Manager while performing this task.
    Hope this helps.
    Regards
    Amit
