Health Check - Best Practice

Other than RSPCM - Process Chain Monitor (all our loads run in a process chain), is there an additional transaction I should be using to make sure that all our Infoproviders are being loaded without error?
Is there really any reason to check that the PSA statuses are green, check short dumps in ST22 and the system log in SM21?

Wardell.,
Run regular checks on the  infocube indices using RSRV and also check the partitioning and is any indices have degenerated...
Also check the health of Aggregates and infocube designs...
Arun
Assign points if useful

Similar Messages

  • Authority Check - Best Practice - Optimum Way

    Hi Experts,
    I want to use authority check in my reports. The requirement is to filter data on the selection screen and execute the query. Error messages are not to be thrown because, a user will find it difficult to enter all the document types/company codes/sales areas etc authorized and remove the ones not authorized from the range.
    I am planning to create range tables and populate it with the authorized values and use it in the select queries.
    I have two concerns:
    1. I will have to build range tables based on the values authorized. This will take some time, keeping in mind that append is an expensive statement.
    2. What if the range table becomes big enough to give me a dump in the select query in some scenario. (What if scenario? Its a rare possibility that some field like this also needs to be authorized)
    What is the best practice or rule of the thumb that you have figured out.
    Thanks,
    Abdullah Ismail.

    Are they asking you to check the authorisations for each of the following?
    1.     Sales Organization
    2. Distribution Channel
    3. Division
    4. Sales Group
    5. Sales Office
    6. Sales Document Type
    7. Sales Country
    8. Material Group(Brands)
    If so that is completely over engineered and good luck with that.  Surely you only need to check at one level of the sales structure, the lowest level I would guess.  Your auths team should be able to guide you here and I cannot imagine they would want that level of auths as it would be a nightmare for them to build it. I suppose you might want one on material group as well.
    Therefore they auths team or functional consultants will need to tell you at what level you are checking for each report, there will only be a small number at each level, (think you will struggle to get near the 12,000 Rob points out would cause an issue with a range) of the sales structure so I would use a range, you wonu2019t have that many appends and it wonu2019t add much to the time of the report.  While for all entries is great you can also use the range where the report may have already used for all entries on a select and better not to have to rebuild the whole report.
    Also I would do the auths check first up and make the field mandatory if they really want it nice and tight so the user has to choose, you can use a PID to make it a bit more friendly.
    If you know the setup is the same each time you could use a standard include and subroutine, or ABAP objects would probably be the best route with a set of standard methods to call.
    Hope that helps,
    Tim

  • Service Model, Health Model, Best Practice (SML)

    Hello
    I am trying to explain to semi-technical people whom do not know SCOM the principle of SCOM when it come to monitoring concepts best practice.
    Therefore what I am looking for please is a set of slides/short video/Q&A etc. which explains the concepts reasoning behind taking the time to workout a Service Model and Health Model at the 'start' of a project (e.g. before installing BusinessAppA)
    so it can be problem monitored and alerts on etc.
    Basically I am trying to get the architects/project managers to think about what I need as a SCOM engineer so I an discover and monitor etc. the Application/System they are proposing to install, rather then picking up this after the event
    Does anyone know of any good resources to explain these concepts to get the message across.
    Thanks All
    AAnotherUser__
    AAnotherUser__

    Hi,
    Please refer to the links below:
    Service Model
    http://technet.microsoft.com/en-us/library/ee957038.aspx
    Health Model Introduction
    http://channel9.msdn.com/Series/System-Center-2012-R2-Operations-Manager-Management-Packs/Mod15
    Health Model
    http://technet.microsoft.com/en-us/library/ff381324.aspx
    We
    are trying to better understand customer views on social support experience, so your participation in this
    interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • ACE http health probes - best practice for interval and passdetect interval?

    Hi,
    Is there a recommended standard for http health probes in terms of interval and passdetect interval timings, i.e. should the passdetect interval always be less than the interval or visa versa? Can a http probe be 'mis-configured', i.e. return a 'false positive' by configuring an interval timeout thats 'incompatible' with the device it's polling?
    I have a http probe for a serverfarm consisting of two Apache http servers and get intermittent 'server reply timeout' probe failures. I'm keen to ensure that the configuration of the probe isn't at fault so I can be confident that a failed probe indicates a problem with the server and not my configuration.
    The probe is currently configured as below:-
    probe http http-apache
      interval 30
      passdetect interval 15
      passdetect count 6
      request method get url /cs/images/ACE.html
      expect status 200 304
    Any advice on the subject woud be gratefully received.
    thanks
    Matthew

    Hi Gilles,
    Thanks for the advice. In another dicussion (found here https://supportforums.cisco.com/message/462397#462397) a poster has stated that:-
    "(The) "Probe interval" should always be less then (open+recieve) timeout  value. Default open & receive timeouts are 10 seconds."
    Are you able to advise on whether the above is correct and if so, why? I currently have an interval value of 30 that obviously goes against the advice above (which I've interpretted to mean that if you leave the open & receive timeouts at their default settings your probe interval should be less than 20 seconds?).
    thanks
    Matthew

  • Sanity check - Best practice for network configuration

    Basic configuration is this:
    Physical server has two interfaces. Two different networks. Generically we reference them as Front End and Back End
    Back End is dedicated to storage (nfs from netapp or Sun 74xx)
    xenbr0 associated with eth0 (front end)
    xenbr1 associated with eth1 (back end)
    For each Virtual Machine, have been creating two interfaces; vif0 with xenbr0 and vif1 with xenbr1
    The desire is to have all disk I/O use the vif1 -> xenbr1 -> eth1 path. So far it seems to be working that way.
    Questioning the setup because we have seen this sort of error when shutting down a VM
    nfs: server axqntap1 not responding, still trying
    In case it matters, mount options inside the vm are: rw,bg,hard,intr,timeo=600,proto=tcp,vers=3,rsize=32768,wsize=32768
    Any advice, ideas? Are we all wrong with the bridge config? Mount options?
    Thank you - Randall

    Shutdown applications within the guest.
    Either power off from Oracle VM Manager or 'xm shutdown xxx' from the command line
    It is possible one or more files could be open when the shutdown is initiated.
    Have found at least one case of misconfigured IP which would have resulted in the disk access being via the 'Front End' interface rather than the Back End.
    Thanks

  • Best practice on monitoring Endeca health / defining outage

    (This is a double post from the Endeca Experience Management forum)
    I am looking for best practice on how to define Endeca service outage and monitor the health of the system. I understand this depends on your user requirements and it may vary from customer to customer. Specifically what criteria do you use to notify your engineer there is a problem? We have our load balancers pinging dgraphs on an interval. However the ping operation is not sufficient in our use case. We are also experimenting running a "low cost" query to the dgraphs on an interval and using some query latency thresholds to determine outage. I want to hear from people on the field running large commercial web site about your best practice of monitoring/notifying health of the system.
    Thanks.

    The performance metric should help to analyse the query and metrics for fine tuning.
    Here are few best practices:
    1. Reduce the number of components per page
    2. Avoid complex LQL queries
    3. Keep the LQL threshold small
    4. Display the minimum number of columns needed

  • Best practice for checking out a file

    Hello,
    What is the SAP best practice to check out a file? (DTR -> EDit or DTR -> Edit Exclusive?)
    What are pros and cons of checking out a file exclusively?
    Thanks
    MLS

    Thanks Pascal.
    Also, I think if a developer checks out exclusively, makes changes and leaves the company without checking in, the only way is to revert those files in which case all his changes will be gone.

  • What is SAP best practice for SU24 "no check" indicators

    Hi Experts,
    Let's say during testing if we find a t-code needs some authorization objects for end to end execution, and those objects are maintained as "CHECK" "NO" in USOBX_C.
    Please suggest the best practice...
    Should we change the proposal in USOBX_C to "CHECK" "YES" and populate specific values in USOBT_C or we can insert those authorization objects manually in the roles without changing proposals??
    As far as I know, it makes upgrade activities more difficult when there is more customization in these tables....However manual insert of auth objects are not impacted during upgrade and also any SU24 maintenance of custom t-codes.
    Thanks,
    Deb
    Edited by: Julius Bussche on Feb 8, 2012 10:31 AM
    Subject title made more meaingful.

    Normally there is some thought which went into the "no check" flag so you should put some thought into it before testing it to turn it on. It might force you to grant access for that transaction context but have the implication that in other transactions the user can access too much again.
    There are however some authorization objects which were added with support packs with this "no check" in the hope of adding the authority-checks into the code to make them possible, but deliver them as backward compatible with existing roles. This is a work-around for not being able to deliver it as deactivated globally in transaction AUTH_SWITCH_OBJECTS.
    You can find the candidates by sorting the + 12k entries in USOBX_C by object name and finding those which were dealt with in a very liberal way. Check the docs and OSS notes for them.
    Other than that I can only say that a best practice which I believe in is to remove some of the odd things in SU24 immediately after the installation. This means that you later (and as required) only need to add proposals and checks. That is much less error-prone than removing proposals and checks again!
    Cheers,
    Julius

  • Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

    Could someone point me to a document for best practices for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping their is an updated one that I'm just missing.
    My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation.  I have the Health Analyzer rules enabled for index fragmentation and they run daily, but I've never received an alert despite the majority
    of our databases having greater than 40% fragmentation and some are even above 95%.  
    Obviously it has our attention now and we want to get this addressed.  My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was
    in 2007. 
    Thanks,
    Troy

    It depends. Here are the rules for that job:
    Sampled mode
    Page count >24 and avg fragmentation in percent >5
    Or
    Page count >8 avg page space used in percent < fill_factor * 0.9 (Fill Factor in SharePoint 2013 varies from 80 to 100 depending on the index, it is important not to adjust index fill factors)
    I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a Full Scan, instead of Sampled. Once the Full Scan defrag completed, the timer job started handling the index fragmentation automatically.
    Trevor Seward
    Follow or contact me at...
    &nbsp&nbsp
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • What is the best practice for checking if CR2008 runtime is installed?

    I've created our crystal report functions in a .Net exe and it is launched from a Delphi application.
    I need to check if the .Net runtime for CR 2008 is installed on a machine at startup of the delphi application.
    Right now I am checking the registry at
    HKEY_LOCAL_MACHINE\Software\Sap BusinessObjects\Crystal Reports For .Net Framework 4.0\Crystal Reports
    and checking the value for CRRuntime32Version.
    This works great if a user is an admin on the machine, however I'm assuming due to group policies and restrictions this registry is not able to be read for some reason. My prompt continues to show up after installation because it can not get the value of the registry.
    So before I get winded and ramble on, what is best practice to test to see if the runtime has been installed? Is there a particular section of the registry I can check? My next thought is to check for the runtime directory but that might not be efficient as I would hope.

    Registry and folder is about all I can think of. Problem is, you're never guaranteed that something was not installed and then uninstalled and "stuff" like folders are getting left behind (a common occurrence from my experience...). I've also seen registry entries left behind. Perhaps looking for crpe32.dll in the c:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\win32_x86 folder will be best. I've never seen the crpe32.dll orphaned after an uninstall.
    Other than that, when you run the app and there is not runtime, you will get an error and you could trap that, possibly launch the installer... naaa - too goofy...
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup
    Got Enhancement ideas? Try the [SAP Idea Place|https://ideas.sap.com/community/products_and_solutions/crystalreports]

  • Best Practice - Check/React to Conditions Before Sales Order Save

    I am saving, for discussionu2019s sake, what is essentially a sales order. 
    Component: /ALMGT/BT115H_SLSO
    View: SOHOverView
    When the 'Save' is pushed, we want to do some error checking/handling  u2013 very simple stuff, like
    If attr1 is initial and  attr2 > 0
        Issue message
       set attr_status to u2018Situation1u2019 
    endif .
    It seems like I can just redefine the eh_onsave() method, but I wanted to see if there is a more elegant or best practice methodology for doing this.
    Thanks...
    ...Mike
    Weu2019re running  FRM ( [Fundraising  Management|http://www.sap.com/services/portfolio/customdev/brochures/index.epx] ) u2013 SAP Custom Development u2013 on top of CRM 7.0

    Mike,
    Good to see you here again.  The best approach is actually to implement this logic below the UI layer in the oner order layer instead.  Based on the component it looks like you are using a business transaction, which means you can use the BADI ORDER_SAVE to trigger the error.
    Do a search in the CRM General Forum on how to use this.  Now I'm not familiar with that solution, but the techniques for manipulating transactions and using the BADI's available should be the same if the data is being saved as business transactions inside of CRM.
    Take care,
    Stephen

  • Best Practice Analyzer Results: Health Report Error EDS AlertValue Unhealthy.

    I ran the Microsoft Office 365 Best Practices Analyzer Beta 1.0 and I get the following error:
    C:\windows\system32>Get-healthreport -rollupgroup
    servername.. then I got lots of results.. I narrow it to the following!
    PSComputerName          : kaneex13.kanecpas.local
    RunspaceId              : 85204a86-04f3-4779-9cad-3092ebfe3435
    PSShowComputerName      : False
    Server                  : kaneex13.kanecpas.local
    CurrentHealthSetState   : NotApplicable
    Name                    : MaintenanceFailureMonitor.EDS
    TargetResource          :
    HealthSetName           : EDS
    HealthGroupName         : ServiceComponents
    AlertValue              : Unhealthy
    FirstAlertObservedTime  : 2/6/2015 9:12:57 AM
    Description             :
    IsHaImpacting           : False
    RecurranceInterval      : 300
    DefinitionCreatedTime   : 2/6/2015 8:58:03 AM
    HealthSetDescription    :
    ServerComponentName     : None
    LastTransitionTime      : 2/6/2015 9:12:57 AM
    LastExecutionTime       : 2/6/2015 12:38:00 PM
    LastExecutionResult     : Succeeded
    ResultId                : 57636932
    WorkItemId              : 94
    IsStale                 : False
    Error                   :
    Exception               :
    IsNotified              : False
    LastFailedProbeId       : -301690410
    LastFailedProbeResultId : 351526122
    ServicePriority         : 0
    Identity                : EDS\MaintenanceFailureMonitor.EDS\
    IsValid                 : True
    ObjectState             : New
    I try to fix it and this is my findings!!
    https://technet.microsoft.com/en-us/library/ms.exch.scom.eds(v=exchg.150).aspx
    I'm running Exchange 2013 on Server 2012

    Hi,
    Based on my research, it’s a known issue that there will be 1006 error in the application log after we install a new Exchange 2013 server:
    http://social.technet.microsoft.com/Forums/en-US/5ab1a91a-ccd4-49fb-a451-159592fc85d4/msexchangediagnostics-error-1006-logical-to-physical-size-ratio-free-megabytes?forum=exchangesvradmin
    And it can be resolved by setting the value of DriveSpaceTrigger to false:
    http://windowsitpro.com/blog/case-erroneous-disk-space-checker
    In your case, we can firstly try to restart the MS Exchange Diagnostics Service.
    Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. Please make
    sure that you completely understand the risk before retrieving any suggestions from the above link.
    If you have any question, please feel free to let me know.
    Thanks,
    Angela 
    Angela Shi
    TechNet Community Support

  • Best practices for checked exceptions in Runnable.run

    Runnable.run cannot be modified to pass a checked exception to its parent, so it must deal with any checked exceptions that occur. Simply logging the error is inadequate, and I am wondering if there are any "best practices" on how to deal with this situation.
    Let me give a real-world example of what I'm talking about.
    When writing I/O code for a single-threaded app, I'll break the logic into methods, and declare these methods as throwing an IOException. Basically, I'll ignore all exceptions and simply pass them up the stack, knowing that Java's checked exception facility will force the caller to deal with error conditions.
    Some time later, I might try to improve performance by making the I/O code multithreaded. But now things get tricky because I can no longer ignore exceptions. When I refactor the code into a Runnable, it cannot simply toss IOExceptions to some future unnamed caller. It must now catch and handle the IOException. Of course, dealing with the problem by simply catching and logging the exception is bad, because the code that spawned the I/O thread won't know that anything went wrong. Instead, the I/O thread must somehow notify its parent that the exception occurred. But just how to do this is not straightforward.
    Any thoughts? Thanks.

    My suggestion: don't use Threads and Runnables like this.
    Instead implement Callable which can throw any Exception.
    Then use an ExecutorService to run that Callable.
    This will return a Future object which can throw an ExecutionException on get(), which you can then handle.
    This has the additional advantage that you can easily switch from a single-threaded serialized execution to a multi-threaded one by switching ExecutorService implementations (or even by tweaking the parameters of the ExecutorService implementation).

  • How to check verison of Best Practice Baseline in existing ECC system?

    Hi Expert,
    How to check verison of Best Practice Baseline in existing ECC system such as v1.603 or v1.604?
    Any help will be appriciate.
    Sayan

    Dear,
    Please go to https://websmp201.sap-ag.de/bestpractices and click on Baseline packages then on right hand side you will see that On which release is SAP Best Practices Baseline package which version is applicable.
    If you are on EHP4 then you can use the v1.604.
    How to Get SAP Best Practices Data Files for Installation (pdf, 278 KB) please refer this link,
    https://websmp201.sap-ag.de/~sapidb/011000358700000421882008E.pdf
    Hope it will help you.
    Regards,
    R.Brahmankar

  • Oracle Statistics - Best Practice?

    We run stats with brconnect weekly:
    brconnect -u / -c -f stats -t all
    I'm trying to understand how some of our stats are old or stale.  Where's my gap?  We are running Oracle 11g and have Table Monitoring set on every table.  My user_tab_modifications is tracking changes in just over 3,000 tables.  I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Plus, we have our DBSTATC entries.  A lot of those entries were last analyzed some 10 years ago.  Does the above brconnect consider DBSTATC at all?  Or do we need to regularly run the following, as well?
    brconnect -u / -c -f stats -t dbstatc_tab
    I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    SQL> select count(*) from dba_tab_statistics
      2  where owner = 'SAPR3' and stale_stats = 'YES';
      COUNT(*)
          1681
    I realize that stats last analyzed some ten years ago does not necessarily mean they are no longer good but I am curious if the weekly stats collection we are doing is sufficient.  Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?

    Hi Richard,
    > We are running Oracle 11g and have Table Monitoring set on every table.
    Table monitoring attribute is not necessary anymore or better said it is deprecated due to the fact that these metrics are controlled by STATISTICS_LEVEL nowadays. Table monitoring attribute is valid for Oracle versions lower than 10g.
    > I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Correct, if BR*Tools parameter stats_change_threshold is set to its default. Brconnect reads the modifications (number of inserts, deletes and updates) from DBA_TAB_MODIFICATIONS and compares the sum of these changes to the total number of rows. It gathers statistics, if the amount of changes is larger than stats_change_threshold.
    > Does the above brconnect consider DBSTATC at all?
    Yes, it does.
    > I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    The column STALE_STATS in view DBA_TAB_STATISTICS is calculated differently. This flag is used by the Oracle standard DBMS_STATS implementation which is not considered by SAP - for more details check the Oracle documentation "13.3.1.5 Determining Stale Statistics".
    The GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS procedures gather new statistics for tables with stale statistics when the OPTIONS parameter is set to GATHER STALE or GATHER AUTO. If a monitored table has been modified more than 10%, then these statistics are considered stale and gathered again.
    STALE_PERCENT - Determines the percentage of rows in a table that have to change before the statistics on that table are deemed stale and should be regathered. The valid domain for stale_percent is non-negative numbers.The default value is 10%. Note that if you set stale_percent to zero the AUTO STATS gathering job will gather statistics for this table every time a row in the table is modified.
    SAP has its own automatism (like described with brconnect and stats_change_threshold) to identify stale statistics and how to collect statistics (percentage, histograms, etc.) and does not use / rely on the corresponding Oracle default mechanism.
    > Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?
    No performance issue? No additional and unnecessary load on the system (e.g. dynamic sampling)? No brconnect runtime issue? Then you don't need to think about the brconnect implementation or special settings. Sometimes you need to tweak it (e.g. histograms, sample sizes, etc.), but then you have some specific issue that needs to be solved.
    Regards
    Stefan

Maybe you are looking for

  • E72: how to force handsfree profile instead of hea...

    I just purchased a BlueAnt Supertooth 3 carkit and manged to pair it with my E72. The problem is I noticed that it automatically paired using the "Headset" profile. If my limited understanding is correct, the "Handsfree" profile offers more functiona

  • CS4: Text tool is broken

    When I use the Text tool or selected a text object, the dropdown in the Properties panel where it says Static Text / Dynamic Text / Input Text is not working. For instance, it's on Static Text and I change it to Dynamic Text and it stays on Static Te

  • Issue with js file

    Apex: 3.1.0 IE: 6 Firefox: 3.6 I have an application where the javascript was in the header of the page. Everyting was working fine, but it became too large, so I placed it in a js file, and loaded it as a static file. When I first go into my applica

  • Getting Java Pet Store to run in JBuilder (or any IDE)

    I'm getting a dozen or so errors like: "XMLDocumentUtils.java": method getPublicId() not found in interface org.w3c.dom.DocumentType when I try to import & build Java Pet Store in JBuilder Enterprise. I've told JBuilder to include j2ee.jar, and I can

  • Multiple schemas

    A project here has the following requirement. Want to use multiple schemas in the same database instance with identical structure. That is, from one login we want to determine the schema name to use at runtime and access it using the same mapping des