Best Practice: DAG and AutoDatabaseMountDial Setting

Hi,
I am working with management in my organization on our DAG failover settings. By default, the failover setting is 'Good Availability' (six or fewer missing logs per database). My organization did not feel comfortable with data loss, so we changed it to 'Lossless'.
Of course, we then had a SAN failure and lost a DAG member, and nothing failed over to the surviving DAG member. Event ID 2092 even reported the same log sequence for many databases, yet the surviving DAG member did not mount them. Example:
Database xxxmdb28\EX-SRV1 won't be mounted because the number of lost logs was greater than the amount specified by the AutoDatabaseMountDial.
* The log file generated before the switchover or failover was: 311894
* The log file successfully replicated to this server was: 311894
Only after the SAN was restored and the failed DAG member came back online did another 2092 event get logged stating the log sequence was not 311895 (duh! - because the database got mounted again). We opened a support case with Microsoft and they suggested that no databases mounted because the surviving DAG member could not communicate with the failed member (which is crazy to me, because isn't that THE POINT of the DAG??). Maybe there is always a log file in memory (?), so with AutoDatabaseMountDial set to Lossless no database will ever mount automatically? Who knows.
In any case, we are trying to talk ourselves into setting it back to 'Good Availability'. Here is where we are now:
     2 member DAG hosting 3000 mailboxes on about 36 databases (~18 active on each node) in same AD site (different physical buildings) w/witness server
     AutoDatabaseMountDial set to lossless
     Transport dumpster is set to 1.5x maximum message size
What level of confidence can we expect that we will not have data loss with a properly configured transport dumpster and a setting of 'Good Availability'? I am also open to other suggestions, such as changing our active-active DAG to active-passive by keeping all active copies on one server (see the sketch below).
Also, has anyone experienced any data loss with 'Good Availability' and a properly configured transport dumpster?
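For reference, here is a rough PowerShell sketch of the two changes we are weighing (EX-SRV1 is from the event above; EX-SRV2 and the 15 MB maximum message size are stand-in assumptions - adjust to your environment):

# Check the current mount dial on each DAG member
Get-MailboxServer | Select-Object Name, AutoDatabaseMountDial
# Option 1: move back from Lossless to GoodAvailability (tolerates up to 6 lost logs)
Set-MailboxServer -Identity EX-SRV1 -AutoDatabaseMountDial GoodAvailability
Set-MailboxServer -Identity EX-SRV2 -AutoDatabaseMountDial GoodAvailability
# Keep the transport dumpster at ~1.5x the maximum message size (15 MB max -> ~23 MB)
Get-TransportConfig | Select-Object MaxDumpsterSizePerDatabase, MaxDumpsterTime
Set-TransportConfig -MaxDumpsterSizePerDatabase 23MB -MaxDumpsterTime "7.00:00:00"
# Option 2: go active-passive by switching every active database over to one member
Move-ActiveMailboxDatabase -Server EX-SRV2   # in a two-member DAG the copies activate on EX-SRV1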
Thanks for the guidance.

Personally I have not experienced loss in this scenario and have not changed this setting from "Good Availability". I know that 1.5x the maximum message size is the recommended transport dumpster setting. Also, each transport server in your environment keeps a shadow queue, which holds a copy of a message until it has been confirmed as delivered to the mailbox.
As an example of mail flow (assuming multi-role servers for this example; it still applies to a split-role deployment): you have Server1 and Server2, with an external user sending mail to a user on Server1. The message passes through all your external hops, etc., and then reaches the transport servers. If it lands on Server1's transport first, the message has to be sent to Server2 and then back to Server1 for delivery to the mailbox, so that it hits the shadow queue of Server2. If the message hit Server2 first, then it would be sent to Server1 and then to the mailbox.
If either of your servers is down for a period of time, the shadow queue will resend the messages, and that is why you wouldn't have any data loss.
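If you want to see shadow redundancy at work, here is a quick sketch (run from the Exchange Management Shell on either server; just a way to eyeball the queues):

# List the shadow queues and how many messages they are still protecting
Get-Queue -Filter {DeliveryType -eq "ShadowRedundancy"} | Format-Table Identity, Status, MessageCount
# A shadow copy is discarded only after the primary server confirms delivery; if the
# primary goes down, the shadow server resubmits the messages, which is what protects mail in transit.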
Jason Apt, Microsoft Certified Master | Exchange 2010
My Blog

Similar Messages

  • SAP SCM and SAP APO: Best practices, tips and recommendations

    Hi,
    I have been gathering useful information about SAP SCM and SAP APO (e.g., advanced supply chain planning, master data and transaction data for advanced planning, demand planning, cross-plant planning, production planning and detailed scheduling, deployment, global available-to-promise (global ATP), CIF (core interface), SAP APO DP planning tools (macros, statistical forecasting, lifecycle planning, data realignment, data upload into the planning area, mass processing – background jobs, process chains, aggregation and disaggregation), and PP/DS heuristics for production planning).
    I am especially interested in best practices, tips and recommendations for using and developing SAP SCM and SAP APO. For example, [CIF Tips and Tricks Version 3.1.1|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700006480652001E] and [CIF Tips and Tricks Version 4.0|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000596412005E] contain pretty useful knowledge about CIF.
    If you know any useful best practices, tips and recommendations for using and developing SAP SCM and SAP APO, I would appreciate it if you could share them with me.
    Thanks in advance of your help.
    Regards,
    Jarmo Tuominen

    Hi Jarmo,
    Apart from what DB has suggested, you should give the following a good read:
    -Consulting Notes (use the application component filters in search notes)
    -Collective Notes (similar to the one above)
    -Release Notes
    -Release Restrictions
    -If $$ permit, subscribe to www.scmexpertonline.com. Good perspective on concepts around SAP SCM.
    -There are a couple of blogs (e.g. www.apolemia.com), but all lack breadth; some cover topics in depth.
    -"Articles" section on this site (not all are classified well - look under ECC ops, mfg, SCM, Logistics etc.)
    -Service.sap.com - check the solution details overview in the knowledge exchange tab. There are product presentations and collaterals for every release. Good breadth but no depth.
    -Building Blocks - available for all application areas. This is limited to vanilla configuration, just making a process work and nothing more than that.
    -Get the book "Sales and Operations Planning with SAP APO" by SAP Press. It's got plenty of easy-to-follow material, good perspective and lots of screenshots to make life easier.
    -help.sap.com - the last thing most people refer to after all the "handy" options (incl. this forum) are exhausted. Nevertheless, this is the superset of all "secondary" documents. But the maze of hyperlinks that starts at APO might lead you to something like an XML schema.
    Key Tip: Appreciate that SAP SCM is largely driven by connected execution systems (SAP ECC/ERP). So the best place to start is a good overview of the ERP OPS solution, at least at a significant level of depth. Check the document "ERP ops architecture overview" on the SDN wiki.
    I have a good collection of documents, though many I haven't read myself. If you need them, let me know.
    Regards,
    Loknath

  • Best Practices Methodologies and Implementation training courses

    Does anybody know regarding Best Practices Methodologies and Implementation training courses available?
    Kind regards,
    Adi
    [email protected]

    hi Adi,
    please go through these PDFs:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/519d369b-0401-0010-0186-ff7a2b5d2bc0
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e5b7bb90-0201-0010-ee89-fc008080b21e
    Hope this helps you. Please don't forget to give points.
    with regards.
    Vinoth

  • Best Practices Methodologies and Implementation

    Does anybody know regarding Best Practices Methodologies and Implementation training courses available?

    Hi dear,
    please don't post the same question several times...
    Look at Best Practice Course
    Bye,
    Roberto

  • Best Practice for Buy in Set and Dismantle for Sales

    Hi All SAP Masters,
    We have a scenario that when purchasing an item as "set", in this set, it has a few components inside this set (something like a material BOM). Example, a machine which comes with several parts. However, when the user received this set from the supplier, the user would further dismantle certain part(s) from the set/"machine" and sell it separately to the customer as a component/"single item".
    What is the best practice in the SAP process to be adopted?
    Please help. Thank you.
    Warmest Regards,
    Edwin

    If your client has the PP module, then follow these steps.
    Consider A as the purchased material, which is to be dismantled into B and C:
    1) Create a BOM for material B, assign the header material A as a consumption material with a positive quantity, and add component C as a by-product with a negative quantity in the BOM.
    2) Maintain the backflush indicator for A & C in the material master MRP2 view.
    3) Create a routing for B and maintain auto GR for the final operation.
    4) Create a production order for B.
    5) Confirm the order in CO11N: A is consumed with movement type 261, C is received with movement type 531, and B is received with movement type 101.
    Once the stock is posted to unrestricted use, you can sell B & C.

  • Exchange Best Practices Analyzer and Event 10009 - DCOM

    We have two Exchange 2010 SP3 RU7 servers on Windows 2008 R2
    In general, they seem to function correctly.
    ExBPA (Best Practices Analyzer) results are fine. Just some entries about drivers being more than two years old (vendor has not supplied newer drivers so we use what we have). Anything else has been verified to be something that can "safely be ignored".
    Test-ServiceHealth, Test-ReplicationHealth and other tests indicate no problems.
    However, when I run the ExBPA, it seems like the server on which I run ExBPA attempts to contact the other using DCOM and this fails.
    Some notes:
    1. Windows Firewall is disabled on both.
    2. Pings in both directions are successful.
    3. DTCPing would not even run so I was not able to test with this.
    4. Connectivity works perfectly otherwise. I can see/manage either server from the other using the EMC or EMS. DAG works fine as far as I can see.
    What's the error message?
    Event 10009, DistributedCOM
    "DCOM was unable to communiate with the computer --- opposite Exchange server of the pair of Exchange servers---  using any of the configured protocols."
    This is in the System Log.
    This happens on both servers and only when I run the ExBPA.
    I understand that ExBPA uses DCOM but cannot see what would be blocking communications.
    I can access the opposite server in MS Management Consoles (MMC).
    Note: the error is NOT in the ExBPA results - but rather in the Event Viewer System Log.
    Yes, it is consistent. Have noticed it for some time now.
    Does anyone have any idea what could be causing this? Since normal Exchange operations are not affected, I'm tempted to ignore it, but I have to do my "due diligence" and inquire. 
    Please mark as helpful if you find my contribution useful or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you.

    Hi David,
    I recommend you refer the following article to troubleshoot this event:
    How to troubleshoot DCOM 10009 error logged in system event
    Why this happens:
    Generally speaking, the reason DCOM 10009 is logged is that the local RPCSS service can't reach the RPCSS service on the remote target server. There are many possibilities which can cause this issue.
    Scenario 1:
     The remote target server happens to be offline for a short time, for example, just for maintenance.
    Scenario 2:
    Both servers are online. However, an RPC communication issue exists between these two servers, for example: server name resolution failure, exhaustion of port resources for RPC communication, or firewall configuration.
    Scenario 3:
    Even if the TCP connection to the remote server has no problem, when RPC authentication runs into a problem we may get an error status code such as 0x80070721, which means "A security package specific error occurred" during RPC authentication; DCOM 10009 will also be logged on the client side.
    Scenario 4:
    The target DCOM/COM+ service failed to be activated due to a permission issue. In this kind of situation, DCOM 10027 will be logged on the server side at the same time.
    Event ID 10009 — COM Remote Service Availability
    Resolve
    Ensure that the remote computer is available
    There is a problem accessing the COM Service on a remote computer. To resolve this problem:
    Ensure that the remote computer is online.
    This problem may be the result of a firewall blocking the connection. For security, COM+ network access is not enabled by default. Check the system to determine whether the firewall is blocking the remote connection.
    Other reasons for the problem might be found in the Extended Remote Procedure Call (RPC) Error information that is available in Event Viewer.
    To perform these procedures, you must have membership in Administrators, or you must have been delegated the appropriate authority.
    Ensure that the remote computer is online
    To verify that the remote computer is online and the computers are communicating over the network:
    Open an elevated Command Prompt window: click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    At the command prompt, type ping, followed by a space and the remote computer name, and then press ENTER. For example, to check that your server can communicate over the network with a computer named ContosoWS2008, type ping ContosoWS2008, and then press ENTER.
    A successful connection results in a set of replies from the other computer and a set of ping statistics.
    Check the firewall settings and enable the firewall exception rule
    To check the firewall settings and enable the firewall exception rule:
    Click Start, and then click Run.
    Type wf.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    In the console tree, click Inbound rules.
    In the list of firewall exception rules, look for COM+ Network Access (DCOM In).
    If the firewall exception rule is not enabled, in the details pane click Enable rule, and then scroll horizontally to confirm that the protocol is TCP and the LocalPort is 135. Close Windows Firewall with Advanced Security.
    Review available Extended RPC Error information for this event in Event Viewer
    To review available Extended RPC Error information for this event in Event Viewer:
    Click Start, and then click Run.
    Type comexp.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    Under Console Root, expand Event Viewer (Local).
    In the details pane, look for your event in the Summary of Administrative Events, and then double-click the event to open it.
    The Extended RPC Error information that is available for this event is located on the Details tab. Expand the available items on the Details tab to review all available information.
    For more information about Extended RPC Error information and how to interpret it, see Obtaining Extended RPC Error Information (http://go.microsoft.com/fwlink/?LinkId=105593).
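    As a supplementary check (just a sketch; EXCH02 below is a hypothetical stand-in for the opposite Exchange server of your pair), a remote WMI query from PowerShell exercises the same RPC/DCOM path that event 10009 complains about, so a failure here points at RPC/DCOM rather than at ExBPA itself:
    # EXCH02 is a placeholder name; replace it with the opposite server.
    # Get-WmiObject connects over RPC/DCOM (TCP 135 plus dynamic ports).
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName EXCH02 | Select-Object CSName, Caption
    # If that fails, test basic reachability and name resolution first.
    Test-Connection -ComputerName EXCH02 -Count 2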
    Best regards,
    Niko Cheng
    TechNet Community Support

  • Search for ABAP Webdynpro Best practice or/and Evaluation grid

    Hi Gurus,
    Managers and team leaders are facing decisions about developing SAP applications on the web, and functional people are proposing web applications to the business. I'm searching for best practices for Web Dynpro ABAP development. We use SAP NetWeaver 7.0 and SAP ECC 6.0 SP4.
    We are facing complaints about Web Dynpro response time: the business wants a 3-second response time and we are seeing 20 or 25 seconds.
    I want to give functional people a kind of recommendation document explaining that in certain cases the use of Web Dynpro will not be a benefit for the business.
    I know that data transfer, screen complexity and the hardware are among the keys, but I expect some advice from the SDN community.
    Thanks for your answers.
    Rgds,
    Christophe

    Hi,
    25s is a lot. I wouldn't like to use an application with a response time that big. Anyway, Thomas Jung has just recently published a series of video blogs about WDA performance tools. It may help you analyze why your Web Dynpro application is so slow. Here is the link to the [first part|http://enterprisegeeks.com/blog/2010/03/03/abap-freakshow-u2013-march-3-2010-wda-performance-tools-part-1/]. There is also a dedicated Web Dynpro ABAP forum here on SDN; I would search there for some tips and tricks.
    Cheers

  • Best practice DataGroup and custom layout

    Hello,
    Some time ago, I made a little TimeLine component in Flex 3. The component was very limited in terms of possibilities, mainly because of performance issues.
    With Flex 4 and virtualization in layouts, I thought it would be easier to create my component.
    But I'm confronted with some problems.
    I tried to extend DataGroup with a fixed custom layout, but I have 2 problems:
    -The first one is that the size and position of elements depend on the dataProvider of the component, so my layout needs a direct dependency on the component.
    -And the main one is that the depth of an element depends on the data, and I don't see a way of updating it in my layout.
    Should I stop using DataGroup and implement it directly from UIComponent and an IViewport implementation with a sort of manual virtualization?
    Should I override the dataProvider setter to sort it the way element depth should be set?
    Should I use BasicLayout and access properties from the DataGroup directly in the itemRenderer to set top, left, width and height?
    I'm a little lost and any advice would be a great help.
    Thanks.

    user8410696,
    Sounds like you need a simple resource loading histogram with a threshold limit displayed for each project. I don't know if you are layout savvy, but layouts can be customized to fit your exact needs. Sometimes P6 cannot display exactly what you desire, but most basic layouts can be tweaked to fit. I believe that you need a combo of resource profiles and columns.
    I'll take some time today and do some tweaking on my end to see if I can modify one of my resource layouts to fit your needs. I'll let you know what I find out.

  • Best practice GeoRaster and MapViewer?

    Hi,
    I want to view raster files using Oracle GeoRaster and MapViewer. I have binary raster files and aerial photographs (24-bit).
    Until now I have loaded the data into the database with the following parameters:
    - Oracle_interleaving_type: BSQ
    - Oracle_raster_tile_size_x and -y: 2048
    - Oracle_compression: DEFLATE
    - Oracle_pyramid_max: null
    - Oracle_raster_pyramid_resampling: NN for binary data and CUBIC for aerial photographs
    The binary raster files can have about 15000x15000 pixels and the aerial photographs about 4000x4000 pixels.
    For the MapViewer configuration of a GeoRaster theme I use pyramid level NULL for aerial photographs and 1 for binary images.
    The MapViewer base maps have a tile size of 256x256 pixels and PNG as the format.
    Does anybody have experience on getting the best performance and best quality when displaying raster files?
    Regards,
    Sebastian

    Hi Jeffrey,
    further to that, I have the problem that MapViewer (Ver1033p5_B081010) doesn't render map tiles for all zoom levels with my posted settings. Before MapViewer P5 existed, I rendered the map tiles with MapViewer version P3.
    With the latest version of MapViewer it is only possible to render map tiles down to zoom level 3; for levels two, one and zero it doesn't render the tiles. MapViewer shows the following error:
    WARNUNG: Failed to fetch tile image.
    Message:Failed to fetch tile image.
    Mon Feb 17 19:39:19 CET 2009
    Severity: 0
    Description:
    at oracle.lbs.mapcache.cache.TileFetcher.fetchTile(TileFetcher.java:209)
    at oracle.lbs.mapcache.cache.Tile.fetchTile(Tile.java:357)
    at oracle.lbs.mapcache.cache.Tile.fetchTile(Tile.java:338)
    at oracle.lbs.mapcache.cache.MapCache.getTile(MapCache.java:217)
    at oracle.lbs.mapcache.MapCacheServer.getMapImageStream(MapCacheServer.java:161)
    at oracle.lbs.mapcache.MCSServlet.doPost(MCSServlet.java:254)
    at oracle.lbs.mapcache.MCSServlet.doGet(MCSServlet.java:209)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:711)
    at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:368)
    at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:866)
    at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:448)
    at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:216)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:117)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:110)
    at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
    at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
    at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
    at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    at java.lang.Thread.run(Thread.java:595)
    17.02.2009 19:39:19 oracle.lbs.mapcache.cache.Tile getOffLineTileImagePath
    FEINER: Waiting for blank tile.
    17.02.2009 19:39:20 oracle.lbs.mapcache.MCSServlet doPost
    SCHWERWIEGEND: LUT has improper length!
    Do you know why the MapViewer shows this message?
    When I used the MapViewer P3 I didn't have any problems with generating maptiles.
    Regards,
    Sebastian

  • Best Practices for loading large sets of Data

    Just a general question regarding an initial load with a large set of data.
    Does it make any sense to use a materialized view to aid with load times for an initial load? Or do I simply let the query run for as long as it takes.
    Just looking for advice on what is the common approach here.
    Thanks!

    Hi GK,
    What I have normally seen is:
    1) Data would be extracted from APO Planning Area to APO Cube (FOR BACKUP purpose). Weekly or monthly, depending on how much data change you expect, or how critical it is for business. Backups are mostly monthly for DP.
    2) Data extracted from APO planning area directly to DSO of staging layer in BW, and then to BW cubes, for reporting.
    For DP monthly, SNP daily
    You can also use the option 1 that you mentioned below. In this case, the APO cube is the backup cube, while the BW cube is the one that you could use for reporting, and this BW cube gets data from APO cube.
    Benefit in this case is that we have to extract data from Planning Area only once. So, planning area is available for jobs/users for more time. However, backup and reporting extraction are getting mixed in this case, so issues in the flow could impact both the backup and the reporting. We have used this scenario recently, and yet to see the full impact.
    Thanks - Pawan

  • Grid monitoring best practices: dos and don'ts

    Hi
    I would like to know your thoughts and experience - dos and don'ts - especially on the following:
    * monitoring target hosts / Linux
    * using default templates
    What are the pitfalls?
    Another question about default templates: I would like to set a default template for target type DB, but a DB could be standby/RAC/standalone (I have different types of templates for each of those). How can I set a default template based on DB type? Can I create one template that has all metrics regardless of whether the DB is RAC/primary/standby and apply it to all by default? Is this how it should be done?
    Grid 11g
    thanks

    Hi
    "But Grasshopper wants to know what is the minimum amount of statistics that correctly describes the data. I realize that this 'depends', but that is not an answer."
    If you compare the data with the statistics, you know whether you have good statistics or not. Once you have a good set, you have to reduce the gathering load to the minimum while keeping the good statistics.
    "the Support Analyst was conceding that Oracle 'out of the box' does not do a very adequate job of gathering the 'minimum amount of statistics that correctly describes the data'"
    I don't agree. In fact, "out of the box" Oracle gathers:
    - too many statistics (too many histograms)
    - with a too "aggressive" estimate percent
    You end up with a gathering that is too slow and that provides too much information to the CBO (slowing it down as well).
    Usually, if you have a problem with the automatically installed job, it is because you have a large database and the statistics gathering impacts your business.
    "But if we can come up with a policy or method whereby we can say with certainty that the extra 5-10% is definitely not an issue with statistics, or even decrease those errant queries to <1%, I would consider our tuning efforts a success."
    My motto is "Start with a simple solution! If it doesn't work, elaborate it." - i.e. it's simply overkill to implement a solution that is fine in 100% of the cases.
    In other words, I advise using the default gathering, possibly with a few parameter changes, in most cases. Only when there's a problem should you switch to manual gathering (and then, possibly, only for a few tables or schemas).
    There is a lot to understand in the CBO. I agree.
    Regards,
    Chris

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We’re trying to determine the proper balance of heap size vs num of cache instances and have ended up with the following configuration. 7 cache instances per node running with 2G heap using a high-units value of 1.5G. Our testing has shown that using the Concurrent Mark Sweep GC algorithm warrants no substantial GC pauses and we have also done testing with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java) which also shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context switching overhead as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern here is that we’re straying from the best practices recommendations and I’m wondering what others thoughts are about the configuration outlined above.
    Thanks,
    - Allen Bettilyon

  • Idoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems in the processing of inbound idocs.  The message type is SHPCON, and transaction volume is very high.  I am a functional consultant, not an ABAP developer, but will try my best to explain our current setup.
    1)     We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the Processing by Function Module setting.
    2)      For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    We are having some instances of the RBDMANI2 almost every day which get stuck running for a very long period of time.  We frequently have multiple SHPCON idocs coming in containing the same material number, and frequently have idocs fail because the material in the idoc has become locked.  Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed idocs begin processing.  The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound idocs such as this for maximum performance in a very high volume system.  I know that RBDAPP01 processes idocs in status 64 and 66, and RBDMANI2 is used to reprocess idocs in all statuses.  I have been told that setting the messages to trigger immediately in WE20 can result in poor performance.  So I am wondering if the best practice is to:
    1)     Set messages in WE20 to Trigger by background program
    2)     Have a batch job running RBDAPP01 to process inbound idocs waiting in status 64
    3)     Have a periodic batch job running RBDMANI2 to try and clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than myself on this can confirm the best practice for this process and comment on the correct packet size in the program variant and whether or not parallel processing is desirable.  Because of the material locking issue, I felt that parallel processing was not desirable and may actually increase the material locking problem.  I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other discussions.  If this is not the correct area for this discussion, then I would be grateful if the moderator could re-assign this discussion to the correct area (if possible) or let me know the best place to post it.  Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but note 1333417 (Performance problems when processing IDocs immediately) does state that for high volumes immediate processing is not a good option.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important if they're processed in the same sequence or not), otherwise it'd add another complexity level.
    In the past for the high volume IDoc processing we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs with errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually we specifically wouldn't want to parallel-process the errors to avoid running into a lock issue again). In short, your steps 1-3 are correct but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.

  • Best practice approach for seperating Database and SAP servers

    Hi,
    I am looking for a best-practice approach/strategy for setting up a distributed SAP landscape, i.e. separating the database and SAP servers. If anyone has some strategies to share, I would appreciate it.
    Thanks very much

    The easiest way I can imagine:
    Install a dialog instance on a new server and make sure it can connect nicely to the database. Then shut down the CI on the database server, copy the profiles (and adapt them), and start your CI on the new server. If that doesn't work the first time, you can always restart the CI on the database server again.
    Markus

  • New white paper: Character Set Migration Best Practices

    This paper can be found on the Globalization Home Page at:
    http://technet.oracle.com/tech/globalization/pdf/mwp.pdf
    This paper outlines the best practices for database character set migration that have been utilized successfully on behalf of hundreds of customers. Following these methods will help determine which strategies are best suited for your environment and will help minimize risk and downtime. This paper also highlights migration to Unicode. Many customers today are finding Unicode to be essential to supporting their global businesses.

    Sorry about that. I posted that too soon. It should become available today (Monday Aug 22nd).
    Doug
