Best practice DataGroup and custom layout

Hello,
Some time ago, I made a little TimeLine component in Flex 3. The component was very limited in terms of possibilities, mainly because of performance issues.
With Flex 4 and layout virtualization, I thought it would be easier to create my component.
But I'm confronted with some problems.
I'm trying to extend DataGroup with a fixed custom layout, but I have two problems:
- The first one is that the size and position of the elements depend on the component's dataProvider, so my layout needs a direct dependency on the component.
- The main one is that the depth of each element depends on the data, and I don't see a way of updating it from my layout.
Should I stop using DataGroup and implement the component directly from UIComponent and an IViewport implementation, with a sort of manual virtualization?
Should I override the dataProvider setter to sort it the way element depths should be set?
Should I use BasicLayout and access the DataGroup's properties directly from the itemRenderer to set top, left, width and height?
I'm a little lost and any advice would be a great help. (Two rough sketches of what I have in mind follow below.)
Thanks.
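
To make the custom-layout idea concrete, here is a minimal, non-virtual sketch. The field names (startTime, duration, row, zOrder) and the scale factors are placeholders I made up, not anything from a real data model:

package
{
    import mx.core.IVisualElement;
    import spark.components.DataGroup;
    import spark.layouts.supportClasses.LayoutBase;

    public class TimelineLayout extends LayoutBase
    {
        // Hypothetical scale factors, not part of any Flex API.
        public var pixelsPerSecond:Number = 10;
        public var rowHeight:Number = 24;

        override public function updateDisplayList(width:Number, height:Number):void
        {
            // The layout reaches back into the DataGroup for its data;
            // this is the direct dependency on the component mentioned above.
            var dataGroup:DataGroup = target as DataGroup;
            if (!dataGroup || !dataGroup.dataProvider)
                return;

            for (var i:int = 0; i < dataGroup.numElements; i++)
            {
                var element:IVisualElement = dataGroup.getElementAt(i);
                var item:Object = dataGroup.dataProvider.getItemAt(i);
                if (!element || !item)
                    continue;

                // Size and position are derived from the data item.
                element.setLayoutBoundsSize(item.duration * pixelsPerSecond, rowHeight);
                element.setLayoutBoundsPosition(item.startTime * pixelsPerSecond,
                                                item.row * rowHeight);

                // IVisualElement.depth (new in Flex 4) lets the layout set
                // stacking order from the data without reordering children.
                element.depth = item.zOrder;
            }
        }
    }
}

With useVirtualLayout enabled, the loop would instead have to use getVirtualElementAt over the visible index range. And the dataProvider-sorting option might look something like this (again assuming a hypothetical zOrder field; it relies on the DataGroup creating item renderers in dataProvider order):

package
{
    import mx.collections.ICollectionView;
    import mx.collections.IList;
    import mx.collections.Sort;
    import mx.collections.SortField;
    import spark.components.DataGroup;

    public class TimelineDataGroup extends DataGroup
    {
        override public function set dataProvider(value:IList):void
        {
            // Sort by the hypothetical zOrder field so that renderer
            // creation order matches the desired stacking order.
            if (value is ICollectionView)
            {
                var view:ICollectionView = ICollectionView(value);
                var sort:Sort = new Sort();
                sort.fields = [new SortField("zOrder", false, false, true)];
                view.sort = sort;
                view.refresh();
            }
            super.dataProvider = value;
        }
    }
}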


Similar Messages

  • SAP SCM and SAP APO: Best practices, tips and recommendations

    Hi,
    I have been gathering useful information about SAP SCM and SAP APO (e.g., advanced supply chain planning, master data and transaction data for advanced planning, demand planning, cross-plant planning, production planning and detailed scheduling, deployment, global available-to-promise (global ATP), CIF (core interface), SAP APO DP planning tools (macros, statistical forecasting, lifecycle planning, data realignment, data upload into the planning area, mass processing – background jobs, process chains, aggregation and disaggregation), and PP/DS heuristics for production planning).
    I am especially interested in best practices, tips and recommendations for using and developing SAP SCM and SAP APO. For example, [CIF Tips and Tricks Version 3.1.1|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700006480652001E] and [CIF Tips and Tricks Version 4.0|https://service.sap.com/form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000596412005E] contain pretty useful knowledge about CIF.
    If you know any useful best practices, tips and recommendations for using and developing SAP SCM and SAP APO, I would appreciate it if you could share those assets with me.
    Thanks in advance for your help.
    Regards,
    Jarmo Tuominen

    Hi Jarmo,
    Apart from what DB has suggested, you should give the following a good read:
    -Consulting Notes (use the application component filters in search notes)
    -Collective Notes (similar to the one above)
    -Release Notes
    -Release Restrictions
    -If $$ permit, subscribe to www.scmexpertonline.com. Good perspective on concepts around SAP SCM.
    -There are a couple of blogs (e.g. www.apolemia.com), but all lack breadth; some cover a few topics in depth.
    -"Articles" section on this site (not all are classified well; see ECCops, mfg, SCM, Logistics etc.)
    -Service.sap.com - check the solution details overview in the Knowledge Exchange tab. There are product presentations and collaterals for every release. Good breadth but no depth.
    -Building Blocks - available for all application areas. This is limited to vanilla configuration of just making a process work and nothing more than that.
    -Get the book "Sales and Operations Planning with SAP APO" from SAP Press. It's got plenty of easy-to-follow stuff, good perspective and lots of screenshots to make life easier.
    -help.sap.com - the last thing most refer to after all "handy" options (incl. this forum) are exhausted. Nevertheless, this is the superset of all "secondary" documents. But the maze of hyperlinks that starts at APO might lead you to something like an XML schema.
    Key Tip: Appreciate that SAP SCM is largely driven by connected execution systems (SAP ECC/ERP). So the best place to start should be a good overview of the ERP OPS solution, at least at a significant level of depth. Check the document "ERP ops architecture overview" on the SDN wiki.
    I have a good collection of documents, though many I haven't read myself. If you need them, let me know.
    Regards,
    Loknath

  • Best Practices Methodologies and Implementation training courses

    Does anybody know of any available training courses on Best Practices methodologies and implementation?
    Kind regards,
    Adi
    [email protected]

    hi Adi,
    please go through these PDFs:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/519d369b-0401-0010-0186-ff7a2b5d2bc0
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e5b7bb90-0201-0010-ee89-fc008080b21e
    Hope this helps you. Please don't forget to give points.
    with regards.
    Vinoth

  • Best Practices Methodologies and Implementation

    Does anybody know of any available training courses on Best Practices methodologies and implementation?

    Hi dear,
    please don't post the same question several times...
    Look at Best Practice Course
    Bye,
    Roberto

  • User Authorization, best practices for this custom application requirement?

    JDeveloper 12c (12.1.2)
    We want to use external LDAP (active directory) with ADF security to authenticate and authorize users.
    One of our custom application requirements is that there is a single page with many interactive components. It has probably about 15 tables and each table would need to have the following buttons (or similar components):
    - delete: (if certain row is selected) to delete it
    - edit: (if certain row is selected) takes user to 'edit page' where changes can be made
    - create: to create new record for this particular VO (table)
    So let's say that would be 3 x 15 = 45 different actions that a single user can possibly perform. Not all users have the same 'powers', i.e. some users can only edit CERTAIN tables, and delete from one or two. Most users can create and edit most VOs, etc.
    Back when this application was originally developed using (I believe) JDeveloper 10g with UIX, the way it was done is that we maintained a table in the database with 'user credentials' as Y or N flags.
    For example: DEL_VO1, EDIT_VO1, ADD_VO1....
    So when a user is authenticated, we would pull all these credentials from the DB table and load them into session variables. Then we would use EL to render or not render certain buttons on the page. For example: rendered="#{sessionScope.appDelVo1 == 'Y'}"
    Moving forward into latest ADF technology, what would be the best practice to achieve described functionality?

    Hi,
    ADF BC allows permissions to be added at the entity level (this includes remove and update). So you can create permissions for the entity (for data security it doesn't matter how the data is accessed: if as a user you are not allowed to change a database table, then this holds for tables and forms alike). You can then use EL to check the permission, so there is no need to keep the privileges in the database.
    If a user is allowed to update an entity, then you can check this using EL in the UI:
    <af:inputText value="#{bindings.DepartmentName.inputValue}"
                  readOnly="#{!bindings.DepartmentName.hints.updateable}"/>
    Watch this for full coverage of ADF Security: Oracle ADF Security Overview - Oracle JDeveloper 11g R1 and R2
    Frank

  • Any Best Practices for developing custom ABAP reports for Portal?

    Hello,
    The developers on our project are debating the best way to develop custom reports and make them available on the portal.  Of these options that we can think of, can you give any pros & cons, or experiences, or other options?
    - Web-enabled Abap report programs
    - WebDynpro for Abap
    - WebDynpro for Abap using ALV
    - Adobe forms
    Does a "Best Practices" document or blog exist on this topic?
    Thanks,
    Colleen


  • Search for ABAP Web Dynpro best practice and/or evaluation grid

    Hi Gurus,
    Managers and team leaders are facing the development of SAP applications on the web, and functional people propose web applications to business people. I'm searching for best practices for Web Dynpro ABAP development. We use SAP NetWeaver 7.0 and SAP ECC 6.0 SP4.
    We are facing complaints about Web Dynpro response times. The business wants a 3-second response time and we have 20 or 25 seconds.
    I want to give functional people a kind of recommendation document explaining that in certain cases the usage of Web Dynpro will not be a benefit for the business.
    I know that the amount of data transferred, the complexity of the screen and also the hardware are among the key factors, but I expect some advice from the SDN community.
    Thanks for your answers.
    Rgds,
    Christophe

    Hi,
    25s is a lot. I wouldn't like to use an application with a response time that big. Anyway, Thomas Jung has just recently published a series of video blogs about WDA performance tools. It may help you analyze why your Web Dynpro application is so slow. Here is the link to the [first part|http://enterprisegeeks.com/blog/2010/03/03/abap-freakshow-u2013-march-3-2010-wda-performance-tools-part-1/]. There is also a dedicated Web Dynpro ABAP forum here on SDN; I would search there for some tips and tricks.
    Cheers

  • Exchange Best Practices Analyzer and Event 10009 - DCOM

    We have two Exchange 2010 SP3 RU7 servers on Windows 2008 R2
    In general, they seem to function correctly.
    ExBPA (Best Practices Analyzer) results are fine. Just some entries about drivers being more than two years old (vendor has not supplied newer drivers so we use what we have). Anything else has been verified to be something that can "safely be ignored".
    Test-ServiceHealth, Test-ReplicationHealth and other tests indicate no problems.
    However, when I run the ExBPA, it seems like the server on which I run ExBPA attempts to contact the other using DCOM and this fails.
    Some notes:
    1. Windows Firewall is disabled on both.
    2. Pings in both directions are successful.
    3. DTCPing would not even run so I was not able to test with this.
    4. Connectivity works perfectly otherwise. I can see/manage either server from the other using the EMC or EMS. DAG works fine as far as I can see.
    What's the error message?
    Event 10009, DistributedCOM
    "DCOM was unable to communicate with the computer --- opposite Exchange server of the pair of Exchange servers --- using any of the configured protocols."
    This is in the System Log.
    This happens on both servers and only when I run the ExBPA.
    I understand that ExBPA uses DCOM but cannot see what would be blocking communications.
    I can access the opposite server in MS Management Consoles (MMC).
    Note: the error is NOT in the ExBPA results - but rather in the Event Viewer System Log.
    Yes, it is consistent. Have noticed it for some time now.
    Does anyone have any idea what could be causing this? Since normal Exchange operations are not affected, I'm tempted to ignore it, but I have to do my "due diligence" and inquire. 
    Please mark as helpful if you find my contribution useful or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you.

    Hi David,
    I recommend you refer the following article to troubleshoot this event:
    How to troubleshoot DCOM 10009 error logged in system event
    Why this happens:
    Generally speaking, the reason DCOM 10009 is logged is that the local RPCSS service can't reach the remote RPCSS service of the remote target server. There are many possibilities which can cause this issue.
    Scenario 1:
    The remote target server happens to be offline for a short time, for example for maintenance.
    Scenario 2:
    Both servers are online, but an RPC communication issue exists between them, for example: server name resolution failure, exhaustion of port resources for RPC communication, or firewall configuration.
    Scenario 3:
    Even if the TCP connection to the remote server has no problem, a failure during RPC authentication may return an error status code like 0x80070721, which means "A security package specific error occurred"; DCOM 10009 will also be logged on the client side.
    Scenario 4:
    The target DCOM/COM+ service failed to be activated due to a permission issue. In this situation, DCOM 10027 will be logged on the server side at the same time.
    Event ID 10009 — COM Remote Service Availability
    Resolve
    Ensure that the remote computer is available
    There is a problem accessing the COM Service on a remote computer. To resolve this problem:
    Ensure that the remote computer is online.
    This problem may be the result of a firewall blocking the connection. For security, COM+ network access is not enabled by default. Check the system to determine whether the firewall is blocking the remote connection.
    Other reasons for the problem might be found in the Extended Remote Procedure Call (RPC) Error information that is available in Event Viewer.
    To perform these procedures, you must have membership in Administrators, or you must have been delegated the appropriate authority.
    Ensure that the remote computer is online
    To verify that the remote computer is online and the computers are communicating over the network:
    Open an elevated Command Prompt window: click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    At the command prompt, type ping, followed by a space and the remote computer name, and then press ENTER. For example, to check that your server can communicate over the network with a computer named ContosoWS2008, type ping ContosoWS2008, and then press ENTER.
    A successful connection results in a set of replies from the other computer and a set of ping statistics.
    Check the firewall settings and enable the firewall exception rule
    To check the firewall settings and enable the firewall exception rule:
    Click Start, and then click Run.
    Type wf.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    In the console tree, click Inbound Rules.
    In the list of firewall exception rules, look for COM+ Network Access (DCOM-In).
    If the firewall exception rule is not enabled, in the details pane click Enable Rule, and then scroll horizontally to confirm that the protocol is TCP and the LocalPort is 135. Close Windows Firewall with Advanced Security.
    Review available Extended RPC Error information for this event in Event Viewer
    To review available Extended RPC Error information for this event in Event Viewer:
    Click Start, and then click Run.
    Type comexp.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    Under Console Root, expand Event Viewer (Local).
    In the details pane, look for your event in the Summary of Administrative Events, and then double-click the event to open it.
    The Extended RPC Error information that is available for this event is located on the Details tab. Expand the available items on the Details tab to review all available information.
    For more information about Extended RPC Error information and how to interpret it, see Obtaining Extended RPC Error Information (http://go.microsoft.com/fwlink/?LinkId=105593).
    Best regards,
    Niko Cheng
    TechNet Community Support

  • Best Practice in using Tracking Layout

    Can anyone tell me how best to use the Tracking Layout, and how to create different new layouts in the tracking layout view?
    Kindly let me know whether a layout as described below can be shown:
    The top layout shows the list of projects, and the bottom layout should show the list of resources; selecting a resource should show either a histogram or a spreadsheet of that resource's usage across all the projects. With that we need to find out which resource is overallocated and which is free.
    Thanks in advance.
    Regards,
    Olby

    user8410696,
    Sounds like you need a simple resource loading histogram with a threshold limit displayed for each project. I don't know if you are layout savvy, but layouts can be customized to fit your exact needs. Sometimes P6 cannot display exactly what you desire, but most basic layouts can be tweaked to fit. I believe that you need a combo of resource profiles and columns.
    I'll take some time today and do some tweaking on my end to see if I can modify one of my resource layouts to fit your needs. I'll let you know what I find out.
    talk to you later,

  • Best Practice for Enterprise Repository Layout

    We recently moved to 6i R2 non-versioned from 2.1.2. We have several projects that use the repository for development in various business areas of the agency. We have 100+ application containers and only 5 active projects at this time. Many of the containers were built to hold subject area entity objects and were not designed for a project application. As a result we now have a web of shares back and forth between all of these applications.
    My goal is to "clean up" this layout and move the repository in the direction of a limited number of application containers for subject area "enterprise" objects and individual application containers for each project development task.
    One of my main concerns is how best to move several application objects to one application container and then what impact will this limited container structure have on application projects once we turn versioning on later this year. (I would really like to limit or end the use of shares/shortcuts if possible....)
    Any advice for configuring an enterprise repository structure would be very helpful! Also, what impacts or gotcha's I need to be aware of once we have the structure in place and turn on versioning...
    Sorry this is a long winded question :-}
    Thanks ;)
    Laurie

    Laurie,
    There is some good news for you: in Designer 6i, shares are no longer needed to be able to have references between objects in different folders/application systems.
    Previously in Designer 2.1 and 6.0 you could only have references from Object X to objects in other application systems if these objects were shared into the application system that owned the object X. If you moved Object X to a different Application System Y, all objects that X had references to were shared to Y.
    All of this no longer exists. Object X in Application System A can have references to objects in any other Application System, without having to create shares or short-cuts as I call them in the context of 6i. Existing short-cuts (created by the migration for example) may be removed, especially if the only reason for their existence is to support a reference across application systems. Short-cuts are now only needed for reasons listed in the paper (Interior Designer) you referred to (e.g. quick navigation or clear insight in dependencies).
    Moving objects between application systems will not create Short Cuts in Designer 6i. You also do not have to generate forms again if you moved the module definition or one of the objects used by the module. Moving has no effect whatsoever on the actual contents of objects or values of properties. It only affects the repository structure, the composition of folders and potentially the access users have to objects, since that is organized by folder.
    One thing you do need to be aware of: if you are in the context of a workarea that contains an object X with a reference to object Y, and object Y is not in the workarea, you have a so called External Reference (also known as Dangling Reference). The consequence is that your object X is almost invalid: you can look at it, but you cannot generate it; you may also have a problem updating the object as long as it has these dangling references. In general, only update and generate objects that do not have dangling references. Note: the RON has the External References tool to find out about dangling references. This tool also allows you to include objects into the workarea, to resolve dangling references.
    best regards,
    Lucas

  • FBL3N and customer layout.

    Hi All,
    Why does the customer layout not exist in transaction FBL3N? This option is very important for my analysis. Do any other such parameters exist?
    Requesting you to share your knowledge with me.
    Thanks a lot in advance!!

    Hi,
    You will get the customer & vendor data in the reconciliation account. Check the sort key maintained for it in the master. If it's the vendor/customer number, then you will get it in the assignment field. No further configuration is required.
    But if that is not acceptable, then you have to copy the program behind FBL3N and modify it as per your requirement.
    But considering the cost & time involved, I prefer the standard practice of sort key maintenance given above, which is recommended by SAP.
    Please assign points if found helpful.

  • Best practice for handling custom apps tracks with regards to EP upgrade?

    Hi,
    We are currently in the process of upgrading from EP 6 to EP 7.0, and in that context we need to "move" our tracks containing development of custom J2EE components.
    What we've done so far is:
    1. Create a new version 7.00 of each software component we have developed, with correct EP 7 dependencies
    2. Create a new version 7.00 of our product in the SLD: Bouvet_EP
    3. Attach the new versions of the SCs to the new product version
    4. Create a new track with the SCs of version 7.00 along with relevant dependencies
    My question now is: how do we get the EP 6 component source code into the new track, so that we can change the dependencies of the DCs and build them again for EP 7.0?
    Should we somehow export the code from the old track, check it in and transport it? (And how do we then export the code from the track?)
    Regards
    Dagfinn

    Hi Dagfinn,
    This is a really interesting thread. I have not encountered this scenario till now. However, I can only guess.
    1. Copy the latest sca files generated for all the SCs in your track from one of the subdirectories of JTrans and place those sca files in the inbox directory of the target CMS. Check if these SCAs are available in the Check-In tab. I think this will not work because the SC version you have defined in the SLD for WAS 7.0 is different from the one in the SLD for WAS 6.40.
    2. A second and cruder method: create SCs in the source SLD similar to the ones created in the target SLD. Create a track for these SCs in the source system. Then create a track connection between the newly created track and the existing tracks. Forward all the sources to the target track. Then assemble this SC, copy the sca file and repeat the process above.
    I don't know, possibly this may click. Notes 877029 & 790922 also give some hints on migration of the JDI server.
    Please do keep this thread updated with your progress.
    Regards
    Sidharth

  • Best practice GeoRaster and MapViewer?

    Hi,
    I want to display raster files using Oracle GeoRaster and MapViewer. I have binary raster files and aerial photographs (24-bit).
    Until now I have loaded the data into the database with the following parameters:
    - Oracle_interleaving_type: BSQ
    - Oracle_raster_tile_size_x and -y: 2048
    - Oracle_compression: DEFLATE
    - Oracle_pyramid_max: null
    - Oracle_raster_pyramid_resampling: NN for binary data and CUBIC for aerial photographs
    The binary raster files can have about 15000x15000 pixels and the aerial photographs about 4000x4000 pixels.
    For the MapViewer configuration of a GeoRaster theme I use pyramid level NULL for aerial photographs and 1 for binary pictures.
    The MapViewer base maps have a tile size of 256x256 pixels and PNG as their format.
    Does anybody have experience with getting the best performance and best quality when displaying raster files?
    Regards,
    Sebastian

    Hi Jeffrey,
    further to that, I have the problem that MapViewer (Ver1033p5_B081010) doesn't render map tiles for all zoom levels with my posted settings. Before MapViewer P5 existed, I rendered the map tiles with MapViewer P3.
    With the latest version of MapViewer it is only possible to render map tiles down to zoom level 3; for levels two, one and zero it doesn't render the tiles. MapViewer shows the following error:
    WARNUNG: Failed to fetch tile image.
    Message:Failed to fetch tile image.
    Mon Feb 17 19:39:19 CET 2009
    Severity: 0
    Description:
    at oracle.lbs.mapcache.cache.TileFetcher.fetchTile(TileFetcher.java:209)
    at oracle.lbs.mapcache.cache.Tile.fetchTile(Tile.java:357)
    at oracle.lbs.mapcache.cache.Tile.fetchTile(Tile.java:338)
    at oracle.lbs.mapcache.cache.MapCache.getTile(MapCache.java:217)
    at oracle.lbs.mapcache.MapCacheServer.getMapImageStream(MapCacheServer.java:161)
    at oracle.lbs.mapcache.MCSServlet.doPost(MCSServlet.java:254)
    at oracle.lbs.mapcache.MCSServlet.doGet(MCSServlet.java:209)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:711)
    at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:368)
    at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:866)
    at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:448)
    at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:216)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:117)
    at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:110)
    at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
    at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
    at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
    at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
    at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    at java.lang.Thread.run(Thread.java:595)
    17.02.2009 19:39:19 oracle.lbs.mapcache.cache.Tile getOffLineTileImagePath
    FEINER: Waiting for blank tile.
    17.02.2009 19:39:20 oracle.lbs.mapcache.MCSServlet doPost
    SCHWERWIEGEND: LUT has improper length!
    Do you know why MapViewer shows this message?
    When I used MapViewer P3 I didn't have any problems generating map tiles.
    Regards,
    Sebastian

  • Best Practice: DAG and AutoDatabaseMountDial Setting

    Hi,
    I am working with management in my organization with regard to our DAG failover settings. By default, the failover setting is set to 'Good Availability' (missing 6 logs or less per DB). My organization did not feel comfortable with data loss, so we changed it to 'Lossless'.
    Of course, we had a SAN failure and we lost a DAG member, and nothing failed over to the surviving DAG member. Even EventID 2092 reported the same log sequence for many databases, yet the surviving DAG member did not mount. Example:
    Database xxxmdb28\EX-SRV1 won't be mounted because the number of lost logs was greater than the amount specified by the AutoDatabaseMountDial.
    * The log file generated before the switchover or failover was: 311894
    * The log file successfully replicated to this server was: 311894
    Only after the SAN was restored and the surviving server came back did another 2092 EventID get logged stating the log sequence was not 311895 (duh! - because the database got mounted again). We opened a support case with Microsoft and they suggested no databases mounted because the surviving DAG member could not communicate with the non-surviving member (which is crazy to me, because isn't that THE POINT of the DAG?). Maybe there is always a log file in memory (?) so AutoDatabaseMountDial set to Lossless will never automatically mount any database? Who knows.
    In any case, we are trying to talk ourselves back into setting it back to 'Good Availability'. Here is where we are at now:
    - 2-member DAG hosting 3000 mailboxes on about 36 databases (~18 active on each node) in the same AD site (different physical buildings) with a witness server
    - AutoDatabaseMountDial set to Lossless
    - Transport dumpster set to 1.5x the maximum message size
    What level of confidence can we have that we will not lose data with a properly configured transport dumpster and a setting of 'Good Availability'? I am also open to other suggestions, such as changing our active-active DAG to active-passive by keeping all active copies on one server.
    Also, has anyone experienced any data loss with 'Good Availability' and a properly configured transport dumpster?
    Thanks for the guidance.

    Personally I have not experienced loss in this scenario and have not changed this setting from "Good Availability". I know that setting the transport dumpster to 1.5x is the recommended setting. Also, there is a shadow queue for each transport server in your environment, which verifies the message reaches the mailbox before clearing the message.
    To make an example for mail flow (assuming multi-role servers; it still applies to split roles): you have Server1 and Server2, with an external user sending mail to a user on Server1. The message will pass through all your external hops, etc., and then get to the transport servers. If it is delivered to Server1, the message has to be sent to Server2 and then back to Server1 to be delivered to the mailbox, so that it hits the shadow queue of Server2. If the message hit Server2 first, then it would be sent to Server1 and then to the mailbox.
    If either of your servers is down for a period of time, then the shadow queue will try to resend the messages, and that is why you wouldn't have any data loss.
    Jason Apt, Microsoft Certified Master | Exchange 2010
    My Blog

  • Grid monitoring best practices: dos and don'ts

    Hi
    I would like to know your thoughts and experience, the dos and don'ts, especially on the following:
    * monitoring target hosts / Linux
    * using default templates
    What are the pitfalls?
    Another question about default templates: I would like to set a default template for target type DB, but a DB could be standby/RAC/standalone (I have different types of templates for each of those). How can I set a default template based on DB type? Can I create one template that has all metrics, regardless of whether the DB is RAC/primary/standby, and apply it to all by default? Is this how it should be done?
    Grid v11g
    Thanks

    Hi
    > But Grasshopper wants to know what is the minimum amount of statistics that correctly describes the data. I realize that this "depends", but that is not an answer.
    If you compare the data with the statistics, you know whether you have good statistics or not. Once you have a good set, you have to reduce the gathering load to the minimum while keeping the good statistics.
    > The Support Analyst was conceding that Oracle "out of the box" does not do a very adequate job of gathering the "minimum amount of statistics that correctly describes the data".
    I don't agree. In fact, "out of the box" Oracle:
    - gathers too many statistics (too many histograms)
    - is too "aggressive" with the estimate percent
    You end up with a gathering that is too slow and that provides too much information to the CBO (slowing it down as well).
    Usually, if you have a problem with the automatically installed job, it is because you have a large database and gathering statistics impacts your business.
    > But if we can come up with a policy or method whereby we can say with certainty that the extra 5-10% is definitely not an issue with statistics, or even decrease those errant queries to <1%, I would consider our tuning efforts a success.
    My motto is "Start with a simple solution! If it doesn't work, elaborate it.", i.e. it's simply overkill implementing a solution that is fine in 100% of the cases.
    In other words, I advise using the default gathering, possibly with a few parameter changes, in most cases. Only when there is a problem should you switch to manual gathering (and this, possibly, only for a few tables or schemas).
    There is a lot to understand in the CBO. I agree.
    Regards,
    Chris
