Agent determination best practices

Hi, workflow dev team!
Another question about workflow best practices.
I am building a workflow. The first step of each workflow, at least as I see it, is to determine the agents for all steps in the workflow and store them in a multiline WF-container element. After that, this WF-container data is used to execute each step inside the workflow.
The questions are:
1. Suppose I filled a WF-container with a list of all possible agents and their positions (username & position) for the different steps inside this workflow, in order to provide more user-friendly approve/decline notifications. As I understand it, I can access a specific line of the multiline container with &AGENTS[&INDEX&]& inside an EXPRESSION field (not via ABAP code in a class method), but how can I access a specific column of the WF-container element? E.g. &AGENTS[1]& will return «USJOHN — Junior Manager»; is it possible to retrieve the username and the position from the multiline element and pass them to the task as two separate container elements?
2. Is it possible to query a specific line of a multiline container element according to some logical condition inside an EXPRESSION field (not via ABAP code in a class method), or do I have to implement that logic in an ABAP class method?
And the last question is probably less connected to the ones asked above.
3. If I need to define agents for the task, what is the best approach to do that:
— define the task as a General Task and supply the relevant agents as a multiline container element via the EXPRESSION field; or
— work with Agent Assignment (try to implement the agent determination logic there) and not define the task as a general one.
Thanks.

Ronen already had great input, but here are some additional points:
- In general I would try to avoid the "let's get all the agents at the beginning of the workflow into a multiline container, and try to pick the correct ones for the individual steps" approach. Of course, if your aim is just to display the list of agents somewhere (task description etc.), then this approach might be OK.
- Rules are better than expressions in agent determination (at least in most cases). If the approvers change during the process, you can (as a WF admin) just use the "Execute agent rule again" function, and the system will find the correct new approver for the work item. Otherwise you need to manipulate the container element(s) manually (and first figure out who the approver is).
Answer for 3: You don't need to use only expressions/container elements. Check whether you can use rules. In some rare cases you might consider using the task's agent assignment (the same place where you set the task as a general task). Let's say that you want to send the work item to all users that have a certain authorization role. You can set the role as a "restricting entity" at the task level (the same place where you set the general task). This means that only the users who have the role are possible agents for the task. If you then leave the agent determination part of the workflow template empty (no expressions, no rules etc.), you are done!
Regards,
Karri

Similar Messages

  • NSM Event Agent for AD location - Best practice

    Hello,
    We are currently designing our NSM 3.1 for AD implementation and would like some guidance with regard to installing the NSM Event Agent. We have come up with two options:
    The first option is to install the NSM Event Agent on a Domain Controller where new user accounts are provisioned.
    The second option is to install the NSM Event Agent on a server with the other NSM components.
    The argument for option 1 is that NSM will be notified as soon as an account is created.
    The argument for option 2 is that MS best practice is that no other software should be installed on a DC and that the NSM Event Agent will perform a network request to talk to the nearest domain controller to obtain a list of changes since it last connected.
    Is there any preferred option, or does it not matter?
    Regards,
    Jonathan

    Jonathan,
    Unlike eDirectory event monitoring, Active Directory event monitoring is
    accomplished with a polling mechanism. Therefore putting your Event
    Monitor on the domain controller will not significantly increase
    performance. As long as the Event Monitor is in a site with a domain
    controller, it should pick up events as quickly as it can.
    For further reading on AD sites and domain/forest topology we recommend
    reviewing http://technet.microsoft.com/en-us/l.../cc755294.aspx.
    Remember that for AD, NSM requires only one Event Monitor per domain
    (and in fact you'll only be able to authorize one Event Monitor per
    domain through the NSM Admin client.) However, deploying a second Event
    Monitor as a backup may be helpful. When the AD Event Monitor is
    installed and configured for the first time, it first has to build a
    locally-cached replica of the domain it resides in. In a large domain
    this can take a long time, so having a second EM already running, which
    can be authorized immediately if the primary EM goes down, will ensure
    that you catch up with events in AD more quickly.
    -- NFMS Support Team

  • What is best practice for deploying agent(10204) on RAC 9i

    Hello,
    What would be the best practice for deploying the agent (10204) on RAC 9i? Should the agent be deployed on each node, or should it be deployed on the cluster file system? What are the advantages/disadvantages of deploying on individual nodes vs. on the cluster file system? Please advise. Thank you in advance.

    Please use the Agent Push application to deploy the agent on all the nodes in one shot.
    Please refer to the OBE:
    http://www.oracle.com/technology/obe/obe10gemgc_10203/agentpush/agentpush.htm

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    To be honest, you have to work around your other deploys, etc. The ZCM agent isn't "aware" of other deploys going on. For example, ZPM doesn't care that you're doing Bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles on the other hand, with System Update, are not so forgiving. Especially if you have the agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) Halt all software rollouts/patching as best we can
    b) Our software deploys (bundles) are on event: user login. Typically the system update is on device refresh OR a scheduled time, and is device-associated.
    If possible, I'd suggest that you use WOL plus the system update, and voila.
    Or, if no WOL is available, then tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and set up your system updates for that night with auto-reboot enabled. That worked well for us.
    But otherwise the 3 components of ZCM (Bundles, ZPM, System Update) don't know/care about each other, AFAIK.
    --Kevin

  • Looking for best practice / installation guide for grid agent for RAC

    I am looking for a best practice / installation guide for the grid agent for RAC, running on Windows Server.
    Thanks.

    Please refer :
    MOS note Id : [ID 378037.1] -- How To Install Oracle 10g Grid Agent On RAC
    http://repettas.wordpress.com/2007/10/21/how-to-install-oracle-10g-grid-agent-on-rac/
    Regards
    Rajesh

  • SCOM 2012 Agent - Best Practices with Base Images

    I've read through the SCOM 2012 agent installation methods TechNet article, as well as how to install the SCOM 2012 agent via the command line, but I don't see any best practices with regard to how to include the SCOM 2012 agent in a base workstation image. My understanding is that the SCOM agent's unique identifier is created at the time of client installation; is this correct? I need to ensure that this is a supported configuration before I can recommend it.
    If it is supported, and it does work the way I think it does, I'm trying to find a way to strip out the unique information so that a new client GUID will be created after the machine is sysprepped, similar to how the SCCM client should be stripped of unique data when preparing a base image.
    Has anyone successfully included a SCOM 2012 (or 2007, for that matter) agent in their base image?
    Thanks,
    Joe

    Hi
    It is fine to build the agent into a base image but you then need to have a way to assign the agent to a management group. SCOM does this via AD Integration:
    http://technet.microsoft.com/en-us/library/cc950514.aspx
    http://blogs.msdn.com/b/steverac/archive/2008/03/20/opsmgr-ad-integration-how-it-works.aspx
    http://blogs.technet.com/b/jonathanalmquist/archive/2010/06/14/ad-integration-considerations.aspx
    http://thoughtsonopsmgr.blogspot.co.uk/2010/07/active-directory-ad-integration-when-to.html
    http://technet.microsoft.com/en-us/library/hh212922.aspx
    http://blogs.technet.com/b/momteam/archive/2008/01/02/understanding-how-active-directory-integration-feature-works-in-opsmgr-2007.aspx
    You have to be careful in environments with multiple forests if no trust exists.
    http://blogs.technet.com/b/smsandmom/archive/2008/05/21/opsmgr-2007-how-to-enable-ad-integration-for-an-untrusted-domain.aspx
    http://rburri.wordpress.com/2008/12/03/untrusted-ad-integration-suppress-misleading-runas-alerts/
    You might also want to consider group policy or SCCM as methods for installing agents.
    Cheers
    Graham

  • BEA SWIFT Acknowledgement Agent Best Practices

    Does anyone have any best practices or suggestions for extending the XDAgent class? Currently the default agent is XDSWIFTACKAgent, which does not appear to implement error handling (the message logged is: "onError" not implemented in agent).
    Can anyone provide any details on the Agent classes?
    Thanks,
    Gordon


  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab) where the level of each dimension is set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates, i.e. getting them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance, I have about 10-12 different dimensions: should all of them always be connected to each fact table, either on Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    > For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
    It is not necessary to connect all dimensions; it depends on the report that you are creating. But as a best practice you should maintain all of them at the Detail level when you define join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.ProductName level then you should use the Detail level; otherwise use the Total level (at the Product, Employee level).
    Get Levels (available only for fact tables) changes the aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database, originally designed in 2000, that now has a SQL back-end.  The database has not yet been converted to a newer format such as Access 2007, since at least 2 users are still on Access 2003.  It is fine if suggestions will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution.  We have thousands of products each with a single identifier.  There are customers who provide us regular sales reporting for what was
    sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important.  This reporting may or may not include all of our product identifiers.  The reporting is typically based on calendar-defined timing although we have
    some customers who have their own calendars which may not align to a calendar month or calendar year so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report.  Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named.  The
    product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week.  Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand,
    returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time
    period,  sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's
    current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period).  Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time.  Is it possible to
    set-up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or
    categories across specific weeks/months/years, etc.  We do have a separate product table so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries.  We do need to maintain
    the sales reporting information indefinitely.
    I welcome any suggestions, best practice or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period).  Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time.  Is it possible to
    set-up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
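    As a rough illustration of the kind of normalized structure this question is pointing toward (a sketch only: the table and column names below are invented for illustration, and the design engagement described above would refine them), one narrow fact table can replace the per-customer/per-period spreadsheets:
       -- One row per customer, product, reporting period and measure,
       -- instead of one wide spreadsheet per customer/division/time period.
       CREATE TABLE CustomerSalesFact (
           CustomerSalesFactID INT IDENTITY(1,1) PRIMARY KEY,
           CustomerID          INT           NOT NULL,  -- FK to a Customers table
           ProductIdentifier   VARCHAR(50)   NOT NULL,  -- joins to the existing product table
           PeriodType          CHAR(1)       NOT NULL,  -- W = week, M = month, Q = quarter, Y = year
           PeriodStartDate     DATE          NOT NULL,  -- supports customer-specific calendars
           PeriodEndDate       DATE          NOT NULL,
           MeasureName         VARCHAR(100)  NOT NULL,  -- e.g. 'On Hand', 'Sell-Through Units'
           MeasureValue        DECIMAL(18,4)     NULL
       );
    Storing one measure per row keeps the 4-30 varying columns per customer flexible; pivot queries (or Access crosstabs) can turn the measures back into columns for reporting.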

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same Oracle application server.
    Defining an agent for each?
    A gateway for all?
    The BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM with usernames and passwords, but what is the best practice for establishing security for each of the low-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which may use one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Agent determination issue

    Hello Experts,
      We developed a custom workflow a year back, and for the last two months we have been seeing an issue with agent determination randomly.  We created a position with 3 users and assigned the position in the agent assignment.  Sometimes the work item is sent to users other than the 3 users assigned to the position.  We are not able to trace why it behaves like this.  Any help would be appreciated.
    with best regards
    K. Mohan Reddy

    If this happens randomly then we can only guess.
    Please check the workflow log in SWIA to see whether someone has forwarded the work item.
    There could also have been a change in the position assignment, or a substitute may have been maintained.
    Also check whether the users are maintained at the task level, and whether the task attribute is set to "General forwarding not allowed".
    Thanks
    Arghadip

  • What is the best practice of deleting large amount of records?

    hi,
    I need your suggestions on the best practice for regularly deleting a large number of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove all records older than 3 days, every day.
    For an on-premise SQL Server I could use a SQL Server Agent job, but since SQL Azure does not yet support SQL Agent jobs, I have to use a web job which is scheduled to run every day to delete all old records.
    To prevent table locking when deleting too many records at once, my web job code limits the number of deleted records to 5000 per run, with a batch delete count of 1000, each time it calls the delete stored procedure:
    1. Get the total number of old records (older than 3 days)
    2. Get the total number of iterations: iterations = (total count / 5000)
    3. Call the SP in a loop:
    for (int i = 0; i < iterations; i++)
        EXEC PurgeRecords @BatchCount = 1000, @MaxCount = 5000
    And the stored procedure is something like this:
     -- Parameters passed in by the caller: @BatchCount INT, @MaxCount INT
     DECLARE @table TABLE (RecordId INT)  -- key list; assumes RecordId is an integer
     BEGIN
      -- Collect the keys of up to @MaxCount records older than 3 days
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
     END
     DECLARE @RowsDeleted INTEGER
     SET @RowsDeleted = 1
     WHILE (@RowsDeleted > 0)
     BEGIN
      WAITFOR DELAY '00:00:01'
      -- Delete the collected keys in batches of @BatchCount rows
      DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
      SET @RowsDeleted = @@ROWCOUNT
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long...
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time is around 11 hours.
    Any suggestions to improve the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. Peak-time inserts (avgN) are N times the average (avg); e.g. suppose the average per hour is 10,000 and the peak per hour is 5 times that, which gives 50,000. This doesn't have to be precise.
    3. The desirable maximum number of records to delete per batch is 5,000; this doesn't have to be exact.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not even and peak inserts can be 5 times the average per period, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop that, in iteration I (I running from 1 to 1,000), deletes records with creation time < today - (3 days - 4.32 * I minutes), i.e. the cutoff advances by 4.32 minutes per iteration.
    In this way the number of records deleted in each batch is not even and not known in advance, but it should mostly stay within 5,000; you run a lot more batches, but each batch is very fast.
    Frank
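    A minimal T-SQL sketch of the rolling-cutoff loop described above, assuming the [MyTable] / [CreateTime] names from the question and an index on [CreateTime] (the slice size comes from the sizing steps; adjust it to your own numbers):
       -- Advance the delete threshold in small time slices so each DELETE
       -- touches only a narrow, index-friendly range of rows.
       DECLARE @Boundary     DATETIME = DATEADD(DAY, -3, GETDATE());  -- keep the last 3 days
       DECLARE @SliceSeconds INT      = 259;                          -- ~4.32 minutes per batch
       DECLARE @Cutoff       DATETIME;

       SELECT @Cutoff = MIN([CreateTime]) FROM [MyTable];             -- oldest deletable row

       WHILE @Cutoff IS NOT NULL AND @Cutoff < @Boundary
       BEGIN
           SET @Cutoff = DATEADD(SECOND, @SliceSeconds, @Cutoff);
           IF @Cutoff > @Boundary SET @Cutoff = @Boundary;

           DELETE FROM [MyTable] WHERE [CreateTime] < @Cutoff;        -- small range per pass
           WAITFOR DELAY '00:00:01';                                  -- brief pause between batches
       END
    Each pass deletes only the rows that fall inside one time slice, so the batch size follows the insert rate instead of being fixed.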

  • Best Practices for Defining NDS Java Projects...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications.  We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up/defining Java projects.  For example, what is and when do you define Tracks, Software Components, Development Components, etc.  All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see).  We are also struggling with how the DTR and activities tie in to those components.
    If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance.  This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi Peggy,
    In the Component Model we divide software projects into small components. Components can use other components in a well-defined manner.
    A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as "sources" in a repository.
    A development component can be defined as a frame shared by a number of objects which are part of the software.
    Software components combine development components (DCs) into larger units for delivery and deployment.
    A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of deliverables used by subsequent tracks.
    The Design Time Repository (DTR) is for versioned source code management, distributed development of software in teams, and transport and replication of sources.
    You can also find a lot of support in SDN for the above concepts, with tutorials.
    Refer to this link for an overview of the Java Development Infrastructure (JDI):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
    To understand it further, see Working with NetWeaver Development Infrastructure:
    http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
    In the above link you can find all the concepts clearly explained.You can also find the required tutorials for development.
    Regards,
    Vijith

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set an OracleVM farm, and I have a question about "best practice" for
    configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA capable VMs using OCFS2 (on top of NFS.)
    I'm trying to decide between 2 possible configurations. The first would keep physical separation
    between the mgt/storage networks and the DomU networks. The second would just trunk
    everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit
    from dedicated bandwidth and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    > Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding

  • Best practices with sequences and primary keys

    We have a table of system logs that has a column called created_date. We also have a UI that displays these logs ordered by created_date. Sometimes two rows have the exact same created_date down to the millisecond and are displayed in the UI in the wrong order. The suggestion was to order by primary key instead, since the application uses an Oracle sequence to insert records, so the order of the primary key will be chronological. I felt this may be a bad idea as a best practice, since the primary key should not be used to guarantee chronological order; although in this particular application's case (it is not a multi-threaded environment) it will work, so we are proceeding with it.
    The value for created_date is NOT set at the database level (as SYSDATE) but rather by the application when it creates the object, which is then persisted by Hibernate. In a multi-threaded environment, thread A could create the object and then get blocked by thread B, which is able to create its object and persist it with key N, after which control returns to thread A and it persists its object with key N+1. In this scenario thread A has an earlier timestamp but a larger key, so it will be ordered before thread B, which is in error.
    I like to think of primary keys as solely something to be used for referential purposes at the database level rather than inferring application level meaning (like the larger the key the more recent the record etc.). What do you guys think? Am I being too rigorous in my views here? Or perhaps I am even mistaken in how I interpret this?

    > I think the chronological order of records should be using a timestamp (i.e. "order by created_date desc" etc.)
    Not that old MYTH again! That has been busted so many times it's hard to believe anyone still wants to try to do that.
    Times are in chronological order: t1 is earlier (SYSDATE-wise) than t2 which is earlier than t3, etc.
    1. at time t1 session 1 does an insert of ONE record and provides SYSDATE in the INSERT statement (or using a trigger).
    2. at time t3 session 2 does an insert of ONE record and provides SYSDATE
    (which now has a value LATER than the value used by session 1) in the INSERT statement.
    3. at time t5 session 2 COMMITs.
    4. at time t7 session 1 COMMITs.
    Tell us: which row was added FIRST?
    If you extract data at time t4 you won't see ANY of those rows above since none were committed.
    If you extract data at time t6 you will only see session 2 rows that were committed at time t5.
    For example if you extract data at 2:01pm for the period 1pm thru 1:59pm and session 1 does an INSERT at 1:55pm but does not COMMIT until 2:05pm your extract will NOT include that data.
    Even worse - your next extract will pull data for 2pm thru 2:59pm and that extract will NOT include that data either, since the SYSDATE value in the rows is 1:55pm.
    The crux of the problem is that the SYSDATE value stored in the row is determined BEFORE the row is committed but the only values that can be queried are the ones that exist AFTER the row is committed.
    About the best you, the user (i.e. not ORACLE the superuser), can do is to
    1. create the table with ROWDEPENDENCIES
    2. force delayed-block cleanout prior to selecting data
    3. use ORA_ROWSCN to determine the order that rows were inserted or modified
    As luck would have it there is a thread discussing just that in the Database - General forum here:
    ORA_ROWSCN keeps increasing without any DML
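    A minimal sketch of the ROWDEPENDENCIES / ORA_ROWSCN approach listed above (Oracle SQL; the table and column names are illustrative, not taken from the original poster's schema):
       -- ROWDEPENDENCIES must be chosen at table creation time: it stores an SCN per row
       -- rather than per block, so ORA_ROWSCN can reflect individual row changes.
       CREATE TABLE system_logs (
           log_id        NUMBER         PRIMARY KEY,   -- populated from a sequence
           message       VARCHAR2(4000),
           created_date  TIMESTAMP
       ) ROWDEPENDENCIES;

       -- Order rows by (approximate) commit order instead of by the
       -- application-supplied timestamp or the sequence-generated key.
       SELECT log_id, message, created_date
       FROM   system_logs
       ORDER  BY ORA_ROWSCN;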
