BPM design question

Hello folks,
I have this requirement and I have designed a BPM for it; I would appreciate any improvements or suggestions:
Req: Receive a message from Sender A; the message has a transaction ID associated with it. Send the message to Receiver B and from then on wait one hour to receive an acknowledgement from Receiver B for that transaction. If no ack is received, send a mail to the users saying that the transaction didn't make it through. If the ack arrives, don't do anything; just end the process.
Design:
1. Receive step (to receive the message and start the BPM)
2. Send step (to send the message to Receiver B)
3. Block, containing:
   a. Receive step (to receive the ACK from Receiver B)
   b. Deadline branch (with a wait time of 1 hour); inside the deadline branch there is a Send step to send the email, followed by a Control step to end the process.
Thank you! Any improvement to the design would be appreciated.
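For illustration only, here is a minimal non-BPM Java sketch of the same pattern the design above models: correlate by transaction ID, wait for the ack with a deadline, and alert on timeout. All class, method and mail details are hypothetical placeholders, not part of the original design.

import java.util.concurrent.*;

public class AckWatcher {
    // transaction IDs we are still waiting on, shared between the send path and the timer
    private final ConcurrentMap<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    // called right after the message is sent to Receiver B
    public void expectAck(String transactionId) {
        ScheduledFuture<?> deadline = timer.schedule(
                () -> { if (pending.remove(transactionId) != null) sendAlertMail(transactionId); },
                1, TimeUnit.HOURS);
        pending.put(transactionId, deadline);
    }

    // called when the acknowledgement for that transaction arrives
    public void ackReceived(String transactionId) {
        ScheduledFuture<?> deadline = pending.remove(transactionId);
        if (deadline != null) deadline.cancel(false); // success: nothing else to do, just end
    }

    private void sendAlertMail(String transactionId) {
        // hypothetical placeholder for notifying the users
        System.out.println("ALERT: no ack within 1 hour for transaction " + transactionId);
    }
}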

In my opinion, this is not a very good design. Keeping a BPM instance open for 1 hour is not recommended; if you have hundreds or thousands of such messages coming in, it would badly hit performance.
You haven't mentioned what kind of system your receiver system is. You may want to think about the following:
1. Why does it take so long to send the ack?
2. Could this ack be sent later as an async interface?
Regards,
Prateek

Similar Messages

  • BPM performance question

    Guys,
I do understand that ccBPM is very resource hungry, but what I was wondering is this:
Once you use BPM, does an extra step decrease the performance significantly, or does it just need slightly more resources?
More specifically, we have quite complex mappings in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from a performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
In SXMB_ADM you can set the timeout higher for sync processing.
Go to Integration Processing in SXMB_ADM and set the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more messages waiting now: raise SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
Make sure that your calling system does not have a timeout below the one you set in XI; otherwise yours will go on and finish while your partner may end up sending the message twice.
When you go for a BPM, the whole workflow has to come into action. For example, if your mapping lasts under 1 second without BPM, doing it in a BPM means the transformation step can take 2 seconds plus one second for the mapping (that's just an example). So the workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than the same scenario without BPM.
See the links below:
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
    BPM performance- data aggregation persistance
    Regards
    Chilla..

  • Design question: Scheduling a Variable-timeslot Resource

I originally posted this in general Java programming, because this seemed like a more high-level design discussion. But now I see some class design questions. Please excuse me if this thread does not belong here (this is my first time using the forum, save answering a couple of questions).
    Forum,
    I am having trouble determining a data structure and applicable algorithm (actually, even more general than the data structure -- the general design to use) for holding a modifiable (but more heavily read/queried than updated), variable-timeslot schedule for a given resource. Here's the situation:
    Let's, for explanation purposes, say we're scheduling a school. The school has many resources. A resource is anything that can be reserved for a given event: classroom, gym, basketball, teacher, janitor, etc.
    Ok, so maybe the school deal isn't the best example. Let's assume, for the sake of explanation, that classes can be any amount of time in length: 50 minutes, 127 minutes, 4 hours, 3 seconds, etc.
Now, the school has a base operation schedule, e.g. they're open from 8am to 5pm MTWRF and 10am to 2pm on Saturday and Sunday. Events in the school can only occur during these times, obviously.
    Then, each resource has its own base operation schedule, e.g. the gym is open from noon to 5pm MTWRF and noon to 2pm on sat. and sun. The default base operation schedule for any resource is the school which "owns" the resource.
    But then there are exceptions to the base operation schedule. The school (and therefore all its resources) are closed on holidays. The gym is closed on the third friday of every month for maintenance, or something like that. There are also exceptions to the available schedule due to reservations. I've implemented reservations as exceptions with a different status code to simplify things a little bit: because the basic idea is that an exception is either an addition to or removal from the scheduleable times of that resource. Each exception (reservation, closed for maintenance, etc) can be an (effectively) unrestricted amount of time.
    Ok, enough set up. Somehow I need to be able to "flatten" all this information into a schedule that I can display to the user, query against, and update.
    The issue is complicated more by recurring events, but I think I have that handled already and can make a recurring event be transparent from the application point of view. I just need to figure out how to represent this.
    This is my current idea, and I don't like it at all:
    A TimeSlot object, holding a beginning date and ending date. A data structure that holds list of TimeSlot objects in order by date. I'd probably also hold an index of some sort that maps some constant span of time to a general area in the data structure where times around there can be found, so I avoid O(n) time searching for a given time to find whether or not it is open.
    I don't like this idea, because it requires me to call getBeginningDate() and getEndDate() for every single time slot I search.
    Anyone have any ideas?

If I am correct, your requirement is to display a schedule showing the occupancy of a resource (open/closed/used/free and other kinds of information) on a timeline.
    I do not say that your design is incorrect. What I state below is strictly my views and should be treated that way.
I would not go by time-slot; instead, I would go by resource. For instance the gym, the class rooms (identified accordingly), the swimming pool etc. are all resources. Therefore (for the requirements you have specified), I would create a class, let's say "Resource", to represent all the resources. I would recommend two attributes at this stage ("name" & "identifier").
    The primary attribute of interest in this case would be a date (starting at 00:00hrs and ending at 24:00hrs.), a span of 24hrs broken to the smallest unit of a minute (seconds really are not very practical here).
    I would next encapsulate the availability factor, which represents the concept of availability in a class, for instance "AvailabilityStatus". The recommended attributes would be "date" and "status".
You have mentioned different statuses, for instance available, booked, closed, under maintenance etc. Each of these is a category. Let us say they are numbered from 0 to n (where n<128).
The "date" attribute could be a java.util.Date object, representing a date. The "status" is a byte array of 1440 elements (one element for each minute of the day). Each element of the byte array is populated with the number designation of the status (i.e. 0, 1, 2 ... n), where the number represents the status of that minute.
The "Resource" class would carry an attribute "availabilityStatus", an ordered vector of "AvailabilityStatus" objects.
    The object (all the objects) could be populated manually at any time, or the entire process could be automated (that is a separate area).
    The problem of representation is over. You could add any number of resources as well as any number of status categories.
    This is a simple solution, I do not address the issues of querying this information and rendering the actual schedule, which I believe is straight forward enough.
It is recognized that there is scope for optimization/design rationalization here; however, this is a simple and effective enough solution.
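A minimal Java sketch of the classes described above, following the reply's names; the status codes and the mark() helper are illustrative additions, not part of the original description.

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// one status byte per minute of a single day for one resource
class AvailabilityStatus {
    static final int MINUTES_PER_DAY = 1440;
    final Date date;                                 // the calendar day this slice covers
    final byte[] status = new byte[MINUTES_PER_DAY]; // 0..n, e.g. 0=closed, 1=available, 2=booked

    AvailabilityStatus(Date date) { this.date = date; }

    // mark a half-open minute range [fromMinute, toMinute) with a status code
    void mark(int fromMinute, int toMinute, byte code) {
        for (int m = fromMinute; m < toMinute; m++) status[m] = code;
    }
}

class Resource {
    final String name;
    final String identifier;
    final List<AvailabilityStatus> availabilityStatus = new ArrayList<>(); // ordered by date

    Resource(String name, String identifier) {
        this.name = name;
        this.identifier = identifier;
    }
}

Marking the gym as booked from 12:00 to 13:30 on a given day is then a single call such as daySlice.mark(720, 810, (byte) 2) on that day's AvailabilityStatus.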
    regards
    [email protected]

  • LDAP design question for multiple sites

    LDAP design question for multiple sites
    I'm planning to implement the Sun Java System Directory Server 5.2 2005Q1 for replacing the NIS.
    Currently we have 3 sites with different NIS domains.
    Since the NFS over the WAN connection is very unreliable, I would like to implement as follows:
1. 3 LDAP servers + a replica for each site.
2. A single username and password for every end user across those 3 sites.
3. Different auto_master, auto_home and auto_local maps for the three sites, so when a user logs in to a different site, the password is the same but the home directory is different (local).
    So the questions are
    1. Should I need to have 3 domains for LDAP?
2. If yes for question 1, then how can I keep the username and password in sync across the three domains? If no for question 1, then what is the DIT (Directory Information Tree) or directory structure I should use?
3. How do I make the automount maps work with LDAP as well as mount the local home directory?
I would really appreciate it if some LDAP experts could enlighten me on this project.

    Thanks for your information.
My current environment has 3 sites with 3 different NIS domain names: SiteA: A.com, SiteB: B.A.com, SiteC: C.A.com (A.com is our company domain name).
So every time I add a new user account I need to create it on the three NIS domains separately. Also, the password gets out of sync if a user changes the password on one site.
    I would like to migrate NIS to LDAP.
    I want to have single username and password for each user on 3 sites. However, the home directory is on local NFS filer.
    Say for userA, his home directory is /user/userA in passwd file/map. On location X, his home directory will mount FilerX:/vol/user/userA,
    On location Y, userA's home directory will mount FilerY:/vol/user/userA.
    So the mount drive is determined by auto_user map in NIS.
    In other words, there will be 3 different auto_user maps in 3 different LDAP servers.
    So userA login hostX in location X will mount home directory on local FilerX, and login hostY in location Y will mount home directory on local FilerY.
    But the username and password will be the same on three sites.
That's my goal.
Some LDAP experts have suggested MMR (multi-master replication) to me, but I am still not quite sure how to do MMR.
It would be appreciated if some LDAP guru could give me some guidelines as a starting point.
    Best wishes

  • Design question for database connection in multithreaded socket-server

    Dear community,
    I am programming a multithreaded socket server. The server creates a new thread for each connection.
The threads, and several objects which are instantiated by each thread, have to access database connectivity. Therefore I implemented a factory class which administers database connections in a pool. At this point I have a design question.
    How should I access the connections from the threads? There are two options:
a) Should I implement in my server class a new method like "getDatabaseConnection" which calls the factory class and returns a pooled connection to the database? In this case each object has to know the server object and has to call this method in order to get a database connection. That could become very complex, as I would have to save an instance of the server object in each object ...
    b) Should I develop a static method in my factory class so that each thread could get a database connection by calling the static method of the factory?
    Thank you very much for your answer!
    Kind regards,
    Dak

So your suggestion is to use a static method from a central class. But those static methods are not really object oriented, are they?
There's only one static method, and that's getInstance.
If I use the singleton pattern, I only create one instance of the database pooling class in order to configure it (driver, access data for the database and so on). The threads then use a static method of this class to get a database connection?
They use a static method to get the pool instance; getConnection is not static.
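A bare-bones Java sketch of that arrangement: only getInstance is static, while getConnection is an instance method on the pool. The class name, JDBC settings and the very naive pooling are hypothetical; a real application would normally use an existing pooling DataSource.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;

public final class ConnectionPool {
    private static ConnectionPool instance;            // the single, configured pool

    private final String url, user, password;
    private final Deque<Connection> idle = new ArrayDeque<>();

    private ConnectionPool(String url, String user, String password) {
        this.url = url; this.user = user; this.password = password;
    }

    // configured once, then every thread gets the same instance
    public static synchronized ConnectionPool getInstance() {
        if (instance == null) {
            instance = new ConnectionPool("jdbc:...", "dbuser", "secret"); // hypothetical settings
        }
        return instance;
    }

    // instance method: each worker thread calls getInstance().getConnection()
    public synchronized Connection getConnection() throws SQLException {
        return idle.isEmpty() ? DriverManager.getConnection(url, user, password) : idle.pop();
    }

    public synchronized void release(Connection c) {
        idle.push(c);
    }
}

Each worker thread (and any object it creates) then just calls ConnectionPool.getInstance().getConnection(), without needing a reference to the server object.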
    Kaj

  • BPM design -help

    Hi all,
I need your help in enhancing an existing BPM.
The initial state was:
file system -> BPM -> SAP R/3 (system A) and also SAP R/3 (system B)
where the BPM is: receive -> sync send(A) -> send(B).
The file system sends ONE vendor_key in XML format to the BPM.
The sync send is connected to R/3 system A through an RFC, RFC_A, which is sent one key field and returns the whole record of a Z-table through that very RFC_A.
This whole record is then sent to another RFC, RFC_B, through send(B), which updates a Z-table in the second SAP system B.
If the picture is clear, then the requirement is this:
This time I have to design a BPM which will receive one XML file from the file system, in which there will be multiple vendor keys instead of one.
The XML message must undergo a 1:n transformation in the BPM to create XML messages containing one vendor key each; then, in parallel, call RFC_A with each vendor key, get the response record from system R/3 A, and call RFC_B to send the record to system R/3 B.

Hi Abhishek,
Thanks for your response. Please find my BPM design details below:
    Block 1
    Step Name : Block1
    Exceptions : runTime
    Transformation
    Step Name : Transformation1
    Exception : System Error - runTime
    Exception Branch
    Exception Handler - runTime
    Control Step
Step Name : Control0
    Action : Throw Alert.
I have followed the blog: https://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/3465 [original link is broken]
    Still the alert is not getting triggered.
Could you please guide me on whether any change is needed in my design above?
    Regards
    Mani

  • Reg: BPM design

    Hi All,
I am a bit confused about the solution approach for this BPM design and I would like your suggestions for completing my task.
    Requirement :
The scenario is proxy to JDBC.
I will get two input files from the source system and I have to combine the information from the two files into a database table:
1) The first file will come only once and has the key field which I have to consider.
2) The second file may come more than once, based on the bookings at the source side (3/5/6/8 times).
With the current design I am able to capture only one output. My current design is:
    Fork
    Two receive steps ( correlation enabled in two steps based on key value)
    transformation step
    send step.
With the current design in BPM, the correlation completes after the first message satisfies the condition. Now I want to loop in such a way that the correlation also works for the second file's input (there is no input from the source to take a count of the files). Please guide me.
    Regards,
    Suman

    Hi,
This is what happens: the first receive step starts the BPM. If you have other receive steps afterwards, you have to set up the correlation condition.
Once a message is sent, the engine looks at all the started BPM instances and, based on the correlation condition, assigns it to one of them (that's why it is important to have the correlation condition unique per instance).
From what I understand, you have the first message and then you can get 1 to 4 more messages. So you always have at least two messages: the first one sets the correlation ID for the second and further messages. This correlation condition/ID is just a field in the messages that corresponds. Now, after you get the second message, you say that you already know how many other messages will follow; that's why I say put a loop block around the receive (which should have the same correlation as the other).
    Now the contents of the first message will be available to the other messages as long as you keep it in a separate container variable, and you can re-use it in the loop.
    Hope this helps,
    Horia

  • SOA real-time design question

    Hi All,
    We are currently working with SOA Suite 11.1.1.4. I have a SOA application requirement to receive real-time feed for six data tables from an external third party. The implementation consists of five one-way operations in the WSDL to populate the six database tables.
I have a design question. The organization plans to use this data across various departments, which requires replicating or supplying the data to other internal databases.
In my understanding there are two options:
1) Within the SOA application, fork the data hitting the web service out to the different databases.
My concern with this approach is: what if departments keep coming with such requests and I keep forking and supplying multiple internal databases with the same data? This feed has to be real-time; too much forking will impact performance and create unwanted dependencies on this critical link for data supply.
2) I could tell other internal projects to get the data from the populated main database.
My concern here is that, firstly, the data is pushed into this database flat, without any constraints, and it is difficult to query for specific data. This design has been purposely put in place to facilitate real-time performance. Also, asking every internal project to get data from the main database will affect its performance.
Please suggest which approach I should take (advantages/disadvantages). Apart from the above two solutions, is there any other recommended solution to mitigate the risks? This link between our organization and the external party is somewhat like a lifeline for BAU, so I certainly don't want to create more dependencies and overhead.
    Thanks

I had tried implementing the JMS publisher/subscriber pattern before; unfortunately, the performance was not as good as writing directly to the DB adapter. I feel the organization's SOA infrastructure is not set up correctly to cope with the number of messages coming through from the external third party. Our current setup consists of three WebLogic servers (Admin, SOA, BAM), all running on only 8 GB of physical RAM on one machine. Is there an Oracle guideline for setting up infrastructure for a SOA application receiving roughly 600,000 messages a day? I am using SOA 11.1.1.4. The JMS publisher/subscriber pattern just does not cope, and I see a significant performance lag after a few hours of running. The JMS server used was WebLogic JMS.
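For context, the publishing side of the pattern mentioned above looks roughly like this in plain JMS 1.1 (the JNDI names are hypothetical; a WebLogic-hosted connection factory and topic, and one durable subscriber per target database, are assumed):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class FeedPublisher {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // assumes WebLogic JNDI properties are configured
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/FeedConnectionFactory"); // hypothetical
        Topic topic = (Topic) ctx.lookup("jms/FeedTopic");                                  // hypothetical

        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);
            TextMessage msg = session.createTextMessage("<feedRow>...</feedRow>");
            producer.send(msg); // each subscriber (one per consuming database) receives its own copy
        } finally {
            con.close();
        }
    }
}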
    Thanks

  • Workflow design questions: FM vs WF to call FM

Here are a couple of workflow design questions.
1. We have work item 123 that allows the user to navigate to a custom transaction TX1. The user can make changes in TX1. At save, or at a user command in TX1, the program will call a function module (FM1) to delete WI 123 and create a new WI to send to a different agent.
Since work item 123 is still open and locked, FM1 cannot delete it immediately; it has to use a DO loop to check whether work item 123 has been dequeued before performing the WI delete.
Alternative: instead of calling FM1, the program can raise an event which calls a new workflow, which has one step/task/new method which calls FM1. Even with this alternative, work item 123 can still be locked when the new workflow's task/method calls FM1.
I do not like the alternative, which calls the same FM1 indirectly via a new workflow/step/task/method.
2. When an application object changes, the user exit will call an FMx which is related to workflow. The ABAP developer does not want to call the FMx directly; she wants to raise an event which calls a workflow .. step .. task .. method .. FMx indirectly. This way any commit that happens in the FMx will not affect the application object's COMMIT.
My recommendation is to call the FMx using 'in Update task' so that the FMx is only called after the COMMIT of the application object.
    Any recommendation?
    Amy

    Mike,
    Yes, in my first design, the TX can 1. raise a terminating event for the existing workitem/workflow and then 2. raise another event to call another workflow.   Both 1 and 2 will be in FM1. 
    Then the design question is: Should the FM1 be called from TX directly or should the TX raise an event to call a new workflow which has 1 step/task, which calls a method in the Business object, and the method calls the FM1?
In my second design question, when an application object changes, the user exit will call an FMx which is related to workflow. The ABAP developer does not want to call the FMx directly; she wants to raise an event which calls a workflow, which has one step/task, which calls a method, which calls the FMx indirectly. This way any commit that happens in the FMx will not affect the application object's COMMIT.
My recommendation is to either call the FMx using 'in Update task', so that the FMx is only called after the COMMIT of the application object, or raise an event to call a receiver FM (FMx).
    Thanks.
    Amy

  • Method design question...and passing object as parameter to webserice

I am new to web services... one design question.
I am writing a web service to check whether a user is a valid user or not. The users are categorized as Member, Admin and Professional. For each user type I have to hit a different data source to verify.
I can get this user type as a parameter. What is the best approach to define the method?
Should I have one single method, isValidUser, which all web service clients can always call, providing the user type, or should I define a method for each type, like isValidMember, isValidAdmin?
One more thing... in the future the requirement may change for Professional to have more required fields; in that case the parameters need to have more attributes. But on the client side there is not much change if I have a single isValidUser method... all they have to do is pass additional values:
isValidUser(String username, String usertype, String[] userAttributes) {
    if ("member".equals(usertype)) {
        // call member code
    } else if ("professional".equals(usertype)) {
        // call professional code
    } else if ("admin".equals(usertype)) {
        // call admin code
    } else {
        // throw error
    }
}
or
isValidMember(String username, String[] userAttributes) {
    // call member code
}
One last question: can a parameter be passed as an object to a web service, like a USER object?
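On that last question: a complex object can generally be passed as long as it maps to a schema type, e.g. a bean-style class. A hypothetical JAX-WS sketch (assuming a Java service, which the thread does not state; in a real project the bean and the service would be separate public classes):

import javax.jws.WebMethod;
import javax.jws.WebService;

// bean-style class; JAX-WS/JAXB exposes it as a complex type in the WSDL
class User {
    private String username;
    private String usertype;
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
    public String getUsertype() { return usertype; }
    public void setUsertype(String usertype) { this.usertype = usertype; }
}

@WebService
public class UserService {
    @WebMethod
    public boolean isValidUser(User user) {
        // dispatch on user.getUsertype() to the matching data source, as in the pseudocode above
        return false; // placeholder
    }
}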

    First of all, here is my code
    CREATE OR REPLACE
    TYPE USERCONTEXT AS OBJECT
    user_login varchar2,
    user_id integer,
    CONSTRUCTOR FUNCTION USERCONTEXT (
    P_LOGIN IN INTEGER
    P_ID_ID IN INTEGER
    ) RETURN SELF AS RESULT
Either your type won't compile or this is not the real code.

  • Aggregation level - design  question

    Hi, All
    we are in BI-IP ( Netweaver 2004s SPS16).
    I have a design question for this scenario.
The user needs to plan an amount for a duration (start period and end period), e.g. Jan 2008 - Dec 2008 (001.2008 - 012.2008) = 12000.
We need to distribute this to the periods equally: 001.2008 = 1000, 002.2008 = 1000 .... 012.2008 = 1000.
If the user changes the period amounts, it should be reflected back in the duration amount.
Please suggest the design for the aggregation levels to achieve this.
    Thanks in advance.
    velu

    Hello Velu,
As the name suggests, creating an "aggregation level" will only give you aggregation. What you require is called disaggregation or distribution; this cannot happen automatically. You will either have to use the disaggregation feature in input-ready queries, use the distribution planning function, or create your own planning function using FOX or an exit.
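The distribution arithmetic itself is simple. A small illustrative sketch outside BI-IP (hypothetical helper names), just to show the equal split and the roll-up in the other direction:

public class DistributionExample {

    // split a duration amount equally over the periods, putting any remainder into the last period
    static long[] distribute(long total, int periods) {
        long[] result = new long[periods];
        long share = total / periods;
        for (int i = 0; i < periods; i++) result[i] = share;
        result[periods - 1] += total - share * periods;
        return result;
    }

    // the reverse direction: period-level changes roll back up into the duration amount
    static long aggregate(long[] periodAmounts) {
        long total = 0;
        for (long amount : periodAmounts) total += amount;
        return total;
    }

    public static void main(String[] args) {
        long[] perPeriod = distribute(12000, 12);   // 001.2008 .. 012.2008 = 1000 each
        System.out.println(aggregate(perPeriod));   // 12000 again
    }
}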

  • Centralized WLC Design Question

    Dears,
In my scenario, I am designing a centralized WLC deployment. I have 30 APs in Building X (200 users) and 20 APs in Building Y (150 users). I am planning to install an HA WLC cluster where the primary and secondary WLCs will reside in physically different data centers A & B.
I have a wireless design question and I am not able to get clear answers. Please refer to the attached drawing and answer the following queries:
If Building X users want to talk to Building Y users, how will the control and data traffic flow between Building X and Y? Would all the traffic go to the primary WLC from Building X APs first and then be re-routed back to Building Y APs? Can I achieve direct switching between Building X and Y APs without going toward the WLC?
If Building X and Y users want to access the internet, how would the traffic flow? Would the X and Y APs tunnel all the traffic towards the WLC, which then routes it to the internet gateway? Is it possible for Building X and Y APs to send traffic directly towards the internet gateway without going to the controllers?
I have planned to put the WLCs at physically different locations in different data centers A & B. Is it recommended to have such a design? What would be the failover traffic volume if the primary WLC goes down and the secondary controller takes over?
My reason to go for a centralized deployment is that I want to achieve centralized authentication with local switching. Please give your recommendations and feedback.
    Regards,
    Rameez

If Building X users want to talk to Building Y users, how will the control and data traffic flow between Building X and Y? Would all the traffic go to the primary WLC from Building X APs first and then be re-routed back to Building Y APs? Can I achieve direct switching between Building X and Y APs without going toward the WLC?
          Traffic flows to the WLC that is the primary for the APs; then it is routed over your network.
If Building X and Y users want to access the Internet, how would the traffic flow? Would the X and Y APs tunnel all the traffic towards the WLC, which then routes it to the Internet gateway? Is it possible for Building X and Y APs to send traffic directly towards the Internet gateway without going to the controllers?
          The WLC isn't a router, so you would have to put the Internet traffic on a subnet and route it.
I have planned to put the WLCs at physically different locations in different data centers A & B. Is it recommended to have such a design? What would be the failover traffic volume if the primary WLC goes down and the secondary controller takes over?
Like I mentioned earlier, the two HA WLCs have to be on the same Layer 2 subnet in order for you to use HA. The guide mentions an Ethernet cable connecting the HA ports on both WLCs.
    Thanks,
    Scott
Help out others by using the rating system and marking answered questions as "Answered".

  • BPM design pattern

Can anyone provide information on BPM design patterns and give example scenarios?
    thank you

Hi Vicky,
Check these links:
    Design Patterns in Exchange Infrastructure
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f0ad6de0-7cd6-2910-f894-dd7fe18b6fbf
    BPM with Patterns explained Part-1
    regards
    reddy

  • Dreamweaver design question

Hi all. I'm new to the forum and had a design question. My site took about 3 weeks to complete, and after finishing what I thought was a pretty error-free website I noticed that Dreamweaver 8 was coming up with numerous errors that matched http://validator.w3.org's scans. My question is this: why does Dreamweaver (regardless of the release) let the designer build the website without pointing out the errors as they go along, with simple instructions on how to fix them? As an example, my meta tags
    <META NAME="keywords" CONTENT="xxxxxxx">
    <META NAME="description" CONTENT="xxxxxxxx">
    <META NAME="robots" CONTENT="xxxxx">
    <META NAME="author" CONTENT="xxxxxx">
    <META NAME="copyright" CONTENT="xxxxxx">
    all had to be changed over to
<meta name="keywords" CONTENT="xxxxxxxxxxxxx">
    <meta name="description" CONTENT="xxxxxxx">
    <meta name="robots" CONTENT="xxxxxx">
    <meta name="author" CONTENT="xxxxxxxx">
    <meta name="copyright" CONTENT="xxxxxxxx">
all because Dreamweaver didn't tell me that the <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
didn't fit the original design. Now my site (if you wish to view the code) is www.gamblingwhore.com, and if you look at the page source you will see that the code has been corrected in DW 8 but still shows more than 30 errors on http://validator.w3.org. Does Dreamweaver not have a basic tool available to fix these errors without such hassle? It's not just my site either: many sites built in Dreamweaver can be checked with the http://validator.w3.org website only to find 20-100 different errors.
Dreamweaver's creators need to focus on these errors because they hinder SEO and create a lot of extra work.
    Thank you

    The w3c and XHTML have come a ways since the release of Dreamweaver 8 (I used it in late 2004 and 2005).
    Dreamweaver 8 will build transitional XHTML files as well as old style single tag HTML. It all depends on the personal preferences of the designer.
Just for kicks, go to, say... 20 random websites and see just how many get a green light when you validate them. If it's half, you're lucky. This page doesn't even validate.
Dreamweaver has a menu option (at least in CS3 and CS4) under the Commands menu to "Clean Up HTML" or "Clean Up XHTML", depending on what you're building. I make a point of running that command as I build, along with Apply Source Formatting.
I also use a local validator program to check my code before putting anything up.
    That's why they call it WYSIWYG software.
    If it did everything perfectly for everyone every single time, good web designers would find themselves out of work.

  • OSPF Area Addition - Design Question

    Hello,
I have a design question regarding OSPF. I am looking to add a new OSPF area (area 1). The area will live on two core routers and two distribution routers. Can you please look at the attached pictures and tell me which design is better?
    I would like to be able to connect Core-01 to Dist-01 and Core-02 to Dist-02 with a connection between Dist-01 and Dist-02, but this will result in a discontiguous area, correct?
    Thanks,
    Lee

    I would say that the more common design is to have just backbone area links between the core routers. But there is no real issue with having an area 1 link between them...
If I were you, I would not make the area a totally stubby NSSA. Here are my reasons:
    - you will get sub-optimal routing out of the area since you have two ABRs and each distribution router will pick the closest one of them to get out to the backbone even though it may be more optimal to use the other one
    - in an NSSA case, one of the two ABRs will be designated as the NSSA translator, which means that if you are doing summarisation on the ABRs, all traffic destined for these summarised routes will be drawn to the area through that one ABR.
    Paresh
