Biztalk Architecture Question - EDI Solution Setup

I have to set up an EDI solution.
There are multiple trading partners with multiple transactions (850, 810, 832, and 867).
Please help me set up the solution and project structure
so that in future I will be able to add trading partners and transactions.

Hi Mohit,
For one of my clients, what we did was use a RoleLink and Party for dynamic resolution of the party and sending of the message.
So we:
Receive a message; this message will have an ID which resolves the source.
Map it to a canonical schema.
Have an orchestration process the mapped canonical schema.
Have code in the orchestration to resolve the destination, using the Party.
Have a RoleLink, and connect the RoleLink to the send shape.
Configure the Party details with qualifiers to resolve the destination details.
Configure the send out to the party.
In the send port, have an outbound map specific to the destination system. This map can be an EDI map.
So when a new party comes in, you just create a map, create a send port, configure it with the outbound map, and associate the Party with the send port.
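Outside BizTalk, the resolution flow above can be sketched in plain Python. This is only a conceptual sketch; all partner names, send-port names, and map names below are hypothetical, and in BizTalk this lookup lives in the Party configuration and send-port outbound maps rather than in code:

```python
# Hypothetical sketch of the dynamic-party-resolution flow described above:
# receive -> resolve source -> map to canonical -> resolve destination party
# -> pick that party's send port and outbound map.

# Party configuration: partner ID -> send-side details (in BizTalk this is
# held in the Party settings and send-port outbound maps, not in code).
PARTIES = {
    "ACME": {"send_port": "SendPort_ACME", "outbound_map": "Canonical_to_850_ACME"},
    "GLOBEX": {"send_port": "SendPort_GLOBEX", "outbound_map": "Canonical_to_810_GLOBEX"},
}

def to_canonical(message: dict) -> dict:
    """Map the inbound message to the canonical schema (trivial stand-in)."""
    return {"partner_id": message["sender_id"], "body": message["payload"]}

def resolve_destination(canonical: dict) -> dict:
    """Resolve the destination party from the ID carried in the message."""
    try:
        return PARTIES[canonical["partner_id"]]
    except KeyError:
        raise ValueError(f"No party configured for {canonical['partner_id']}")

def process(message: dict) -> dict:
    """Receive -> canonical -> resolve party; the send shape takes over here."""
    canonical = to_canonical(message)
    party = resolve_destination(canonical)
    return {"via": party["send_port"], "map": party["outbound_map"], "body": canonical["body"]}
```

The point of the sketch is the last step of the post: onboarding a new partner is one new PARTIES entry (i.e. one new map and one new send port), with no change to the orchestration.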
Coming to the project structure:
External schemas - Inbound
Canonical schema
Maps
Common orchestration (which will have the RoleLink)
External schemas - Outbound
When a new party comes in, you can create a new project/assembly for its map (and a new project if the outbound schema changes).
Obviously you can have many solutions; choose the one which fits you best.
If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.

Similar Messages

  • Inheritance architecture question

    Hello,
    I've an architecture question.
We have different types of users in our system: normal users, company "users", and some others.
In theory they all extend the normal user, but I've read a lot about performance issues with join-based inheritance mapping.
How would you suggest designing this?
Expected are around 15k normal users, a few hundred company users, and a few hundred of each other user type.
Inheritance mapping? Which type?
No inheritance, appending all attributes to one class (and leaving those not used by the user type null)?
Other ways?
    thanks
    Dirk

Sorry dude, but there is only one way you are going to answer your question: research it. And that means try it out. Create a simple prototype setup where you have your inheritance structure and generate 15k of user data in it, then see what the performance is like with some simple test cases. Your prototype could be promoted to be the basis of the end product if the results are satisfying. If you know what you are doing, this should only be a couple of hours of work, and it is very much worth your time because it can potentially save you many refactoring hours later on.
You may also want to experiment with different persistence providers, by the way (Hibernate, TopLink, EclipseLink, etc.); each has its own way of implementing the same spec, and it may well be that one is more optimal than the others for your specific problem domain.
Remember: you are looking for a solution where the performance is acceptable; don't waste your time trying to find the solution that has the BEST performance.
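A minimal version of the suggested prototype, using sqlite3 (stdlib) as a stand-in for the real persistence provider rather than Hibernate/TopLink: generate 15k users in both a joined-inheritance layout and a single-table layout, then time a simple read of the company users. The schemas and numbers are illustrative, not a benchmark of any JPA provider:

```python
# Prototype-and-measure sketch: joined tables vs. one flat table with NULLs.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Joined layout: base table plus a subtype table.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, type TEXT)")
cur.execute("CREATE TABLE company_users (id INTEGER PRIMARY KEY, company TEXT)")
# Single-table layout: all attributes in one table, unused ones left NULL.
cur.execute("CREATE TABLE users_flat (id INTEGER PRIMARY KEY, name TEXT, type TEXT, company TEXT)")

for i in range(15000):
    is_company = i % 50 == 0  # yields a few hundred company users
    kind = "company" if is_company else "normal"
    cur.execute("INSERT INTO users VALUES (?, ?, ?)", (i, f"user{i}", kind))
    if is_company:
        cur.execute("INSERT INTO company_users VALUES (?, ?)", (i, f"co{i}"))
    cur.execute("INSERT INTO users_flat VALUES (?, ?, ?, ?)",
                (i, f"user{i}", kind, f"co{i}" if is_company else None))
conn.commit()

def timed(sql):
    """Run a query and return (row count, elapsed seconds)."""
    t0 = time.perf_counter()
    rows = cur.execute(sql).fetchall()
    return len(rows), time.perf_counter() - t0

joined = timed("SELECT u.name, c.company FROM users u JOIN company_users c ON u.id = c.id")
flat = timed("SELECT name, company FROM users_flat WHERE type = 'company'")
print("joined:", joined, "flat:", flat)
```

The same shape of experiment, with the real entity classes and provider swapped in, is the couple-of-hours prototype the reply recommends.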

  • Questions in Solution manager

    Hello friends,
Can anyone answer these questions?
SAP Solution Manager overview key points:
-       Scope of Solution Manager's capabilities
-       Work required to implement and configure Solution Manager – i.e. how much is out of the box vs. customer/implementation specific?
-       How it can be customized
-       Reports / metrics that Solution Manager offers
    Thanks

    Hello,
What is Solution Manager? A platform that provides integrated content, tools, and methodologies for implementing, supporting, monitoring, upgrading, and operating your enterprise SAP solutions.
My recommendation would be, if you have a chance to attend any of the SAP Solution Manager seminars (www.sapsolutionmanagerseminar.com/), that would be the best thing to do. I am just back from the Amsterdam session. I have mainly worked on SAP Solution Manager projects, but I still learned a few things even in areas I have been involved in.
Can you use SAP Solution Manager out of the box without any customizing? No. You need to execute basic customizing and maintain your landscape at a minimum.
For example, you have to maintain your landscape: generate RFC connections, set up monitoring if required, and configure EWA reports if needed. Service Desk and Change Request Management would be large configurations. There are some reports you can use.
How can it be customized? Check out transaction SPRO to get some idea, but not everything is covered in there; some things you still need to configure.
There is an SAP Solution Manager book that was published this year.
    Regards,
    Markus

  • Oracle VM Server for SPARC - network multipathing architecture question

This is a general architecture question about how best to set up network multipathing.
I am reading the "Oracle VM Server for SPARC 2.2 Administration Guide" but I can't find what I am looking for.
From reading the document it appears it is possible to:
    (a) Configure IPMP in the Service Domain (pg. 155)
    - This protects against link level failure but won't protect against the failure of an entire Service LDOM?
    (b) Configure IPMP in the Guest Domain (pg. 154)
    - This will protect against Service LDOM failure but moves the complexity to the Guest Domain
- This means there are two (2) VNICs in the guest, though?
In AIX, "Shared Ethernet Adapter (SEA) Failover" presents a single NIC to the guest but can tolerate the failure of a single VIOS (~Service LDOM) as well as link-level failure in each VIO Server.
    https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/shared_ethernet_adapter_sea_failover_with_load_balancing198?lang=en
    Is there not a way to do something similar in Oracle VM Server for SPARC that provides the following:
    (1) Two (2) Service Domains
    (2) Network Redundancy within the Service Domain
    (3) Service Domain Redundancy
    (4) Simplify the Guest Domain (ie single virtual NIC) with no IPMP in the Guest
Virtual Disk Multipathing appears to work as one would expect (at least according to the documentation, pg. 120). I don't need to set up mpxio in the guest, so I'm not sure why I would need to set up IPMP in the guest.
    Edited by: 905243 on Aug 23, 2012 1:27 PM

    Hi,
    there's link-based and probe-based IPMP. We use link-based IPMP (in the primary domain and in the guest LDOMs).
    For the guest LDOMs you have to set the phys-state linkprop on the vnets if you want to use link-based IPMP:
    ldm set-vnet linkprop=phys-state vnetX ldom-name
    If you want to use IPMP with vsw interfaces in the primary domain, you have to set the phys-state linkprop in the vswitch:
    ldm set-vswitch linkprop=phys-state net-dev=<phys_iface_e.g._igb0> <vswitch-name>
    Bye,
    Alexander.

  • Architecture question, global VDI deployment

    I have an architecture question regarding the use of VDI in a global organization.
We have a pilot VDI Core with a remote MySQL setup and 2 hypervisor hosts. We want to bring up 2 more hypervisor hosts (and VDI Secondaries) in another geographic location, where the local employees would need to connect to desktops hosted from their physical location. What we don't want is to have to manage multiple VDI Cores. Ideally we would manage the entire VDI implementation from one pane of glass, having multiple Desktop Provider groups to represent the geographical locations.
Is it possible to just set up VDI Additional Secondaries in the remote locations? What are the pros and cons of that?
    Thanks

    Yes, simply bind individual interfaces for each domain on your web server,
    one for each.
    Ensure the appropriate web servers are listening on the appropriate
    interfaces and it will work fine.
    "Paul S." <[email protected]> wrote in message
    news:407c68a1$[email protected]..
    >
    Hi,
    We want to host several applications which will be accessed as:
    www.oursite.com/app1 www.oursite.com/app2 (all using port 80 or 443)
Is it possible to have a separate WebLogic domain for each application, all listening
on ports 80 and 443?
    Thanks,
    Paul

  • Architectural question

Little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
The reason I bring up this question is that I have to do a query in the constructor of a page backing bean. Every time the backing bean is created the query is executed, including when the page will not be rendered in the browser...

> Little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
There actually is such a method: if you look at the FacesBean base class, there is a beforeRenderResponse() method that is called before the corresponding page is actually rendered.
> Reason I bring up this question is that I have to do a query in the constructor in a page backing bean. Every time the backing bean is created the query is executed, including when the page will not be rendered in the browser...
This is definitely a valid concern. In Creator releases prior to Update 6 of the Reef release, however, there were use cases where the beforeRenderResponse method would not actually get called (the most important one being when you navigated to a new page, which is a VERY common use case :-).
If you are using Update 6 or later, as a side effect of other bug fixes that were included, the beforeRenderResponse method is reliably called every time, so you can put your pre-rendering logic in this method instead of in the constructor. However, there is still a wrinkle to be aware of: if you navigate from one page to another, the beforeRenderResponse of both the "from" and "to" pages will be executed. You will need to add some conditional logic to ensure that you only perform your setup work if this is the page that is actually going to be rendered (hint: call FacesContext.getCurrentInstance().getViewRoot().getViewId() to get the context-relative path to the page that will actually be displayed).
    One might argue, of course, that this is the sort of detail that an application should not need to worry about, and one would be absolutely correct. This usability issue will be dealt with in an upcoming Creator release.
    Craig McClanahan

  • BPEL/ESB - Architecture question

    Folks,
    I would like to ask a simple architecture question;
We have to invoke partner web services, which are rpc/encoded, from SOA Suite 10.1.3.3. Here the role of SOA Suite is simply to facilitate communication between an internal application and the partner services. As a result, SOA Suite doesn't have any processing logic. The flow is simply:
    1) Internal application invokes SOA suite service (wrapper around partner service) and result is processed.
    2) SOA suite translates the incoming message and communicates with partner service and returns response to internal application.
Please note that at this point there is no plan to move the processing logic from the internal application to SOA Suite. Based on the above details, I would like to get some recommendations on which technology/solution from SOA Suite is more efficient for facilitating this communication.
    Thanks in advance,
    Ranjith

You can look at the design pattern called Channel Adapter.
Here is how you should design it: the processing logic remains in the application; however, you design and build a channel adapter as a BPEL process. The channel adapter does the transformation of your input into the web-service-specific format and invokes the endpoint. You need this channel adapter if your internal application doesn't have the capability to make web service calls.
    Hope this helps.

  • Sun Advance Architecture for SAP Solutions

Can there be any benefits of Sun Advance Architecture for SAP Solutions from a functional point of view? I mean, yes, resource utilization will be there through resource pooling (typical grid computing), but other than end users and Basis people, is there any possibility that a functional guy can cash in on this functionality?
    Message was edited by:
            Nitesh Nagpal

Guys, you were good!! Every installation step was right!!
I have one last question, about SAP authentication: when I try to call the SAP auth page I get the following error:
HTTP Status 404 - /SAP/jsp/auth/sapsec_logsys.faces
type Status report
message /SAP/jsp/auth/sapsec_logsys.faces
description The requested resource (/SAP/jsp/auth/sapsec_logsys.faces) is not available.
Apache Tomcat/5.5.20
I think it is a JCo jar error, is that right?
In the (awesome) Ingo blog it says to put the sapjco.jar file into the path C:\program files\...\tomcat55\shared\lib\, but I do not have a shared folder, so before the installation I put sapjco.jar into the C:\program files\...\tomcat55\server\lib\ folder. Was that wrong?
Thanks a lot, guys!

  • Architecture Question...brain teasing !

    Hi,
I have an architecture question about Grid Control. So far Oracle Support hasn't been able to figure it out.
I have two management servers, M1 and M2,
two VIPs (virtual IPs), V1 and V2,
and two agents, A1 and A2.
The scenario:
M1 ----> M2
 |        |
V1       V2
 |        |
A1       A2
The repository at M1 is configured as primary and sends archive logs to M2. On failover, I have it set up to make M2 the primary repository, and all works well!
Under normal conditions, A1 talks to M1 through V1 and A2 talks to M2 through V2. No problem so far!
If M1 dies, V1 forwards A1 to M2, or
if M2 dies, V2 forwards A2 to M1.
How would this work?
I think (haven't tried it yet): what if I configure the OMSes with the same username and registration passwords, copy all the wallets from M1 to M2
and from A1 to A2, and just change V1 to V2? Would this work????
Please advise!!

An SLB is not an option for us here!
Can we just repoint A1 to M2 using a DNS CNAME change?

  • Running MII on a Wintel virtual environment + hybrid architecture questions

Hi, I have two MII technical architecture questions (MII 12.0.4).
Question 1: Does anyone know of MII limitations around running production MII in a Wintel virtualized environment (under VMware)?
Question 2: We're currently running MII centrally on Wintel but are considering moving it to Solaris. Our current plan is to run centrally, but in the future we may want to install local instances of MII in some of our plants which require more horsepower. While we have a preference for Solaris UNIX based technologies in our main data center, where our central MII instance will run, in our plants the preference seems to be for Wintel technologies. Does anybody know of any caveats, watch-outs, or other issues around running MII in a hybrid architecture, with a Solaris UNIX based head and the legs running on Wintel?
    Thanks for your help
    Michel

    This is a great source for the ins/outs of SAP Virtualization:  https://www.sdn.sap.com/irj/sdn/virtualization

  • Setup of a standby and a streams - architectural question

    Hi,
I am thinking about a standby (PRODSTDBY) for my production (PROD) database.
For another part of the company we need to set up Streams (HIS); this will copy only a part of the database (some sort of archiving) to a database called HIS. Over the years HIS will become larger and larger, while PROD retains only current data.
I am thinking to do this with a physical standby for PRODSTDBY
and Streams for HIS.
Or, and this is my question: could this be easily done with Real Application Clusters?
    We are in 10GR2.
HIS <----- STREAMS ----- PROD
  |                        |
  |                        |
Standby                 Standby
  |                        |
  |                        |
HISSTDBY               PRODSTDBY
    Regards
    Edited by: S11 on Nov 17, 2010 9:22 AM

    Hi,
- Easier to install --> No, as it is one more thing to install on top of a normal database, so if you don't know it well enough it will be an extra burden to install and maintain. (But it can be useful :-) )
- Easier for maintenance --> Not especially.
- Last but not least, more automated --> Not more automated, but just as automated as without RAC.
- And less risk in case of failure --> Less risk in case of failure is definitely true for the primary under RAC, of course; that is the point of it!
RAC will just gain you protection in case of a failure on the primary side (PROD or HIS or both). If a node fails, the other(s) take over. With or without a delay for the apply it is nearly the same, as logs are shipped.
    Greetings,
    Loïc

  • Biztalk Architecture

    Hi all,
We need a BizTalk solution to process various EDI files received and load the information into an Oracle database. EDI files also have to be generated based on information from the Oracle database. The volume of files expected to be processed per day will be huge (in lakhs, i.e. hundreds of thousands). Would polling the data from stored procedures to generate the EDI files and performing composite operations to insert records into the database be a better option, or would defining an XML format to be used for communication between BizTalk and the Oracle database be better? Also advise if there is any better way than the above two methods.

If I understood your requirement correctly: you need to extract EDI data from the Oracle db by executing around 25 stored procedures, and in another process you need to send messages into the Oracle db, executing 25 stored procedures to write the data into the relevant tables.
IMO, I would use separate tables for BizTalk's inbound and outbound in Oracle, and have an internal batch process within Oracle.
For your outbound process (from Oracle to BizTalk), where you want to generate the EDI from Oracle for BizTalk: let the batch process populate this BizTalk-specific table with the records required for the EDI whenever they want to send a message to BizTalk. Let this batch process execute the relevant stored procedures (25) to populate the data into this BizTalk-specific table. BizTalk can poll this table alone, and whenever a record is there for BizTalk to process, BizTalk can process it and generate the EDI file from that table.
For your inbound process (from BizTalk to Oracle), where you want to send the EDI from BizTalk into Oracle: let BizTalk send the message out to one (or a few) tables in Oracle, and let the batch process execute the multiple stored procedures to insert the records from the BizTalk-specific table into the other relevant tables.
Using this batch process within Oracle, you delegate the data-cleansing task to the database, and BizTalk is used only for data integration across systems. Since you also expect high load, this design will be better, as BizTalk is employed just for handling the high-volume messages rather than heavy data-cleansing tasks. This design also has the advantage of fewer maintenance issues, as executing the multiple stored procedures happens at the database end, hence fewer of the transaction-related issues you can normally expect in database processes.
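The outbound half of this staging-table pattern can be sketched as follows, with sqlite3 (stdlib) standing in for Oracle and a plain function standing in for BizTalk's polling receive location. The table and column names are made up for illustration:

```python
# Staging-table sketch: the batch process writes rows, the integration layer
# polls them, "generates the EDI", and marks them processed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE biztalk_outbound (
    id INTEGER PRIMARY KEY,
    edi_payload TEXT,
    status TEXT DEFAULT 'NEW')""")

# The Oracle-side batch process (the 25 stored procedures) would populate this
# table; here we just insert a couple of rows directly.
conn.execute("INSERT INTO biztalk_outbound (edi_payload) VALUES ('850-data-1')")
conn.execute("INSERT INTO biztalk_outbound (edi_payload) VALUES ('810-data-2')")
conn.commit()

def poll_once(conn):
    """Pick up NEW rows, 'generate the EDI', and mark them processed."""
    rows = conn.execute(
        "SELECT id, edi_payload FROM biztalk_outbound "
        "WHERE status = 'NEW' ORDER BY id").fetchall()
    processed = []
    for row_id, payload in rows:
        processed.append(payload)  # real code would build/send the EDI file here
        conn.execute("UPDATE biztalk_outbound SET status = 'DONE' WHERE id = ?", (row_id,))
    conn.commit()
    return processed
```

The status column is what keeps the polling idempotent: a second poll finds nothing, which is exactly the property that lets BizTalk poll this one table without touching the 25 underlying procedures.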
    Regards,
    M.R.Ashwin Prabhu

  • EDI Gateway Setup

    Hi All
I need info regarding EDI Gateway (EDI outbound transaction setup).
I am trying to generate the flat file for an outbound purchase order using the standard extract programs, but I am getting an empty file.
I have gone through some documents available on the net, but I could not fix this problem.
I want to know the basic steps we need to perform to get the output file.
I am using only standard categories so far. I defined values for some categories and attached those categories to some of the columns in the interface table.
But still I am not getting the output file.
Do I need to delete the columns from the output definition to which I have not attached any category?
When I set up the trading partner information, I just created a dummy name for the partner header and attached one of the supplier names and supplier sites to that partner. Will it validate the partner information somewhere while extracting the data? Does this partner information need to be present somewhere, or can we create dummy names for testing purposes?
Please feel free to send info to [email protected]
I appreciate your help regarding this.

If you are looking at e-Commerce Gateway, you should also look at the full Oracle B2B solution, which includes complete integration, EDI, mapping, TPM, AS2, etc. capabilities. It provides a standards-driven, flexible, end-to-end solution that you won't find elsewhere. If you are an e-Business Suite user, this would be an ideal choice.
    John Morris

  • Three tier architecture questions

    Hello,
My question is in regard to using TopLink in a three-tier architecture. Suppose I wish to send an object A which has a collection of Bs, and B has a collection of Cs (a nested object structure with two or more levels of indirection). Is the best solution to have the named query be part of a unit of work, so that even if someone on the client side were to unknowingly modify one of the entity objects (a POJO), the shared session cache would not be affected?
This is assuming the client-side HTTP layer and the RMI/EJB layer are on different JVMs.
One of the other suggestions I have heard is to retrieve it from the shared session cache directly, and if I need to modify one or more of the objects, do a named query lookup on that object alone, then register that object in a unit of work and commit the changes.
Also, the indirection would have to be triggered before the data objects are sent to the servlet layer, I presume? (That is, if I do a.getAllOfBObjects() on the servlet side I would get a NullPointerException unless all of the Bs were already instantiated on the server side.) Also, when the objects are sent back to the server, do I do a registerObject on all the ones that have changed and then do a deepMergeClone() before the uow.commit()?
    Thanks,
    Aswin.

    Aswin,
If your client is remote to the EJB tier, then all persistent entities are detached through serialization. In this architecture you do not need to worry about reading and modifying the shared instance, as it is never the one being changed on the client (due to serialization).
    Yes, you do need to ensure that all required indirect relationships are instantiated on the server prior to returning them from the EJB call.
    Yes, you do need to merge the changes of the detached instance when returned to the server. I would also recommend first doing a read for the entity being merged (by primary key) on the new UnitOfWork prior to the merge. This will handle the case where you are merging into a different node of the cluster then where you read as well as allowing you to check for the case where the entity no longer exists in the database (if the read returns null then the merge will result in an INSERT and this may not be desired).
    Here is an example test case that does this:
    public void test() throws Exception {
        Employee detachedEmp = getDetachedEmployee("Jill", "May");
        assertNotNull(detachedEmp);
        // Remove the first phone number
        PhoneNumber phone = detachedEmp.getPhoneNumber("Work");
        assertNotNull("Employee does not have a Work Phone Number",
                      detachedEmp.getPhoneNumber("Work"));
        detachedEmp.removePhoneNumber(phone);
        UnitOfWork uow = session.acquireUnitOfWork();
        Employee empWC = (Employee) uow.readObject(detachedEmp);
        if (empWC == null) { // Deleted
            throw new RuntimeException("Could not update deleted employee: " + detachedEmp);
        }
        uow.deepMergeClone(detachedEmp);
        uow.commit();
    }

    /**
     * Return a detached Employee found by the provided first name and last name.
     * Its phone number relationship is instantiated.
     */
    public Employee getDetachedEmployee(String firstName, String lastName) {
        ReadObjectQuery roq = new ReadObjectQuery(Employee.class);
        ExpressionBuilder builder = roq.getExpressionBuilder();
        roq.setSelectionCriteria((builder.get("firstName").equal(firstName))
            .and(builder.get("lastName").equal(lastName)));
        Employee employee = (Employee) session.executeQuery(roq);
        employee.getPhoneNumbers().size(); // trigger indirection before serializing
        return (Employee) SerializationHelper.serialize(employee);
    }
One other note: in these types of applications optimistic locking is very important. You should also make sure that the locking field(s) are mapped into the object and not stored only in the TopLink cache. This will ensure the locking semantics are maintained across the detachment to the client and the merge back.
    Doug

  • Scalability and Architecture Question

I am currently working on an app that will generate a resume, from a set of user-defined input stored in an XML file, in several different formats (MS Word, PDF, TXT, HR-XML, and HTML). We are thinking that we will write all the files once at publish time and then store them (not sure where yet). We are doing this because we will be hosting the online version of the resume as a CFM file, with access to all the other formats of the resume from the online resume. We are assuming that there will be many more reads than there will be writes over the life of the resume, so we don't want to compile these each time a user requests one (that is, a Word, PDF, HTML, or HR-XML version).
The question I have now is whether we should store the files in the database or on the webserver.
I would think that it makes sense to store them on the webserver. But as this will need to be in a clustered environment, I will need to replicate these across the farm as each new user creates a resume. So does anyone know if the penalty for replicating these across the farm is higher than calling them from the database? Assume that the average file size is 50K and that on average 50 files will be called over the life of the resume. Thoughts?

    Originally posted by: fappel.innoopract.com
    Hi,
RAP doesn't support session switch-over at the moment, that's true. But it
does support load balancing by using multiple workers. Once a session is
opened at one worker, however, all requests of that session are dispatched to
this worker.
    Ciao
    Frank
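The worker-affinity behaviour Frank describes can be modelled in a few lines. This is only a toy sketch with illustrative names, not RAP or any real balancer: the balancer may choose any worker for a new session, but every later request for that session goes back to the worker that first served it.

```python
# Toy sticky-session dispatcher: round-robin for new sessions, then pinned.
from itertools import cycle

class StickyBalancer:
    def __init__(self, workers):
        self._rr = cycle(workers)   # round-robin chooser for brand-new sessions
        self._affinity = {}         # session id -> worker it is pinned to

    def dispatch(self, session_id):
        """Return the worker for this session, pinning it on first sight."""
        if session_id not in self._affinity:
            self._affinity[session_id] = next(self._rr)
        return self._affinity[session_id]
```

This also shows why such a setup does not survive a worker failure: the in-memory affinity (and, in RAP's case, the session state itself) lives only at the pinned worker, which is exactly the concern raised in the original question.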
-----Original Message-----
From: Mike Wrighton [mailto:[email protected]]
Posted: Friday, 22 August 2008 11:35
Posted in: eclipse.technology.rap
Conversation: Will RAP work in a load-balanced system?
Subject: Will RAP work in a load-balanced system?
    Hi,
    Some of my colleagues were reviewing scalability in our web architecture
    and the question was raised about RAP scalability, in particular the
    issue that since session data is stored in memory and not in a central
    database, RAP sessions would not survive a server switch-over by a load
    balancer. Hope that makes sense?
    I was just wondering if anyone had come across this issue before and
    found a decent solution? It may just be a case of configuring the load
    balancer properly.
    Thanks,
    Mike
