HL7 inbound : recommended architecture?

Hi,
What's the best way to process inbound HL7 messages?
Some details:
1. Inbound messages will all be ADT
2. There's a lot of common processing, for example the PID segment of every message type will be handled the same.
3. HL7 2.3.1, JCAPS 5.1.3
One suggestion is to first unmarshall to HL7_GENERIC_EVT and then unmarshall again to the specific event type (e.g. HL7_231_ADT_A01).
But how do we do the common processing and keep the code maintainable? The problem is that the OTD object type is different for every message type. The only obvious solution requires code that is difficult to maintain, like:
// Ugly code that is hard to maintain
If A01 then { A01.unmarshall(str); MSH = A01.getMSH(); PID = A01.getPID() ...}
If A02 then { A02.unmarshall(str); MSH = A02.getMSH(); PID = A02.getPID() ...}
If A03 then { A03.unmarshall(str); MSH = A03.getMSH(); PID = A03.getPID() ...}
etc.
Does anyone know if there is some superclass of the HL7 OTDs? If so, is it possible to cast the OTD object up? Then for most segments this would work:
// Better.
if A01 then { A01.unmarshall(str); ADT = (Some SuperClass)A01 }
if A02 then { A02.unmarshall(str); ADT = (Some SuperClass)A02 }
MSH = ADT.getMSH()
PID = ADT.getPID()
Another way would be to dynamically create the object. I can get this to work, but only if the OTD is included in the collab properties (which worries me). Also I can't find the constructor in the documentation. Has anyone used this method?
// Probably the best
if A01 then ADT = new HL7_231_ADT_A01()
if A02 then ADT = new HL7_231_ADT_A02()
ADT.unmarshall(str);
MSH = ADT.getMSH()
PID = ADT.getPID()
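If the generated OTDs really don't share a usable superclass, one fallback is to keep the per-event if-chain only for choosing the concrete class and drive the shared MSH/PID handling through reflection, so the common code is written once. A minimal, self-contained sketch - the OTD classes here are dummy stand-ins, not the real JCAPS-generated ones, and the method names unmarshall/getMSH/getPID are assumed from the snippets above:

```java
import java.lang.reflect.Method;

public class AdtDispatch {

    // Dummy stand-ins for the generated OTD classes -- illustration only.
    public static class HL7_231_ADT_A01 {
        public void unmarshall(String s) { /* real OTD parses here */ }
        public String getMSH() { return "MSH|A01"; }
        public String getPID() { return "PID|A01"; }
    }
    public static class HL7_231_ADT_A02 {
        public void unmarshall(String s) { /* real OTD parses here */ }
        public String getMSH() { return "MSH|A02"; }
        public String getPID() { return "PID|A02"; }
    }

    // Choose the OTD class per trigger event, then handle the segments every
    // ADT shares via reflection, so the common logic exists in one place.
    public static String[] commonSegments(String trigger, String raw) throws Exception {
        Object otd;
        if ("A01".equals(trigger)) otd = new HL7_231_ADT_A01();
        else if ("A02".equals(trigger)) otd = new HL7_231_ADT_A02();
        else throw new IllegalArgumentException("unsupported trigger: " + trigger);

        otd.getClass().getMethod("unmarshall", String.class).invoke(otd, raw);
        Method getMsh = otd.getClass().getMethod("getMSH");
        Method getPid = otd.getClass().getMethod("getPID");
        return new String[] { (String) getMsh.invoke(otd), (String) getPid.invoke(otd) };
    }

    public static void main(String[] args) throws Exception {
        String[] segs = commonSegments("A02", "MSH|...");
        System.out.println(segs[0] + " " + segs[1]);
    }
}
```

The if-chain shrinks to one line per event type, and everything after the instantiation is type-independent.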

We generally treat ADT as the "event" and create a generic OTD to match all event types. The downside is that you need to make some segments optional that would actually be required for certain events. If the source system sends swap messages such as A17s, you also need to have a group of up to 2 PIDs, PV1s, etc.

Similar Messages

  • Synchronous HL7 Inbound request-reply

    Hello All,
This is my first post, so please excuse any mistakes.
    I'm currently using,
    Oracle JDeveloper 11g Release 1 (11.1.1.7.0)
    Copyright (c) 1997, 2012, Oracle and/or its affiliates. All rights reserved.
    My task looks very simple
    I need synchronous HL7 Inbound request-reply.
    What I'm trying to do is:
    Using Jdeveloper create SOA composite project
    1) Set up synchronous HL7 Inbound request-reply using "HL7 Adapter" (Inbound type: QRY_T12, Outbound type: DOC_T12)
    2) Set up synchronous BPEL (also tried specify later) and wire it to "Exposed service" created on step 1
3) Add a simple transformation between the BPEL Receive and Reply (setting ACK.1 == "AA")
    Compile: no warnings or errors
    Deploy: no warnings or errors
    Send HL7 message -> getting "IDeliveryService.post() invoked for two-way operation 'request-reply'. This method can only be used to invoke one-way operations which don't return any messages. Please check the WSDL which defines this operation and use the method IDeliveryService.request() to invoke a two-way operation"
What am I doing wrong?
Is such a thing as HL7 request-reply possible?
I'd highly appreciate any help; I've been struggling with this for more than a week.
    Thanks in advance!
    PS Endpoint Acknowledgement Mode is set to SYNC

    Bob,
    I don't think PS4 (11.1.1.5) has this option available -
    http://docs.oracle.com/cd/E21764_01/integration.1111/e10229/intro_ui.htm#CHDEGEEB
    Please re-check your local setup version.
    Regards,
    Anuj

  • Recommended architecture for CProject Suite

In a landscape where you have many SAP systems like R3, BW, CRM, ECC6, etc., what is the recommended architecture for cProjects & cFolders? Only one system with cProjects installed, or can we have cProjects installed in all the landscapes (R3, CRM, ECC, etc.)?

    Hi all,
    one add on: the cProjects FAQ notes (900310 for release 4.0 or 1088160 for release 4.5) contain the installation options, such as:
    cProjects 4.0 is released for installation only on CRM 5.0, CRM 2007, ECC 6.0 or on SAP NetWeaver 2004s.
    cProjects 4.5 is released for installation only on CRM 5.0, ECC 6.0 or on SAP NetWeaver 7.0. Some of the        
    The relevant guides can be found under:
    http://service.sap.com/plm-inst
    - SAP PLM
    - using cProject Suite
    Regards,
    Silvia
    Edited by: Silvia Raczkevy-Eoetvoes on Oct 23, 2008 1:49 PM

  • Recommended architecture for oracle bi ee

    Hi,
we are trying to decide whether it is better to separate the Presentation Services from the BI Server and host them on separate machines. What is the recommended architecture? Is it better to install the web server and Oracle Presentation Server on one machine and the Java host and BI Server on another?
    thanks.

    I don't know what Oracle official recommended architecture is.
    However, the latest benchmarks from Oracle are based on BI/PS on each machine: http://rnm1978.wordpress.com/2009/09/18/collated-obiee-benchmarks/
    I'd say that's a pretty strong reason to go with BI/PS on each machine.
    FWIW, we run 2xBI/2xPS but given chance I'd probably redeploy to 4xBI&PS

  • HL7 Inbound - File with Multiple HL7 messages - how to configure?

I have a file containing multiple HL7 v2 delimited messages, separated from one another with \r\r\n.
Can I configure the inbound adapter in the B2B to break up the file into individual messages and process them through the B2B infrastructure individually?
    Thanks in advance for pointing me at the right documentation / blog / tutorial
    Edited by: Michael.Czapski on 22/07/2010 23:33

    Thanks, Anuj.
I am using 11g R2, which I thought was the current version.
I don't have an HL7 batch - I have a file with a bunch of HL7 messages separated by \r\r\n.
Is there any way to configure the File adapter in the B2B to break up such a file at the \r\r\n delimiter characters and process each "record" as an individual HL7 message?
If not, would I have to receive such a file using a File Adapter in the SOA Composite, have the File Adapter break it up, and submit each "record" from the SOA Composite to the B2B using the outbound B2B Adapter in order to have them processed? If yes, can you point me at how I could configure the SOA File Adapter to break up a delimited file into records?
    Thanks in advance
    Michael
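For what it's worth, the splitting step itself is simple to do outside B2B. A hedged sketch - the class and method names are mine, and it assumes the literal \r\r\n delimiter described in the post:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class Hl7BatchSplitter {

    // Split the batch file's content at each \r\r\n delimiter,
    // dropping empty fragments (e.g. a trailing delimiter).
    public static List<String> split(String batch) {
        List<String> messages = new ArrayList<>();
        for (String piece : batch.split("\r\r\n")) {
            String trimmed = piece.trim();
            if (!trimmed.isEmpty()) messages.add(trimmed);
        }
        return messages;
    }

    public static void main(String[] args) throws Exception {
        String batch = new String(Files.readAllBytes(Paths.get(args[0])), "UTF-8");
        for (String msg : split(batch)) {
            System.out.println("--- message ---");
            System.out.println(msg);
        }
    }
}
```

Each resulting "record" could then be submitted to B2B individually, as discussed above.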

  • HL7 Inbound Generic Ack

    Hello,
    We are sending out an HL7 ADT A01 and receiving a generic ACK back. The problem is the incoming ACK always throws an exception:
    HC-50547: Document support for doc ACK-2.3.1-INBOUND not found for endpoint <TP>
This is using the Healthcare console, which generates the B2B agreements. My document type was actually ACK_01, but I renamed it to "ACK-2.3.1-INBOUND" as for some reason the engine was looking for that hard-coded name. But then I got a different error: "Agreement not found for trading partners: FromTP <>, ToTP <> with document type ACK-2.3.1-INBOUND."
    I found a relevant document for 10g B2b here for configuring generic ACK for inbound:
    http://www.oracle.com/technetwork/testcontent/b2b-tn-020-hl7-adv-features-129310.pdf
    Unfortunately I could not delete the 'Event Type' field in the 11g doc editor (field gets greyed out). Is there an equivalent 11g document on how to set this up?
    thanks,
    ed

You are receiving ACK, not ACK_A01, so you need to make the changes in the ecs/xsd files.
The Document Editor steps are the same. You can follow the same steps and remove the event type from the ACK_A01 ecs files.
Using the generated ACK, create an ACK document type and use it in the Endpoint.
Another way is the preseeded doc: Healthcare SOA Suite comes with preseeded documents, and an ACK document is already available there.
You just need to add the ACK doctype in the Endpoint.

  • What is the recommended architecture for OWSM

We are currently trying to complete the architecture documentation for a new SOA Suite project within our company. Unfortunately we are struggling with the architecture design around OWSM with regard to security. We are proposing to have 2 instances of SOA Suite sitting on a VMware cluster, but this cluster hosts some very sensitive data.
Obviously any externally facing web services should have OWSM sitting in front of them. But if we host OWSM on the same SOA Suite instance, will this provide enough security for us? Does anyone have any ideas about this?
Do we need to host OWSM on an alternative server to make it more secure?

    Hi,
It seems like what you are trying to accomplish should be possible.
Still, for web services being accessed from outside, it is preferable that OWSM sit in a DMZ-style environment, where every request is intercepted by the WSM before entering the internal SOA environment.
    thanks

  • HL7 Binding Components Open ESB build 200802120006  Running Inbound Demo

    After Downloading the latest build of Netbeans I've tried to go through the HL7 Inbound Demo here:
    http://wiki.open-esb.java.net/attach/HL7BC/HL7InboundSeqNoDemo.wmv
All goes fine until I get to the part where I assign the various input variables to the output variables in the Mapper. Looking at the BPEL code it generates, I see:
    <variables>
            <variable name="SendHL7OperationIn" xmlns:tns="http://www.chw.org/wsdl/sendHL7" messageType="tns:sendHL7OperationRequest"/>
            <variable name="ReceiveHL7OperationIn" xmlns:tns="http://www.chw.org/wsdl/receiveHL7" messageType="tns:receiveHL7OperationRequest"/>
        </variables>
        <sequence>
            <receive name="Receive1" createInstance="yes" partnerLink="PartnerLink1" operation="receiveHL7Operation" xmlns:tns="http://www.chw.org/wsdl/receiveHL7" portType="tns:receiveHL7PortType" variable="ReceiveHL7OperationIn" />
            <assign name="Assign1">
                <copy>
                    <from>$ReceiveHL7OperationIn.part1/MSH</from>
                    <to>$SendHL7OperationIn.part1/MSH</to>
                </copy>
    ...I see the following warning when I validate this BPEL Component:
WARNING: To: The element "MSH" can correspond to different schema elements. But all of them are qualified. So a namespace prefix requierd. The following namespaces are allowed: {urn:hl7-org:v2xml}. Expression: "$SendHL7OperationIn.part1/MSH"
Any ideas? The BPEL spec talks about having the ability to copy in namespaces:
    <copy>
    <from>
    <literal xmlns:foo="http://example.com">
    <foo:bar />
    </literal>
    </from>
    <to variable="myFooBarElemVar" />
</copy>
But this is not supported.

Well, I found out that there was a change in how Binding Components process XML Schemas, and the HL7BC hadn't been updated yet. The fix is in the latest Open ESB build 200802121221, available for download.
The feedback from Sun was really quick. Open ESB is still in development, but I find the functionality with build 200802121221 stable enough to use in actual projects.
    Cheers!

  • Architecture for Identity Management

    I have to install the sun identity management platform and configure it.
    Is there a recommended architecture for the installation of components in high availability?
    Thanks in advance

    There is some advice in the documents. I'd look through the Installation Guide and the Deployment Guide.
    At a very high-level, to achieve high-availability, you want to make both the REPO (data tier) layer highly available and make the Application tier (e.g. IdM in an App Server) highly-available.
There are many questions you have to ask yourself in doing this. Are you making the solution highly available against the loss of a data center, the loss of a server, or something else entirely? This also plays into how you want to size the servers.
    Net -- I'd poke through the documents to educate yourself, but would engage someone who has "been there, done that" a few times to make sure you're approaching it correctly and such that you can make simple adjustments in the future based on load predictions.
    Good Luck!

  • Architecture question on adapter engine?

    What is the difference between running a decentral adapter engine and just having J2EE dialog instances?
    Is using a decentral adapter engine a viable method of load balancing?
    Does anyone have an SAP recommended architecture for load balancing with XI, or practical experience with this?
    Thanks
    Jeremy Baker

Jeremy,
<i>J2EE dialog instances:</i>
     These are used for load balancing.
<i>Decentral AE - load balancing:</i>
     I would say the decentral AE is not for load balancing; please correct me if I'm wrong. Load balancing means that if one server is down, processing is routed to the other servers so it can continue without interruption. By that definition, the decentral AE does not provide load balancing.
     The main difference between the central AE and a decentral AE is that the central AE is there by default once you install XI, whereas a decentral AE is installed based on your landscape requirements. If you install a decentral AE, you can route your messages to either the central or the decentral AE.
     To be clearer: say I'm configuring a File-XI-File scenario. In the adapters I can choose either the central or a decentral AE, so the workload on any particular AE is reduced. But in the above scenario, if you choose the central AE and it goes down, processing will not fail over to the decentral AE.
     I hope this clears your doubt.
     <i>SAP XI Architecture & load balancing:</i> https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c3d9d710-0d01-0010-7486-9a51ab92b927
     Best Regards,
     raj.

  • ISE 1.2 CWA with Multiple PSNs - SessionID Replication / Session Expired

    Hi all.
I have two Policy Services Nodes (PSNs) in an ISE 1.2 deployment running patch 1. We are using wireless MAB and CWA on 5760 Wireless LAN Controllers running v3.3.3.
We are hitting an issue wherein a client first passes MAB and then gets redirected to a CWA custom portal. The client then receives a Session Expired message. This seems to be related to the fact that CWA is technically a 2-stage authentication (MAB by the WLC and then CWA by the client). Specifically, it seems to happen when the WLC makes its MAB RADIUS access-request to PSN-1 and then the client comes in to PSN-2 to complete the CWA. This issue does not happen when only one PSN is in use and all authentication traffic (both MAB RADIUS and CWA) is directed at a single PSN.
    Clients resolve the FQDN in the redirect URL using public DNS and a public DNS zone file (call it cwa-portal.example.com). cwa-portal.example.com has two A records for the two PSN nodes. DNS is responding to queries using DNS round-robin.
    I have the PSNs configured in a Node Group for session information replication between PSNs, but this doesn't seem to make a difference in behavior.
    So I ask:
    What is the recommended architecture for CWA when using more than one PSN? It seems that you would need to keep the two authentication flows pinned together so that they both hit the same PSN when using more than one PSN in a deployment. A load balancer balancing on the SessionID string comes to mind (both the RADIUS MAB request and the CWA URL contain this unique per-client SessionID), but that seems terribly overbuilt for a seemingly simple problem. On the other hand, it also seems like using a Node Group setup should easily be able to replicate client SessionIDs to all nodes in the deployment so that this isn't an issue. I.e., if the WLC authenticates MAB on PSN-1, then PSN-1 should tell the Node Group about it such that when the client CWA's on PSN-2, PSN-2 doesn't respond with a Session Expired message.
    Is there any Cisco documentation that talks about this?
    Possibly related:
    https://supportforums.cisco.com/discussion/12131531/ise-12-guest-access-session-expired
    Justin
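To illustrate the pinning idea from the question: any deterministic function of the per-client SessionID (here a simple hash modulo the PSN count, with made-up node names) would send both the MAB RADIUS request and the CWA browser session to the same PSN:

```java
import java.util.Arrays;
import java.util.List;

public class SessionPinning {
    // Hypothetical PSN pool -- names are illustrative only.
    public static final List<String> PSNS = Arrays.asList("psn-1", "psn-2");

    // Same SessionID always maps to the same PSN, so both stages of the
    // CWA flow (MAB and the portal hit) land on one node.
    public static String psnFor(String sessionId) {
        int idx = Math.floorMod(sessionId.hashCode(), PSNS.size());
        return PSNS.get(idx);
    }
}
```

A real load balancer would implement this as a persistence rule keyed on the SessionID string rather than in application code; the sketch just shows why hashing gives the required stickiness.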

    Tim,
    Thanks for your reply and confirming my suspicion. Hopefully a future version of ISE will provide automated SessionID synchronization among PSNs so that front-end finagling in a multi-PSN environment won't be necessary.
For anyone else with this issue who for whatever reason can't implement a load balancer, I built an automated EEM applet running on a "watchdog" switch (3750 running 12.2(55)SEE9) using IPSLA tracking that senses when PSN1 is down and then:
1. modifies an ASA to change its client-facing NAT statement for PSN1 to PSN2
2. modifies the primary and HA wireless LAN controllers to change their MAB RADIUS AAA server group to use PSN2
3. reverts the ASA and WLCs to using PSN1 when PSN1 is detected up and running again
The applet ensures the SessionID authentications stay "glued" together so that both WLCs and the client hit the same PSN for both stages of authentication. It's failover only, not a load-balancing solution, but it meets our current project's need for an automated HA environment.
PM me if you want the code. I have a little too much going on ATM to sanitize and post it. :)
    Justin

  • Excel reports with OLAP cube in SharePoint

    I have SQL Server 2008 SSAS OLAP Cube. I have also SharePoint 2010.
    I want to let users to use pivot like reporting feature. Every user makes own reports.
What is the recommended architecture? What tools are needed? What SharePoint 2010 version and features are needed?
    Kenny_I

    Hi Kenny,
    Do you have any update for this issue?
    Best Regards,
    Wendy
    Wendy Li
    TechNet Community Support

• Data Services and Data Quality Recommended Install process

    Hi Experts,
I have a few questions. We have some groups that have requested Data Quality be implemented, along with another request for Data Services to be implemented. I've seen the request for Data Services to be installed on the desktop, but from what I've read, it appears best to install this on the server side to allow for more of a central benefit to all.
    My questions are:
1. Can Data Services (Server) install XI 3.2 be installed on the same server as XI 3.1 SP3 Enterprise?
2. Is the Data Services (Client) version dependent on whether the Data Services (Server) install is completed? Basically, can the "Data Services Designer" be used without the server install?
    3. Do we require a new License key for this or can I use the Enterprise Server license key?
    4. At this time we are not using this to move data in and out of SAP, just using this to read data that is coming from SAP.
From what I've read, Data Services comes with the SAP BusinessObjects Data Integrator or SAP BusinessObjects Data Quality Management solutions. Right now it seems we don't have a need for the SAP connection supplement, but it's definitely something we would implement in the near future. What would be the recommended architecture? A new server with Tomcat and CMC (separate from our current BOBJ Enterprise servers)? Or can Data Services be installed on the same?
    Thank you,
    Teresa

    Hi Teresa.
I hope you are referring to installing BOE 3.1 (BusinessObjects Enterprise) and BODS (BusinessObjects Data Services) on the same server machine.
I am not an expert on BODS installation, but this is my observation:
We had recently tested a BOE XI 3.1 SP3 (full build) installation on a test machine before an upgrade of our BOE system. We also have BODS in our environment, and we wanted to check whether we could keep it on the same server.
So on this test machine, which already had the BOXI 3.1 SP3 build, when I ran the BODS server installation, what we observed was that all the menus of BOE went away and only the menus of BODS were seen.
Maybe the BODS installation overwrites or uninstalls BOE if it already exists? I don't know. I could not find any documentation saying that we cannot have BODS and BOE on the same server machine, but this is what we observed.
So we have kept BODS and BOE on 2 different machines running independently, and we do not see any problem.
Cheers
indu

  • JCaps 5.1.3 Sun Solaris CPU performance issue

    Folks,
    We are experiencing a serious CPU performance issue on our Solaris server with HL7 projects deployed.
The projects consist of the sample HL7 inbound and outbound projects, with an additional service sending to a Batch Local File external for writing journals.
The performance issue occurs when there is a volume of data in the queues/topics. As we continue to deploy additional HL7 projects (usually about 6 interfaces), the CPU usage increases until it reaches 100%.
This snapshot is prstat when no data is being transmitted through the interfaces (one inbound - one outbound):
    B PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    15598 jre 379M 177M sleep 59 0 2:49:11 3.1% eManager/74
    21549 phs 1174M 1037M sleep 59 0 14:49:00 2.5% is_dm_phs/113
    23090 phs 3456K 3136K cpu1 59 0 0:00:01 0.4% prstat/1
    23102 phs 3792K 3496K sleep 59 0 0:00:00 0.2% prstat/1
    21550 phs 46M 35M sleep 59 0 0:13:27 0.1% stcms.exe/3
    1272 noaccess 209M 95M sleep 59 0 0:26:30 0.1% java/25
    11733 jre 420M 212M sleep 59 0 1:35:40 0.1% java/34
    131 root 4368K 2480K sleep 59 0 0:02:10 0.1% nscd/30
    23094 phs 3064K 2168K sleep 59 0 0:00:00 0.1% bash/1
This snapshot is prstat when data is being transmitted through the interfaces (one inbound - one outbound):
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    21549 phs 1174M 1037M cpu1 20 0 14:51:20 88% is_dm_phs/113
    15598 jre 379M 181M sleep 59 0 2:49:18 1.3% eManager/74
    21550 phs 46M 35M sleep 49 0 0:13:29 1.2% stcms.exe/3
    23090 phs 3456K 3128K cpu3 49 0 0:00:03 0.4% prstat/1
    1272 noaccess 209M 95M sleep 59 0 0:26:30 0.1% java/25
    11733 jre 420M 212M sleep 59 0 1:35:40 0.1% java/34
    21546 phs 118M 904K sleep 59 0 0:01:21 0.1% isprocmgr_dm_ph/13
This snapshot is prstat -L when data is being transmitted through the interfaces (one inbound - one outbound):
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/LWPID
    21549 phs 1174M 1037M cpu1 41 0 0:00:45 22% is_dm_phs/13971
    21549 phs 1174M 1037M sleep 51 0 3:31:06 21% is_dm_phs/1394
    21549 phs 1174M 1037M run 51 0 3:14:16 20% is_dm_phs/1296
    21549 phs 1174M 1037M sleep 52 0 3:14:13 19% is_dm_phs/1380
    15598 jre 379M 181M sleep 50 0 1:49:57 3.1% eManager/4
    21549 phs 1174M 1037M sleep 59 0 0:15:36 1.7% is_dm_phs/4
    21550 phs 46M 35M sleep 59 0 0:10:52 1.0% stcms.exe/1
    21549 phs 1174M 1037M sleep 59 0 0:10:45 0.9% is_dm_phs/6
    15598 jre 379M 181M sleep 54 0 0:33:35 0.3% eManager/35
    21549 phs 1174M 1037M sleep 59 0 0:03:34 0.3% is_dm_phs/5
    21550 phs 46M 35M sleep 59 0 0:02:37 0.2% stcms.exe/2
    21549 phs 1174M 1037M sleep 59 0 0:02:17 0.2% is_dm_phs/3
    21549 phs 1174M 1037M sleep 59 0 0:02:17 0.2% is_dm_phs/2
    Solaris 10 server details:
    CPU's (4x900 Sparc III+)
    4096 MB RAM
    SunOS testican 5.9 Generic_118558-39 sun4u sparc SUNW,Sun-Fire-880
    Disk: 6 internal Fujitsu 72GBs
    swapspace on the server:
    total: 4305272k bytes allocated + 349048k reserved = 4654320k used, 10190536k available
    My sysadmin has run statistics (iostat, vmstat, psig, pmap, pfind, pstack, mpstat, etc.) - and has reported that the server is performing fine - with the exception of the CPU. It also looked like the swap space was not being utilized.
We have increased the MaxPerm value to 512, increased the heap size on isprocmgr_dm_phs to -Xmx2048m, and increased the heap size on the domain to 2048 per KB 103824.
    We have also added the -d64 value (specific to Solaris) per the Deployment Guide.
    We increased the value of Maximum Pool size in the JMS clients to 128 - per the deployment Guide.
    We increased the swapspace on the server to 10Gb:
    total: 4305272k bytes allocated + 349048k reserved = 4654320k used, 10190536k available
We have modified the TCP/IP and kernel parameters per the Sun Administration Server 8.2 performance tuning guide:
    core file size (blocks, -c) unlimited
    data seg size (kbytes, -d) unlimited
    file size (blocks, -f) unlimited
    open files (-n) 8192
    pipe size (512 bytes, -p) 10
    stack size (kbytes, -s) 8192
    cpu time (seconds, -t) unlimited
    max user processes (-u) 29995
    virtual memory (kbytes, -v) unlimited
None of these modifications appears to have increased performance.
    Any help is appreciated.
    Thanks
    Rich...

    Hi,
I noticed this behavior with the Alert + SNMP agents installed but not configured. In this situation, the SNMP agent generates traps for all events, leading to high CPU usage even when nothing is being processed. Are you in a similar case?
    Regards

  • Query on Creating and Populating I$ table on different condition

    Hi,
I have a query on creating and populating the I$ table under different conditions. In which conditions is the I$ table created? The conditions are mentioned below:
1) *source and staging area* are on the same server (i.e. the target is on another server)
2) *staging area and target* are on the same server (i.e. the source is on another server)
3) *source, staging area and target* are on 3 different servers
4) source, staging area and target are on the same server
    Thanks

I am not very clear about your question, but I'll try my best to answer it.
In all of the above cases the I$ table will be created.
If staging is the same as the target (one database, one user), then all temp tables will be created under that user.
If staging is different from the target (one database, two users A and B), then all temp tables will be created under user A (let's say) and data will be inserted into the target table that is present under user B.
If staging is different from the target (two databases, two users A1 and A2 - not a recommended architecture), then all temp tables will be created under user A1 (database A1) and data will be inserted into the target table present under user A2 (database A2).
If source, staging and target are all in one database, then no LKM is required; an IKM is sufficient to load the data into the target. Specifically for this you can see one example given by Craig:
    http://s3.amazonaws.com/Ora/ODI-Simple_SELECT_and_INSERT-interface.swf
    Thanks.
