Design/Architecture questions regarding streams, triggers, JMS, XML

I need to devise a way to enable an application that I am currently developing to act upon a change to a database table and to generate a message to notify an external system of the change.
The scenario I envisage is:
- A 3rd party (a user or a batch job) modifies (INSERT, DELETE, UPDATE) a row in a given table.
- There is a trigger that is fired by the modification which would put a message onto a Stream (thus the notification would be persistent if the database were to go down).
- A Java server process would be running that continually checks the Stream (referenced above) for a message. When a message has been dropped onto the queue by the trigger, the Java server process would read it, determine what had changed, and then generate an XML message that it would pass on to the external system to notify it of the change. NOTE: the external system would not have access to the database, so the outbound XML message must contain all of the information that describes what has changed.
This sounds simple enough to me, but there is a fair bit of "hand waving" around how this would actually work in practice!
Can anyone suggest any Oracle infrastructure that I might be able to use to ease any of this? My main area of concern is how the trigger should indicate what has changed when a modification occurs. I could create the trigger to write a message in a simple (but proprietary, or potentially XML) format to the Streams queue; the Java server process could then read the message (via OJMS), parse it, and determine via JDBC what modification actually occurred. But this could be quite a bit of work... Is there a smarter way to do this?
Perhaps I could use XMLDB to allow the trigger to immediately render the change into XML format which would slightly ease the parsing that the Java server process has to do.
Any help would be greatly appreciated!
Thanks,
James
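One concrete way to shape the outbound notification is to have the Java server process build the XML itself from whatever payload the trigger enqueues. The sketch below uses only the JDK's built-in DOM and transformer APIs; the table name, column names, and payload shape are hypothetical, purely to illustrate a self-describing message format:

```java
import java.io.StringWriter;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class ChangeMessageBuilder {

    // Build an XML change notification from the operation type and the
    // changed column values (hypothetical payload shape).
    public static String build(String table, String operation,
                               Map<String, String> newValues) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("rowChange");
        root.setAttribute("table", table);
        root.setAttribute("operation", operation);
        doc.appendChild(root);
        // One <column> element per changed value, so the external system
        // never needs to query the database to interpret the change.
        for (Map.Entry<String, String> e : newValues.entrySet()) {
            Element col = doc.createElement("column");
            col.setAttribute("name", e.getKey());
            col.setTextContent(e.getValue());
            root.appendChild(col);
        }
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> values = new LinkedHashMap<>();
        values.put("ORDER_ID", "42");
        values.put("STATUS", "SHIPPED");
        System.out.println(build("ORDERS", "UPDATE", values));
    }
}
```

The same builder could be fed from an OJMS MapMessage written by the trigger, or from a JDBC re-read of the modified row; either way the external system receives a self-describing document and never needs database access.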

Similar Messages

  • Architecture Question regarding RPD (Best design practice)

    Hello, I need some design help regarding the best way to build an RPD for a bank. Your help / guidance is greatly appreciated as always:
    Following is data (example)
    Revenue ALL (1 million record)
    Revenue by filter A ONLY (250k)
    Revenue by filter B ONLY (50k)
    Revenue by filter C ONLY (150k)
    Revenue by filter D ONLY (25k)
    Requirement:
    Report Revenue ALL
    Report Revenue % A of ALL = (250k / 1 million) * 100
    Report Revenue % B of ALL = (50k / 1 million) * 100
    Report Revenue % C of ALL = (150k / 1 million) * 100
    Report Revenue % D of ALL = (25k / 1 million) * 100
    Should I build this from a single FACT source, or should I have something like the following:
    Source one: Revenue ALL
    Source two: Revenue by filter A ONLY (250k)
    Source three: Revenue by filter B ONLY (50k)
    Source four: Revenue by filter C ONLY (150k)
    Source five: Revenue by filter D ONLY (25k)
    All of these will have ONE common Bank dimension allowing me to join across if needed.
    Essentially, the question is: should I use a single source table containing ALL data, or have multiple sources each providing exactly what I am looking for?
    Thanks,
    Jes

    I would use a single source of data at the ALL level and then filter it as needed.
    Use
    100.00 * count(column filtered by ..) / count(column)
    to get your percentages.
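    The suggested approach — one fact source at the ALL grain with filtered counts on top — can be sketched outside OBIEE to check the arithmetic. The record type, category values, and filter predicates below are hypothetical stand-ins for the real filters A–D:

```java
import java.util.List;
import java.util.function.Predicate;

public class RevenueShare {

    // One row of the single fact source (hypothetical shape).
    record Txn(String category, double amount) {}

    // Percentage of total row count that matches a filter,
    // i.e. 100.00 * count(filtered) / count(all).
    static double pctOfAll(List<Txn> all, Predicate<Txn> filter) {
        long matched = all.stream().filter(filter).count();
        return 100.0 * matched / all.size();
    }

    public static void main(String[] args) {
        List<Txn> fact = List.of(
                new Txn("A", 10), new Txn("A", 20),
                new Txn("B", 5), new Txn("C", 15));
        // 2 of 4 rows are category A.
        System.out.println(pctOfAll(fact, t -> t.category().equals("A"))); // 50.0
    }
}
```

    With 250k filtered rows out of 1 million, the same formula returns 25.0, matching the "Revenue % A of ALL" requirement without a second fact source.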

  • Question regarding fields triggering delta on ECC side

    Hi gurus
    I wonder, does any field change on the ECC side trigger the delta queue? If not, how can I know which fields do trigger deltas?
    For example, I use the FI_AR_4 extractor; one of the extraction structure fields comes from BSEG.SGTXT, which is a long text field. When it is changed, it doesn't impact any amounts or other key information. Will a BSEG.SGTXT field change trigger the delta queue if no other field of the record was changed?

    But this timestamp field is posted when some fields have been changed, I guess...

  • RAC design architecture question

    Hi all RAC Gurus
    I am trying to learn the RAC technology, and we are trying to build our first test bed. This exercise is basically to get hands-on experience installing and configuring RAC and then test a couple of our applications to see if they scale well in a RAC environment. Upper management has provided two blade servers in a blade center for this purpose, and we are planning to put two stand-alone databases into RAC mode.
    My question is, since we have only two servers to work with, can we build two RAC clusters, one for each database? Will there be any conflict if two RAC clusters exist on the same machines? I know a simpler solution would be to create one cluster with two nodes and have both databases under that cluster. But for learning purposes, since we have two DBAs, can each DBA build a cluster on the same machines (of course, for different databases)?
    Thanks
    Sriram

    I don't think it is feasible. For training purposes, I think you should use virtual images such as VMware images; your technical staff can then deploy the VMware images and play around with the technology.
    There are a few articles available on the net about installing Oracle RAC with VMware.
    http://www.oraxperts.com

  • Architecture question regarding record management

    We are designing a contract management system in SharePoint 2013. The contracts are separated by departments and by locations. Users in a particular department/location should only have access to contracts in their own department/location.
    We are thinking of creating 1 site collection for each department - this ensures that we stay within the 200 GB content database recommendation. In each department site collection, we will have 1 document library with 1 folder for each location - permission inheritance is broken on each folder to restrict access on a location basis. We chose 1 document library so that we can use out-of-box SharePoint views for reports across locations for executives to see.
    The problem with this design is that there are about 250 different locations. This means, in each department site collection, we will have to create 250 folders in our document library and break inheritance on the 250 folders.
    Another approach would be to give no one access to the document library and use a custom drop-off library form to add documents. We can create a web part which elevates permissions and displays only those documents that users are supposed to see. The advantage of this approach is much less broken permission inheritance. The disadvantage is that we won't be able to use OOTB views and we'll have to implement our own search.
    Thoughts appreciated!

    You could consider "Audience Targeting". It works well with the Content Query WebPart. You can enable it on one document library and it provides a TargetAudience column allowing you to assign groups/roles to the content. This could get around the
    creation of separate site collections and breaking permission inheritance.
    http://technet.microsoft.com/en-us/library/cc261958(v=office.14).aspx#Section4

  • Where to post Architecture question regarding Hyp Plan/Essbase installation

    *1st Installation (uses HypDB1 and Hypapp1 servers) ----- PRODUCTION*
    Server HypDB1:              Server Hypapp1:
      ESSBASE1                    Hyp Plan1
      HYP Shared Service1         Studio1
      EAS1                        Apache
      ODI                         Workspace1
      Oracle 11g
    *2nd Installation (uses HypDB2 and Hypapp2 servers, BUT shares the Oracle 11g of the 1st installation) ---- PROPOSED ADD-ON TO PRODUCTION*
    Server HypDB2:              Server Hypapp2:
      ESSBASE2                    Hyp Plan2
      HYP Shared Service2         Studio2
      EAS2                        Apache
      ODI                         Workspace2
    Is the above possible?

    Another idea I found from this app note.
    http://www.xjtag.com/app-note-16.php
    But there is a note in this app note that speaks to the DONE signal.
    http://www.xjtag.com/app-note-14.php
    "When the programming operation of a Xilinx FPGA completes it will toggle its DONE signal; if this occurs when it is not expected then the PROM or processor that configures the FPGA can automatically re-start the process of programming the FPGA with its functional image (undoing the clearing that has just been done through XJTAG)."
    I couldn't find any info about this in the 7 Series Config User Guide. Does this apply to the 7 Series FPGAs?

  • Architecture question, global VDI deployment

    I have an architecture question regarding the use of VDI in a global organization.
    We have a pilot VDI Core w/remote mysql setup with 2 hypervisor hosts. We want to bring up 2 more Hypervisor hosts (and VDI Secondaries) in another geographic location, where the local employees would need to connect desktops hosted from their physical location. What we don't want is to need to manage multiple VDI Cores. Ideally we would manage the entire VDI implementation from one pane of glass, having multiple Desktop Provider groups to represent the geographical locations.
    Is it possible to just setup VDI Additional Secondaries in the remote locations? What are the pros and cons of that?
    Thanks

    Yes, simply bind individual interfaces for each domain on your web server,
    one for each.
    Ensure the appropriate web servers are listening on the appropriate
    interfaces and it will work fine.
    "Paul S." <[email protected]> wrote in message
    news:407c68a1$[email protected]..
    >
    Hi,
    We want to host several applications which will be accessed as:
    www.oursite.com/app1 www.oursite.com/app2 (all using port 80 or 443)
    Is it possible to have a separate Weblogic domain for each application,all listening
    to ports 80 and 443?
    Thanks,
    Paul

  • I need help with a question regarding XML

    I have an exam tomorrow, and I was checking exams from previous years. There is a multiple choice question regarding XML, and I don't know all the correct answers (some are pretty obvious).
    It's plattform independent
    It allows UML representation
    It's a Text Only format
    It's faster to process than native binary formats
    It's a data exchange standard
    It allows specification of the meaning of the data in the document
    It comes from HTML
    .NET and J2EE provide tools to handle it

    In the beginning there was SGML... the newest HTML
    standard is an XML-defined language (although most
    sites and pages are still not XML compliant). So you
    can think of HTML as a subset of XML, but XML is not
    by any means a subset of HTML.
    But wasn't XML's format based on the original HTML?
    That is, XML is a generalization of the original
    HTML. So, the "true"/"false" determination depends
    on how you define "It comes from HTML" (I thought it
    was derived from HTML, which would make the
    statement "true" [IMHO], but I could be wrong).
    No, XML is not derived from HTML. Markup languages existed well before HTML. HTML popularized them to a large extent, but it was not the first by any means.
    See http://en.wikipedia.org/wiki/Generalized_Markup_Language
    Also, Wikipedia states in its XML article that XML is a subset of SGML, which (per that article) is a descendant of GML.
    The language hierarchy from SGML is as follows:
    SGML ---- HTML (old spec, deprecated)
       |
       +----- XML ----- XHTML (current XML spec)
    edit: XML descends from SGML directly.
    Message was edited by:
    cotton.m

  • JMS architecture question for fat client/server.

    Hi. Is JMS suitable for a fat client-server architecture where a certain number of fat client applications (say, a few hundred) open connections directly to the JMS provider? Is it going to have scalability problems when the number of connections grows?

    Depending on your JMS provider, this may be a very suitable architecture. The Sun MQ JMS Cluster was architected exactly for this problem. If the number of connections onto a single broker becomes too much of a burden for this broker, it can be put into an MQ cluster and share the number of connections. Of course, the number of connections a broker can handle will be totally dependent on the resources available to it. OS, CPU, memory, other applications running on the same machine, etc....
    TE

  • Technical question regarding xml:lang...

    Greetings,
          I have a bit of a technical question regarding language alternatives (Lang Alt): are these valid statements if there are no alternatives (using dc:title as an example)?
    1)
    This is a test title
    2)
    This is a test title
    3)
    This is a test title
    Are these values valid, or is there an obligation to put them in an rdf:Alt? In my head 1 and 2 are valid, but not 3; am I right?
    Thanks!
    Carl Eric Codère
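    For reference, the XMP specification defines dc:title as a Lang Alt value, which is serialized inside an rdf:Alt even when only one language variant exists; the first item conventionally carries xml:lang="x-default". A minimal sketch (the title text is just the example value from the question):

```xml
<dc:title>
  <rdf:Alt>
    <rdf:li xml:lang="x-default">This is a test title</rdf:li>
  </rdf:Alt>
</dc:title>
```

    A bare literal without the rdf:Alt wrapper may be accepted by lenient parsers, but it does not match the declared value type.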

    Thanks for your quick response!!
    Can you please give a little more detail on what exactly the entity is? Where is that entity defined, and what exactly does an entity represent? Please let me know.
    -Satya

  • Question Regarding MIDI and Sample Accuracy

    Hi,
    I have 2 questions regarding MIDI.
    1. MIDI is moved by ticks. In the Arrange window, however, you can move a region by samples. When doing this, you can move within values of the ticks (which you can see in the position box that pops up). Now, will this MIDI note actually be played back at that specific sample point, or will the event be rounded to the closest tick? (For example, if I have a MIDI note directly on 1.1.1.1 and I move the REGION in the Arrange, will that MIDI note now fall on the sample that I have moved the region to, or will it be rounded to the closest tick?)
    2. When making a MIDI template from an audio region, will the MIDI information land exactly on the sample of the transient, or will it be rounded to the closest tick?
    I've looked through the manual, and couldn't find any specific answer to these questions.
    Thanks!
    Message was edited by: Matthew Usnick

    Ok, I've done some experimenting, and here are my results.
    I believe those numbers ARE samples. I came to this conclusion by counting (for some reason it starts on 11) and cutting a region to be 33 samples long (so, minus 11, is 22 actual samples). I then went to the Audio Bin window, and chose to view region length as samples. And there it said it: 22 samples. So, you can in fact move MIDI regions by samples!
    Second, I wanted to see if the MIDI notes in the region itself would be quantized to the nearest tick. I cut a piece of audio so it had a 1-sample attack (zoomed in as far as I could in the sample editor, selected the smallest portion, faded in, and made the start point the region start position). I saved the region as a new audio file, and loaded it up in the EXS sampler.
    I then made a MIDI region and triggered the sample on beat 1 (quantized, on the money). I then went into the Arrange window, made a fixed cycle length, and bounced the audio. I then moved the MIDI region by one sample to the right. I did this 22 times (which is the number of samples in a tick, at 120 BPM, apparently). After bouncing all of these (the cycle position remained fixed; only the MIDI region was moving) I imported all the audio into the Arrange on new tracks, and YES!!! The sample start was cascaded by a sample each time!
    SO.
    Not only can you move MIDI regions by sample, but the positions are NOT quantized to Logic's ticks!
    This is very good news, and glad I worked this out!
    (if anyone thinks this sounds wrong, please correct me, but I'm pretty sure I proved it, in my test)
    Message was edited by: Matthew Usnick
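    The "22 samples in a tick" figure can be sanity-checked with a little arithmetic. Assuming Logic's commonly cited 960 PPQ internal resolution and a 44.1 kHz sample rate (both assumptions, not stated in this thread):

```java
public class TickMath {

    // Samples per MIDI tick = sampleRate * seconds-per-beat / ticks-per-beat.
    // ppq (pulses per quarter note) of 960 is an assumption about Logic's
    // internal resolution, not something confirmed in this thread.
    static double samplesPerTick(double sampleRate, double bpm, int ppq) {
        return sampleRate * 60.0 / (bpm * ppq);
    }

    public static void main(String[] args) {
        // At 44.1 kHz and 120 BPM this gives ~22.97 samples per tick,
        // consistent with the 22 whole samples counted in the experiment.
        System.out.println(samplesPerTick(44100.0, 120.0, 960)); // 22.96875
    }
}
```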

  • Questions regarding creation of vendor in different purchase organisation

    Hi ABAP gurus,
    I have a few questions regarding data transfers.
    1) While creating a vendor, the vendor is specific to a company code, and the vendor can be present in different purchasing organisations within the same company code if the purchasing organisation is present at plant level. My client has the vendor in different purchasing organisations; how do we handle the above situation?
    2) I had a few error records while uploading MM01. How do I download the error records? I was using LSMW with predefined programs.
    3) For a few applications there are no predefined programs, so I will have to choose either a predefined BAPI or IDocs. Which is better to go with? I found that BAPIs and IDocs have the same predefined structures, so what is the difference between the two?

    Hi,
    1. Create a BDC program with purchasing organisation as a parameter on the selection screen,
        then run the same BDC program for the different purchasing organisations so that the vendors
        are created in the different purchasing organisations.
    2. Check the Action Log in LSMW and see
    3.see the doc
    BAPI - BAPIs (Business Application Programming Interfaces) are the standard SAP interfaces. They play an important role in the technical integration and in the exchange of business data between SAP components, and between SAP and non-SAP components. BAPIs enable you to integrate these components and are therefore an important part of developing integration scenarios where multiple components are connected to each other, either on a local network or on the Internet.
    BAPIs allow integration at the business level, not the technical level. This provides for greater stability of the linkage and independence from the underlying communication technology.
    LSMW - No ABAP effort is required for the SAP data migration. However, effort is required to map the data into the structure according to the pre-determined format as specified by the pre-written ABAP upload program of the LSMW.
    The Legacy System Migration Workbench (LSMW) is a tool recommended by SAP that you can use to transfer data once only or periodically from legacy systems into an R/3 System.
    More and more medium-sized firms are implementing SAP solutions, and many of them have their legacy data in desktop programs. In this case, the data is exported in a format that can be read by PC spreadsheet systems. As a result, the data transfer is mere child's play: Simply enter the field names in the first line of the table, and the LSM Workbench's import routine automatically generates the input file for your conversion program.
    The LSM Workbench lets you check the data for migration against the current settings of your customizing. The check is performed after the data migration, but before the update in your database.
    So although it was designed for uploading of legacy data it is not restricted to this use.
    We use it for mass changes, i.e. uploading new/replacement data and it is great, but there are limits on its functionality, depending on the complexity of the transaction you are trying to replicate.
    The SAP transaction code is 'LSMW' for SAP version 4.6x.
    Check your procedure using this Links.
    BAPI with LSMW
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI
    For document on using BAPI with LSMW, I suggest you to visit:
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    http://esnips.com/doc/1cd73c19-4263-42a4-9d6f-ac5487b0ebcb/LSMW-with-Idocs.ppt
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI.ppt
    Regards
    Anji

  • Error in IKM SQL to JMS XML Append

    Hi
    I am doing a transformation from an Oracle table to XML and sending that XML to a JMS queue.
    So I am using 1) IKM SQL to JMS XML Append, and 2) LKM SQL to SQL.
    In IKM SQL to JMS XML append
    set following parameters:
    SYNCHRO_XML_TO_JMS =      true
    INITIALIZE_XML_SCHEMA =      true
    JMS_EXTRACT_MESSAGE = <Root element name>
    When I execute the interface, I get an error in the step "Insert into XML (JMS Message)":
    ODI-1228: Task INBOUND_XML_TEST (Integration) fails on the target JMS_QUEUE_XML connection INBOUND_XML.
    Caused By: java.sql.SQLException: java.sql.SQLException: Parameter not set
         at com.sunopsis.jdbc.driver.JMSXMLPreparedStatement.addBatch(JMSXMLPreparedStatement.java:62)
         at oracle.odi.runtime.agent.execution.sql.BatchSQLCommand.execute(BatchSQLCommand.java:42)
         at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:102)
         at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:1)
         at oracle.odi.runtime.agent.execution.DataMovementTaskExecutionHandler.handleTask(DataMovementTaskExecutionHandler.java:84)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2906)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2609)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:537)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:453)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1740
    Regards,
    Ankush.

    Hi,
    Is this issue resolved? If so, can you provide the fix?
    I am getting the same error.
    Thanks,
    Ruby
    Edited by: Rubellah Rajakumar on Oct 1, 2012 5:21 PM

  • I want  to design a News like the *LInklist* by XML Form Builder

    Hello:
       Everyone!
       Now I have another problem about XML Form Builder
       I want to design a News like the LInklist with XML Form Builder.
       I want the result, for example:
    SAP News
           1news1
           2news2
           3news3
    Who can give me some advice on how to design the Edit, Show and RenderListItem in XML?
    Thanks a lot,
    I hope for your help!

    Hi,
    In XML Forms:
    1) The Edit form is used for designing the user interface.
    2) The Show form is used for displaying the XML form to the end user.
    3) RenderListItem is used for rendering the XML form into a small description like 1news1, 2news2, etc.
        Here you need to display only the heading of the XML form.
    After you create the XML form,
    create a Layout Set, where you give properties like the XML renderer form, number of items to display, etc.
    Thanks&Regards,
    Raghu

  • Question regarding ASO application in Essbase 11 version

    Hi All,
    Thanks for the replies to my previous posts.
    I have a question regarding the ASO applications for telecom company built in Essbase 11 version. Please provide your feedback on the design.
    The ASO application has the following number of Dimensions:
    Dimension      Levels   Level 0 Members   Attribute Dimensions
    Dimension1     2        6.5 million       15
    Dimension2     1        3                 -
    Dimension3     1        4                 -
    Dimension4     1        6                 -
    Dimension5     1        6                 -
    Dimension6     1        5                 -
    Dimension7     1        3                 -
    Dimension8     5        1,700             -
    Dimension9     2        800               -
    Dimension10    2        40,000            -
    Dimension11    3        750               -
    Dimension12    2        34,000            -
    Dimension13    1        15                -
    The number of Measures is 8.
    The outline size is around 2.12 GB.
    The data is mostly sparse. Does this design yield good performance? Should I change some of the attributes to UDAs to increase performance? I think attribute dimensions are more flexible than UDAs, but they affect retrieval performance.
    Thanks in advance.
    Kannan.

    In ASO, attribute dimensions are treated like regular dimensions; that is to say, they are materialized just like a regular dimension. Changing them to UDAs won't buy you the same performance as having them as dimensions. The one nice thing about attributes in ASO cubes (and BSO cubes) is that you don't clutter up the screen with dimensions that are not used a lot. If your attribute dimensions are used often in your ASO cube, there would be no performance difference if you made them regular dimensions.
