XI Design Issue - BPM Usage and Performance

Hi All
System A is sending multiple messages to XI, and every message has a node called TEVEN which contains line items. The TEVEN node is repeated, and the receiver has to be decided based on the EId value. That means a single message can contain several TEVEN nodes with the same EId, which have to be collected into one set. XI will keep receiving such messages for 30 minutes; after the grouping of all messages and their payloads is done, one file has to be created per receiver (for EId 1 the receiver will be System A, for EId 2 the receiver will be System B).
How do I achieve this in my BPM? The problem is to go through every message payload, collect the TEVEN entries into one single message, keep doing so for all messages received within the 30-minute window, and then use the file adapter to put the files on a file server (the receiving system wants only one file and will check for it every 30 minutes).
Any thoughts on designing this scenario in XI are welcome, as well as comments on designing a BPM to handle this and the performance implications.
<ns0:TEVEN>
  <ns0:EText />
  <ns0:EId>0001</ns0:EId>
</ns0:TEVEN>
<ns0:TEVEN>
  ...
  ...
</ns0:TEVEN>
BR / Swetank

Hi,
If you have to collect the messages for 30 minutes and then create a file, then I see you have to use a BPM.
You can use a correlation on the different EIds, or you can use the option of Enhanced Receiver Determination.
The help for both is available on SDN.
With regards,
Ravi Siddam
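To make the grouping step concrete, here is a purely illustrative sketch in plain Java (not BPM/XI code; the record type and all names are hypothetical) of the merge logic a transformation step would apply once all messages collected within the 30-minute window are available: every TEVEN entry is grouped by its EId, so that one output file per receiver can be built.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TevenGrouping {

    /** Hypothetical representation of one TEVEN entry extracted from a collected payload. */
    static class Teven {
        final String eId;
        final String eText;
        Teven(String eId, String eText) { this.eId = eId; this.eText = eText; }
    }

    /** Group all collected TEVEN entries by EId; each group becomes one file for one receiver. */
    static Map<String, List<Teven>> groupByEId(List<Teven> collected) {
        Map<String, List<Teven>> groups = new HashMap<>();
        for (Teven t : collected) {
            groups.computeIfAbsent(t.eId, k -> new ArrayList<>()).add(t);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<Teven> collected = new ArrayList<>();
        collected.add(new Teven("0001", "first line"));
        collected.add(new Teven("0002", "second line"));
        collected.add(new Teven("0001", "third line"));
        // EId 0001 -> file for System A, EId 0002 -> file for System B
        groupByEId(collected).forEach((eId, items) ->
                System.out.println("EId " + eId + ": " + items.size() + " TEVEN entries"));
    }
}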

Similar Messages

  • How can I optimize my hard disk drive usage and performance in Windows 8 or Windows 7?

    Question: How can I optimize my hard disk drive usage and performance in Windows 8 or Windows 7?
    Answer: There are a few simple steps you can take to ensure your hard disk drive is used optimally.
    Use Toshiba HDD Protection
    Many Toshiba laptops come with a program called Toshiba HDD Protection pre-installed. This program helps to protect your hard disk drive from being damaged due to falls or impacts. By default, it should already be enabled. You might be tempted to lower the detection levels in this application, but doing so could cause your hard disk drive to be damaged. Remember that while the application can reduce the chance of damage, you should still avoid allowing the laptop to fall or suffer rapid impacts.
    For more information on this utility, see the following article:
    TOSHIBA HDD Protection
    Optimize the drive
    Windows 8 and Windows 7 optimize hard disk drives automatically through a process called defragmentation. Unless you've disabled this, you don't need to do anything. If you have disabled this and want to run the process, you can still do so.
    In Windows 8, search for "Defrag" at the Windows Start screen and select "Defragment and optimize your drives."
    In Windows 7, search for "Defrag" in the Start Menu's search field and select "Disk defragmenter."
    You can use this tool to optimize your hard disk drives, allowing Windows to find needed files faster.
    Remove items from startup
    Some applications run automatically when Windows starts. This can add additional functionality, but it also decreases the performance of your computer. Sometimes you might want to disable certain programs from starting automatically.
    In Windows 8, search for "Task Manager" at the Start screen. Select the "Startup" tab. Select an application you'd like to disable from starting automatically and then click the "Disable" button in the lower-right.
    In Windows 7, type "msconfig" in the Start Menu's search field and press ENTER. Uncheck the boxes next to applications you'd like to disable from starting automatically.
    You should be sure of the purpose of an application before disabling it from starting automatically. Some applications might be important. If in doubt, you might consider searching on the Web to discover more information about a program. Remember that if you find that you disabled something vital, you can always re-enable it.
    For more information, please see the following video:


  • In the Aurora "about" window it notes it automatically sends Mozilla info like usage and performance, etc. Does disabling telemetry etc in options disable this?

    As the questions says, does disabling the telemetry options in the browser settings disable any information, usage, and performance sharing as it does in Firefox proper, in spite of the blurb in the Aurora "about" window?

    Hello Alxlight, the text in the about dialog is static and should just inform users about the default configuration in the experimental Aurora/Nightly channels of Firefox. When you decide to disable the various settings in ''Options > Advanced > Data Choices'', this data won't be submitted to Mozilla, despite what the about dialog says...

  • How to monitor CPU usage and performance on a Hyper-V server with several VM's

    I have a server that is running Windows 2008 64 bit Hyper-V, with 8 gigs of RAM and Intel Xeon X3440 @ 2.53 Ghz, which gives me 8 logical cores in the performance monitor on the host system.
    I have set up three Virtual Machines, all running Windows 2008 32 bit.
    Build server, running Team City
    Staging server
    SQL Server, running SQL Server 2005
    I have some trouble with the setup, in that the host remains responsive at all times, even though the VMs are seemingly working at 100% CPU and are very sluggish and unresponsive. (I have asked a separate question about that.)
    So the question here is: what is the best way to monitor how the physical CPUs are actually utilized? The reason I am asking is that I am being told that I cannot reliably use Task Manager to monitor CPU usage in a VM.

    First, you have to remember that in Hyper-V the "host" is called the parent partition, and it is really just like a virtualized guest with special permissions and roles. Just as with any other child/guest, when you open up Task Manager you cannot see the CPU usage of the other children on the server.
    Ben Armstrong has a good explanation of this here: http://blogs.msdn.com/virtual_pc_guy/archive/2008/02/28/hyper-v-virtual-machine-cpu-usage-and-task-manager.aspx
    To summarize his post, you need to check three things to get an accurate picture of CPU utilization:
    View the CPU usage on each guest - this is available through Hyper-V Manager or Performance Monitor.
    CPU usage due to context switching - this is the perfmon counter "% Hypervisor Run Time" under "Hyper-V Hypervisor Virtual Processor".
    Child partition worker process - vmwp.exe running on the parent partition (1 per child). This handles Hyper-V operations like saving state.

  • One design issue about merge and equivalent DDL

    Hi All,
    My DB is 11.1. RAC 4 nodes.
    Today I encountered one SQL statement that did a MERGE and ran very slowly. After tuning, it still took more than 2 hours to finish.
    According to the Oracle manual and my own understanding:
    1. MERGE itself can't have intra-partition parallelism, so if the target table is non-partitioned, parallel DML doesn't help here.
    2. MERGE generates redo, and that can't be suppressed.
    So I wonder whether an equivalent CTAS would work better, considering that the daily increments are relatively large compared to the target table.
    Pseudo SQL looks like:
    create table new_table
    parallel 4 nologging
    as
    select case
             when target_table_join_columns is null then new_line
             when source_table_join_columns is null then original_line
             when both_are_not_null           then updated_line
           end
    from target_table t full outer join source_table s
      on ... /* join columns */;
    (It's just a sample.)
    The advantages I can see are:
    1. Far less redo.
    2. Parallelism.
    The disadvantages I can see are:
    1. More space.
    2. Full load, not incremental.
    I know I have to benchmark it and consider all cases. But has anybody encountered similar issues before? And did you get benefits from this kind of change?
    Best regards,
    Leon


  • An issue of efficiency and performance

    Suppose I'm concerned about efficiency and performance. Let's compare two snippets of code.
    Code Snippet 1:
    for (int i = 0; i < 10000; i++) {
        String s = "str";
    }
    Code Snippet 2:
    String s;
    for (int i = 0; i < 10000; i++) {
        s = "str";
    }
    My questions are:
    (1) Would Snippet 1 be more memory-consuming than Snippet 2, since a new String object is repeatedly created?
    (2) Would Snippet 1 be slower than Snippet 2?
    (3) Would the runtime environment automatically optimize the code, so that the developer need not worry about it?
    Thanks!

    (1) Would Snippet 1 be more memory-consuming than Snippet 2, since a new String object is repeatedly created?
    The same number of "str" String objects is created in both cases. The only difference is where the String reference variable s is declared.
    (2) Would Snippet 1 be slower than Snippet 2?
    No. In a high-level language, when you declare a variable (like String s) you only control its type and its scope. By placing a variable you're NOT telling the compiler where and when to actually allocate memory.
    (3) Would the runtime environment automatically optimize the code, so that the developer need not worry about it?
    You decide the type and scope of the variable; the rest is up to the compiler. There's a general rule though: declare a variable as close to where it's used as possible (the narrow-scope rule). This may actually help the compiler optimize the code. So Snippet 1 is preferable.
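    To illustrate the point (a minimal sketch, not from the original thread): the literal "str" is interned, so neither snippet allocates a new String per iteration; the only difference between the two loops is the scope of the reference.

    public class LoopScopeDemo {
        public static void main(String[] args) {
            // Snippet 1: reference declared inside the loop.
            String last1 = null;
            for (int i = 0; i < 10000; i++) {
                String s = "str";   // no new object: "str" is an interned literal
                last1 = s;
            }

            // Snippet 2: reference declared outside the loop.
            String last2 = null;
            String s;
            for (int i = 0; i < 10000; i++) {
                s = "str";          // the same interned literal, same behaviour
                last2 = s;
            }

            // Both loops end up referring to the very same String object.
            System.out.println(last1 == last2);          // true
            System.out.println(last1 == "str".intern()); // true
        }
    }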

  • Correlations - Usage and performance

    Hello forum users,
    I would like to know how far I can go with BPM using correlation.
    My need is to initiate a BPM with a first message and finish it on the arrival of a second one (a different message) using a correlation identifier - a fairly standard case.
    My question is: is it a problem if the second message can arrive many weeks after the first, while during this time many, many other instances of the same BPM will be initiated?
    Must I worry about performance issues?
    How?
    BPM steps in detail:
    A flat file representing a customer sales order is received => BPM is initiated
    Synchronous BAPI call on R/3. The response gives the SD document number => correlation is activated
    The BPM sends a first (technical) acknowledgement file to the original sender.
    Now a second acknowledgement file (a more business-oriented one) must be sent when all the items are OK (there may have been errors in the automatic integration...). To get to that state, user actions may be required, and this can take a long time (many days, weeks).
    When the user decides the SD document is complete and OK, SAP R/3 sends the second message that closes the correlation and triggers the end of the BPM (generating acknowledgement flat file no. 2).
    (I imagine the second message as an ORDRSP triggered by an SAP output control.)
    Thanks for your advices.
    JC.

    hi,
    To explain correlation in simple terms, take a basic example of a BPM with a send step (async request) and a receive step (async response). I am sending a PO request using the send step and waiting for a PO response using the receive step. Assume that I have two instances of this BPM running, i.e. two PO requests going out simultaneously. When I get the responses back for these two requests, there will be two receive steps waiting for a response, since there are two instances of the BPM running. Each response needs to be assigned to the corresponding request. This is where correlation comes into the picture. I can use the PO number as my correlation field, i.e. I activate my correlation in the send step and use this correlation in the receive step (this is configurable in BPM).
    Example: the PO number needs to be part of both the request and the response message structure.
    BPM instance 1:
    send step -> activate correlation -> send message with PO number 1
    receive step -> use correlation -> receive response message with PO number 1.
    BPM instance 2:
    send step -> activate correlation -> send message with PO number 2
    receive step -> use correlation -> receive response message with PO number 2.
    There are many different scenarios where you can use correlation; this is one of them. The weblog shows another way of using correlation.
    Also refer to SAP help...
    Correlating Messages
    Use
    You use a correlation to assign messages that belong together to the same process instance. A correlation joins messages that have the same value for one or more XML elements. A correlation is therefore a loose coupling of messages: at design time, it enables you to define which message a receive step must wait for, without knowing the message ID.
    For example, in a process, ReceiveStep_1 receives the message PurchaseOrder, while ReceiveStep_2 receives the message SalesOrder. ReceiveStep_1 creates a correlation that defines that the corresponding sales order must have the same purchase order number. ReceiveStep_2 uses this correlation. This means that an instance of the process processes a purchase order and the corresponding sales order, which has the same purchase order number.
    If it satisfies the relevant correlations, a message can be processed in multiple processes. However, a message is only delivered once per process.
    For more details.. visit the blog by sravya
    /people/sravya.talanki2/blog/2005/08/24/do-you-like-to-understand-147correlation148-in-xi
    Thanks,
    Vijaya
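    To make the idea concrete, here is a purely illustrative sketch in plain Java (not XI/BPM code; all names are hypothetical) of what a correlation boils down to: a lookup that routes an incoming message to the waiting process instance that registered the same key, here the PO number.

    import java.util.HashMap;
    import java.util.Map;

    public class CorrelationDemo {

        /** Hypothetical stand-in for a waiting BPM instance. */
        static class ProcessInstance {
            final String id;
            ProcessInstance(String id) { this.id = id; }
            void receive(String message) {
                System.out.println("Instance " + id + " received: " + message);
            }
        }

        // Correlation table: key (e.g. PO number) -> waiting process instance.
        private final Map<String, ProcessInstance> waiting = new HashMap<>();

        /** "Activate correlation": the send step registers the key it will wait on. */
        void activateCorrelation(String poNumber, ProcessInstance instance) {
            waiting.put(poNumber, instance);
        }

        /** "Use correlation": an incoming response is routed by the same key. */
        void onResponse(String poNumber, String payload) {
            ProcessInstance instance = waiting.remove(poNumber);
            if (instance != null) {
                instance.receive(payload);
            } else {
                System.out.println("No waiting instance for PO " + poNumber);
            }
        }

        public static void main(String[] args) {
            CorrelationDemo engine = new CorrelationDemo();
            engine.activateCorrelation("PO-1", new ProcessInstance("1"));
            engine.activateCorrelation("PO-2", new ProcessInstance("2"));
            // Responses arrive in any order, possibly much later; the key decides the instance.
            engine.onResponse("PO-2", "order response for PO-2");
            engine.onResponse("PO-1", "order response for PO-1");
        }
    }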

  • Request mechanism to retrieve usage and performance statistics in BPC?

    Hi BPC Gurus,
    We have just completed rolling out BPC 5.1 and are interested in monitoring the health and user adoption of the system (e.g. number of users logged in, how many times they logged in, what kinds of reports they ran, how long the EVDRE queries for those reports took, etc.).
    I have scoured all the available tables for the AppSet and the audit database at the SQL Server level and could not find any table that holds information about this. The only information I found was the logged-in statistics for administrators via the table tblLoggedIn, and the statistics on who keyed in data using input templates via the audit tables.
    Is there some roundabout way to get this information, at least something along the lines of who logged in and when?
    On the BW side, there is a rich set of information available through the pre-delivered statistics cubes, providing myriad details that can be used to monitor the health and usage of the system. Are there plans to include this kind of statistical information in BPC 7.0?
    Your help and guidance would be greatly appreciated.
    Thanks,
    Abhay Shanbhag

    Yes, BPC 7.0 has a statistics mechanism.

  • RTF template design issue for header and body section

    Hi All,
    I have an RTF template with header and body sections.
    In the header section I have the order number, customer number, etc.
    In the body part I am displaying the details for the respective order.
    But the problem is that in the header part my query returns multiple rows, so my requirement is that for each row, i.e. each order, I need to display the details accordingly.
    Ex: the header query returns order numbers 1, 2, 3.
    On my page I need it the following way:
    Order number :1
    a,b,c
    order number :2
    c,d,e
    order number :3
    g,h,g
    I tried to use a loop in the header section, but it repeats only the header part, not the body part. Please help me with this.
    Thanks
    Deb

    Avinash, thanks for your help.
    Actually, in my requirement the header-level and line-level invoice numbers are different: at header level I have different invoices, and for each invoice I need to display some other invoices along with some data.
    I have sent the RTF file and a sample XML file to your mail; could you please help me with this? I have been struggling the whole day on it.
    The link between the header and line sections is header_id.
    Thanks in advance
    Deb

  • Printing memory and performance optimization

    Hello,
    I am using JVM 1.3 for a big Java application.
    Print preview consumes 1.5 MB of the JVM's memory and performance is slow.
    Any suggestions for reducing memory usage and improving performance will be appreciated.
    /* print method in ScrollablePanel extends JPanel */
    public int print(Graphics g, PageFormat pf, int pi) throws PrinterException {
        double pageHeight = 0;
        double pageWidth = 0;
        Graphics2D g2 = (Graphics2D) g;
        pageWidth = pf.getImageableWidth();
        if (pi >= pagecount)
            return Printable.NO_SUCH_PAGE;
        g2.translate(pf.getImageableX(), pf.getImageableY());
        // <print height manipulation>
        g2.setClip(0, (int) startHeight[pi], (int) pageWidth, (int) (endHeight[pi] - startHeight[pi]));
        g2.scale(scaleX, scaleX);
        this.print(g2);
        g2.dispose();
        System.gc();
        return Printable.PAGE_EXISTS;
    }
    /* print preview */
    private void pagePreview() {
        BufferedImage img = new BufferedImage(m_wPage, m_hPage, BufferedImage.TYPE_INT_ARGB);
        Graphics g = img.getGraphics();
        g.setColor(Color.white);
        g.fillRect(0, 0, m_wPage, m_hPage);
        try {
            target.print(g, pageFormat, pageIndex);   // Printable.print() declares PrinterException
        } catch (PrinterException e) {
            e.printStackTrace();
        }
        pp = new PagePreview(w, h, img);              // pp is a JPanel
        g.dispose();
        img.flush();
        m_preview = new PreviewContainer();           // m_preview is a JPanel
        m_preview.add(pp);
        ps = new JScrollPane(m_preview);
        getContentPane().add(ps, BorderLayout.CENTER);
    }
    Best Regards,
    Krish

    Good day,
    As I tried it, there are two ways of doing the print preview.
    To handle this problem, add only one page at a time.
    To browse through the pages, use the Prev Page / Next Page buttons in the toolbar.
    1) BufferedImage - occupies memory.
    class PagePreview extends JPanel {
        public void paint(Graphics g) {
            g.setColor(getBackground());
            g.fillRect(0, 0, getWidth(), getHeight());
            g.drawImage(m_img, 0, 0, this);
            paintBorder(g);
        }
    }
    This gives better performance, but consumes memory.
    2) getPageGraphics in the preview panel. This occupies less memory, but re-paints the graphics every time paint(Graphics g) is called.
    class PagePreview extends JPanel {
        public void paint(Graphics g) {
            g.setColor(Color.white);
            RepaintManager currentManager = RepaintManager.currentManager(this);
            currentManager.setDoubleBufferingEnabled(false);
            Graphics2D g2 = scrollPanel.getPageGraphics();
            currentManager.setDoubleBufferingEnabled(true);
            g2.dispose();
        }
    }
    This addresses the memory problem, but performance is worse.
    Is there any additional info from you?
    Good Luck,
    Kind Regards,
    Krish
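    For what it's worth, here is a minimal, hypothetical sketch of the "one page at a time" idea described above (names such as target and pageFormat are assumptions, not the original code): only the page currently being viewed is rendered into a BufferedImage, so at most one page image is held in memory at any time.

    import java.awt.*;
    import java.awt.image.BufferedImage;
    import java.awt.print.PageFormat;
    import java.awt.print.Printable;
    import java.awt.print.PrinterException;
    import javax.swing.*;

    public class SinglePagePreview extends JPanel {

        private final Printable target;      // e.g. the panel that implements Printable
        private final PageFormat pageFormat;
        private int pageIndex = 0;
        private BufferedImage current;       // only one page image kept in memory

        public SinglePagePreview(Printable target, PageFormat pageFormat) {
            this.target = target;
            this.pageFormat = pageFormat;
            renderPage();
        }

        /** Render only the current page; called again when the user presses Prev/Next. */
        private void renderPage() {
            int w = (int) pageFormat.getWidth();
            int h = (int) pageFormat.getHeight();
            BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g2 = img.createGraphics();
            g2.setColor(Color.WHITE);
            g2.fillRect(0, 0, w, h);
            try {
                if (target.print(g2, pageFormat, pageIndex) != Printable.PAGE_EXISTS) {
                    pageIndex = Math.max(0, pageIndex - 1);   // stay on the last valid page
                }
            } catch (PrinterException e) {
                e.printStackTrace();
            } finally {
                g2.dispose();
            }
            if (current != null) {
                current.flush();   // release the previously rendered page image
            }
            current = img;
            repaint();
        }

        public void nextPage() { pageIndex++; renderPage(); }

        public void prevPage() { if (pageIndex > 0) { pageIndex--; renderPage(); } }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.drawImage(current, 0, 0, this);
        }
    }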

  • I'm having major issues with battery life on my iPhone 5. When I'm on 3G my battery only lasts 6 hours, but when I'm on the EDGE network I get almost 8 hours of usage and a full day of standby. That's a major difference, yet these phones were built for 3G and LTE.

    The battery only lasts 6 hrs with basic, normal usage. I tried all the suggestions and nothing helped. I finally tried switching off 3G in favour of EDGE, and what a major difference in battery life: my phone now lasts more than a day. So far I've got 8 hrs of usage and 25 hours of standby. But these phones were created to handle LTE and 3G, so why is it killing the battery life? How can I resolve this issue?

    Thanks for the replies. It took a while not hearing anything so thought I was alone. I have done many of the suggestions already. The key here is that it occurs on both phones with apps, and phones still packaged in a box.
    A Genius Bar supervisor also checked his Verizon data usage log and found the same 6-hour incremental use. Surprisingly, he did not express much intrigue over that. Maybe he did, but did not show it.
    I think the 6 hour incremental usage is the main issue here. I spoke with Verizon (again) and they confirmed that all they do is log exactly when the phone connected to the tower and used data. The time it records is when the usage started. I also found out that the time recorded is GMT.
    What is using data, unsolicited, every 6 hours?
    Why does it change?
    Why does it only happen on the iPhone 5 series and not the 4?
    Since no one from Apple seems to be chiming in on this, and I have not received the promised calls from Apple tech support that the Genius Bar staff said I was supposed to receive, it is starting to feel like something is being swept under the rug.
    I woke up the other day with another thought... What application would use such large amounts of data? Well... music, video, sound and pictures of course. And what would someone set to run automatically that is of any use to them? Hmmm... video, pictures, sound. Is the iPhone 5 susceptible to snooping? Can an app be buried in the iOS that automatically turns on video and sound recording and sends it somewhere... every 6 hours? Chilling. I noted that the smallest data usage is during the night when nothing is going on, then it peaks during the day. The Genius Bar tech and I looked at each other when I drew this sine wave graph on the log printouts during an appointment...

  • Cache and performance issue in browsing SSAS cube using Excel for first time

    Hello Group Members,
    I am facing a cache and performance issue the first time I try to open an SSAS cube connection using Excel (using the Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On end users' systems (8 GB RAM), the first attempt takes 10 minutes to open the cube. From the next run onwards, it opens up quickly, within 10 secs.
    We have a daily ETL process running on high-end servers. The configuration of the dedicated SSAS cube server is 8 cores, 64 GB RAM. In total we have 4 cubes - 3 with a full cube refresh and 1 with an incremental refresh. We have seen that after the daily cube refresh, it takes 10-odd minutes to open the cube on end users' systems. From the next time onwards, it opens up really fast, within 10 secs. After the cube refresh, on server systems (16 GB RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    Best Regards, Arka Mitra.

    Thanks Richard and Charlie,
    We have implemented the solutions/suggestions in our DEV environment and we have seen a definite improvement. We are waiting for this to be deployed in the UAT environment to note down the actual performance and time improvement while browsing the cube for the first time after the daily cube refresh.
    Guys,
    This is what we have done:
    We have 4 cube databases and each cube db has 1-8 cubes.
    1. We are doing daily cube refresh using SQL jobs as follows:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
    <Object>
    <DatabaseID>FINANCE CUBES</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
    <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    </Parallel>
    </Batch>
    2. Next we are creating a separate SQL job (Cache Warming - Profitability Analysis) for cube cache warming for each single cube in each cube db like:
    CREATE CACHE FOR [Profit Analysis] AS
    {[Measures].members}
    *[TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
    3. Finally, after each cube refresh step, we are creating a new step of type T-SQL where we call these individual cache-warming jobs:
    EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
    GO
    I will update the post after I receive the actual improvement figures from the UAT/Production environment.
    Best Regards, Arka Mitra.

  • Oracle BPM Workspace and IE 11 Compatibility issues

    Hi all,
    I am using Oracle BPM 10g, and when I use IE 9 (up to IE 11), the latest Firefox versions, Chrome, etc., I get a pop-up with the title "execution" and I am unable to close the pop-up.
    Can you please suggest if there is some configurations I should look into?
    However, the strange thing is that when I enable "Compatibility View settings" in IE 9 and higher, there are no issues and the workspace is rendered as expected.
    Kindly help.
    Thanks and Regards,
    Alice

    Hi Alice,
    Not the answer you were looking for, but here is the compatibility matrix for Oracle BPM 10g. 
    Oracle BPM Interoperability Matrix
    Looks like IE 9 is not supported for some reason, but other versions of IE and Firefox are. You might have to reach out to Customer Support.
    Dan

  • Comfortable in the usage and options available in the query designer/WAD.

    Dear  Friends
    Can anybody send me docs on getting comfortable with the usage and options available in the Query Designer/WAD?
    Thanks & Regards
    Ramana

    Hi Friend,
    For WAD :
    [http://help.sap.com/saphelp_nw04/helpdata/en/a9/71563c3f65b318e10000000a114084/content.htm]
    [http://help.sap.com/saphelp_nw04/helpdata/en/9f/281a3c9c004866e10000000a11402f/content.htm]
    For Query Designer :
    [http://help.sap.com/saphelp_nw04s/helpdata/en/9d/76563cc368b60fe10000000a114084/frameset.htm]
    [https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/webcontent/uuid/ba95531a-0e01-0010-5e9b-891fc040a66c]
    [https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/605f1751-a701-2a10-b791-9da5ba4f2a64]
    [https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0a81de5-a701-2a10-76bd-d8ec848cd326]
    [https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/505e9d7e-0601-0010-3fa7-b40010bbdec5]
    [http://help.sap.com/saphelp_nw2004s/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm]
    Scroll down to Business Intelligence -> BI Suite Business Explorer -> Query Designer.
    Hope this helps.
    Regards
    Hemant Khemani

  • SOA Design issues and other politics

    Hi all,
    I have a requirement for a live data feed from an external system. I am using SOA 11g and JDeveloper 11g. There are two designs to achieve this, one proposed and the other one I have in mind.
    1) The external system pushes XML data to an exposed SOA web service (one-way messaging mode) at my end. I then store the message in the database.
    a) In this design, how do we keep track that all messages that were sent have been received? Is there a better solution?
    2) The third party is proposing a web service at their end. The application is real-time, i.e. any changes at their DB end (in some DB tables) should be propagated across to our web services using XML messages. I will have to keep sending XML requests on a regular basis (say every 5 seconds). Can I build such a web service client using SOA 11g?
    a) Here I have a design issue: if the data feed is live, why does the WS client have to keep sending requests at regular intervals? Why can't the third party send data whenever there is an update/insert at their database end? The third party is citing advantages like loose coupling and making the web service more generic. I doubt those claims, given that the applications are B2B and for the time being we are the only ones who will be using their web services. There may be two other organizations later on.
    b) If the first request has not yet returned, will the second request after 5 seconds be blocked?
    These designs and solutions are becoming quite political across the organizations, and it has to do with who will take the blame for data issues. I just want a proper SOA design for a live data feed. Please point out the advantages and disadvantages of both if anybody has been down this path.
    Thanks
    Edited by: user5108636 on 1/09/2010 18:19
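    Regarding question b), here is a minimal sketch in plain Java (the pollOnce()/callService() stubs are hypothetical, and this is not SOA Suite configuration) of a fixed-interval polling client: with a single-threaded ScheduledExecutorService and scheduleAtFixedRate, a new poll does not start while the previous one is still running, so a slow response delays the next request rather than overlapping with it.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PollingClient {

        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            // Poll every 5 seconds. With a single-threaded scheduler, if one call takes
            // longer than 5 seconds, the next run simply starts late instead of overlapping.
            scheduler.scheduleAtFixedRate(PollingClient::pollOnce, 0, 5, TimeUnit.SECONDS);
        }

        /** Hypothetical placeholder for the actual web service call and DB write. */
        private static void pollOnce() {
            try {
                String response = callService();      // e.g. invoke the third party's web service
                System.out.println("Stored response: " + response);
            } catch (Exception e) {
                // Log and continue: an uncaught exception would cancel the scheduled task.
                e.printStackTrace();
            }
        }

        private static String callService() {
            return "<data/>";                         // stub; the real call is omitted
        }
    }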

