A suitable Queue

Hi all,
First, sorry for the long message, but I want to describe my problem in detail :)
I am building an application that reads messages from a UDP socket. The server that provides those messages is very fast; if my application is slower than the server, messages will be lost (due to the limited size of the UDP socket buffer).
I can't use multiple threads to read from the server or to process the received messages, because messages have to be handled sequentially, in the order they were sent (each message depends on the previous one).
As a solution I decided to use a queue and create two threads: one to read messages and insert them into the queue, and the other to take messages from the queue and process them. This way I guarantee that I receive all messages and process them sequentially. (I used LinkedBlockingQueue.)
Now I face another problem: the queue size increases dramatically. I noticed that the reading thread is very fast compared with the processing thread, so the reading thread inserts many messages while the processing thread processes one. The BIG problem is that the processing thread finishes its work but can't get a new message from the queue. I think the reading thread is locking the queue object, and when it releases the lock after an insertion, the processing thread can't get access because the other thread is too fast and locks it again!!!
thanks for your time.
regards,

LinkedBlockingQueue has a takeLock and a putLock, i.e. there are separate locks for taking and putting, so one side shouldn't block waiting for the other.
If your processing thread cannot keep up with your reading thread you have a serious problem: the queue length will keep growing until you run out of memory.
It's worth noting that a fast reader only removes one source of packet loss. You still need a means of re-requesting packets from the sender. (Note: they can arrive out of order too.)
You could use ArrayBlockingQueue, which is faster but has a limited size. If it fills up you will need to request the packets again.
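To make that concrete, here is a minimal sketch (class and message names are invented for illustration) of the two-thread pipeline over a bounded ArrayBlockingQueue: put() blocks the reading thread once the queue is full, which turns unbounded memory growth into back-pressure on the reader.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedPipeline {
    public static List<String> run(List<String> packets) {
        // Bounded queue: put() blocks the reading thread when the processor falls behind.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
        List<String> processed = new ArrayList<>();

        Thread reader = new Thread(() -> {
            for (String p : packets) {
                try { queue.put(p); } catch (InterruptedException e) { return; }
            }
        });
        Thread processor = new Thread(() -> {
            for (int i = 0; i < packets.size(); i++) {
                try { processed.add(queue.take()); } catch (InterruptedException e) { return; }
            }
        });

        reader.start();
        processor.start();
        try {
            reader.join();
            processor.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // FIFO queue + a single processor thread => messages come out in sending order.
        return processed;
    }
}
```

In a real receiver the reader loop would wrap socket.receive(); the point is only that the bound, not any extra locking, is what keeps the queue size in check.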

Similar Messages

  • Generic Queue not necessarily for delta only

    Hi,
    I am using the R3 transparent table to store the data extracted, and I am using it to create the datasource from RSO2 => 'Extraction from DB view'. Is there a way to write the data from this table to a queue on the R3 side every time the data is extracted?
    Thank you

Dear Crystal Smith,
As Sat was explaining:
1) If your datasource setup (in RSO2) has a suitable
   delta handling mechanism, the R3 system will write
   the delta records to the suitable queue
   "automatically".
2) You would have chosen a suitable timestamp, date,
   or numeric pointer type field for the system to base
   the delta mechanism on.
    Check the link:
       Need "Generic Data source using Function module" to handle Delta
    Good luck, BB

  • Asynchrony within a synchronous service

    Hi,
    We currently provide an HTTP request/reply based service. We are looking at Java CAPS with a view to increasing our scalability; in particular, we are aiming to move away from a process/thread per request model. However, we cannot change the MEP of our service; it must look to the outside like a synchronous request/reply black box.
    What we had in mind was some sort of model whereby a "front-end" HTTP listener would accept requests and forward them via JMS queues to one or more sub-processes; once processing is complete, a response message (with some sort of correlation ID) would be written to a suitable queue, from where it would be attached to the relevant HTTP connection as a response. Essentially, we want to access the HTTP response from a different "process" (i.e. thread, JCD, BP, whatever) than that which processed the request.
    From my (brief) investigation so far, I've seen it suggested that the only way to implement an HTTP request/reply scenario with JMS traffic "under the hood" is to have the JCD or BP that receives the original request make a blocking request/reply JMS call to the back end. Unless I am misunderstanding, this would seem to go against our goal to eliminate thread per request situations.
    Can anyone confirm or deny this? Or let me know if I'm thinking about this all wrong?
    Cheers,
    Bob.

If you do this with a JCD, the entire HTTP request is processed by one thread. However, your scalability requirement can be met using eInsight (BPEL). When invoking a JMS.send() from BPEL, it is done on a different thread from the one that processed your original HTTP request. Similarly, when you do a JMS.receive() to wait for the correlated response, a new thread is created to process the receive activity in BPEL. While it waits for the response, the BPEL 'instance' resides in memory without any threads associated with it, so you don't have a blocking JMS request-reply. The BPEL engine was architected to take care of this internally, by executing different types of activities on different threads (you tune the total thread usage using the 'work item submit limit' property in the eInsight engine properties).
To correlate the JMS reply to the same BPEL 'instance' (not thread), you would define a BPEL correlation key that uses the JMS message's correlationId. Just make sure whoever processes the JMS request and sends a reply also maps the JMS correlationId property from the original JMS message. The BPEL mapper provides unique number operators (e.g. BPEL instance id) that can be used to set/initialize your correlation.
    Hope this helps,
    Rupesh.
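To illustrate the correlation idea without a full JMS/BPEL stack, here is a plain-Java sketch of the same pattern: pending requests keyed by a correlation ID, completed by whichever thread receives the reply. All class and method names are made up; a real deployment would carry the ID in the JMS correlationId property as Rupesh describes, and the "back end" would be a separate process.

```java
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class Correlator {
    // Pending requests keyed by correlation ID, analogous to a BPEL correlation
    // key defined over the JMS correlationId property.
    private final ConcurrentMap<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // "Front end": register the request and return a handle the HTTP layer can
    // complete later; no thread blocks between request and reply.
    public CompletableFuture<String> sendRequest(String body) {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(correlationId, reply);
        // ...here body + correlationId would be sent to a JMS request queue...
        backEnd(correlationId, body); // simulated back end, replies immediately
        return reply;
    }

    // "Back end": whoever processes the request must copy the correlationId
    // onto the reply, just as with the JMS correlationId property.
    private void backEnd(String correlationId, String body) {
        onReply(correlationId, "processed:" + body);
    }

    // Reply listener: match the reply to the waiting instance by ID.
    public void onReply(String correlationId, String replyBody) {
        CompletableFuture<String> f = pending.remove(correlationId);
        if (f != null) f.complete(replyBody);
    }
}
```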

  • Setting format and format options in Output module settings for render queue item

    Hello,
I am unable to set the "Format" and "Format options" for video in the output module settings programmatically using the After Effects APIs.
I referred to the After Effects CS3 SDK guide for the APIs.
I found APIs for all the other options in the "Output Module Settings", like:
    AEGP_SetEmbedOptions
    AEGP_SetOutputChannels
    AEGP_SetStretchInfo
    AEGP_SetSoundFormatInfo
But there is no API listed for setting the options available in the "Format" tab and "Format options" tab.
The "Format" tab and "Format options" tab are available in the dialog that opens when the user clicks the output module settings for a render queue item.
The Format tab, when clicked, shows a drop-down list with all the different formats; by default "Video for Windows" is set.
The drop-down list contains the following format options:
    Adobe Clip Notes
    Adobe Flash video
    Quicktime movie
    Video for Windows
I need to be able to set the "QuickTime movie" option in the Format tab programmatically and then set the compression type to "Animation" in the compression settings programmatically, using the API functions available in the AE CS3 SDK.
Please suggest a suitable API to do so.
I need to write my own plugin to export to QuickTime movies using the After Effects APIs.
I follow the steps below:
    1. AEGP_InsertMenuCommand and add export option to AE with my own plugin name
    2. In the command hook, select active item using AEGP_GetActiveItem
    3. Add it to render queue
4. Set the output module settings for the 0th output module using suites.OutputModuleSuite1()
5. Use different functions from suites.OutputModuleSuite1() to set the output module settings like EmbedOptions, StretchInfo, etc.
6. Up to this step everything works, but I am not able to find any API for setting the format options using suites.OutputModuleSuite1().
I also checked all the other suites available for setting FormatOptions, but no luck.
    Please help.
    Thanks,
    -Namita

    Hi Namita,
    I am experiencing the same problem.
I am using the AE CC SDK, and I am unable to change the output module format to any of the other movie format types (mov, mpg, flv, etc.).
It is always set to "avi".
I even compiled and ran the Queuebert example in the SDK, and there I get the same problem: the path extension is always left as ".avi".
    Does anyone know how to change this?

  • JMS Queue connection

    Hi Experts ,
I have a sticky problem while posting data from JMS to the file system.
In CC monitoring I got a success message saying the adapter is successfully connected to the JMS queue, but no data is available when checking 'sxmb_moni'.
Note: in this requirement we are doing content conversion in the JMS sender adapter.
We are not able to get the data into XI even though we connected to JMS with all the proper connection settings for the JMS queue.
The message we got in the CC: Successfully connected to destination 'jms/queue/SendEscFeedweblogic_DXG'
Please help me get the data into XI (a suitable reward will be given).
    Thanks
    Shoukath

Hi Shoukath,
Can you check the audit logs in message monitoring, so we can work out what exactly the error is?
    or even you can check the logs at
    http://<Hostname>:<portnumber>/MessagingSystem/monitor/monitor.jsp
    regards
    Ramesh P

  • Does the priority queue always work?

Hi,
I have an 8 Mbps WAN link which sometimes gets saturated, and I have applied 'shape average' at 8 Mbps. I am running voice on this WAN link and have defined a priority of 850 kbps for voice under the voice class. My question is: when the link is not fully utilized, will packets from the priority queue always be dequeued first compared with packets sent from the other queues, or will QoS not do anything here, since the link utilization is a lot less than what is specified in 'shape average'? I am asking this to confirm whether the priority queue always helps to overcome jitter, whether the link is saturated or not.
    Thanks

    You describe PQ and shaping, but the former is usually a part of doing QoS on L2/L3 switches, and the latter on routers.  What device(s) and their IOS versions and the WAN media are you working with?
    On "routers", interfaces generally have FIFO tx-rings, only when they overflow, are packets placed in CBWFQ queues.  Within CBWFQ, LLQ will be dequeued first, but such packets might have already been queued behind other non-LLQ traffic within the interface tx-ring.  (NB: for routers, with tx-rings, when supporting VoIP, you may want to minimize the size of the tx-ring.)
Shapers, in my experience, are "interesting". First, I believe many shapers don't account for L2 overhead, but provider CIRs often do. So unless you shape slower than the nominal CIR rate, you can send faster than the available bandwidth. (Often I've found shaping 10 to 15% slower allows for average L2 overhead.)
Second, shapers work on averages over time intervals. For VoIP, you'll often want to ensure the shaper is using a small Tc, otherwise it will allow FIFO bursts. (I've found a Tc of 10 ms seems to support VoIP fairly well.)
Third, I suspect some shapers might have their own queues between the interface and the defined policy queues. If they do, it's unknown what their queuing organization is or what queuing depths they support. If this is the case, it makes it difficult to engineer QoS.
Whenever possible, I've found it beneficial to avoid using shapers, especially for timing-sensitive traffic like VoIP. In your case, I would suggest, if possible, obtaining 10 Mbps of WAN bandwidth and somewhere passing the traffic through a physical 10 Mbps interface, with a QoS policy.
But to more directly answer your question, PQ (or LLQ) will dequeue its packets next compared to other "peer" queues. This should always help VoIP for delay and jitter, but there's more involved in whether this is necessary and/or whether it's helpful enough when necessary.
You ask about when a link is saturated, but a link is 100% saturated every time a packet is being transmitted. Often link usage is represented as a percentage of the possible maximum transmission rate over some time period, but when it comes to QoS, 100% utilization might be just fine while 1% utilization is not. Much, much more information about your situation might be needed to offer truly constructive recommendations.

  • Is JMS based solution is suitable for the following:

I am new to JMS. I'm familiar with its basics (in theory, and I've run very simple examples).
I was wondering whether a client-server-client system exchanging text messages and images is suitable for JMS, or whether I should use a non-J2EE component. The system should be able to serve about 100 users at the same time, with logic determining who should get which messages.
    Thanks.

    The main question is number 1; for massive files (over 10-20Mb) you don't want to use 1 message per file; you either need to split it yourself into multiple messages or use some JMS Stream helpers like ActiveMQ has...
    http://activemq.org/JMS+Streams
    Decent JMS providers can handle massive load & high performance, federated networks, load balancing and reliability as well as things like throttling, flow control and so forth along with added features like JMX monitoring of queue depths & throughput rates so questions 2 and 3 can be supported whatever those numbers are - assuming you've got a decent provider and enough hardware :).
    For question 4; you've the option in JMS to choose your quality of service; whether to use persistence or not or to use queues or topics etc.
    I'd maybe make the questions
    1. How big are your images
    2. Do you need point to point or publish/subscribe?
Though as Steve suggests, it's always worth testing that your load can be handled correctly by your middleware technologies & hardware.
    James
    http://logicblaze.com/
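On the "don't use one message per massive file" point, the splitting itself is straightforward. A hypothetical sketch (class and method names invented; the actual JMS send/receive is left out, and a real sender would also attach index/total headers so the consumer can reorder):

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split a large payload into fixed-size chunks, one message body each.
    public static List<byte[]> split(byte[] payload, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += chunkSize) {
            int len = Math.min(chunkSize, payload.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(payload, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    // Reassemble the chunks (assumed already in order) back into one payload.
    public static byte[] join(List<byte[]> chunks) {
        int total = 0;
        for (byte[] c : chunks) total += c.length;
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, off, c.length);
            off += c.length;
        }
        return out;
    }
}
```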

  • Advanced Queues Snapshot too old error

    I am using the advanced queues to submit work for parallel processes running through the Oracle Job Queue.
I have attempted running anywhere from 1 to 5 simultaneous processes (in addition to the process which submits them to the Oracle job queue and populates the advanced queues), and I am getting sporadic Snapshot too old errors when the other processes attempt to dequeue. The advanced queues are populated before the other processes are submitted to the job queue, so I don't see how there could be conflicts between one process enqueuing while another is dequeuing.
    The reason I am attempting this is to try and gain some performance by running processes in parallel.
    Has anyone else had problems like this? Is this a bug in Oracle 8.1.6? Is there a parameter setting I need to adjust? Are there any suggestions for getting around this problem?

I don't know what version of the database you are running; I'm only using 8.1.7.4. But where I come from, you add datafiles to the tablespace, not the rollback segment:
alter tablespace rollback
add datafile '<blah, blah>'
size 147m
autoextend on next 100m maxsize 2047m;
Make sure that you have a suitable number of rollback segments with well-sized extents. But mostly, listen to Tom Best, and try to introduce some best practices (no pun intended) to reduce the likelihood of this situation arising.

  • SMQ2 Inbound Queue TIME_OUT dump

    Hi all,
When we try to run the SMQ2 transaction, it results in a TIME_OUT error. Hence we can neither view the entries nor delete them. We also don't know from where the entries are being received. All the RFC connections are working fine. We are sure that there are a large number of entries being CIFed. Please provide us a solution on how to resolve this dump. Is there anything that should be done with the LUWs?
    Thanks in Advance,
    Varun

    23.09.2011     11:01:58     010     C     TIME_OUT               SAPLIRFC     2
    What happened?
    The program "RSTRFCM3" has exceeded the maximum permitted runtime without
    interruption, and has therefore been terminated.
    What can you do?
    Print out the error message (using the "Print" function)
    and make a note of the actions and input that caused the
    error.
    To resolve the problem, contact your SAP system administrator.
    You can use transaction ST22 (ABAP Dump Analysis) to view and administer
    termination messages, especially those beyond their normal deletion
    date.
    Error analysis
    After a certain length of time, the program is terminated. In the case
    of a work area, this means that
    - endless loops (DO, WHILE, ...),
    - database accesses producing an excessively large result set,
    - database accesses without a suitable index (full table scan)
    do not block the processing for too long.
    The system profile "rdisp/max_wprun_time" contains the maximum runtime of a
    program. The
    current setting is 1800 seconds. Once this time limit has been exceeded,
    the system tries to terminate any SQL statements that are currently
    being executed and tells the ABAP processor to terminate the current
    program. Then it waits for a maximum of 60 seconds. If the program is
    still active, the work process is restarted.
    successfully processed, the system gives it another 1800 seconds.
    Hence the maximum runtime of a program is at least twice the value of
    the system profile parameter "rdisp/max_wprun_time".
    How to correct the error
    You should usually execute long-running programs as batch jobs.
    If this is not possible, increase the system profile parameter
    "rdisp/max_wprun_time".
    Depending on the cause of the error, you may have to take one of the
    following measures:
    - Endless loop: Correct program;
    - Dataset resulting from database access is too large:
      Instead of "SELECT * ... ENDSELECT", use "SELECT * INTO internal table
      (for example);
    - Database has an unsuitable index: Check index generation.
    You may able to find an interim solution to the problem
    in the SAP note system. If you have access to the note system yourself,
    use the following search criteria:
    "TIME_OUT" C
    "RSTRFCM3" or "RSTRFCM3"
    "SHOW_FILE_VIEW"
    If you cannot solve the problem yourself, please send the
    following documents to SAP:
    1. A hard copy print describing the problem.
       To obtain this, select the "Print" function on the current screen.
2. A suitable hardcopy printout of the system log.
       To obtain this, call the system log with Transaction SM21
       and select the "Print" function to print out the relevant
       part.
    3. If the programs are your own programs or modified SAP programs,
       supply the source code.
       To do this, you can either use the "PRINT" command in the editor or
       print the programs using the report RSINCL00.
    4. Details regarding the conditions under which the error occurred
       or which actions and input led to the error.
    We are getting the error only for this transaction.
    We are facing this issue from Friday.
    We are not able to retrieve the queue list. Instead after 1800 seconds, the transaction is resulting in TIME_OUT error.
    Edited by: Nuravc on Sep 26, 2011 7:42 AM
    Edited by: Nuravc on Sep 26, 2011 8:16 AM

  • Mail stuck in "Mailbox Database ..." queue

    Hi guys
    In our organization we have an Exchange Server 2013 CU6 with both server roles CAS and Mailbox installed (absolutely fresh deployment). Users from AD have successfully been assigned mailboxes. The authoritative domain of the server is set correctly to our
    local domain. The DNS MX record to the Exchange-host exists and works well (tested with nslookup and setting q=MX). For testing purposes all the receive connectors allow connections from anonymous users.
I've tried to send an SMTP mail via telnet, connecting to the CAS via port 25. The connection is established and the sender and receiver are accepted by the server, but when I submit the message (a line containing only a single full stop) I get the error:
    451 4.7.0 Temporary server error. Please try again later. PRX2
I have sent the same message to the Transport Service on the Mailbox Server via port 2525 and the message was accepted, but it was not delivered to the recipient's mailbox. Instead it just got stuck in a queue named "Mailbox Database ...".
Its status is "Ready".
    I have restarted all possible services and rebooted the system but that did not help. Do you have any idea what could be the matter? Thank you!
    Cheers Dimitri

    Hi JCvanDamme,
I recommend you refer to the following article; it may give you some hints:
    http://blog.kloud.com.au/2013/11/22/exchange-2013-dns-settings-cause-transport-services-to-crash/
    Best regards,
    Niko Cheng
    TechNet Community Support

  • Direct Delta, queued , Non-serialized V3 Update

Could someone out here help me understand Direct Delta, Queued Delta, and Non-serialized V3 Update?
I mean a detailed-level explanation, or in case you have any document, kindly pass it on to [email protected]

    Hi,
    Update Methods,
    a.1: (Serialized) V3 Update
    b. Direct Delta
    c. Queued Delta
    d. Un-serialized V3 Update
    Direct Delta: With this update mode, the extraction data is transferred with each document posting directly into the BW delta queue. In doing so, each document posting with delta extraction is posted for exactly one LUW in the respective BW delta queues.
Queued Delta: With this update mode, the extraction data for the affected application is collected in an extraction queue instead of being posted directly, and can be transferred as usual with the V3 update by means of an updating collective run into the BW delta queue. In doing so, up to 10000 delta extractions of documents per LUW are compressed into the BW delta queue for each DataSource, depending on the application.
Non-serialized V3 Update: With this update mode, the extraction data for the application in question is written as before into the update tables with the help of a V3 update module. It is kept there until the data is selected and processed by an updating collective run. However, in contrast to the current default setting (serialized V3 update), the data in the updating collective run is read from the update tables without regard to sequence and transferred to the BW delta queue.
    Note: Before PI Release 2002.1 the only update method available was V3 Update. As of PI 2002.1 three new update methods are available because the V3 update could lead to inconsistencies under certain circumstances. As of PI 2003.1 the old V3 update will not be supported anymore.
    (serialized) V3
    • Transaction data is collected in the R/3 update tables
    • Data in the update tables is transferred through a periodic update process to BW Delta queue
    • Delta loads from BW retrieve the data from this BW Delta queue
    Transaction postings lead to:
    1. Records in transaction tables and in update tables
    2. A periodically scheduled job transfers these postings into the BW delta queue
    3. This BW Delta queue is read when a delta load is executed.
    Issues:
• Even though it is called serialized, the correct sequence of extraction data cannot be guaranteed
• V2 update errors can lead to V3 updates never being processed
    direct delta
    • Each document posting is directly transferred into the BW delta queue
    • Each document posting with delta extraction leads to exactly one LUW in the respective BW delta queues
Transaction postings lead to:
1. Records in transaction tables and directly in the BW delta queue
2. This BW delta queue is read when a delta load is executed.
    Pros:
    • Extraction is independent of V2 update
    • Less monitoring overhead of update data or extraction queue
    Cons:
    • Not suitable for environments with high number of document changes
    • Setup and delta initialization have to be executed successfully before document postings are resumed
    • V1 is more heavily burdened
    queued delta
    • Extraction data is collected for the affected application in an extraction queue
    • Collective run as usual for transferring data into the BW delta queue
    Transaction postings lead to:
    1. Records in transaction tables and in extraction queue
    2. A periodically scheduled job transfers these postings into the BW delta queue
    3. This BW Delta queue is read when a delta load is executed.
    Pros:
    • Extraction is independent of V2 update
    • Suitable for environments with high number of document changes
    • Writing to extraction queue is within V1-update: this ensures correct serialization
    • Downtime is reduced to running the setup
    Cons:
    • V1 is more heavily burdened compared to V3
    • Administrative overhead of extraction queue
Un-serialized V3
• Extraction data is written as before into the update tables with a V3 update module
• A V3 collective run transfers the data to the BW delta queue
• In contrast to serialized V3, the data in the updating collective run is read from the update tables without regard to sequence
    Transaction postings lead to:
    1. Records in transaction tables and in update tables
    2. A periodically scheduled job transfers these postings into the BW delta queue
    3.This BW Delta queue is read when a delta load is executed.
    Issues:
    • Only suitable for data target design for which correct sequence of changes is not important e.g. Material Movements
    • V2 update has to be successful
    Hope this helps
    regards
    CSM Reddy

  • Multiple Consumers on a set of Queues

Is there any better way of implementing this other than polling all the queues?
Or is there any 3rd-party library which can do this already?
    Regards
    Gopal

Sure, you use wait/notify. The consumer synchronizes on a suitable monitor, then checks all the queues, and waits if they are all empty. Producers for any of the queues notify all the consumers.
You'd probably want the consumer threads to check the queues cyclically, rather than starting at the beginning each time they find an object, so that they don't favour one queue excessively.
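A minimal sketch of that scheme (names invented; error handling and shutdown omitted): one monitor guards all the queues, producers notify after adding, and consumers scan cyclically so no queue is favoured.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class MultiQueueConsumer {
    private final Object monitor = new Object();  // one monitor shared by all queues
    private final List<Queue<String>> queues = new ArrayList<>();
    private int next = 0;                         // cyclic scan start index

    public MultiQueueConsumer(int queueCount) {
        for (int i = 0; i < queueCount; i++) queues.add(new ArrayDeque<>());
    }

    // Producer side: add to one queue and wake every waiting consumer.
    public void produce(int queueIndex, String item) {
        synchronized (monitor) {
            queues.get(queueIndex).add(item);
            monitor.notifyAll();
        }
    }

    // Consumer side: scan all queues cyclically; wait only if all are empty.
    public String take() throws InterruptedException {
        synchronized (monitor) {
            while (true) {
                for (int i = 0; i < queues.size(); i++) {
                    Queue<String> q = queues.get((next + i) % queues.size());
                    if (!q.isEmpty()) {
                        next = (next + i + 1) % queues.size(); // resume scan after this queue
                        return q.poll();
                    }
                }
                monitor.wait(); // all empty: sleep until a producer notifies
            }
        }
    }
}
```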

  • Weblogic 6.1 + linux + oracle: no suitable driver

    Hi folks
    I've seen a few postings on this, but am still stuck...would
    appreciate any ideas!
    here's the situation:
    -redhat 7.1, sun jdk 1.3.1_02, oracle 8.1.7
    -classes12.zip is in the classpath
    -config.xml contains:
    <JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
    Name="oraclePool" Password="xxxxxxxxxxxx"
    Properties="UserName=test" Targets="myserver"
    TestConnectionsOnRelease="false"
    TestConnectionsOnReserve="false" TestTableName="KEYVALUE"
    URL="jdbc:oracle:thin@harrylocal:1521:cseatdev"/>
    -errormsg on weblogic start (from log)
    <Error> <JDBC> <harry.menta.es> <myserver> <ExecuteThread: '9' for
    queue: 'default'> <system> <> <001060> <Cannot startup connection pool
    "oraclePool" No suitable driver>
    -snippet from standalone java pgm that connects to oracle
    successfully:
    DriverManager.registerDriver(new
    oracle.jdbc.driver.OracleDriver());
    Connection conn = DriverManager.getConnection
    ("jdbc:oracle:thin:@harrylocal:1521:cseatdev","test","xxxxxxxxxx");
    thanks for any thoughts
    josh

    This should have been posted in weblogic.developer.interest.jdbc. Next
    time ;-)
    But, your URL in the connection pool declaration is:
    URL="jdbc:oracle:thin@harrylocal:1521:cseatdev"/>
    and, your URL in the standalone java code is:
    Connection conn = DriverManager.getConnection
    ("jdbc:oracle:thin:@harrylocal:1521:cseatdev","test","xxxxxxxxxx");
You are missing a ':' between the 'thin' and the '@' sign in the
offending URL. Add the ':' and you will be fine.
    Bill
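To see the difference at a glance, the thin driver expects jdbc:oracle:thin:@<host>:<port>:<SID>. The helper below is only an illustration of that prefix check, not part of the driver:

```java
public class UrlCheck {
    // The thin driver expects: jdbc:oracle:thin:@<host>:<port>:<SID>
    static final String BAD  = "jdbc:oracle:thin@harrylocal:1521:cseatdev";  // ':' missing before '@'
    static final String GOOD = "jdbc:oracle:thin:@harrylocal:1521:cseatdev";

    static boolean looksLikeThinUrl(String url) {
        return url.startsWith("jdbc:oracle:thin:@");
    }
}
```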

  • Message storage in PI while it in queue ?

    Hi All,
If we process a message into an XI queue and it is held there, where does it store the data while the message is in scheduled or waiting status?
    Thanks in advance
    kartikeya

    Hi,
Even though the message has got stuck in the queue, it will still come to MONI, and we can see the message status as 'To be delivered' or 'Delivering', whatever it may be.
We have to install Oracle or another suitable database at the time of XI installation for this very purpose (to store messages).
    Regards
    Seshagiri

  • Queue based publication items

    has anyone used these and what for?
I (sort of) managed to get these to work as a replacement for all of our fast refresh publication items, and to get real-time synchronisation without the MGP process running, but I had to put in some serious workarounds to get it to do fast refreshes, and getting it set up is complicated.
I would be interested in knowing what other people have done with them, to see if there is anything that I have missed.

A 2 hour compose with the 400+ users was the situation when we went live in August 2006, and whilst not perfect, that was OK. We also never had any problem with mobile blocking users when it started the compose cycle. The current situation is up to a 12 hour compose cycle, with 5 minutes of blocking all users when it starts - this is not acceptable.
    Database is however overly generic, and in some cases uses inappropriate structures (example status table looks to be growing at the rate of a million records a month, with only a hundred or so of those records of any interest). We also started using streams for extranet services, and the two do not look to play nicely together.
    no more options for tuning (did everything when we went live), partitioning may help but difficult with the database structures, most of the data is specific to our users, most of the data that could be shared is within complete refresh items to bypass the MGP process already.
We could (and may still do, as the implementation of the queue based stuff is going to be very complicated) stick with the MGP process and do tricks like creating our own logging triggers to stop 'uninteresting' records being logged in the CVR$ tables and C$ALLL_SID_LOGGED_TABLES, and then deleting unnecessary data from the CVR$ tables to speed up the process-logs phase (sure, it is not recommended, but it works), using MyCompose to work around the poor execution plans put together by the default compose process, and DML call-outs to improve the apply processing, but my fear is we will be back in the same boat in 18 months' time.
As 90%+ of our synchronisations are done unattended between about 7:30 pm and 10 pm, a slower but more real-time sync option fits the business model fairly well. To my mind, why pre-prepare data 12 times a day (assuming a 2 hour cycle), with all of the continual performance overhead, when users only download it once, when no one else is working?
    The queue based stuff is a bit cumbersome and an odd mixture of java, PL/SQL and DDL scripts (plus a few cheats inserting data directly into the repository tables, and updating other things), and the development effort required should not be underestimated.
Anyone considering it needs to be clear about what they are trying to achieve (the examples are limited to stuff you could do more easily with complete refresh items), and needs a good understanding of the internals. It is definitely not suitable if synchronisation time is a priority (e.g. if using GPRS all of the time). I should be able to get the average sync time down to 3-4 minutes, but compared with the 20-30 seconds currently, that is still a big jump.
