Queue efficiency

So I'm working on improving the efficiency/speed of my producer-consumer system.
I have two producers and one consumer (which among other things, writes files to the disk).
I currently have both producers feeding one queue, which is then processed by my single consumer.
Each queue item is a cluster of multiple data types and contains everything needed to "consume" it.
In certain cases, more complex queue items get enqueued; these take longer to process and slow down the consumer.
I was thinking of adding a second consumer loop to run in parallel with the first, to take the weight off the single loop, so to speak.
My question is: would it be more efficient to have both consumers dequeue from the same queue, or to have each producer feed its own queue? For the sake of the exercise, assume I can guarantee that the complex queue elements will only come from a specific producer.

Hornless.Rhino wrote:
Would it be more efficient to have both consumers dequeue from the same queue, or to have each producer feed its own queue?
From a performance perspective, the Queue primitives won't care whether you create two queues or share one queue between two consumers. Having multiple slaves (consumers) processing from one queue is perfectly acceptable, so long as the order of consumption is not important.
If you create management code to hand off the 'complex' jobs to a dedicated secondary consumer, then you are not taking full advantage of the multiple-slave framework. It all depends on the ratio of simple to complex jobs, and on the time it takes to complete them. For example, if a complex job arrives once every 10,000 simple jobs, then a dedicated second consumer for the complex jobs will be largely idle and therefore under-utilised. If, however, the ratio is more like one complex job for every two or three simple jobs, then you could find the primary consumer is largely under-utilised instead.
The best balance is to allow both consumers to dequeue all job types, so that both are working at maximum capacity.
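Outside LabVIEW, the same one-queue/two-consumer arrangement can be sketched in a few lines of Java (the job names, counts, and timing below are invented purely for illustration; LabVIEW's queue primitives behave analogously):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SharedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // One bounded queue shared by two producers and two consumers.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

        Runnable producer = () -> {
            for (int i = 0; i < 50; i++) {
                try {
                    queue.put(Thread.currentThread().getName() + "-job" + i);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        };

        // Both consumers take whatever job comes next, simple or complex,
        // so neither sits idle while the other is busy.
        Runnable consumer = () -> {
            try {
                while (true) {
                    String job = queue.take();
                    System.out.println(Thread.currentThread().getName() + " processed " + job);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Thread p1 = new Thread(producer, "producerA");
        Thread p2 = new Thread(producer, "producerB");
        Thread c1 = new Thread(consumer, "consumer1");
        Thread c2 = new Thread(consumer, "consumer2");
        p1.start(); p2.start(); c1.start(); c2.start();

        p1.join(); p2.join();
        Thread.sleep(1000);   // let the consumers drain the queue
        c1.interrupt(); c2.interrupt();
    }
}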
Thoric (CLA, CLED, CTD and LabVIEW Champion)

Similar Messages

  • CPU schedulers compared (bfs vs cfs)

    Abstract
Con Kolivas' Brain Fuck Scheduler (bfs) was designed to provide superior desktop interactivity and responsiveness to machines running it.[1] However, it was not explicitly designed to provide superior performance. The purpose of this study was to compare the Completely Fair Scheduler (cfs) in the vanilla Linux kernel against the bfs in the corresponding kernel patched with the ck1 patchset. Seven different machines were used to see if differences exist and to what degree they scale, using performance-based metrics. Again, these end-points were never factors in the primary design goals of the bfs. Results were encouraging.
Kernels patched with the ck1 patchset (which includes the bfs) outperformed the vanilla kernel using the cfs at nearly all of the performance-based benchmarks tested. Further study with a larger test set could be conducted, but based on the small test set of seven PCs evaluated, these increases in process-queuing efficiency/speed are, on the whole, independent of CPU type (mono, dual, quad, hyperthreaded, etc.), CPU architecture (32-bit and 64-bit), and CPU multiplicity (mono or dual socket).
Moreover, on several "modern" CPUs (Intel C2D and Ci7) that represent common workstations and laptops, the bfs consistently outperformed the cfs in the vanilla kernel at all benchmarks. Efficiency and speed gains were small to moderate.
    Link to complete study
    http://repo-ck.com/bench/cpu_schedulers_compared.pdf
    [1] http://ck.kolivas.org/patches/bfs/bfs-faq.txt
    Comments are welcomed.

Thaodan wrote: Will it replace cfs?
AFAIK, Con Kolivas (the creator of bfs) has no intention of bringing it to the mainline: http://ck.kolivas.org/patches/bfs/bfs-faq.txt
    Are you looking at getting this into mainline?
    LOL.
    No really, are you?
    LOL.
    Really really, are you?
    No. They would be crazy to use this scheduler anyway since it won't scale to
    their 4096 cpu machines. The only way is to rewrite it to work that way, or
    to have more than one scheduler in the kernel. I don't want to do the former,
    and mainline doesn't want to do the latter. Besides, apparently I'm a bad
    maintainer, which makes sense since for some reason I seem to want to have
    a career, a life, raise a family with kids and have hobbies, all of which
    have nothing to do with linux.

  • Most efficient way to constantly read, queue and parse multi-sized RS232 data (multi-threaded)

I've tried tackling this problem a few different ways, and figured it was time to get some others' advice. My system essentially works, although it looks like a hackjob and I'm not entirely confident in it.
My RS-232 connection has the following properties/constraints:
-Unsolicited data arrives at a high data rate (yes, that's subjective, but assume near-constant at whatever baud rate it's set to, up to 115200).
-Received data segments vary in length.
-Two stop bytes (0x10 0x03), while the start byte is 0x10 (byte stuffing/packing is implemented).
-There is a size byte within a packet (the third one in); however, I'm currently relying on the stop bytes only.
I have tried the ComCallback within CVI, only to find that it is VERY slow at processing events compared to implementing it manually in its own thread. In addition, it can only trigger on one stop byte, not two. Triggering on size is sometimes okay, but I found it could fire on only part of the data; by the time it was called the queue length would be larger, so sometimes I would read only part of a data packet while half of my segment was still in the queue. And sometimes I would get semaphore locks and lots of waiting; it was a mess (hence you will see lots of CMT locks commented out).
I tried implementing a FIFO-type queue (copied below), but I have very little experience doing this; it may not be very efficient the way I implemented it, and I could definitely use some advice in this area, as well as on how thread-safe everything is.
I thought about a circular buffer, but since the data segments can be different lengths, it is difficult to cleanly wrap around and read. I think it's still possible, it just may require additional checks which I haven't seen done anywhere when searching Google (which made me think a FIFO queue was better).
So if anyone has any good suggestions or examples, that would be great. Using LabWindows/CVI 2010.
//in Main, before the GUI is loaded
programRunning = 1;
CmtScheduleThreadPoolFunction (DEFAULT_THREAD_POOL_HANDLE, ComCallback, NULL, &funcID);

/* Function used to parse COM data.
   Globals assumed: programRunning, com_open, comPort, readBuf[1024], lock. */
static int ComCallback (void *callbackData)
{
    static int bufLen = 0;
    int strLen, qLen = 0;
    unsigned char tempBuf[1024];
    int packet_length = 0;
    unsigned char *start_ptr;
    int i;
    int start_offset;

    while (programRunning) {
        /* First copy all data from the COM port into the working buffer */
        if (com_open)
            qLen = GetInQLen (comPort);
        if (qLen <= 0) {
            ProcessSystemEvents ();
        } else {
            if (qLen > 1024 - bufLen)     /* clamp to the space left in readBuf */
                qLen = 1024 - bufLen;
            strLen = ComRd (comPort, tempBuf, qLen);
            //CmtGetLock(lock);
            memcpy (readBuf + bufLen, tempBuf, strLen);
            bufLen += strLen;
            //CmtReleaseLock(lock);

            /* Keep reading until we hit the stop bytes, or until we think we
               have at least one or two packets to process */
            if (tempBuf[strLen-2] != 0x10 && tempBuf[strLen-1] != 0x03 && bufLen < 100)
                goto skip;

            /* Ensure we start at the beginning of a command (0x10 0xAA), in case
               we started reading in the middle of a packet */
            i = 0;
            while ((readBuf[i] != 0x10 || readBuf[i+1] != 0xAA) && i < bufLen - 1)
                i++;
            start_offset = i;

parse_some_more:
            start_ptr = readBuf + start_offset;
            /* Try to handle one packet at a time */
            for (i = start_offset; i < bufLen - 1; i++) {
                if (readBuf[i] == 0x10) {
                    if (readBuf[i+1] == 0x03) {           /* stop sequence found */
                        i += 2;
                        break;
                    } else if (readBuf[i+1] == 0x10) {    /* two 0x10s in a row: unstuff */
                        memmove (&readBuf[i], &readBuf[i+1], bufLen - i - 1);
                        bufLen--;
                    }
                }
            }
            /* At this point, we should have a full packet. What if we don't...??? */
            packet_length = i - start_offset;
            //CmtGetLock(lock);
            ParseResponse (start_ptr, packet_length);
            PostPacketToOutputBuffer (start_ptr, packet_length);
            if (start_ptr[0] == 0x10 && start_ptr[1] == 0xAA)
                com_Send_Acknowledge (comPort);
            start_offset += packet_length;
            if (start_offset < bufLen - 1)
                goto parse_some_more;
            /* For now, assume we don't care about anything else left in the buffer;
               it should probably be bufLen -= start_offset, with better handling of
               partial data (timeout?). */
            bufLen = 0;
            //CmtReleaseLock(lock);
skip:       ;
        }
    }
    return 0;
}
    Thanks!

    Hi ngay528,
I think there is a great example for you to use that comes with CVI, which can be found by clicking Find Examples on the splash screen or Help >> Find Examples in your project. From there, click into Optimizing Applications >> Multithreading. In that folder there is a project called BuffNoDataLoss that shows how to create a thread-safe queue and set up a producer/consumer type program. In this example the data is a random sine wave, but it could be adapted to your RS232 data. If you have any questions concerning this example please let me know, but this should be a great starting point.
    Patrick H | National Instruments | Software Engineer

  • Strange problems with my project (efficiency, event structure, queue)

Hey guys, I am a LabVIEW user from China, so forgive me for the Chinese text in my project. I have found something wrong with my work.
There are two functions in my while loop: the first one depends on a queue and gets information from the UART, and the second is an event structure that sends information through the UART. My problems:
1. I cannot stop the while loop by pressing "停止" (Stop), and I have found two ways to stop it. First, I press "停止", then press any button that triggers the event structure twice; at last the while loop stops. The second way: I press a button for the event structure once, then "停止", and then the same event button again. That also works.
2. My program does not run efficiently. You can try it: once you press a button for the event structure, it responds very slowly, and if you press it many times the whole program seems to hang. I have added an LED named "test" on the front panel; it should light once you press "运行" (Run), and it also shows how slow the whole program is.
I have attached my work; my main VI is "串口通信 主VI" (serial communication main VI).
Can anybody give me some suggestions? Thanks a lot.
    Attachments:
    Uart.zip ‏140 KB

Your event structure will execute exactly once per iteration of the main loop, so the other three loops must stop before the next event can be handled. You will also get a race condition between the top loop and the events when writing.
I'd put the top loop's code in the timeout case of the event structure and place the loop around the event structure, for starters.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Efficiently add to queue from dll

Hi, is there an efficient way to add an array to a queue from a C++ DLL? I need to transfer approximately 500 MB/s to LabVIEW and would prefer to enqueue the data directly from the C++ code rather than polling a circular buffer from LabVIEW.

If you maintain in your circular buffer not just the C pointer but actual LabVIEW array handles, you can swap out those handles in your data-collection call. Something like this:
typedef struct {
    int32 numElm;
    uInt8 elm[];
} LabVIEWArray, **LabVIEWArrayHdl;

typedef struct {
    LabVIEWArrayHdl handles[];
} CircularBuffer;

MgErr GetNextBuffer(LabVIEWArrayHdl *array)
{
    /* Determine if there are any elements in the circular buffer */
    int32 idx = NextDataAvailableIndex();
    if (idx >= 0)
    {
        /* Copy the next available buffer handle */
        LabVIEWArrayHdl temp = CircularBuf->handles[idx];
        /* Store the passed-in handle in its place */
        CircularBuf->handles[idx] = *array;
        /* Pass the handle back to LabVIEW */
        *array = temp;
        return noErr;
    }
    return noDataErr;
}

And in your callback you do something like this:

callback(...., void *data, int len, .....)
{
    MgErr err;
    int32 idx = NextDataInsertIndex();
    if (idx < 0)
    {
        /* Buffer full! */
    }
    LabVIEWArrayHdl handle = CircularBuf->handles[idx];
    err = NumericArrayResize(uB, 1, &handle, len);
    if (!err)
    {
        MoveBlock(data, (*handle)->elm, len);
        (*handle)->numElm = len;
        CircularBuf->handles[idx] = handle;
    }
}
This does not contain the code for initialization of the circular buffer and, maybe almost as importantly, for deallocation. For initialization you simply should make sure that the array of handles is all initialized to NULL. NumericArrayResize() is smart enough to allocate a new handle for NULL values, and resizes non-NULL handles. For deallocation you need to walk the array and, for any element that is not NULL, call the DSDisposeHandle() memory manager function. It also does not show the critical-section handling that is clearly needed here in order to synchronize the data-get and the callback function.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Disk IO efficiency in Bridge CS5 and Camera Raw 6.0

After watching Microsoft Process Monitor (ex-Sysinternals) while processing a large batch of DNGs that were being converted in Camera Raw to a different Process (2010 to 2003) and a different Camera Profile, I have the following questions:
Why read files in sequential chunks far smaller than we could? Why not 128KB or 64KB chunks for the DNG header read instead of 4KB? Why not at least 256KB chunks for the main DNG read? Why not 32KB chunks for the Camera Profiles, if we're going to read them over and over at all?
Why read all the profiles for every single photo, when we can possibly use at most two (the original one and the new one), and more often only one? More to the point, if we're working on a large batch of photos on a machine with 6GB of free RAM, I would suggest loading them into RAM and leaving them there.
Why multiple Create/Close sets on Defaults\Preferences during large batch processing?
Why so many CreateFile...CloseFile sets for the same main DNG file? We're operating on the same file throughout this process. Why close it when we're just going to open it right up again?
    Summary over one file's processing
    00:49:58.72 CloseFile A
    00:49:58.72 Mess with Defaults\Preferences.xmp
    00:49:58.72 CreateFile B
...               ReadFile B in 4KB chunks out to just past Offset 167,936
    00:49:58.72 WriteFile B - write 11,187 bytes at Offset 780
    00:49:58.72 CloseFile B
    ...               Mess with some Adobe\CameraRaw\Database CreateFile, ReadFile, CloseFile stuff for very little time.
    00:49:58.72 CreateFile B
    00:49:58.72 ReadFile B in 64KB chunks out to just past Offset 19,165,184
         00:49:59.12 The last ReadFile; essentially, reading at ~45.84MiB/s for 0.4 seconds
00:49:59.12 Start reading all the CameraProfiles\Adobe Standard\Canon EOS 50D xxx.dcp files in 4KB chunks, 55-110KB or so each; the order is Adobe Standard, Camera Faithful, Camera Landscape, Camera Neutral, Camera Portrait, Camera Standard.
        00:49:59.13 The last ReadFile for the profiles
    00:49:59.13 CreateFile B
    00:49:59.13 ReadFile B in 4KB chunks twice, at Offset 0 and Offset 53,248
    00:49:59.13 CloseFile B
    ...               Mess with some Adobe\CameraRaw\Database CreateFile, ReadFile, CloseFile stuff for very little time.
        00:49:59.13 Last CloseFile on the above
    ??? CPU use, I assume
    00:49:59.78 CloseFile B
??? The previous operation on B appears to also be a CloseFile. I assume I'm failing to interpret Process Monitor; I must have missed an operation.
    00:49:59.78 CreateFile, CloseFile on Defaults\Preferences.xmp - Read Attributes
    00:49:59.78 CreateFile Cache\Index.dat
    00:49:59.78 ReadFile Cache\Index.dat in 4KB chunks then write it in 4KB chunks, about 24KB worth
    00:49:59.78 CloseFile Cache\Index.dat
    ??? CPU use, I assume
    00:50:01.40 CreateFile B
    00:50:01.40 WriteFile B in 256KB chunks
    00:50:01.42 CloseFile B
    00:50:01.43 Another Read Attributes Create/Close set on Defaults\Preferences.xmp
    00:50:01.43 CreateFile C
    repeat pattern
    Total: 2.71 seconds.
    The sum of the two long ??? CPU segments: 2.27 seconds
    The time not part of the two long ??? CPU segments: 0.44 seconds, or about 20% of the total for that file.
Of that, changing the main DNG file read from 64KB chunks to 256KB chunks could have reduced that 20% to about .35 seconds. On a batch of 1000 photos, that's a savings of about a minute and a half. A hundredth of a second here, a hundredth of a second there, and pretty soon you're talking about real time.
    Note that SQLIO, with an outstanding IO queue of only 1 (as this appears to generate), with only 1 thread, dealing with only one 4GB file, for 120 seconds, rates the SSD which stores the DNG's at:
    4KB: 48.10MB/s
    64KB: 173.85MB/s  -- much, much faster
    256KB: 222.95MB/s  -- still significantly faster than 64KB
    While my SSD doesn't take up much more time doing the smaller reads, slower or more heavily used drives will make the lower efficiency more visible to the end user, particularly for batches.
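If you want to reproduce this kind of sequential-read measurement without SQLIO, here is a rough probe in Java (a quick sketch, not a rigorous benchmark; flush the OS file cache between runs for honest numbers):

import java.io.FileInputStream;
import java.io.IOException;

// Rough read-throughput probe: time sequential reads of one file at a given
// chunk size, e.g. 4096, 65536, or 262144 bytes.
public class ChunkReadBench {
    public static void main(String[] args) throws IOException {
        String path = args[0];
        int chunk = Integer.parseInt(args[1]);
        byte[] buf = new byte[chunk];
        long bytes = 0;
        long t0 = System.nanoTime();
        try (FileInputStream in = new FileInputStream(path)) {
            int n;
            while ((n = in.read(buf)) > 0) {
                bytes += n;
            }
        }
        double secs = (System.nanoTime() - t0) / 1e9;
        System.out.printf("%d bytes in %.3f s = %.1f MB/s%n", bytes, secs, bytes / secs / 1e6);
    }
}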

I don't know the specifics, but keep in mind this software is built to work on multiple completely different operating systems. There might be some "lowest common denominator" stuff going on.
    -Noel

  • JMS Wrappers can't cache JNDI lookups when using secured queues

    Hi All!
We are working on a JMS client, inside a webapp (servlets), using WebLogic 9.2 and WebLogic 10.3.
As we want to use secured queues while remaining efficient, we tried to use the WebLogic JMS wrappers, which should work according to the docs:
    Enhanced Support for Using WebLogic JMS with EJBs and Servlets
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jms/j2ee.html
    But we are facing a problem:
When we define a JMS wrapper and try to cache the JNDI lookups for the QueueConnectionFactory and Queue, as the docs recommend for efficiency, the connection to the queue ignores the user/pwd.
The JMS wrapper is using <res-auth>Application</res-auth>.
We are creating the connection using createQueueConnection(user, pwd) from the QueueConnectionFactory, and after several tests it seems that the user and password are ignored unless a JNDI lookup is made in the same thread - as if, when no thread credentials are present, the user and password are ignored for the connection...
    so the question is:
That behaviour goes against the WebLogic JMS wrapper documentation, doesn't it?
Is there any other way to efficiently access secured queues using a servlet as a client? (It's not an option for us to use MDBs or EJBs.)
    If it helps, this seems related to this still opened spring-weblogic issue: SPR-2941 --> http://jira.springframework.org/browse/SPR-2941 and SPR-4720 --> http://jira.springframework.org/browse/SPR-4720
Thanks
And here go our deployment descriptors and code to reproduce it:
    First in pretty format:
    web.xml --> http://pastebin.com/f5f85e8d4
    weblogic.xml --> http://pastebin.com/f2fbe10cc
    Client code --> http://pastebin.com/f586d32d9
And now embedded in the message:
weblogic.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <weblogic-web-app
      xmlns="http://www.bea.com/ns/weblogic/90"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.bea.com/ns/weblogic/90
      http://www.bea.com/ns/weblogic/90/weblogic-web-app.xsd">
        <description>WebLogic Descriptor</description>
        <resource-description>
            <res-ref-name>jms/QCF</res-ref-name>
            <jndi-name>weblogic.jms.ConnectionFactory</jndi-name>
        </resource-description>
</weblogic-web-app>

web.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
          <display-name> QCFWrapperCredentialsTest </display-name>
          <description> QCFWrapperCredentialsTest  </description>
          <servlet id="Servlet_1">
             <servlet-name>QCFWrapperCredentialsTest</servlet-name>
             <servlet-class>QCFWrapperCredentialsTest</servlet-class>
             <load-on-startup>1</load-on-startup>
          </servlet>
          <servlet-mapping id="ServletMapping_1">
             <servlet-name>QCFWrapperCredentialsTest</servlet-name>
             <url-pattern>/Test</url-pattern>
          </servlet-mapping>
         <resource-ref>
            <res-ref-name>jms/QCF</res-ref-name>
            <res-type>javax.jms.QueueConnectionFactory</res-type>
            <res-auth>Application</res-auth>
            <res-sharing-scope>Shareable</res-sharing-scope>
        </resource-ref>
</web-app>

And our test client:
import java.io.*;
import java.util.Properties;
import javax.jms.*;
import javax.naming.*;
import javax.servlet.http.*;

public class QCFWrapperCredentialsTest extends HttpServlet {
    QueueConnectionFactory factory = null;
    Queue queue = null;
    String jndiName = "java:comp/env/jms/QCF";
    String queueName = "jms/ColaEntradaConsultas";
    String user = "usuarioColas";
    String pwd = "12345678";
    String userjndi = "usuarioColas";
    String pwdjndi = "12345678";
    String serverT3URL = "t3://127.0.0.1:7007";

    public void init() {
        setupJNDIResources();
    }

    private void setupJNDIResources() {
        try {
            Properties props = new Properties();
            props.put("java.naming.factory.initial",
                    "weblogic.jndi.WLInitialContextFactory");
            props.put("java.naming.provider.url", serverT3URL);
            props.put("java.naming.security.principal", userjndi); // user
            props.put("java.naming.security.credentials", pwdjndi); // pwd
            InitialContext ic = new InitialContext(props);
            factory = (QueueConnectionFactory) ic.lookup(jndiName);
            queue = (Queue) ic.lookup(queueName);
        } catch (NamingException e) {
            e.printStackTrace();
        }
    }

    public void service(HttpServletRequest req, HttpServletResponse res) {
        res.setContentType("text/html");
        Writer wr = null;
        try {
            wr = res.getWriter();
            // Comment this out, do a lookup for each request, and it will work
            //setupJNDIResources();
            String user = this.user;
            String pwd = this.pwd;
            // Read user and password from the request in case they are present
            if (req.getParameter("user") != null) {
                user = req.getParameter("user");
            }
            if (req.getParameter("pwd") != null) {
                pwd = req.getParameter("pwd");
            }
            wr.write("JNDI  User: *" + userjndi + "* and pwd: *" + pwdjndi + "*<p>");
            wr.write("Queue User: *" + user + "* and pwd: *" + pwd + "*<p>");
            // Obtain a connection using user/pwd
            QueueConnection conn = factory.createQueueConnection(user, pwd);
            QueueSession ses = conn.createQueueSession(true,
                    Session.SESSION_TRANSACTED);
            QueueSender sender = ses.createSender(queue);
            TextMessage msg = ses.createTextMessage();
            msg.setText("Hi there!");
            conn.start();
            sender.send(msg);
            ses.commit();
            sender.close();
            ses.close();
            conn.close();
        } catch (Exception e) {
            e.printStackTrace();
            try {
                wr.write(e.toString());
            } catch (Exception e2) {
                e2.printStackTrace();
            }
        } finally {
            try {
                wr.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

Thanks Tom,
Quite a useful response :-)
Leaving aside the fact that the WebLogic behaviour with JMS wrappers and secured queues seems not to work as the docs say...
Talking about workarounds:
Both workarounds you suggest work, but as you already noted, creating a new JNDI context just to inject credentials into the thread is overkill when high performance is needed.
I also found more information about the same issue here: http://sleeplessinslc.blogspot.com/2009/04/weblogic-jms-standalone-multi-threaded.html
He suggests the same workaround: injecting credentials.
So I tried the second approach, successfully injecting credentials into the thread using the security API.
This way, using JMS wrappers and injecting credentials into the thread, we get the best performance available: caching resources using the wrappers and using credentials in a somewhat efficient way.
    Now the test snippet looks like this:
import java.io.*;
import java.security.PrivilegedAction;
import java.util.Properties;
import javax.jms.*;
import javax.naming.*;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginException;
import javax.servlet.http.*;
import weblogic.jndi.Environment;
import weblogic.security.auth.Authenticate;

public class JMSWrapperCredentialsTest extends HttpServlet {
    QueueConnectionFactory factory = null;
    Queue queue = null;
    String jndiName = "java:comp/env/jms/QCF";
    String queueName = "jms/ColaEntradaConsultas";
    String user = "usuarioColas";
    String pwd = "12345678";
    String userjndi = "usuarioColas";
    String pwdjndi = "12345678";
    String serverT3URL = "t3://127.0.0.1:7007";

    public void init() {
        setupJNDIResources();
    }

    private void setupJNDIResources() {
        try {
            Properties props = new Properties();
            props.put("java.naming.factory.initial",
                    "weblogic.jndi.WLInitialContextFactory");
            props.put("java.naming.provider.url", serverT3URL);
            props.put("java.naming.security.principal", userjndi); // user
            props.put("java.naming.security.credentials", pwdjndi); // pwd
            InitialContext ic = new InitialContext(props);
            factory = (QueueConnectionFactory) ic.lookup(jndiName);
            queue = (Queue) ic.lookup(queueName);
        } catch (NamingException e) {
            e.printStackTrace();
        }
    }

    public void service(HttpServletRequest req, HttpServletResponse res) {
        final HttpServletRequest fReq = req;
        final HttpServletResponse fRes = res;
        PrivilegedAction action = new java.security.PrivilegedAction() {
            public java.lang.Object run() {
                performRequest(fReq, fRes);
                return null;
            }
        };
        try {
            Subject subject = createSingleSubject(serverT3URL, user, pwd);
            weblogic.security.Security.runAs(subject, action);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void performRequest(HttpServletRequest req, HttpServletResponse res) {
        res.setContentType("text/html");
        Writer wr = null;
        try {
            wr = res.getWriter();
            // Comment this out, do a lookup for each request, and it will work
            //setupJNDIResources();
            String user = this.user;
            String pwd = this.pwd;
            // Read user and password from the request in case they are present
            if (req.getParameter("user") != null) {
                user = req.getParameter("user");
            }
            if (req.getParameter("pwd") != null) {
                pwd = req.getParameter("pwd");
            }
            wr.write("JNDI  User: *" + userjndi + "* and pwd: *" + pwdjndi + "*<p>");
            wr.write("Queue User: *" + user + "* and pwd: *" + pwd + "*<p>");
            // Obtain a connection using user/pwd
            QueueConnection conn = factory.createQueueConnection(user, pwd);
            QueueSession ses = conn.createQueueSession(true,
                    Session.SESSION_TRANSACTED);
            QueueSender sender = ses.createSender(queue);
            TextMessage msg = ses.createTextMessage();
            msg.setText("Hi there!");
            conn.start();
            sender.send(msg);
            ses.commit();
            sender.close();
            ses.close();
            conn.close();
        } catch (Exception e) {
            e.printStackTrace();
            try {
                wr.write(e.toString());
            } catch (Exception e2) {
                e2.printStackTrace();
            }
        } finally {
            try {
                wr.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private Subject createSingleSubject(String providerUrl, String userName, String password) {
        Subject subject = new Subject();
        // WebLogic Environment class
        Environment env = new Environment();
        if (providerUrl != null)
            env.setProviderUrl(providerUrl);
        env.setSecurityPrincipal(userName);
        env.setSecurityCredentials(password);
        try {
            // The WebLogic Authenticate class will populate and seal the subject
            Authenticate.authenticate(env, subject);
            return subject;
        } catch (LoginException e) {
            throw new RuntimeException("Unable to Authenticate User", e);
        } catch (Exception e) {
            throw new RuntimeException("Error authenticating user", e);
        }
    }
}

Thanks a lot for the help

  • Java priority queue

Java provides PriorityQueue, and I have gone through its API. The implementation of PriorityQueue in Java does not provide methods to increase or decrease a key, and there must be a reason for it. But when I go through books on data structures, a lot of them talk about the increase/decrease-key operations of a priority queue.
So I am just wondering: why are increase/decrease-key operations not provided in PriorityQueue? I cannot come up with a reason, but I think there must be one. Does anybody have any thoughts on this, or is it just because the designers thought it's not needed?
I checked the source for PriorityQueue, and the heapify() method is declared private.

lupansansei wrote:
I have used Java's priority queue and I have written my own, but I have never come across the terms "increase or decrease key". Do you mean something like 'upheap' or 'downheap' in relation to a 'heap' implementation, meaning move an entry to its correct position if the key changes? If so, one should make the 'key' immutable so that those functions are not needed.
Yes, by increase or decrease key I mean 'upheap' or 'downheap'. Sorry, maybe my choice of words was not correct. I couldn't get what you mean by making the 'key' immutable. Can you please explain it?
If the key cannot change (i.e. it is immutable) then there is never a need to change the position of an element. And since PriorityQueue does not need to be implemented using a 'heap', there is no need for the heapify() method to be exposed; if it were implemented using a balanced tree or a skip list, heapify() would not be applicable.
I am using PriorityQueue and I need to update the priority of the elements, and I was wondering whether to implement the whole queue myself or look for a better way of using the PriorityQueue class. Do you have any suggestions for efficiently updating the priority of an element?
I have a priority queue implementation where elements know they are in a heap and know where they are in the heap. By doing this I can modify 'keys' and then move a value to its correct place in the queue in a very short time. The limitations this feature imposes on the elements, and the possibility of corrupting the heap, mean I don't often use it these days; it is far too error-prone.
These days in my simulations I normally remove an element from the queue, process it, and then create new elements and insert them back in the queue. This sometimes takes two lots of log(n) operations where my specialized priority queue takes just log(n) operations, but the code is so much more maintainable that I accept the hit.
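For reference, the remove-and-reinsert pattern described above looks like this with java.util.PriorityQueue (the Task type and priorities are invented for illustration):

import java.util.PriorityQueue;

// A task whose priority may need to change while queued (illustrative type).
class Task implements Comparable<Task> {
    final String name;
    int priority;

    Task(String name, int priority) {
        this.name = name;
        this.priority = priority;
    }

    public int compareTo(Task other) {
        return Integer.compare(this.priority, other.priority);
    }
}

public class RequeueDemo {
    public static void main(String[] args) {
        PriorityQueue<Task> queue = new PriorityQueue<>();
        Task a = new Task("a", 5);
        queue.add(a);
        queue.add(new Task("b", 1));

        // "Decrease key": remove the element (O(n) in PriorityQueue), change
        // the priority while it is outside the queue, then re-insert (O(log n)).
        // Mutating the priority while the task is still queued would silently
        // corrupt the heap ordering.
        queue.remove(a);
        a.priority = 0;
        queue.add(a);

        System.out.println(queue.poll().name); // prints "a"
    }
}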

  • Efficient method to read a setup file? Config VIs?

    Hello All:
I am developing a project with a large setup file which is linked to a global variable in my code. Each section in the setup file maps to a cluster in my global variable.
Typically I use the Config VIs to read and modify the setup file from my code. Of late I've been using the OpenG Config VIs, since they make it easy to work with clusters. Since speed is a major concern, is there an efficient way to do this, or an alternative to the Config VIs?
    Kudos always welcome for helpful posts

The real answer to this problem is to use reference objects - objects you can get to anywhere that contain arbitrary data that can be easily modified. I can recommend two: LV2 globals and single-element queues. Both are discussed, with code samples, in this thread. I have also attached a short and amusing tutorial on large-program development which addresses many of the issues you are seeing (LV 7.1 and 7.0 formats).
For complex configuration files, you can't beat one of the free generic hierarchical file formats. I usually use HDF5, but there are others. You can find a LabVIEW API for an older version of HDF5 here. Note that the learning curve is fairly steep and the VIs are not multi-thread safe, so don't try to use them in two places at once; if you do, you will get errors at best and corrupt your file at worst.
    Let us know if you have any more problems.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
    Attachments:
    LargeGUIApplicationsInLabVIEW.zip ‏711 KB
    LargeGUIApplicationsInLabVIEW_70.zip ‏735 KB

  • Paperq: a tool for managing a reading queue of academic literature

paperq is a command-line tool for managing a reading queue of academic literature. Its usage is simple: you add files to the queue, and then you run it without arguments to open the next file in the queue. The queue part of the code itself is rather simple, but paperq also offers some other nice features:
    Add files (-a option)
    Remove a file (-r option)
Display info on a file: given a BibTeX file, print the bibliographic information; otherwise print the file location (-i option)
    List all files (or bibliographic information) in the queue (-l option)
    Create an archive (tar.gz) of the papers in the queue, prepending the file names with the queue position (-z option)
    Operate on any file in the queue, instead of the head, via the -n option
    Peek at a file (open it, but don't remove it from the queue) via the -p option
    Print a file (-t option)
    Configurable file-opening command (xdg-open %s, by default)
    Documentation is available in the README file or on the website (see below). A man page is also included.
I've been using it myself now that I've finally stopped using Mendeley. I find it quite handy, so I've packaged it up to share with others.
    Screenshot showing the bibliographic info:
    Website
    AUR

I just uploaded version 1.1.1. I somehow missed a bug in which the first word of the author list was being chopped off. It's fixed now.
marttt wrote: Thanks very much for this! I can finally ditch some of my ugly scripts now. Have you considered adding other bibliographic data formats in the future? (Here is an interesting solution with YAML and Pandoc and, correspondingly, proper UTF-8 support.)
Glad to hear it could be of use to you! Honestly, this also just started out as a quick script, but little by little I started adding more handy features.
As for other bibliographic formats, it could be possible. I'm mainly familiar with BibTeX, so I would need to see how the other formats look.
In all honesty, the bibliographic info printing could be greatly expanded: only journal articles are supported at the moment, and mathematical expressions in titles (e.g. Greek characters) aren't yet supported. It might be more efficient in the long run to write a dedicated biblio-parsing program that loads a pre-written BibTeX-parsing library, rather than all the crazy sed work that's going on in there right now. In that case, supporting a different biblio file format would just be a matter of loading a library for it.

  • Data structure for simulation of message queue

    Hello,
I have undertaken a project to simulate the point-to-point and publish/subscribe protocols of message queueing. The whole project will be done in Java; there won't be any system-level programming. Which data structure in Java would be the most efficient one for the message queue?

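A common starting point (an assumption, not something from this thread) is to back each destination with a java.util.concurrent.BlockingQueue: point-to-point means competing receivers take from one shared queue, while publish/subscribe gives every subscriber its own queue that receives a copy of each message. A minimal sketch:

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Point-to-point: one queue, each message consumed by exactly one receiver.
// Publish/subscribe: every subscriber owns its own queue and gets a copy.
public class Topic {
    private final List<BlockingQueue<String>> subscribers = new CopyOnWriteArrayList<>();

    public BlockingQueue<String> subscribe() {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        subscribers.add(q);
        return q;
    }

    public void publish(String msg) throws InterruptedException {
        for (BlockingQueue<String> q : subscribers) {
            q.put(msg);   // each subscriber receives its own copy
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Topic topic = new Topic();
        BlockingQueue<String> sub1 = topic.subscribe();
        BlockingQueue<String> sub2 = topic.subscribe();
        topic.publish("hello");
        System.out.println(sub1.take() + " / " + sub2.take());
    }
}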

  • Need Expert's Advice - How to use LabVIEW Efficiently and Increase Readability

My application is fairly complex. It is a real-world testing application that simultaneously controls 16 servo motors running various stress-testing routines asynchronously, all at the same time. The application includes queues, state machines, subVIs, dynamically launched VIs, subpanels, semaphores, XML files, INI files, global variables, shared variables, physical analog and digital interfaces, and industrial networking. Just about every technique and trick that LabVIEW 2010 has to offer, and the kitchen sink as well.
Still, I am not happy with the productivity that LabVIEW 2010 has provided, nor with the readability of my final product.
Sometimes there are too many wires. Many of my state machines have a dozen or more wires just going from input to output, doing nothing, because one or two states in the machine need that variable. Yes, I could spend a lot of time bundling, unbundling, and rebundling those values, but I don't think that would improve things much.
We have had a long discussion about the use or misuse of local variables in this forum and I don't want to repeat that here. I use them sparingly, where I think it is relatively safe to do so. I also hit a bug whenever I try to copy some code that contains one or more local variables: on pasting the code, the result is something other than what I expected (I am not sure what), and I have to undo the paste and rebuild the code one object at a time.
I am also having trouble using Variable Property Nodes. When I cut and paste them, they often lose their reference object and I have to go back into the code and redo the Link To on each one. That wastes a lot of time and effort.
Creating subVIs is often not appropriate when the code makes many references to objects on the front panel. Some simple code will turn into a bunch of object references and dereferences, which also tends to take a lot of work to clean up and often does not help overall readability. I use subVIs when appropriate, but because of the interface overhead, not as often as I would like. My application already has over 150 subVIs.
The LabVIEW Clean Up Diagram function often works poorly. It leaves way too much empty space between objects, making my diagrams 3 to 4 24" screens wide. That is way too much and difficult to navigate effectively. The Clean Up function puts objects in strange places relative to other objects used nearby, does a poor job routing wires, and often makes deciphering diagrams more difficult rather than easier.
My troubleshooting strategies don't work well for large diagrams and complex applications. The diagrams are so complex that execution highlighting may take 20 minutes for a single pass. Probes help, but breakpoints aren't of much use, because single-stepping afterwards often takes you somewhere else in the same diagram. I can't follow the logic well doing this.
Using structures, I may have case structures nested 5 to 10 levels deep inside some event structure inside a while loop. Difficult to work with and not very readable.
All in all, I can make it work, but I am not happy about the end result.
I am hoping to benefit from some expert advice from those who are experienced in producing large complex applications efficiently, debugging efficiently, and producing readable diagrams that they are proud of.
    Can anyone offer their advice on how best to use the LabView features to achieve these results in complex applications? I hope that you can help show me the light.

I'm not an expert, but I'm charged out as one at work.
I am off today, so I'll share some thoughts that may help, or possibly inspire others to chime in. I have tried to continually improve my code in these areas and would greatly welcome others sharing their approaches and insights.
Note:
I do refactoring services to help customers in this situation. What I write here does not represent what we do in a code review, since our final deliverable there is a complete final design, and that is beyond the scope of this reply.
I'll comment on your points.
dbaechtel wrote:
My application is fairly complex. ...
While watching slow-motion replays of Olympic figure-skating competitions, I learned how the subtleties of how the launching skate is planted while entering a jump can make the difference between a good jump and a bad one.
In software, we plant our foot when we turn from design to development. I have to admit that there were a couple of times when I moved from design to development too early and found myself in a situation like the one you have described.
How do you know when the design is done?
Waterfall says "cross every 't' and dot every 'i'", Agile says "code now, worry about design later", and bottom-up says "the demo works, why bother designing?" (please feel free to comment on these over-simplifications, gang).
My answer is not much more helpful for those new to LabVIEW.
My design work is done when my design diagrams are more complicated than the LabVIEW diagrams they describe.
    dbaechtel wrote:
simultaneously controls 16 servo motors running various stress testing routines asynchronously and all at the same time. The application includes ...and the kitchen sink as well.
Have you posted any design documents you have? They would help greatly in letting us understand your application. More on diagrams later.
Any time I see multiple "variations on a theme", I think LVOOP (LabVIEW OOP). I'll spare you the LVOOP sales pitch, but I will testify that once you get your first class cloned off and running as a sibling (or child), you'll appreciate how nice it is to be able to use LVOOP.
Disclaimer:
If you don't already have an OOP frame of mind, the learning curve will be steep.
dbaechtel wrote:
Still I am not happy with the productivity that LabVIEW 2010 has provided, nor the readability of my final product. Sometimes there are too many wires... going from input to output, doing nothing... spend a lot of time bundling and unbundling and rebundling those values, but I don't think that would improve things much.
Full disclosure:
I used to be of the same opinion, and even used performance arguments to make my point. I have since changed my mind.
Let me illustrate (hopefully). This link (if it works for you, use the left-hand pane to navigate the hierarchy) shows an app I wrote about 10 years ago, in my early days of routing wires. Even the "Main" VI started to suffer from too many wires, as this preview from that link shows.
Clustering related data values using Type Definitions is the first method I would urge. It makes it easier to find the VIs that use the type def via Browse Relationships >> Callers. If I implement my code correctly, any problem I believe is associated with a particular piece of data that is a type def has to be in one of the VIs that use that type def, and is therefore easier to track down and maintain.
When I write "related data" I am referring to data normalization rules (which my wife knows and I picked up from her; I claim no expertise in this area), where only values that are used together are grouped. E.g. a cluster named File contains "Path" and "Refnum" but not "PhaseOfMoon". This works out nicely when first creating sub-VIs, since all of the data related to file operations is right there when I need it, and it leads into the next concept...
When I look at a value in a shift register on the diagram, taking up space, that is only used in a small subset of states, I consider using an Action Engine. This moves the wire from the current diagram into the Action Engine (AE) and cleans up the diagram. The AE brings with it built-in protection, so provided I keep all of the operations related to the Type Def inside the AE, I am protected when I start using multiple threads that need that data (trust me, it may not make a difference now, but end users are clever). So that extra wire is effectively encapsulated and abstracted away from the diagram you are looking at.
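For readers coming from outside LabVIEW, a rough textual analogy of an Action Engine, sketched here in Java (purely illustrative; the file-handling names are invented): one object owns the state, and every operation on it is serialized.

// Rough non-LabVIEW analogy of an Action Engine: one object owns the data
// (the "shift register"), and every operation on it goes through synchronized
// methods, so concurrent callers cannot interleave mid-operation.
public class FileActionEngine {
    private java.io.RandomAccessFile file;   // the encapsulated "Type Def" data
    private String path;

    public synchronized void open(String p) throws java.io.IOException {
        path = p;
        file = new java.io.RandomAccessFile(p, "rw");
    }

    public synchronized void write(byte[] data) throws java.io.IOException {
        file.write(data);                    // protected: no other action can run now
    }

    public synchronized void close() throws java.io.IOException {
        file.close();
        file = null;
    }
}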
But I said earlier that I would not sell LVOOP, so I'll show you what LVOOP-based LV apps look like, to contrast with what I was doing ten years ago in that earlier link. This is what the top-level VI looks like.
And this is the Analysis mode of that app.
I suppose I should not mention that LVOOP has wizards that automatically create the sub-VIs (accessors) that bundle/unbundle the clusters, should I?
    Continuing...
dbaechtel wrote:
We have had a long discussion about the use or misuse of Local variables... I also have a bug whenever I try and copy some code...
If you can simplify the code and duplicate the bug, please do so. We can get it logged and fixed.
dbaechtel wrote:
I am also having trouble using Variable Property Nodes....
That sounds like a usage issue. Posting code that illustrates the process will let us take a shot at figuring out what is happening.
dbaechtel wrote:
Creating subVIs is often not appropriate... My application already has over 150 sub VIs.
"Back in the day..." LV would not even try to create a sub-VI that involved controls/indicators. I often use sub-VIs to maintain a common GUI, but I do it on purpose, and when I find myself creating a sub-VI that involves a control/indicator, I hit ctrl-z immediately!
I figure out a way around it (an AE?) and then do the sub-VI.
Judging by your brief explanation, and assuming you do an LVOOP implementation, I would estimate that the app needs 750-1500 VIs.
dbaechtel wrote:
The LabView Clean Up Diagram function often works poorly....
The clean-up works fine for how I use it. After throwing together "scratch code" and debugging the "rat's nest", I'll hit clean-up as a first step. It does a good enough job on simple diagrams, and in some cases inspires me to structure the diagram in a way I may not have thought of. If I don't like it, ctrl-z.
Good design and modular implementation lead to smaller diagrams that just don't need three screens.
dbaechtel wrote:
My troubleshooting strategies don't work well for large diagrams and complex applications.... Can anyone offer their advice on how best to use the LabView features to achieve these results in complex applications? I hope that you can help show me the light.
Smaller diagrams single-step faster, since the sub-VIs run at full speed. I cringe thinking about a three-screen diagram with multiple probes open (shiver!).
Re: nested structures
Sub-VIs (wink, wink, nudge, nudge)
If it works, you have proven the concept is possible. That is the first step in an application.
I hope that gives you some ideas.
Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Which transport protocol is most efficient in the JMS adapter, and why?

    Hi all,
Which transport protocol is most efficient in the JMS adapter, and why?
Also, can anyone tell me how to check the queues in the Integration Server and on the receiver side?
If anyone can explain it rather than just providing a link, I will be delighted.
Thanks,
    Biplab

<i>Which transport protocol is most efficient in the JMS adapter and why?</i>
You have to select the JMS provider for the JMS adapter under Transport Protocol. The choice of JMS provider can be made according to your cost estimation.
http://help.sap.com/saphelp_nw04/helpdata/en/c1/739c4186c2a409e10000000a155106/frameset.htm
SonicMQ and IBM MQSeries are widely used.
<i>Also, can anyone tell me how to check the queues in the Integration Server and on the receiver side?</i>
smq1 - outbound queues
smq2 - inbound queues
Regards,
Prateek

  • Queues and notifiers - please help?!

Can someone explain to me how to effectively utilize queues and notifiers? I understand the fundamental difference between the two, but I am having a difficult time establishing when I should use one over the other, or how to utilize both at the same time.
Essentially what I currently have is a master loop that contains (among other things) an event structure. The event structure contains controls which dictate when notifiers are sent and destroyed. The notifiers "turn on" other loops (all within the same sequence frame as the master loop). These other loops are used for various controls, data logging, etc. Within some of these "sub-loops" I would like to step through a sequence of events - this is where I'm having trouble.
Here is what I would like to happen: when the user clicks a certain button, a notifier is sent to sub-loop A and it begins to run. The user then selects one of a few different options from a pull-down menu. Depending on the option selected, a specific set of events occurs, whose progress is dictated both by user interaction (pressing buttons) and by successful events (data being fed back). I would then like the sequence to "reset" and allow the user to select another option from the same menu - I don't want to exit all the way back out to the main loop and force the user to re-select sub-loop A again. However, if from the front panel the user selects sub-loop B, I would like sub-loop A to exit and sub-loop B to begin running. I have attached a sample of the basic layout I have so far (in LV 8.2). I apologize in advance; I'm still learning LabVIEW and I'm probably not going about this in the most efficient manner.
A couple of other things to note: I'm trying to avoid polling because speed is important. Also, the template I've attached is far from complete - it will require additional sub-loops and additional sequence loops (for which I have been advised to use queue-based state machines, which I'm also not familiar with).
    Any assistance you guys can provide would be great – examples, web links, etc.
    Thanks again!
    -Erik
    Attachments:
    LayoutExample.vi ‏72 KB

I do see a problem in the operation: if I hit the stop button, the inner loop of Auto starts running like crazy. I think it is because if the Wait for Notifier returns an error (due to the notifier being destroyed when the stop button is hit), it still sends out the default notifier value, which starts the default case structure operating, and thus its inner loops. And they don't stop.
You should probably get the loops out of the default case. Any case structure should have a default case that does nothing.
A better practice would be, in the stop button case, to send out a notifier for a specific exit case that tells all the outer loops to stop, and then destroy the notifiers.
In your four loops, you have the stop button NOR'ed with the other condition rather than OR'ed. So if stop = true, the OR results in True, but the negation turns that into False and the loop does not stop. In the top loop, you had the enum compared to Exit and then NOT'ed, so the loop would stop immediately if the enum was anything but Exit. Because the Boolean logic in these loops was convoluted, I think the loops weren't behaving the way they should.
I made some modifications to clean up the default case and the Boolean stop-loop logic in each of the loops. See attached.
    Attachments:
    MultipleLoopsV82 MOD.vi ‏90 KB

  • UCCE Total time spent in queue

    Hi all,
My customer wants to gather, from the call type reports, the time all abandoned or answered calls spent in queue.
For example: one call spent 1 minute in queue and then was answered by an agent, and another call spent 1 minute in queue and then abandoned. For this example, the total queue time is 2 minutes.
Does anyone know which field in the CT reports could provide this information?
    Thanks

This is readily available from the Route_Call_Detail table in your HDS.
where RouterErrorCode = 448 and RouterQueueTime > 0
would give you the abandoned calls. When RouterErrorCode is zero, the call has normally been answered. You can put the CallTypeID in here to filter.
You would need to run specific SQL queries against this table in the HDS, and you should constrain the search using DateTime to make sure your query runs efficiently. Be careful.
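For example, such a query could be run from Java via JDBC along these lines (a sketch only: the connection string, example CallTypeID, and exact DateTime handling are assumptions to verify against your HDS and the Schema Guide):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueueTimeReport {
    public static void main(String[] args) throws Exception {
        // Connection string and DateTime handling are assumptions; check the
        // ICM Schema Guide for your HDS version. RouterQueueTime is summed as
        // stored; verify its units there as well.
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://hds-host;databaseName=ucce_hds", "user", "pwd")) {
            String sql =
                "SELECT SUM(RouterQueueTime) AS TotalQueueTime " +
                "FROM Route_Call_Detail " +
                "WHERE CallTypeID = ? " +
                "  AND DateTime BETWEEN ? AND ? " +                       // bound the scan
                "  AND (RouterErrorCode = 448 OR RouterErrorCode = 0) " + // abandoned or answered
                "  AND RouterQueueTime > 0";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, 5000);                     // example CallTypeID
                ps.setString(2, "2014-01-01 00:00:00");
                ps.setString(3, "2014-01-02 00:00:00");
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("Total queue time: " + rs.getInt(1));
                    }
                }
            }
        }
    }
}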
    Check out the description of the table in the ICM Schema Guide.
    Regards,
    Geoff

Maybe you are looking for

  • ICal Calendars not showing in iTunes

    I have four calendars in iCal but only one of them shows up when I try to sync in iTunes using USB, and when I sync my iPhone (old model), no calendar information actually syncs onto the phone. I'm not using MobileMe and the iPhone & iTunes softward

  • Is Java back in Mountain Lion?

    Hi. I just did an internet restore and I noticed that in my safari plug-in folder there is a java applet and in my safari<help<installed plugins there is java stuff in there too. Spotlight search also shows Java Visual VM. Is this normal in mountain

  • Whem I try to update it tells me I do not have some permmissions

    I currently am running version 3.0.14 and I have not been able to update to newer version. I can download but when I open it a message comes up "Unable to open as you do not have all permissions" I have a Mac with OS X 10.6.5

  • User roles and role mapping

    I've just start as an intern in Change Management team that is helping to implement SD. My two tasks are to "develop SAP user roles specific to the new business processes" and "manage the role to position mapping for provision of security roles." Non

  • RSA tokens and AAA

    I have an RSA ACE sever and would liek to sue it for console port and VTY port access....DOES AAA support this and if so, what does the config look like...I have done it witH ACS, but would like to try it just going directly to the RSA securID server