Grep - filter twice?

I want to filter the output of time so that only the number for the real time is printed.
Typical output of time is:
user@user-VirtualBox:~$ time gzip -fc -1 rafale.bmp > rafale.bmp.gz
real 0m0.331s
user 0m0.156s
sys 0m0.060s
By using grep to filter for "real", I get this:
armen@armen-VirtualBox:~$ (time gzip -fc -1 rafale.bmp > rafale.bmp.gz) 2>&1 | grep real
real 0m0.280s
How can I get only the 0.280, without the "real"?

firecat53 wrote: 4th item for this google search.
It's the second item for me. Google isn't as reliable as you imply.
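For what it's worth, here are two approaches that should do it; both are sketches that assume bash's time builtin and the same gzip command as above.
  # Post-process: strip the "real" label and the minutes, keep only the seconds.
  (time gzip -fc -1 rafale.bmp > rafale.bmp.gz) 2>&1 | grep real | sed 's/real[[:space:]]*[0-9]*m\([0-9.]*\)s/\1/'
  # Or tell bash's time builtin to print only the elapsed real time in seconds
  # (e.g. "0.280"), with no labels at all.
  TIMEFORMAT='%R'
  time gzip -fc -1 rafale.bmp > rafale.bmp.gz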

Similar Messages

  • Consequence of applying the same filter twice to one scene

    What would be the consequence of applying the same filter twice (with the exact same settings) to one scene?
    For example, I drop the de-interlace filter twice (assuming the exact same settings on the filter) onto the same scene by accident and don't pick up on it later... would that do anything to the quality of the footage?

In general, applying the same filter twice to a clip applies its effect twice. It depends on the kind of effect, though: with de-interlace you may get a degraded image, since the first pass removes a field and substitutes a new field built by interpolation, and the second pass removes a field again. If it removes the same field as before, nothing should happen; but if it removes the other one, the image is degraded again.
    With broadcast safe I guess you won't gain quality by adding it twice.
    Piero

  • How to adjust broadcast safe filters correctly (Filter currently compromises picture)

    Hi guys,
Can anyone explain to me how to use the Broadcast Color filter properly? Before, when I was using FCP7, I just slapped on the Broadcast Safe filter and the edits passed QC every time. Now we've moved to Premiere Pro CC and I added the filter twice, once to set the luma and once to set the saturation (both default at 110). The problem is that it completely changes the picture: I get banding and the colour is noticeably clamped. I've been working a lot with animations recently, so I'm dealing with a lot of super-saturated and bright colours. If I set the broadcast filter to 120 on both luma and saturation, it doesn't look too bad. Does anyone know at what setting in the broadcast safe filter the colour becomes unsafe? Can I get away with 120 on the luma and sat? Please see images below:

    I never use that BC Filter!
Testing showed me it's unreliable and it degrades the image. Basically it uses a "keying" type approach and this can leave some very nasty visual remnants (patches).
I use the traditional method of working with scopes (Luma and RGB Parades) and levels (Levels Effect), and processing in various CC tools.
To legalise colour I work on the individual chroma (RGB channels). Red is usually the culprit that needs fixing.
The Levels Effect will automatically clip luma. I generally drop whites to 95-98 to be safe.
Unfortunately there is no soft clipping available in PPro or SpeedGrade. There is in Resolve.

  • How to get total number of result count for particular key on cluster

    Hi-
My application requires that the client side receive only a limited number of records for a 'Search Key' out of the total records found in the cluster. I also need the total result count for that key present on the cluster.
To get a subset of records I'm using an IndexAwareFilter and returning only a limited set from each individual node. Though I get the total number of records present on each individual node, it is not possible to return this count to the client from the IndexAwareFilter (the filter returns only a Binary set).
Is there any way I can get this number (the total result size) on the client side without returning the whole chunk of data?
    Thanks in advance.
    Prashant

    user11100190 wrote:
    Hi,
Thanks for suggesting a solution, it works well.
But apart from the count (cardinality), the client also expects the actual results. In this case, it seems that the filter will be executed twice (once for counting, then once again for generating the actual result set).
Actually, we need to perform paging. In order to achieve paging in an efficient manner we need the filter to return only PAGESIZE records and also the total count that meets the criteria.
    If you want to do paging, you can use the LimitFilter class.
If you want to have paging AND the total number of results, then at the moment you have to use two passes if you want to use out-of-the-box features, because LimitFilter does not return the total number of results (which, by the way, may change between two page retrievals).
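For illustration only, here is a rough two-pass sketch of that out-of-the-box approach (it assumes the standard Coherence Count and LimitFilter classes; the cache name and criteria are made up):
  import java.util.Set;
  import com.tangosol.net.CacheFactory;
  import com.tangosol.net.NamedCache;
  import com.tangosol.util.aggregator.Count;
  import com.tangosol.util.filter.EqualsFilter;
  import com.tangosol.util.filter.LimitFilter;

  public class TwoPassPagingSketch {
      public static void main(String[] args) {
          NamedCache cache = CacheFactory.getCache("orders");            // hypothetical cache name
          EqualsFilter criteria = new EqualsFilter("getStatus", "OPEN"); // hypothetical criteria
          int pageSize = 20;

          int total = (Integer) cache.aggregate(criteria, new Count()); // pass 1: total count

          LimitFilter page = new LimitFilter(criteria, pageSize);       // pass 2: one page of entries
          page.setPage(0);                                              // first page
          Set entries = cache.entrySet(page);

          System.out.println(total + " matches, " + entries.size() + " returned in this page");
      }
  }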
What we currently do is, the filter puts the total count in a static variable but returns only the first N records. The aggregator then combines this info into a single list and returns it to the client. (The list returned by the aggregator contains a special entry representing the count.)
This is not really a good idea, because if you have more than one user doing this operation then you will have problems storing more than one value in a single static variable if you use a cache service with a thread pool (thread-count set to larger than one).
    We assume that the aggregator will execute immediately after the filter on the same node, this way aggregator will always read the count set by the filter.
    You can't assume this if you have multiple client threads doing the same kind of filtering operation and you have a thread-pool configured for the cache service.
Please tell us if our approach will always work, and whether it will be efficient compared to using the Count class, which requires executing the filter twice.
No, it won't if you use a thread pool. Also, it might happen that Coherence will execute the filtering and the aggregation from the same client thread multiple times on the same node if some partitions were newly moved to the node which already executed the filtering+aggregation once. I don't know of anything which would even prevent this being executed on a separate thread concurrently.
The following solution may work, but I can't fully recommend it as it may leak memory depending on how exactly the filtering and aggregation are implemented (if it is possible that a filtering pass is done but the corresponding aggregation is not executed on the node because some partitions moved away).
When sending the cache.aggregate(Filter, EntryAggregator) call you should specify a unique key for each such filtering operation to both the filter and the aggregator.
    On the storage node you should have a static HashMap.
    The filter should do the following two steps while being synchronized on the HashMap.
    1. Ensure that a ConcurrentLinkedQueue object exists in a HashMap keyed by that unique key, and
2. Enqueue the total count you want to pass to the aggregator into that queue.
    The parallel aggregator should do the following two steps while being synchronized on the HashMap.
    1. Dequeue a single element from the queue, and return it as a partial total count.
    2. If the queue is now empty, then remove it from the HashMap.
    The parallel aggregator should return the popped number as a partial total count as part of the partial result.
    The client side of the parallel aware aggregator should sum the total counts in the partial result.
    Since the enqueueing and dequeueing may be interleaved from multiple threads, it may be possible that the partial total count returned in a result does not correspond to the data in the partial result, so you should not base anything on that assumption.
    Once again, that approach may leak memory based on how Coherence is internally implemented, so I can't recommend this approach but it may work.
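As a very rough illustration of that bookkeeping (plain Java only, with made-up names; the actual Coherence filter and aggregator wiring is omitted):
  import java.util.HashMap;
  import java.util.Map;
  import java.util.Queue;
  import java.util.concurrent.ConcurrentLinkedQueue;

  // Storage-node-side helper: the filter enqueues its per-pass count under the
  // request's unique key, and the parallel aggregator later dequeues it.
  public class CountHandoff {
      private static final Map<String, Queue<Integer>> COUNTS = new HashMap<String, Queue<Integer>>();

      // Called by the filter after it has computed the total count for this pass.
      public static void enqueue(String requestKey, int totalCount) {
          synchronized (COUNTS) {
              Queue<Integer> q = COUNTS.get(requestKey);
              if (q == null) {
                  q = new ConcurrentLinkedQueue<Integer>();
                  COUNTS.put(requestKey, q);
              }
              q.add(totalCount);
          }
      }

      // Called by the parallel aggregator; returns one partial total count, or
      // null if no filtering pass stored a count under this key.
      public static Integer dequeue(String requestKey) {
          synchronized (COUNTS) {
              Queue<Integer> q = COUNTS.get(requestKey);
              if (q == null) return null;
              Integer count = q.poll();
              if (q.isEmpty()) COUNTS.remove(requestKey);  // don't leak empty queues
              return count;
          }
      }
  }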
    Another thought is that since returning entire cached values from an aggregation is more expensive than filtering (you have to deserialize and reserialize objects), you may still be better off by running a separate count and filter pass from the client, since for that you may not need to deserialize entries at all, so the cost on the server may be lower.
    Best regards,
    Robert

  • Images with negative pixels value

Consider a simple black (0) and white (255) image composed of 3 vertical bands: the first is white, the second is black and the third is white. The image size is W columns by H lines. The band widths are W1, W2, W3 for bands 1, 2 and 3 respectively (W1+W2+W3=W).
    Consider now a first order edge detecting operation, for example Sobel for vertical edges :
    1 0 -1
    2 0 -2
    1 0 -1
The corresponding edge image is composed of H identical lines (because the image only varies horizontally). Each line is 0 everywhere, except on transitions: on column W1 there is a positive value of 4*255, and on column W1+W2 there is a negative value of -4*255.
Using the following Java source code, we obtain an image which has only the positive (>=0) part of each line:
      Image img = (new ImageIcon("bands.png")).getImage();
      BufferedImage bimg = toBufferedImage(img); // not part of JDK1.4
      ConvolveOp sobel = new ConvolveOp(new Kernel(3,3,new float[]{ 1, 0,-1,2,0,-2,1,0,-1})); // vertical Sobel filter
  BufferedImage bimg_cont = sobel.filter(bimg, null);
To have an edge image that contains the negative part of edges, we need a specific image format with a specific ColorModel and SampleModel. Ideally, we need a grayscale image (1 sample/pixel) whose elements can represent values from -4*255 to 4*255, so we need a signed type like DataBuffer.TYPE_SHORT, DataBuffer.TYPE_FLOAT or DataBuffer.TYPE_DOUBLE.
Note: it is possible to apply the filter twice, with a mirrored kernel, but the computation then takes twice the normal time (very inefficient).
Now, this is the question: does anyone have a solution for computing a Sobel filter (or any other filter that produces negative values) with a resulting image containing both positive and negative values?

The problem is the size of the numbers you have to work with. You can store values ranging from 0 to 255 (the maximum value of a byte-sized variable), so the total number of values you can store is 256. However, if you want to go from -255 to 255, that's about 512 values, which is twice the size of your maximum. You would need one more bit on that variable to do that. You could make the leftmost bit the positive or negative sign (known as a signed variable), which is how computers do it anyway. The only drawback is that it will only range from -128 to 127, which is half as much detail.
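A minimal sketch of one way to do it, assuming a grayscale (or black-and-white) source: hand-roll the horizontal Sobel into a raster backed by signed 16-bit samples, so values in -4*255..4*255 are never clamped. The class name and file name are made up.
  import java.awt.Point;
  import java.awt.image.BandedSampleModel;
  import java.awt.image.BufferedImage;
  import java.awt.image.DataBuffer;
  import java.awt.image.Raster;
  import java.awt.image.WritableRaster;
  import java.io.File;
  import javax.imageio.ImageIO;

  public class SignedSobel {
      public static void main(String[] args) throws Exception {
          BufferedImage src = ImageIO.read(new File("bands.png"));
          Raster in = src.getRaster();
          int w = src.getWidth(), h = src.getHeight();

          // One-band raster with signed 16-bit storage (DataBuffer.TYPE_SHORT).
          WritableRaster out = Raster.createWritableRaster(
                  new BandedSampleModel(DataBuffer.TYPE_SHORT, w, h, 1), new Point(0, 0));

          // Horizontal Sobel, reading band 0 of the source (fine for a grayscale image).
          for (int y = 1; y < h - 1; y++) {
              for (int x = 1; x < w - 1; x++) {
                  int gx = in.getSample(x - 1, y - 1, 0) - in.getSample(x + 1, y - 1, 0)
                         + 2 * in.getSample(x - 1, y, 0) - 2 * in.getSample(x + 1, y, 0)
                         + in.getSample(x - 1, y + 1, 0) - in.getSample(x + 1, y + 1, 0);
                  out.setSample(x, y, 0, gx);  // negative edge values are kept as-is
              }
          }
      }
  }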

  • Some dba and OS related confusions

    Hi All,
I have a few doubts related to the database and OS perspective. Could someone please help clear up these doubts and clarify the questions below:
    Qus1: Can we have two listener.ora file with same default port number 1521?
    Qus2: Why do we create two groups "dba" and "oinstall" while installing oracle database?
Qus3: If my database is in archive log mode and the archiver process is killed, will redo still be archived, and if not, what will happen?
Qus4: If I have a catalog database for RMAN metadata and the catalog database crashes, will I be able to use the RMAN backup sets to restore the database?
Qus5: What is needed to run the "some_path/root.sh" script at the last step of installing the Oracle database?
Qus6: How can we find out from the OS level which databases are running on my server? If any database is shut down or crashed, can we see that from the OS level as well? If yes, please specify the command.
Qus7: How can we configure the UDP protocol for my listener?
Qus8: If I am updating one table and the update query is running, and the OS shuts down in between, then when it comes back my transaction will be rolled back. Please let me know how this happens: is there a process that does this, or something else?
    Please clarify above questions.
    Regards,
    Michel

    Q1: Can we have two listener.ora file with same default port number 1521?
A1. On the same machine, only if they are listening on different virtual IPs; otherwise not. As previously stated, you can try this out for yourself.
    Q2: Why do we create two groups "dba" and "oinstall" while installing oracle database?
A2. For role separation, such that if you have multiple software installations and/or databases on the same machine, all will have access to the central inventory owned by the primary "oinstall" group, while the other installations/databases can be owned by other secondary groups (i.e. "dba", "dba1", and so on). Taken a step further, you can have different Oracle software owners with the same primary "oinstall" group but different secondary groups.
Q3: If my database is in archive log mode and the archiver process is killed, will redo still be archived, and if not, what will happen?
    A3. I like the idea of testing this out for yourself so I'll leave this to you ;-)
Q4: If I have a catalog database for RMAN metadata and the catalog database crashes, will I be able to use the RMAN backup sets to restore the database?
    A4. This depends on a few things but in the simplest case, assuming you have the control file, only need to recover a datafile, and the backup data is still available (as specified by the CONTROL_FILE_RECORD_KEEP_TIME parameter), then yes.
Q5: What is needed to run the "some_path/root.sh" script at the last step of installing the Oracle database?
    A5. You must have access to the root account (directly or indirectly via sudo as an example) at the command line.
Q6: How can we find out from the OS level which databases are running on my server? If any database is shut down or crashed, can we see that from the OS level as well? If yes, please specify the command.
    A6. There are two different questions here, namely check for running databases, and check database status. To check for running databases from the OS there are some options, the most basic is likely the command:
    ps -eaf | grep ora_pmon | grep -v grep
    The above checks for running processes, pipes that output through a 'grep' filter for commands having 'ora_pmon' (Oracle PMON process), which in turn is filtering out the command used to do the check itself (i.e. having 'grep' in the command).
To check the status you'll need to scrape the alert log file for certain strings. You could also use 'srvctl status database -d <dbname>' if that is available, but it will only give online/offline status.
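A rough sketch of that alert-log check (the diag path, instance name and message strings below are assumptions for an 11g-style layout; adjust them for your version and SID):
  # Assumed 11g diag layout and an instance called orcl.
  ALERT=$ORACLE_BASE/diag/rdbms/orcl/orcl/trace/alert_orcl.log
  # Show the most recent startup/shutdown markers recorded by the instance.
  grep -E 'Starting ORACLE instance|Instance shutdown complete' "$ALERT" | tail -5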
    Q7: How can we configure UDP protocol for my listener?
    A7. Don't worry about this, it's taken care of auto-magically. Read the Oracle networking documentation for information.
Q8: If I am updating one table and the update query is running, and the OS shuts down in between, then when it comes back my transaction will be rolled back. Please let me know how this happens: is there a process that does this, or something else?
A8. There's nothing that you need to do, it's handled internally by the database. I highly recommend reading through the Concepts documentation for a full understanding of this, as well as how things generally work inside the Oracle database.
    Hope this helps.

  • Rendering slowly gobbles up all the memory

    During the FCP render process I watch the Activity Monitor and the memory used slowly increases in a "Two steps forward, one step back" manner.
When I close FCP 5.0.4 the memory used drops slightly but still stays high.
    The machine eventually runs out of memory and crashes the render process.

    I tried it on another project with the same results.
    Nothing else is running in the background.
When I launch another app, the used memory jumps up a little, then goes back down to a slightly lower level.
    If it runs long enough to gobble up the memory it will crash every time. I have to babysit the render.
    I think that there are two reasons why it is happening. In one case it looks like I applied the same third party filter twice on a couple of clips. Fixed that. In the others I think that there is still the memory leak.
    I will try searching again for memory leak. A couple of nights ago at 2 AM I did but may have overlooked something.
    When I verify permissions it looks like all the widgets are what get changed. I disabled all but the clock (can't disable that one) with no change in the permissions behavior.

  • Drop outs with incoming phone calls

    My Broadband drops out when I get an incoming phone call. I've changed the filter (twice) but it hasn't resolved the issue. It seems to be OK with outgoing calls, and doesn't always drop with incoming calls - just about 90% of the time. Anyone got any ideas?
    Thanks
    Judy

Then you have a high resistance fault; you need to report a phone fault and get it fixed before your broadband will improve.
    phone faults

  • Randomly Generated Pixels

    Hi!
    I want to create a script that creates random (or near random) values for every single pixel of a document, similar to the "Add Noise..." filter, but with more control, such as "only b/w", "only grey", "all RGB" and "all RGB with alpha" and maybe even control over the probability distribution. Any idea how this could be tackled? Selecting every single pixel and applying a random color seems like something that would take a script hours...
    Why do I need this?
    I've started creating some filters in Pixel Bender (http://en.wikipedia.org/wiki/Adobe_Pixel_Bender). Since Pixel Bender doesn't really have any random generator (and workarounds are limited) I'm planning on passing on the random numbers through random pixel values. I'm well aware that this can only be used for filters in which Pixel Bender creates images from scratch, but that's the plan.
    Thanks!

    Understanding the details of the Add Noise filter is probably beyond the scope of just a short post.  Here is an approach to start learning what it does.
- Take a 50% gray level and make it a Smart Object.
- Open up the histogram panel (it should show a spike right at 50%).
- Apply the noise filter to the Smart Object in monochrome, building up from small percentages in small increments.
- You will notice that with the uniform-distribution option you end up with a uniform probability function over the entire tonal range, centred on the 50% gray you applied it to.
    There are a variety of ways to manipulate this function, through various blends.
Please note a couple of things:
1) I am using CS5 and, though it is not documented anywhere that I have seen, the Noise Filter works differently than in CS4. In CS4, if you run the same noise filter twice on two identical objects, my experience is that you get an identical bit-for-bit result (a random pattern, yet not independent of the next run of the filter). Manipulating Probability Density Functions (PDFs) per my previous post requires that each run of the Noise Filter starts with a different "seed" so that the result is independent of the previous run. CS5 does this: successive runs will create an independent noise result.
    2) PS does not equally randomize R, G, and B.  There are ways to get around this yet wanted to give you a heads up.
3) There are other ways to generate quick random patterns outside of PS and bring them in (using scripts); see the short sketch after this list. You would need to understand the format of the Photoshop Raw file. This type of file contains bytes with just the image pixel data. These files are easy to create and then load into PS. From a script (or, even faster, by calling a Python script) create this file, then load it into PS as a Photoshop Raw format file and use it as an overlay. There is no question that this is faster than trying to manipulate individual PS pixels through a script.
4) Please note that under Color Settings there is an option called Dither. If this is set, there are times when PS adds noise into the image (I leave mine turned off). It is used in a number of places in PS other than what the documentation implies (more than just when moving between 8-bit color spaces).
Good luck if you are going after making a plug-in. I have never invested in that learning curve.
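Along the lines of point 3 above, a tiny sketch of generating such a raw file outside Photoshop (Python; the file name, dimensions and single-channel 8-bit format are assumptions, and in PS you would open it as Photoshop Raw, entering the same width, height and one 8-bit channel):
  import os

  # 8-bit grayscale noise, one byte per pixel, written as a headerless raw file.
  width, height = 1024, 768    # must match what you enter when opening the raw file in PS
  with open("noise_1024x768.raw", "wb") as f:
      f.write(os.urandom(width * height))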

  • ISE 1.1.1 Sponsor Portal - Button "Clear Filter" appeares twice

I think I discovered a bug with the button labels when displaying Guest Users in the Sponsor Portal.
When using Internet Explorer (tested 7 and 9), both buttons are labeled "Clear Filter".
It appears the same in every language template.
When using Firefox (16.x) everything is OK.
    I added a screenshot to make clear which buttons are affected.

    FYI
You can access the Cisco ISE administrative user interface using the following browsers:
• Mozilla Firefox 3.6 (applicable for Windows, Mac OS X, and Linux-based operating systems)
• Mozilla Firefox 9 (applicable for Windows, Mac OS X, and Linux-based operating systems)
• Windows Internet Explorer 8
• Windows Internet Explorer 9 (in Internet Explorer 8 compatibility mode)
Note: The Cisco ISE GUI is not supported on Internet Explorer version 8 running in Internet Explorer 7 compatibility mode. For a collection of known issues regarding Windows Internet Explorer 8, see the "Known Issues" section of the Release Notes for Cisco Identity Services Engine, Release 1.1.x.

  • Filter Repeating Twice

    Hi,
I am facing the issue below in OBIEE 11g. I have a join between 2 tables with tdsnum=tdsnum, as you can see below. When I try to apply a report filter on TdsNum, I can see 2 filters (one on the fact and another on the dimension) in the query generated by OBIEE. This is causing my report to run very slowly. How can I resolve this issue?
    select distinct T2812.TDS_NUM as c1, T77753.TDS_NUM as c2
    from ORG_DIM T2812, SUMMARY_FACT T77753
    where ( T2812.TDS_NUM = T77753.TDS_NUM and T2812.TDS_NUM = 7 and T77753.TDS_NUM = 7 ) order by c1, c2

    Below is the query without applying the filter.
1) Is OBIEE designed in such a way that if we filter on a key column, the filter appears on both the dimension and the fact as above?
2) If so, how can we avoid it? Performance is becoming very slow.
    select distinct T2812.TDS_NUM as c1, T77753.TDS_NUM as c2
    from ORG_DIM T2812, SUMMARY_FACT T77753
    where ( T2812.TDS_NUM = T77753.TDS_NUM) order by c1, c2

  • "filter failed" HP printers fails to work over the network

    I got the common error "filter failed" when using my HP Color Laserjet 1600 through the network. Yet the printer works fine using directly from the computer it is connected via usb.
    This is my home network and I am not here often, but I am fairly sure everything worked out-of-the-box few weeks ago...
I already tried both suggestions from the wiki: I started and enabled avahi-daemon on all computers and set the permissions of the USB port to 0666:
    % lsusb | grep Hew
    Bus 001 Device 003: ID 03f0:3a17 Hewlett-Packard Printing Support
    % ls -l /dev/bus/usb/001/003
    crw-rw-rw- 1 root lp 189, 2 4 set 11.00 /dev/bus/usb/001/003
    I am stuck, any insights? What can I try?

Try making certain that you have the latest drivers installed, which you can get from HP. After downloading, go to System Prefs > Print & Fax, highlight the HP1320n and remove it (use the - key), then use the add (+) key and select the HP1320n icon to add the printer again. Set the HP as your default printer. When you go to print for the first time, make sure that the selected printer is the HP1320. It should go. Sometimes it needs to be re-created when there are multiples.
    Good luck and hope this helps

  • Clicking twice on a Form submit button

    I have problem that I need either help on solving or ideas on a different solution please.
I have some code in a request scope backing bean that sets a Boolean in a session scope bean to true. This indicates that the form has been submitted within this session.
So if a user tries to submit the same form in the same session, the form backing bean will check the session bean property, see that it is set to true, and the method which handles the Form's commandButton in the backing bean returns null, with a message saying that the form cannot be resubmitted.
Also within the session bean I set a property that indicates whether a credit card payment was made or not, and when the form backing bean sends its message on a form resubmit, it also informs the user whether a credit card transaction actually occurred.
This works fine when you fully submit the form, return to the form page and try to resubmit it. But (there always is a but) when originally submitting the form, if a user clicks the submit button once and then once again in quick succession (not a double click though), the session property that gets set to true when the credit card payment is made does not get resolved correctly. It gets resolved as false (this is how it is initialized), even though it was set to true as a result of the first click.
Only in the clicking-twice scenario do I get an exception in the log file, which I can't trace back. The request is directed through a filter, OrderAccessControlFilter, which seems to mask where the exception originally occurred.
    Sorry for the long explanation, I would appreciate any help.
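For context, a stripped-down sketch of the guard described above (the bean, service and helper names are hypothetical, not the actual application code):
    // Request-scoped backing bean method bound to the form's commandButton.
    public String submitOrder() {
        // Note: two rapid requests can both pass this check before either one
        // sets the flag, which matches the behaviour described above.
        if (orderSession.isFormSubmitted()) {                  // session-scoped flag
            addErrorMessage(orderSession.isPaymentTaken()
                    ? "The form was already submitted and the card was charged."
                    : "The form was already submitted; no payment was taken.");
            return null;                                       // stay on the page
        }
        boolean charged = paymentService.charge(order);        // hypothetical payment service
        orderSession.setPaymentTaken(charged);
        orderSession.setFormSubmitted(true);
        return "confirmation";
    }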
    Below is the exception:
    10-Feb-2005 10:55:16 com.sun.faces.lifecycle.InvokeApplicationPhase execute
    SEVERE: Index: 0, Size: 0
    java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
         at java.util.ArrayList.RangeCheck(ArrayList.java:507)
         at java.util.ArrayList.remove(ArrayList.java:392)
         at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:271)
         at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:381)
         at com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:75)
         at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:200)
         at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
         at com.syndero.lingo.order.OrderAccessControlFilter.doFilter(OrderAccessControlFilter.java:199)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:186)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
         at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
         at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
         at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
         at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
         at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
         at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
         at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
         at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
         at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
         at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
         at java.lang.Thread.run(Thread.java:534)
    10-Feb-2005 10:55:16 com.syndero.lingo.order.OrderAccessControlFilter doFilter
    SEVERE: Error in OrderAccessControlFilter:
    javax.servlet.ServletException: Index: 0, Size: 0
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:209)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
         at com.syndero.lingo.order.OrderAccessControlFilter.doFilter(OrderAccessControlFilter.java:199)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:186)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
         at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
         at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
         at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
         at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
         at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
         at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
         at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
         at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
         at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
         at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
         at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
         at java.lang.Thread.run(Thread.java:534)

    Hi,
    I am also facing the same problem. I have 2 buttons on my page, enable and disable. When I click enable and then disable in quick succession, the application crashes.
Also, I put some System.out statements in the button handlers and found out that a new thread is spawned for every request made. Each of these threads tries to service all the actions performed on the page.
For example, if I click on Enable and then Disable in my application, the flow is like:
    1. Thread 1 goes into the enable handler
    2. A new Thread 2 goes into the enable handler
    3. Thread 1 returns from enable action
    4. Thread 1 goes into the disable handler
    5. Thread 2 returns from enable handler
    6. Thread 1 returns from disable handler.
    7. Application crashes.
    Can anyone please explain this flow?
    Thanks,
    Mahajan.

  • How can I use the Get_ADgroup cmdlet to filter on specific groups?

    I'm trying to extract the users from specific groups
    Get-AdGroup -Filter {('Name -like "MyCo *Admin*"') -or (Name -like MyCo Helpdesk"')}
    This works: Get-ADGroup -Filter 'Name -like "MyCo *Admin*"'
I have a number of administrator groups and one with the name Helpdesk; I'm trying to create a list of all the users in any groups that have admin privileges.
I can run the command twice, export to CSV and merge both lists, but I would like to create a script that I can use regularly and have one CSV file when done.
    Any suggestions would be appreciated
    -Tom

    
So I figured out how to paste into the reply. Here is the rest of the code; I tried running it and got the error at the bottom.
Get-AdGroup -Filter {("Name -like 'MyCo *Admin*'") -or ("Name -like 'MyCo Helpdesk'")} |
    ForEach-Object{
    $hash=@{GroupName=$_.Name;Member=''}
    $_ | Get-ADGroupMember -ea 0 -recurs |
    ForEach-Object{
    $hash.Member=$_.Name
    New-Object psObject -Property $hash
    } |
    sort groupname,member
    Get-ADGroup : Error parsing query: '("Name -like 'MyCo *Admin*'") -or ("Name -like 'MyCo Helpdesk'")' Error Message: 'syntax error' at position: '2'.
    At line:1 char:12
    + Get-AdGroup <<<< -Filter {("Name -like 'MyCo *Admin*'") -or ("Name -like 'MyCo Helpdesk'")} |
    + CategoryInfo : ParserError: (:) [Get-ADGroup], ADFilterParsingException
    + FullyQualifiedErrorId : Error parsing query: '("Name -like 'MyCo *Admin*'") -or ("Name -like 'MyCo Helpdesk'")' Error Message: 'syntax error' at position: '2'.,Microsoft.ActiveDire
    ctory.Management.Commands.GetADGroup
    Tom
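For what it's worth, the -Filter argument is parsed by the AD cmdlet itself, so the quoted sub-expressions inside the script block are what trip the parser up. A plain string filter along these lines should work (a sketch using the group name patterns from the question; the output path is made up):
    # One pass over both group patterns, members flattened into a single CSV.
    Get-ADGroup -Filter "Name -like 'MyCo *Admin*' -or Name -like 'MyCo Helpdesk'" |
        ForEach-Object {
            $group = $_
            Get-ADGroupMember -Identity $group -Recursive -ErrorAction SilentlyContinue |
                Select-Object @{n='GroupName';e={$group.Name}}, @{n='Member';e={$_.Name}}
        } |
        Sort-Object GroupName, Member |
        Export-Csv -Path .\admin_members.csv -NoTypeInformation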

  • Apply Quartz filter to a PDF from command line

    Hello
    I'm trying to apply a quartz filter to a PDF document via the command line. I know it's possible with Python and there's a script at /Developer/Examples/Quartz/Python/filter-pdf.py that does just that. The problem with this is that the script is Python 2.3 and my OS X 10.5.8 is running Python 2.5 - so I'm getting the error message:
<Error>: The function `CGPDFDocumentGetMediaBox' is obsolete and will be removed in an upcoming update. Unfortunately, this application, or a library it uses, is using this obsolete function, and is thereby contributing to an overall degradation of system performance. Please use `CGPDFPageGetBoxRect' instead.
I don't know Python, but I checked the file and grep'ed the Developer folder to see if I could find where CGPDFDocumentGetMediaBox is being used, but no matter what I changed, I didn't manage to get the script working.
    So I guess my question is one of the following:
    How can I update the developer example to a new, functioning version?
    or how do I get rid of the deprecated functions that script is using?
    or is there any other way to apply a quartz filter to a PDF via the command line? (I've read that SIPS is able to accomplish this, but I couldn't find out how..)
    Thank you for any help!
    Cheers

There is a Quartz filter tool hidden in OS X which makes applying a Quartz filter to a PDF file on the command line a piece of cake. This is the syntax:
/System/Library/Printers/Libraries/quartzfilter inputfile filterpath outputfile
So if I wanted to convert big.pdf to small.pdf it would go like this:
/System/Library/Printers/Libraries/quartzfilter big.pdf /System/Library/Filters/Reduce\ File\ Size.qfilter small.pdf
The "Reduce File Size" filter isn't that great when it comes to reducing the file size for printing, though. Jerome Colas wrote a nice article about Quartz filters: http://discussions.apple.com/thread.jspa?messageID=6109445&tstart=0
    You can also create your own with the ColorSync utility in Mac OS X. A Quartz filter is nothing but an XML, so you could also generate one on the fly if necessary.
    Resource: http://macscripter.net/viewtopic.php?id=25916
