Parallel running of process and ROI?

Hello,
I have two questions:
1. How can I run two processes in parallel in LabVIEW, where one process is a webcam streaming video and the other takes an image snapshot from that video
   (I want to snapshot an image after selecting an ROI from the live camera view, while keeping the camera view live)?
2. How can I choose that the ROI will be the entire image frame?

Hello,
Below I've attached an example program that implements something similar.  It uses an event structure to listen for when to snap and save an image while performing an IMAQdx grab.  I'm unsure whether you are trying to extract the ROI, but you should be able to modify the event structure to perform processing on the image instead of saving it to file.  If you are looking to extract an ROI, I recommend you look at Extract Example.vi, which ships with the Vision Development Module.  You can find it by going to Help»Find Examples... and then navigating to Toolkits and Modules»Vision»Functions»Extract Example.vi.  It uses a Get Last Event invoke node to listen for when an ROI is placed and then passes the image to IMAQ Extract.  For your application you could implement something similar to my attachment: have one loop grabbing images, then place the Get Last Event invoke node for the image display in a parallel loop to perform the ROI extraction and post-processing.  You probably wouldn't need an event structure for this.  Notice that a second IMAQ Create.vi is used in both examples to create a memory space to hold the image for processing/saving.
If you want to programmatically set the ROI to the whole image, it may be easier to just clear the ROI using a property node and pass the image on as normal for post-processing.  Whatever processing you want to perform will automatically be applied to the whole image.
Regards,
Isaac S.
Applications Engineer
National Instruments
Attachments:
SnapForProcess.vi 25 KB
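Not a LabVIEW snippet, but for readers who think in text code, the two-loop idea described above corresponds very roughly to the Java sketch below (entirely hypothetical: the Frame type, grabFrame and processFrame are placeholders, not anything taken from SnapForProcess.vi). One loop keeps grabbing frames so the live view never stalls, while a parallel loop waits for a "snap" request and processes the most recent frame, much like the Get Last Event / second IMAQ Create arrangement.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

public class TwoLoopSketch {
    // Placeholder for whatever an acquired frame looks like in your setup.
    static class Frame { final long timestamp = System.nanoTime(); }

    // Latest frame published by the grab loop (the "live view").
    static final AtomicReference<Frame> latest = new AtomicReference<>();
    // "Snap" requests, playing the role of the event structure / Get Last Event.
    static final BlockingQueue<Object> snapRequests = new ArrayBlockingQueue<>(1);

    public static void main(String[] args) {
        Thread grabLoop = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                latest.set(grabFrame());        // keep the live view up to date
            }
        });
        Thread processLoop = new Thread(() -> {
            try {
                while (true) {
                    snapRequests.take();        // block until the user asks for a snapshot
                    Frame copy = latest.get();  // like copying into a second image buffer
                    if (copy != null) {
                        processFrame(copy);     // ROI extraction / saving would go here
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        grabLoop.start();
        processLoop.start();
        snapRequests.offer(new Object());       // simulate one "snap" button press
    }

    static Frame grabFrame() {
        try { Thread.sleep(33); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return new Frame();                     // ~30 fps stand-in for the camera grab
    }

    static void processFrame(Frame f) {
        System.out.println("processing frame grabbed at " + f.timestamp);
    }
}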

Similar Messages

  • LV 8.21: strange behavior with DAQ tasks, parallel running VI's and shift registers

    Hello,
    I have made a VI using DAQmx VIs. The VI uses shift registers to store DAQ tasks and other (internal) information. I have implemented several modes of operation (an enum control with a case structure) like 'init', 'read AD', 'config AD', etc. If I use this multi-mode VI in a single main VI, everything works as expected. I have attached a jpg that shows one example where the DAQ VI is called from 2 parallel running while loops. One loop acquires the data (LOOP 1) while the other loop configures the acquisition task (LOOP 2). If I implement the same thing by putting LOOP 2 in a different VI that runs separately from the first VI, I get an error message (200428):
    Possible reason(s):
    Measurements: Value passed to the Task/Channels In control is invalid.
    The value must refer to a valid task or valid virtual channels.
    Task Name: EasyDAQ_AD
    Of course, the second VI is started manually after the 1st VI has passed the initialization part. The error message is triggered from the 1st VI, which executes the DAQ task. From my understanding of the LV execution system this seems like a bug to me. Does anyone have an idea what could go wrong here?
    klaus
    Attachments:
    problem.jpg 30 KB

    1. In general, this kind of technique is something I've been using successfully for years.  (Ben recently wrote up a very nice treatment of these "Action Engines" as a "Community Nugget.")  So I don't start by expecting this to be a bug in the LV execution system.
    2. Your description of the problem sounds almost backwards.  You say you manually start the 2nd vi ("Config AD") *after* running the 1st vi ("Read AD").  Seems like you'd need to do the Config 1st and then do the Read, right?   I kinda suspect you actually did it in the right order, but described it wrong.
    3. The next likely scenario is that the Config failed, but you didn't trap the error and were unaware of it.  Then it makes sense that the Read would also fail.
    4. A couple of issues I regularly deal with in these DAQ Action Engines involve internal error handling.  I often keep a shift register inside to store errors generated inside the Action Engine.  But it can get a little tricky doing sensible things with both the internal error and any other error being wired in as input.
    I said all that so I can say this: if you have complex nested case statements, or lots of different action cases to handle, double-check that the task wire makes it all the way from the left shift register to the right.  Sometimes it gets lost if it goes through a case statement, the output tunnel is set to "use default if unwired", and 1 or more of the cases don't wire the output.
    -Kevin P.
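    For readers who don't work in LabVIEW, the Action Engine idea above corresponds very roughly to the following Java sketch (hypothetical, with the DAQ calls stubbed out): a single serialized entry point, a mode selector, and internal state, including a stored error, that persists between calls the way a shift register does.

    // Hypothetical "Action Engine" style wrapper: all state lives inside, calls are
    // serialized, and an internal error persists across calls like a shift register.
    public class DaqActionEngine {
        public enum Action { INIT, CONFIG_AD, READ_AD, CLOSE }

        private Object task;            // stand-in for the DAQmx task reference
        private Exception storedError;  // internal error "shift register"

        public synchronized double[] call(Action action) throws Exception {
            // If an earlier call failed, surface that error instead of silently continuing.
            if (storedError != null && action != Action.INIT) {
                throw storedError;
            }
            try {
                switch (action) {
                    case INIT:
                        storedError = null;
                        task = createTask();      // placeholder for task creation
                        return new double[0];
                    case CONFIG_AD:
                        configureTask(task);      // placeholder for configuration
                        return new double[0];
                    case READ_AD:
                        return readSamples(task); // placeholder for the actual read
                    case CLOSE:
                        task = null;
                        return new double[0];
                    default:
                        return new double[0];
                }
            } catch (Exception e) {
                storedError = e;                  // remember it for the next caller
                throw e;
            }
        }

        private Object createTask() { return new Object(); }
        private void configureTask(Object t) { }
        private double[] readSamples(Object t) { return new double[] { 0.0 }; }
    }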

  • I run several processes and now they only get some 30% cpu in total, pre-Mavericks they got 100% each – what has happened?

    From Terminal I use gmake to run rather calculation-intensive processes, usually in several threads for several days. I need to process some 120000 files containing around 10 TB of data in total. The tools I use are implemented in C++ and have no GUI. Pre-Mavericks this worked as expected: running four threads I got 4x100% CPU. After upgrading to Mavericks I typically only get 30% in total. No other process is "stealing" CPU time; the computer is just sitting idle...
    Basically, my calculations now run at 1/16 of the original pre-Mavericks speed due to the lack of CPU time.
    In Activity Monitor I can tell that App Nap is off for these processes.
    I disabled timer coalescing, but it didn't help.
    Why are my processes somehow given such low priority?
    How can I solve this issue?

    Thanks for the reply. They might execute slightly faster after recompilation, but I don't really see why that would get me more CPU time. They execute as fast now as they did pre-Mavericks in terms of user+system time, but it does not help, as they only get 1/16 as much CPU resources...
    Anyhow, one should try everything. I did recompile and it didn't help.

  • Examining other running Java processes

    I've written a process which I want to keep running in the background. I'd like to write an application that I can run which will determine if my first process is already started, and if not, start it. I'd like to be able to use API methods to examine the state of my machine to determine what processes are available (similar to the way a debugger can find a list of running processes) rather than coordinating this through a file on disk. What classes should I examine to be able to do this?

    Look at the JMX classes; you can run JConsole to see that it finds running Java processes and lets you connect to them.
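    As a concrete starting point, here is a small sketch that lists local JVMs with the Attach API, which is what JConsole itself uses to discover processes before connecting over JMX. It needs a JDK rather than a plain JRE (tools.jar on Java 6-8, the jdk.attach module on 9 and later), and "MyBackgroundService" is just a placeholder for whatever main class or jar your background process is launched with.

    import com.sun.tools.attach.VirtualMachine;
    import com.sun.tools.attach.VirtualMachineDescriptor;

    public class FindMyProcess {
        public static void main(String[] args) {
            String wanted = "MyBackgroundService";  // placeholder for your process's main class
            boolean running = false;

            // Enumerate the Java processes visible on this machine.
            for (VirtualMachineDescriptor vmd : VirtualMachine.list()) {
                System.out.println(vmd.id() + "  " + vmd.displayName());
                if (vmd.displayName().contains(wanted)) {
                    running = true;
                }
            }

            if (!running) {
                System.out.println(wanted + " not found - start it here (e.g. via ProcessBuilder).");
            }
        }
    }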

  • Batch processing and parallelism

    I have recently taken over a project that is a batch application that processes a number of reports. For the most part, the application is pretty solid from the perspective of what it needs to do. However, one of the goals of this application is to achieve good parallelism when running on a multi-CPU system. The application does a large number of calculations for each report, and each report is broken down into a series of data units. The threading model is such that only, say, 5 report threads are running, with each report thread processing, say, 9 data units at a time. When the batch process executes on a 16-CPU Sun box running Solaris 8 and JDK 1.4.2, the application utilizes on average 1 to 2 CPUs, with some spikes to around 5 or 8 CPUs. Additionally, the average CPU utilization hovers around 8% to 22%. Another oddity of the application is that when the system is processing the calculations, and not reading from the database, the CPU utilization drops rather than increases. So the goal of good parallelism is not being met right now.
    There is a database involved in the app and one of the things that does concern me is that the DAOs are implemented oddly. For one thing, these DAO's are implemented as either Singletons or classes with all static methods. Some of these DAO's also have a number of synchronized methods. Each of the worker threads that process a piece of the report data does make calls to many of these static and single instance DAO's. Furthermore, there is what I'll call a "master DAO" that handles the logic of what work to process next and write the status of the completed work. This master DAO does not handle writing the results of the data processing. When each data unit completes, the "Master DAO" is called to update the status of the data unit and get the next group of data units to process for this report. This "Master DAO" is both completely static and every method is synchronized. Additionally, there are some classes that perform data calculations that are also implemented as singletons and their accessor methods are synchronized.
    My gut is telling me that, in order to achieve good parallelism, having each thread call a singleton or a series of static methods is not going to help. Being new to parallel systems, I am not sure that I am right in even looking there. Additionally, if my gut is right, I don't know quite how to articulate the reasons why this design will hinder parallelism. I am hoping that anyone with any experience in parallel system design in Java can lend some pointers here. I hope I have been able to be clear while trying not to reveal much of the finer details of the application :)

    There is a database involved in the app and one of the things that does concern me is that the DAOs are implemented oddly. For one thing, these DAO's are implemented as either Singletons or classes with all static methods. Some of these DAO's also have a number of synchronized methods. Each of the worker threads that process a piece of the report data does make calls to many of these static and single instance DAO's. Furthermore, there is what I'll call a "master DAO" that handles the logic of what work to process next and write the status of the completed work. This master DAO does not handle writing the results of the data processing. When each data unit completes, the "Master DAO" is called to update the status of the data unit and get the next group of data units to process for this report. This "Master DAO" is both completely static and every method is synchronized. Additionally, there are some classes that perform data calculations that are also implemented as singletons and their accessor methods are synchronized.
    What I've quoted above suggests to me that what you are looking at may actually be good for parallel processing. It could also be an attempt that didn't come off completely.
    You suggest that these synchronized methods do not promote parallelism. That is true, but you have to consider what you hope to achieve from parallelism. If you have 8 threads all running the same query at the same time, what have you gained? More strain on the DB and the possibility of inconsistencies in the data.
    For example:
    Scenario 1:
    Say you have a DAO retrieval that is synchronized. The query takes 20 seconds (for the sake of the example). Thread A comes in and starts the retrieval. Thread B comes in and requests the same data 10 seconds later. It blocks because the method is synchronized. When Thread A's query finishes, the same data is given to Thread B almost instantly.
    Scenario 2:
    The method that does the retrieval is not synchronized. When Thread B calls the method, it starts a new 20-second query against the DB.
    Which one gets Thread B the data faster while using fewer resources?
    The point is that it sounds like you have a bunch of queries where the results of those queries are being used by different reports. It may be that the original authors set it up to fire off a bunch of queries and then start the threads that will build the reports. Obviously the threads cannot create the reports unless the data is there, so the synchronization makes them wait for it. When the data gets back, the report thread can continue on to get the next piece of data it needs; if that isn't back, it waits there.
    This is actually an effective way to manage parallelism. What you may be seeing is that the critical path of data retrieval must complete before the reports can be generated. The best you can do is retrieve the data in parallel and let the report writers run in parallel once the data they need is retrieved.
    I think this is what was suggested above by matfud.
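    As a tiny illustration of Scenario 1 (hypothetical code, not taken from the application being discussed): the retrieval is synchronized and caches its result, so a second thread that arrives mid-query blocks and then gets the data almost for free instead of firing a duplicate 20-second query.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical DAO illustrating Scenario 1: synchronized, cached retrieval.
    public class ReportDataDao {
        private final Map<String, Object> cache = new HashMap<String, Object>();

        public synchronized Object getReportData(String key) {
            Object data = cache.get(key);
            if (data == null) {
                data = runExpensiveQuery(key);  // the "20 second" query; runs at most once per key
                cache.put(key, data);
            }
            return data;                        // later callers return here almost instantly
        }

        private Object runExpensiveQuery(String key) {
            try { Thread.sleep(20000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return "result for " + key;
        }
    }

    In Scenario 2 the method would be unsynchronized and uncached, so Thread B would start its own 20-second query against the DB.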

  • Parallel processing and time-out

    Hi all,
    I've got a problem with doing a great number of postings.
    Because the time elapsed for these postings is too long, I tried to do it with a function module and "IN BACKGROUND TASK". There is also an alternative, "STARTING NEW TASK".
    But I figured out that both of these variants start dialog work processes. I think there is a timeout for dialog WPs of 300 seconds by default.
    Will this timeout kill the processes or not?
    And which alternative is the best for doing some parallel processing?
    Thanks in advance
    regards
    Olli

    Hi Oliver,
    Some solutions here:
    1. You could increase the value of the dialog time-out (although this can only go to a maximum of 600 seconds). This parameter is in the SAP profiles (parameter name = rdisp/max_wprun_time).
    2. As suggested by Christian, decrease the amount of work within one LUW. You can do this by inserting (from time to time) a COMMIT WORK. This COMMIT WORK also resets the timeslice counter of the running dialog process (thus giving it an extra timeslice to work with). The downside is that if you have many related objects to modify, your ROLLBACK options become limited.
    3. Split the process into several tasks and put them to work in the background (by scheduling jobs for them).
    4. Program your own parallel handler (see sample code). With this you could process document by document (as if each is done separately). The number of dialog processes (minus 2) is the limit you could use.
    Sample code:
    * Declarations
    CONSTANTS:
      opcode_arfc_noreq TYPE x VALUE 10.
    DATA:
       server       TYPE msname,
       reason       TYPE i,
       trace        TYPE i VALUE 0,
       dia_max      TYPE i,
       dia_free     TYPE i,
       taskid       TYPE i VALUE 0,
       taskname(20) TYPE c,
       servergroup  TYPE rzlli_apcl.
    * Parallel processes free check
    CALL 'ThSysInfo' ID 'OPCODE' FIELD opcode_arfc_noreq
                     ID 'SERVER' FIELD server
                     ID 'NOREQ'  FIELD dia_free
                     ID 'MAXREQ' FIELD dia_max
                     ID 'REASON' FIELD reason
                     ID 'TRACE'  FIELD trace.
    IF dia_free GT 1.
      SUBTRACT 2 FROM dia_free.
      SUBTRACT 2 FROM dia_max.
    ENDIF.
    * You must leave some dialogs free (otherwise no one can logon)
    IF dia_free LE 1.
      MESSAGE e000(38)
         WITH 'Not enough processes free'.
    ENDIF.
    * Prepare your run
    ADD 1 TO taskid.
    WRITE taskid DECIMALS 0 TO taskname LEFT-JUSTIFIED.
    CONDENSE taskname.
    * Run your pay load
    CALL FUNCTION 'ZZ_YOUR_FUNCTION'
      STARTING NEW TASK taskname
      DESTINATION IN GROUP servergroup
      EXPORTING
    *   Your exporting parameters come here
      EXCEPTIONS
        communication_failure  = 1
        system_failure         = 2
        RESOURCE_FAILURE       = 3
        OTHERS                 = 4.
    Of course you would put this within a loop and let your "payload" function fire off for each document.
    You MUST check the number of free processes just before you run the payload.
    And as a last reminder: do NOT use the ABAP statement WAIT (this will disrupt the counting of free processes).
    Hope this will help you,
    Regards,
    Rob.

  • I attempt to open a second window from the icon but it does not open; then when Firefox is closed it will not reopen because it is still running in processes, with no window displayed, until you kill the process and restart Firefox.

    I attempted to open a new window from the Firefox icon but nothing happens. I then went on browsing and closed Firefox but was later unable to open it. I checked processes and it was already running but there was no window displayed. I am running Windows 7 Professional.
    This is repeated any time I already have the browser open and wish to open a second instance.

    '''<u>Open a second window (not a second tab, that is different) when Firefox is already running and displayed on the monitor</u>'''
    *Firefox button > New Tab > New Window
    *CTRL+N
    *'''''If using the Menu Bar''''': File > New Window
    **To '''''temporarily''''' display and make choices from the Menu Bar press the ALT key or the F10 key
    **Also see: https://support.mozilla.com/en-US/kb/Menu%20bar%20is%20missing
    '''<u>Firefox "hang on exit"</u>'''
    #Stop the Firefox process:
    #*[http://kb.mozillazine.org/Kill_application Mozillazine - Kill application]
    #*Windows 7 users click [http://www.techrepublic.com/blog/window-on-windows/reap-the-benefits-of-windows-7s-task-manager/2576 here]
    #Why Firefox may hang:
    #*[http://support.mozilla.com/en-US/kb/Firefox+hangs Firefox hangs] (see Hang at exit)
    #*[http://kb.mozillazine.org/Firefox_hangs Firefox hangs (Mozillazine)] (see Hang at exit and Closing Firefox properly)
    #*[https://support.mozilla.com/en-US/kb/Firefox+is+already+running+but+is+not+responding Firefox is already running but is not responding]
    #Use Firefox Safe Mode to find a problem with an Extension or Plugin:
    #*Don't check anything when entering Safe Mode, just continue
    #*If the problem does not occur in Safe Mode, it is probably an Extension or Plugin causing the problem
    #*See:
    #**[[Safe Mode]] and [http://kb.mozillazine.org/Safe_Mode Safe Mode (Mozillazine)]
    #**[http://support.mozilla.com/en-US/kb/Troubleshooting+extensions+and+themes Troubleshooting extensions and themes]
    #**[http://support.mozilla.com/en-US/kb/Troubleshooting+plugins Troubleshooting plugins]
    #**[http://support.mozilla.com/en-US/kb/Basic+Troubleshooting Basic Troubleshooting]
    '''If this reply solves your problem, please click "Solved It" next to this reply when <u>signed-in</u> to the forum.'''

  • I have an iMac 5.1 which is running OSx10.5.8 - it runs very slowly and I can't find if there is a particular problem. In the process, I have tried to install 10.6 [I have a new iMac] and the 5.1 won't accept the install discs.

    I have an iMac 5.1 which is running OSx10.5.8 - it runs very slowly and I can't find if there is a particular problem. In the process, I have tried to install 10.6 [I have a new iMac] and the 5.1 won't accept the install discs.

    As you have discovered... you cannot use the install discs from another Mac...
    Re the iMac running OS X 10.5.8... and for your new Mac...
    See Here for keeping your Mac Happy...
    http://support.apple.com/kb/HT1147
    http://www.thexlab.com/faqs/maintainingmacosx.html
    http://www.thexlab.com/faqs/performance.html

  • Aim to process all files in folders on desktop to run through photoshop and save in multiple locations

    Aim to process all files in folders on desktop to run through photoshop and save in multiple locations
    Part one:-
    Gather information from desktop to get brand names and week numbers from the folders
    Excluding folders on desktop beginning with "2" or "Hot"
    Not sure about the list of folders, but I have got this bit to work with:
    set folderPath to "Hal 9000:Users:matthew:Desktop:DIVA_WK30_PSD" --<<this would be gained from the items on the desktop
    set {oldTID, my text item delimiters} to {my text item delimiters, ":"}
    set folderName to last text item of folderPath
    set my text item delimiters to "_WK"
    set FolderEndName to last text item of folderName
    set brandName to first text item of folderName
    set my text item delimiters to "_PSD"
    set weekNumber to first text item of FolderEndName
    set my text item delimiters to oldTID
    After running this I have enough information to create folders in multiple locations (I need to know where they are so that Photoshop can later save them in those multiple locations).
    So I need the following folders created
    Locally
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName: brandName + "_WK" + weekNumber + "_LR" --(Set path for Later)PathA
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName: brandName + "_WK" + weekNumber + "_HR"--(Set path for Later)PathB
    Network
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:"Week" + weekNumber
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:"Week" + weekNumber:brandName + "_WK" + weekNumber + "_LR"  --(Set path for Later)PathC
    Volumes:GEN:Website_Images --(no need to create folder just set path)PathD
    FTP (Still as a normal Volume) So like another Network
    Volumes:impulse:"Week" + weekNumber
    Volumes:impulse:"Week" + weekNumber:Brand
    Volumes:impulse:"Week" + weekNumber:Brand:brandName + "_WK" + weekNumber + "_LR"  --(Set path for Later)PathE
    Volumes:impulse:"Week" + weekNumber:Brand:brandName + "_WK" + weekNumber + "_HR"  --(Set path for Later)PathF
    I like to think that is the end of Part 1.
    Part 2
    Take the images (PSDs) from those folders relevant to the brand, then possibly run more AppleScript that opens, flattens, and then saves them in the locations above.
    For example….
    An image in folder DIVA_WK30_PSD will then run an AppleScript in Photoshop, let's call it DivaProcessImages, within which we then save to PathA, PathB, PathC, PathD, PathE, PathF; the folder path of C should therefore look like this
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:Week30:DIVA_WK30_LR and of course save the image as original filename.
    Then from the next folder
    An image in folder Free_WK30_PSD will then run an AppleScript in Photoshop, let's call it FreeProcessImages, within which we then save to PathA, PathB, PathC, PathD, PathE, PathF; the folder path of C should therefore look like this
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:Week30:Free_WK30_LR and of course save the image as original filename.
    The Photoshop AppleScript will hopefully be easier, as it should be a clearer step-by-step process without any ifs and buts.
    Now for the coffee!!

    Hi,
    MattJayC wrote:
    Now to the other part: where each folder was created (and those that already existed), how do I set them as variables?
    For example,
    set localBrandFolder_High_Res to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", localBrandFolder)
    This line was used to create more than one folder as it ran through the folders on the desktop. The next part is that I will need to reference them to save files to them.
    You can use records.
    Examples:
    If you want the path of localBrandFolder_High_Res of "Diva", and "Diva" is the second folder of the Desktop,
    you get the path with this: localBrandFolder_High_Res of record 2 of myRecords
    If you want the path of localWeekFolder in the first folder of the Desktop,
    you get the path with this: localWeekFolder of record 1 of myRecords
    Here is the script
    set myRecords to {}
    set dtF to paragraphs of (do shell script "ls -F ~/Desktop | grep '/' | cut -d'/' -f1")
    repeat with i from 1 to number of items in dtF
        set this_item to item i of dtF
        if this_item does not start with "2_" and this_item does not start with "Hot" then
            try
                set folderPath to this_item
                set {oldTID, my text item delimiters} to {my text item delimiters, ":"}
                set folderName to last text item of folderPath
                set my text item delimiters to "_WK"
                set FolderEndName to last text item of folderName
                set brandName to first text item of folderName
                set my text item delimiters to "_PSD"
                set weekNumber to first text item of FolderEndName
                set my text item delimiters to oldTID
            end try
            try
                set this_local_folder to "Hal 9000:Users:matthew:Pictures:2011-2012"
                set var1 to my getFolderPath("WK" & weekNumber, this_local_folder)
                set var2 to my getFolderPath(brandName, var1)
                set var3 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var2)
                set var4 to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", var2)
                --set up names for destination folders and create over the network, including an already existing folder
                set this_Network_folder to "DCKGEN:Brands:Zoom:Brand - Zoom:Upload Photos:2012:"
                set var5 to my getFolderPath("WK" & weekNumber, this_Network_folder)
                set var6 to my getFolderPath(brandName, var5)
                set var7 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var6)
                set website_images to "DCKGEN:Website_Images:"
                --set up names for destination folders and create over the network for FTP collection (based on a mounted drive)
                set this_ftp_folder to "Impulse:"
                set var8 to my getFolderPath("Week" & weekNumber, this_ftp_folder)
                set var9 to my getFolderPath(brandName, var8)
                set var10 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var9)
                set var11 to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", var9)
                set end of myRecords to ¬
      {localWeekFolder:var1, localBrandFolder:var2, localBrandFolder_Low_Res:var3, localBrandFolder_High_Res:var4, networkWeekFolder:var5, networkBrandFolder:var6, networkBrandFolder_Low_Res:var7, ftpWeekFolder:var8, ftpBrandFolder:var9, ftpBrandFolder_Low_Res:var10, ftpBrandFolder_High_Res:var11}
            end try
        end if
    end repeat
    localBrandFolder_High_Res of record 2 of myRecords -- get full path of localBrandFolder_High_Res in the second folder of Desktop
    on getFolderPath(tName, folderPath)
        tell application "Finder" to tell folder folderPath
            if not (exists folder tName) then
                return (make new folder at it with properties {name:tName}) as string
            else
                return (folder tName) as string
            end if
        end tell
    end getFolderPath

  • HT4796 When I run Migration Assistant on my PC, I get a message saying Windows Mail is running and won't let me proceed. Windows Mail is not running. I checked Task Manager's applications, services and processes and found no references to Windows Mail.

    Got a new iMac 27" for Christmas (well, a little after) and tried using Migration Assistant to transfer some files. I can't get past a window on the PC that pops up saying: "Quit Other Programs. Before you can transfer your information, the following programs must be shut down: Windows Mail. Close these programs and run Migration Assistant again. If these programs do not appear to be running, restart your computer and try again."
    I've rebooted. I've searched Task Manager - Applications, Processes and Services - and there are no references to WMail. I don't, and haven't knowingly, used Windows Mail, and Outlook isn't even installed on my PC. Does anybody know how to get around this without using external drives, USB flash drives, etc.?

    Lots of fun.... The superdrive in my Macbook seems to have gone off kilter since my install. Now the CD-R won't work regardless of which operating system I use. Normal CD's and DVD's are fine, but any CD-R is taken as a blank disk.
    Using an external CD drive solved this problem.

  • Firefox goes into Offline mode and is unable to open any page, and when shut down it remains in processes and freezes up, blocking another instance from being run

    When I run Firefox on my laptop, an Acer Aspire 5735 with Windows 7 Ultimate, everything runs perfectly smoothly and works as expected. But when I go to Sleep mode or Hibernate (with Firefox left open), after turning the machine back on Firefox is not active any more: all open tabs are there but surfing is unavailable. Firefox reports that it is in Offline mode (but the menu File > Work Offline is not checked, and if I check it and uncheck it again it has no effect). This means if I open a new tab and type a web address it won't open, and if I click any link in the previously opened tabs it won't open anything and will show a message that Firefox is in offline mode. After that I shut down Firefox using the Close (x) button and it disappears from the desktop, but it remains in memory and in the active processes, not allowing any other instance of Firefox to be run. Finally I shut it down by force (End process), start another instance, and all works perfectly. I am not sure if this is a problem only with my hardware/software configuration, so I wanted to share it.

    This is a pain but I have resolved it. This is instructions for Windows 7 but it may be the same on other Operating systems.
    Note: "''YOUR USERNAME''" relates to the account you log on to Windows with.
    # First go to control '''panel / folder options''' and ''uncheck'' ''''Hide extensions for known file types'''', ''check'' ''''Show hidden files and folders'''' and ''check'' ''''Show hidden operating system files'''' then click ''apply'' and ''OK''.
    # Go here :''' C:\Users\"YOUR USERNAME"\AppData\Roaming\Mozilla\Firefox\''' and copy the '''Profiles''' folder to your desktop.
    # Go here : '''C:\Users\"YOUR USERNAME"\AppData\Local\Mozilla\Firefox\''' and copy the '''Profiles''' folder to your desktop. You will need to rename this as you already have a Profiles folder so name it '''Profiles2'''.
    # Uninstall Firefox (''selecting to remove all user settings'') then go here: '''C:\Program Files\''' and delete the ''Mozilla Firefox'' folder.
    # Go here : '''C:\Users\"YOUR USERNAME"\AppData\Roaming''' and delete the ''Mozilla'' folder
    # Go here: '''C:\Users\"YOUR USERNAME"\AppData\Local\ '''delete the ''Mozilla'' folder.
    # Open another browser (i.e.'' Internet Explorer'') and download and install ''C-Cleaner'' here: [http://www.piriform.com/ccleaner/download/standard]
    # Once installed run the ''''Registry cleaner'''' over and over until no problems are detected and exit the application. You may now uninstall it but it is probably one of the best free apps available as it has a better uninstaller than Windows does and controls which programs run when Windows starts without using the 'msconfig' command.
    # Now download the latest version of Firefox with your other browser and install it then run the application once and close it down.
    # Open the ''Profiles'' folder on your desktop and inside you will find one or more folders with a name ending in '''.default''' (i.e. ''uomifhku.default''). Right click each folder and select '''Properties''' to see which contains the highest amount of data or the bigger file size. Once you have done that open the largest folder and make sure there is nothing in there with '''... @security.compass''' written on it. If there is then delete it. Hold your ''left'' mouse button and move your mouse pointer over all of the files to highlight in blue. Put your mouse cursor on any one of the files, ''right click'' and select 'Copy' to copy them all to system memory and then close the folder.
    # Go here: '''C:\Users\"YOUR USERNAME"\AppData\Local\Mozilla\Firefox\Profiles''' and find the folder ending in '''.default''' then open it. Now ''right click'' anywhere on the background in the main pane containing the files and select 'paste'. When prompted to '''Overwrite Files''' and '''Merge Folders''' always accept and proceed. Once done, close the window.
    # Open the '''Profiles2''' folder on your desktop and inside you will find one or more folders with a name ending in '''.default''' (i.e. ''uomifhku.default''). Right click each folder and select '''Properties''' to see which contains the highest amount of data or the bigger file size. Once you have done that open the largest folder and make sure there is nothing in there with '''... @security.compass''' written on it. If there is then delete it. Hold your'' left'' mouse button and move your mouse pointer over all of the files to highlight in blue. Put your mouse cursor on any one of the files, ''right click'' and select '''Copy''' to copy them all to system memory and then close the folder.
    # Go here : '''C:\Users\"YOUR USERNAME"\AppData\Roaming\Mozilla\Firefox\Profiles''' and find the folder ending in '''.default''' then open it. Now right click anywhere on the background in the main pane containing the files and select '''paste'''. When prompted to '''Overwrite Files''' and '''Merge Folders''' always accept and proceed. Once done, close the window.
    Start Firefox and, if all is well, you should be working again, back in the state you left it, without the bug.
    Many thanks, sparkyuiop

  • Ways to run dir command in Process and get Output

    Hello,
    In one of the controls in our web application, the user can select any directory from his work area and get a list of directories and the contents of those directories, i.e. a list of files. Working with File objects is really slow, so the performance is extremely poor. I am thinking of using the Process object to run the dir command for any directory selected by the user and show the directory listing to the user.
    Do you guys think this would be possible?

    It's always good to work with an IO buffer rather than the low-level file API. That should be irrelevant to the question as asked (if not, then there might be other problems). This is in no way irrelevant, but it's a fact: working with the Java File API to traverse the contents of a directory is really painful, and we currently use the File API to help the user traverse her work area.
    BTW, there are two servers. One runs the app and the other holds all the users' work areas. The user can traverse the work area's contents by using something like \\server1\workarea1\user1\folder1 etc... in the app to see the contents of any folder. A "server" in this context would be an "application" such as something like Tomcat. Your client would then ask the "server" for information.
    In your case you are dealing with another file system via Windows remote file system access. So per my question it is not another "server". A server is what provides a service to a client, and in our case it's an app server with a web app in it. The users use the web app to manage their work area (which is another file server).
    The app server and the file server are physically two separate machines.
    So again, I am back to my first question: how can I run the dir command using the Process object and get the output?
    Till now, I have done this
    ProcessBuilder pb = new ProcessBuilder("cmd", "dir", "c:/");
    pb.directory(new File("C:/temp")); // Or whatever directory you want for cwd
    Map<String, String> env = pb.environment();
    env.put("PATH", "C:/temp");
    try {
        Process process = pb.start();
        InputStream inStream = process.getInputStream();
        new AsyncPipe(process.getErrorStream(), System.out).start();
        new AsyncPipe(process.getInputStream(), System.out).start();
        final int returnCode = process.waitFor();
        System.out.println("Return code is " + returnCode);
        System.out.println("\nExit value = " + inStream + "\n");
    } catch (Exception e) {
        e.printStackTrace();
    }
    However, it simply opens a command prompt.
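    For what it's worth, one likely reason it "simply opens a command prompt" is that cmd needs the /c switch to run the given command and then exit. The sketch below is a guess at what was intended rather than a tested fix for this exact setup: it runs dir, merges stderr into stdout, and reads the output directly instead of relying on the custom AsyncPipe class.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.InputStreamReader;

    public class DirListing {
        public static void main(String[] args) throws Exception {
            // "/c" tells cmd to run the command and terminate; without it cmd just starts a shell.
            ProcessBuilder pb = new ProcessBuilder("cmd", "/c", "dir", "C:\\");
            pb.directory(new File("C:/temp"));  // working directory, as in the original attempt
            pb.redirectErrorStream(true);       // fold stderr into stdout so one reader sees everything

            Process process = pb.start();
            BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);       // each line of the dir listing
            }
            int returnCode = process.waitFor();
            System.out.println("Return code is " + returnCode);
        }
    }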

  • Dynamic Parallel Approval for HCM Process and Forms

    Hi everyone,
    I have a scenario where I need to use the "Dynamic Parallel Approval" (or, to keep it simple, initially I tried using the "Parallel Approval" wizard) for a workflow used in HCM Process and Forms.
    The standard task for approval in Process and Forms is TS17900101. I have specified a multiline container element in the Miscellaneous tab of this task. However, I was unable to use this task in the wizard. There are no results attached to this task, unlike other standard approval tasks (like TS30200147). I need to use the task TS17900101 in the workflow assigned to Process and Forms, but I am not sure how to handle this scenario (parallel approval).
    If this is not the right way of doing it, Is there any workaround for "Parallel Approval" in HCM Process and Forms.
    Could anybody throw some light around this area.
    Thanks for your help.
    - MM

    Thanks Anuj. But I believe the container element that I add in the Miscellaneous tab does not necessarily have to be used in the agent assignment. The multiline container is just to instantiate the work item 'n' number of times. Correct me if I am wrong.
    My concern is that I was unable to use this approval task (TS17900101) in the workflow wizard for dynamic parallel/parallel approval.
    Arghadip - Thanks for your suggestion. I have seen some of your nice contributions in the WF forum.
    I actually tried using 'Blocks'. But this is what I ran into: when I send multiple approval requests (say 3), if one person has approved and a second has rejected, I need to take the work item out of the third person's list (because the request has been rejected by someone in the group). I am not sure if this is possible using Blocks. And in my case the third person still has the work item, but gets a dump/error when he tries to open it.
    Also, if anyone has rejected the request, I do not have to wait for the rest to take any action on the work item before proceeding further. But I guess 'Blocks' will not let you exit unless every work item has been processed.
    To summarize, here's what I need: I need to come out of the block on two conditions. One, if everyone has approved, come out of the block with an approval flag. Two, if anyone has rejected (even if some have not processed their work item), delete the work items from the others' inboxes and come out of the block with a rejection flag.
    So, any kind of input or suggestions on how this could be handled would be highly appreciated.
    Thanks
    MM

  • HT5634 How would I download Windows 7 BCA for my MacBook Air? I have Parallels 7 for Mac and want to run Quicken for Windows

    Which version of Windows 7 do I download for my MacBook Air? I have OS X 10.8.5. I installed Parallels 7 for Mac and would like to run Quicken for Windows.

    You can run Windows 7 in Parallels. You do not need to run Boot Camp Assistant to install Windows if using Parallels. Visit the Parallels site for instructions regarding running Windows in Parallels.

  • Security Update and Parallels running XP

    Is anyone successfully able to use networking using Parallels running XP after installing the Aug 1 Security Update?
    Things were working fine before I ran the update. Now my XP sessions don't establish a network connection at work.

    I can connect just fine. I was having problems using Airport running XP before the update. Had something to do with my VM and Mac having too similar a name.
