Controlling the same output from multiple locations in a VI

Hi,
I have a very basic question that I can't seem to find the answer to.
When programming a VI for measurement and control with a standard DAQ device (for example a 6281), I often want to control the same digital output pin from several places in the program.
For example, at the beginning I want P0.0 to be high; then later, somewhere deep in the program in a while loop, I want to set P0.0 low. How do I do this? I keep getting the message that the resource is reserved.
I have the same question for AI, AO and digital input.
The DAQ Assistant does not work for this, and doing it more manually by creating a task, stopping it and clearing it has been problematic.
Can anyone help, ideally with a demonstration VI if you have one?
Thanks for reading all the way down.

You should create a functional global that stores the task for each port.
The image below shows some code that should work.
It looks up the addressed port and creates a new task if needed; otherwise it reuses the existing task.
Be sure to call this VI with 'Stop?' = True at the end of your program.
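In text form, the functional global is essentially a look-up-or-create cache keyed by port name. A rough Java-flavoured sketch of the idea (illustrative only - the real thing is a LabVIEW VI holding DAQmx task refnums in an uninitialized shift register, and Task/TaskFactory below are placeholders, not a real DAQmx binding):

public class DigitalOutputGlobal {
    interface Task { void write(boolean level); void stopAndClear(); }
    interface TaskFactory { Task create(String port); }

    // Plays the role of the uninitialized shift register: state kept between calls.
    private static final java.util.Map<String, Task> tasks = new java.util.HashMap<>();

    // Look up the task for the addressed port, creating it on first use, then write the line state.
    public static synchronized void write(String port, boolean level, TaskFactory factory) {
        Task task = tasks.computeIfAbsent(port, factory::create);
        task.write(level);
    }

    // Equivalent of calling the VI with 'Stop?' = True: stop and clear every cached task.
    public static synchronized void stopAll() {
        for (Task t : tasks.values()) {
            t.stopAndClear();
        }
        tasks.clear();
    }
}

In LabVIEW the "map" is simply the data held in the shift register between calls, and calling the VI with 'Stop?' = True corresponds to stopAll().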
Ton
Attachments:
DO access.png (15 KB)

Similar Messages

  • Parallel run of the same function from multiple jobs

    Hello, everyone!
    I have a function which accepts a date range, reads invoices from a table partitioned by date and writes output to a table partitioned by invoice. Each invoice can have records with only one date, so both tables may hold a given invoice in only one partition, i.e. partitions do not overlap. The function commits after processing each date. The whole process was running about 6 hrs with 46 million records in the source table.
    We expect the source table to grow beyond 150 million rows, so we decided to split the work into 3 parallel jobs, each job processing 1/3 of the dates and, as a result, 1/3 of the invoices.
    So we call this function from 3 concurrent UNIX jobs and each job passes its own range of dates.
    What we noticed is that even though we start the 3 jobs concurrently, they do not run that way! When the 1st job ends after 2 hrs of running, the number of committed rows in the target table equals the number of rows inserted by that job. When the 2nd job ends after 4 hrs, the number of rows in the target table equals the total of the two jobs. And the 3rd job ends only after 6 hrs.
    So instead of improving the process by splitting it into 3 parallel jobs, we ended up with 3 jobs instead of one and the same 6 hrs until the target table is loaded.
    My question is: how do we make this work? It looks as if Oracle 11g is smart enough to recognize that all 3 jobs are calling the same function and executes the function only once at a time, i.e. as if only one copy of the function is loaded into memory even though it is called by 3 different sessions.
    The function itself has very complicated logic and does a lot of verification by joining to other tables, and we do not want to maintain 3 copies of the same code under different names. Besides, the plan is that if we have a performance problem with 150 mln rows, we will split the work across more concurrent jobs, for example 6 or 8. Obviously we do not want to maintain that many copies of the same code by copying this function under other names.
    I was monitoring the jobs by querying V$SESSION and V$SQLAREA (ROWS_PROCESSED and EXECUTIONS) and I can see that each job has its own set of SIDs (i.e. runs up to 8 parallel processes), but the number of committed rows is always equal to the number of rows from the 1st job, then 1st + 2nd, etc. So it looks like all processes of the 2nd and 3rd jobs are waiting until the 1st one is done.
    Any ideas?

    OK, this is my SQL and results (some output columns are omitted as irrelevant):
    SELECT
            TRIM ( SESS.OSUSER )                                                        "OSUser"
          , TRIM ( SESS.USERNAME )                                                      "OraUser"
          , NVL(TRIM(SESS.SCHEMANAME),'------')                                         "Schema"
          , SESS.AUDSID                                                                 "AudSID"
          , SESS.SID                                                                    "SID"
          , TO_CHAR(SESS.LOGON_TIME,'HH24:MI:SS')                                       "Sess Strt"
          , SUBSTR(SQLAREA.FIRST_LOAD_TIME,12)                                          "Tran Strt"
          , NUMTODSINTERVAL((SYSDATE-TO_DATE(SQLAREA.FIRST_LOAD_TIME,'yyyy-mm-dd hh24:mi:ss')),'DAY') "Tran Time"
          , SQLAREA.EXECUTIONS                                                          "Execs"
          , TO_CHAR(SQLAREA.ROWS_PROCESSED,'999,999,999')                               "Rows"
          , TO_CHAR(TRAN.USED_UREC,'999,999,999')                                       "Undo Rec"
          , TO_CHAR(TRAN.USED_UBLK,'999,999,999')                                       "Undo Blks"
          , SQLAREA.SORTS                                                               "Sorts"
          , SQLAREA.FETCHES                                                             "Fetches"
          , SQLAREA.LOADS                                                               "Loads"
          , SQLAREA.PARSE_CALLS                                                         "Parse Calls"
          , TRIM ( SESS.PROGRAM )                                                       "Program"
          , SESS.SERIAL#                                                                "Serial#"
          , TRAN.STATUS                                                                 "Status" 
          , SESS.STATE                                                                  "State"
          , SESS.EVENT                                                                  "Event"
          , SESS.P1TEXT||' '||SESS.P1                                                   "P1"
          , SESS.P2TEXT||' '||SESS.P2                                                   "P2"
          , SESS.P3TEXT||' '||SESS.P3                                                   "P3"
          , SESS.WAIT_CLASS                                                             "Wait Class"
          , NUMTODSINTERVAL(SESS.WAIT_TIME_MICRO/1000000,'SECOND')                      "Wait Time"
          , NUMTODSINTERVAL(SQLAREA.CONCURRENCY_WAIT_TIME/1000000,'SECOND')             "Wait Concurr"
          , NUMTODSINTERVAL(SQLAREA.CLUSTER_WAIT_TIME/1000000,'SECOND')                 "Wait Cluster"
          , NUMTODSINTERVAL(SQLAREA.USER_IO_WAIT_TIME/1000000,'SECOND')                 "Wait I/O"
          , SESS.ROW_WAIT_FILE#                                                         "Row Wait File"
          , SESS.ROW_WAIT_OBJ#                                                          "Row Wait Obj"
          , SESS.USER#                                                                  "User#"
          , SESS.OWNERID                                                                "OwnerID"
          , SESS.SCHEMA#                                                                "Schema#"
          , TRIM ( SESS.PROCESS )                                                       "Process"
          , NUMTODSINTERVAL(SQLAREA.CPU_TIME/1000000,'SECOND')                          "CPU Time"
          , NUMTODSINTERVAL(SQLAREA.ELAPSED_TIME/1000000,'SECOND')                      "Elapsed Time"
          , SQLAREA.DISK_READS                                                          "Disk Reads"
          , SQLAREA.DIRECT_WRITES                                                       "Direct Writes"
          , SQLAREA.BUFFER_GETS                                                         "Buffers"
          , SQLAREA.SHARABLE_MEM                                                        "Sharable Memory"
          , SQLAREA.PERSISTENT_MEM                                                      "Persistent Memory"
          , SQLAREA.RUNTIME_MEM                                                         "RunTime Memory"
          , TRIM ( SESS.MACHINE )                                                       "Machine"
          , TRIM ( SESS.TERMINAL )                                                      "Terminal"
          , TRIM ( SESS.TYPE )                                                          "Type"
          , SQLAREA.MODULE                                                              "Module"
          , SESS.SERVICE_NAME                                                           "Service name"
    FROM    V$SESSION    SESS
    INNER JOIN V$SQLAREA    SQLAREA  
       ON SESS.SQL_ADDRESS  = SQLAREA.ADDRESS
       and UPPER(SESS.STATUS)  = 'ACTIVE'
    LEFT JOIN  V$TRANSACTION  TRAN
       ON  TRAN.ADDR         = SESS.TADDR
    ORDER BY SESS.OSUSER
            ,SESS.USERNAME
            ,SESS.AUDSID
            ,NVL(SESS.SCHEMANAME,' ')
            ,SESS.SID
    AudSID     SID     Sess Strt     Tran Strt     Tran Time     Execs     Rows     Undo Rec     Undo Blks     Sorts     Fetches     Loads     Parse Calls     Status     State     Event     P1     P2     P3     Wait Class     Wait Time     Wait Concurr     Wait Cluster     Wait I/O     Row Wait File     Row Wait Obj     Process     CPU Time     Elapsed Time     Disk Reads     Direct Writes     Buffers     Sharable Memory     Persistent Memory     RunTime Memory
    409585     272     22:15:36     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITED SHORT TIME     PX Deq: Execute Reply     sleeptime/senderid 200     passes 2     0     Idle     0 0:0:0.436000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     7     21777     22739     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     203     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.9674000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     25     124730     4180     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     210     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.11714000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     24     124730     22854     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     231     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.4623000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     46     21451     4178     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     243     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITED SHORT TIME     PX qref latch     function 154     sleeptime 13835058061074451432     qref 0     Other     0 0:0:0.4000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     35     21451     3550     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     252     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.19815000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     49     21451     22860     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     273     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.11621000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     22     124730     4182     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     277     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     db file parallel read     files 20     blocks 125     requests 125     User I/O     0 0:0:0.242651000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     39     21451     4184     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     283     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.2781000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     42     21451     3552     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     295     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.24424000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     40     21451     22862     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409585     311     22:30:01     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.15788000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     31     21451     22856     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409586     242     22:15:36     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITED KNOWN TIME     PX Deq: Execute Reply     sleeptime/senderid 200     passes 1     0     Idle     0 0:0:0.522344000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     28     137723     22736     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409586     192     22:29:20     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.14334000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     31     21462     4202     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409586     222     22:29:20     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.16694000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     37     21462     4194     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409586     233     22:29:20     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.7731000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     44     21462     4198     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409586     253     22:29:20     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     db file parallel read     files 21     blocks 125     requests 125     User I/O     0 0:0:0.792518000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     39     21462     4204     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409586     259     22:29:20     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.2961000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     35     21462     4196     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409586     291     22:29:20     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq Credit: send blkd     sleeptime/senderid 268566527     passes 1     qref 0     Idle     0 0:0:0.9548000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     35     21462     4200     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409587     236     22:15:36     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq: Table Q Normal     sleeptime/senderid 200     passes 2     0     Idle     0 0:0:0.91548000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     25     124870     22831     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409587     207     22:30:30     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq: Execution Msg     sleeptime/senderid 268566527     passes 3     0     Idle     0 0:0:0.644662000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     43     21423     4208     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409587     241     22:30:30     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     PX Deq: Execution Msg     sleeptime/senderid 268566527     passes 3     0     Idle     0 0:0:0.644594000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     47     21423     4192     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    409587     297     22:30:30     22:15:36     0 0:14:52.999999999     302     383,521               305     0     1     3598          WAITING     db file parallel read     files 20     blocks 109     requests 109     User I/O     0 0:0:0.793261000     0 0:0:1.124995000     0 0:0:0.0     0 1:56:15.227863000     12     21316     4206     0 0:25:25.760000000     0 2:17:1.815044000     526959     0     25612732     277567     56344     55448
    Here I found one interesting query: http://www.pythian.com/news/922/recent-spike-report-from-vactive_session_history-ash/
    But it does not help me

  • How to read from and write into the same file from multiple threads?

    I need to read from and write to the same file from multiple threads.
    How can we do that without any data corruption?
    Can you please provide sample code for this type of task?
    Thanks in advance.

    Assuming you are using RandomAccessFile, you can use the locking functionality in the Java NIO library to lock sections of a file that you are reading/writing from each thread (or process).
    If you can't use NIO, and all your threads are in the same application, you can create your own in-process locking mechanism that each thread uses prior to accessing the file. That would take some development, and the OS already has the capability, so using NIO is the best way to go if you can use JDK 1.4 or higher.
    - K
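    A minimal sketch of the NIO approach (my own illustration, with an assumed class name and method; not from the original posts). One caveat: an OS-level FileLock is held on behalf of the whole JVM, so threads inside the same application still need an in-process lock around the file access - the sketch combines both:

    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.util.concurrent.locks.ReentrantLock;

    public class SharedFile {
        // In-process lock: coordinates the threads inside this JVM.
        private static final ReentrantLock threadLock = new ReentrantLock();

        public static void appendLine(String path, String line) throws Exception {
            threadLock.lock();
            try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
                 FileChannel channel = raf.getChannel();
                 // OS-level lock: coordinates with other processes using the same file.
                 FileLock osLock = channel.lock()) {
                raf.seek(raf.length());                      // append at end of file
                raf.writeBytes(line + System.lineSeparator());
            } finally {
                threadLock.unlock();
            }
        }
    }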

  • SLT Replication for the same table from Multiple Source Systems

    Hello,
    With HANA 1.0 SP03 it is now possible to connect multiple source systems to the same SLT Replication Server and from there to the same schema in SAP HANA - does this mean the same table as well? Or will they be different tables?
    My doubt:
    Consider that I am replicating KNA1 from 2 source systems - say SourceA and SourceB.
    If I have different records in SourceA.KNA1 and SourceB.KNA1, I believe the records will be added during replication and, as a result, the final table will have 2 different records.
    Now, if the same record appears in the KNA1 tables from both sources, the final table should hold only 1 record.
    Also, if the same customer's record is posted in both systems with different values, it should add both records.
    How does HANA handle this situation?
    Please throw some light on this.

    Hi Vishal,
    I suggest you take a look at the SAP HANA SPS03 Master Guide. There is a comparison table of the three replication technologies available (see page 25).
    For Multi-System Support it gives these values:
    - Trigger-Based Replication (SLT Replication): multiple source systems to multiple SAP HANA instances (one source system can be connected to one SAP HANA schema only)
    So I think that in your case you should consider BO Data Services (losing the real-time analytics capabilities, of course).
    Regards
    Leopoldo Capasso

  • Why does PSE 10 Organizer jumble up photos on the same date from different locations ?

    I have PSE 10 installed on a PC with Windows 7. My camera is a Nikon D90 using a SanDisk 8 GB SD card. When I take photos at different locations on the same date and download them into the Organizer, instead of keeping the photos from the different locations together by location, it jumbles them all up. It does not keep them in order by time taken from first to last for that day; it just mixes them all up in random order. Why?

    Hi Lyndy,
    When you use Albums and Keyword Tags, you aren't moving the images around (they stay in their folders) - you just look at them differently.
    What you can try is this:
    1) Select one of your folders in folder view so that it displays all of those images in filename order.
    2) Click on the instant album button (to the top right of the thumbnails). This will generate an album with the same name as the folder.
    3) Now switch to Thumbnail view.
    4) Click on the new album name on the right side.
    Now all the images should be in date/time order - you may have to adjust the options.
    The real power of the Keyword Tags is the many different ways you can look at the images.
    If you have a Keyword Tag structure like this:
    Places
         Scotland
                Holyrood
                Britania
    then if you assign the Holyrood and Britania tags to the appropriate photos, there are various ways of viewing the photos.
    Selecting just Holyrood would show only the Holyrood ones.
    Selecting Scotland would show both the Holyrood and Britania ones.
    The only limit seems to be your own imagination.
    I hope that gives you ideas rather than adding confusion.
    Brian

  • Accessing the same object from multiple classes.

    For the life of me I can't work out how I can access things from classes that haven't created them.
    I don't want to have to use "new" multiple times in separate classes, as that would erase the manipulated values.
    I want to be able to access and manipulate an array of objects from multiple classes without resetting the array's object values. How do I create an object once and then use it from all classes rather than re-creating it?
    Also, I need my GUI to recognize the changes my buttons are making to the data and reload itself so they show up.
    All help is good help!
    regards,
    Luke Grainger

    As long as you have a Headquarters it shouldn't be too painful. You simply keep a reference to your ShipDatabase and Arsenal and all your relevant stuff in Headquarters. So the start of your Headquarters class would look something like:
    public class Headquarters {
        public Arsenal arsenal;
        public ShipDatabase db;

        // constructor
        public Headquarters() {
            arsenal = new Arsenal(this, ....);
            db = new ShipDatabase(...);
        }
    }

    // The Arsenal class:
    public class Arsenal {
        public Headquarters hq;

        public Arsenal(Headquarters hq, ....) {
            this.hq = hq;
        }
    }
    Then in your ShipDatabase you should, as good programming goes, keep the ArrayList as a private class variable, and then make simple public methods to access it - methods like getShipList(), addToShipList(Ship ship), ....
    So when you want to add a ship from Arsenal you write:
    hq.db.addToShipList(ship);
    Or you could of course use a more direct reference:
    ShipDatabase db = hq.db;
    or even
    shipList = hq.db.getShipList();
    but this requires that the shipList in the DB is kept public....
    Hope it was comprehensive enough, and that I answered what you were wondering.
    Sjur
    PS: Initialising the array is what you do right here:
    // constructor
    shipList = new Ship[6];
    shipList[0] = new Ship("Sentry", 15, 10, "Scout");
    // ... etc.
    shipList[5] = new Ship("Sentry", 15, 10, "Scout");
    PPS: To make code snippets etc. more readable, please read this article:
    http://forum.java.sun.com/faq.jsp#messageformat
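    For completeness, here is a minimal sketch of the ShipDatabase encapsulation described above. Only the method names come from the reply; the Ship fields and the use of an ArrayList are placeholders:

    import java.util.ArrayList;
    import java.util.List;

    // Minimal stand-in for the poster's Ship class; the field names are guesses.
    class Ship {
        String name; int stat1; int stat2; String type;
        Ship(String name, int stat1, int stat2, String type) {
            this.name = name; this.stat1 = stat1; this.stat2 = stat2; this.type = type;
        }
    }

    public class ShipDatabase {
        // Keep the ship list private and expose it only through methods, as suggested.
        private final List<Ship> shipList = new ArrayList<>();

        public List<Ship> getShipList() {
            return shipList;
        }

        public void addToShipList(Ship ship) {
            shipList.add(ship);
        }
    }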

  • How do I make simplesearch look for the same tag in multiple locations

    Currently I'm trying to write a simplesearch implementation that will only return a result if it contains a specific tag. I would also like to include DAM assets in the search, wherein lies the problem:
    I need to look in both jcr:content and jcr:content/metadata for the tag and, if the tag is found in either location, return that page. Here is the code I am trying to use, but it currently only looks for the tag(s) in one location. Any tips on how to get it to look in multiple locations?
    Code:
    tagPredicate = new Predicate("tags2", "tagid");
    tagPredicate.set("property", "jcr:content/cq:tags");
    for (Cookie cookie : tagsFromCookie) {
        if (cookie.getName().contains(CREDENTIALS)) {
            tagPredicate.set("tagid", cookie.getValue().replaceAll("---", ":"));
            search.addPredicate(tagPredicate);
        }
    }
    Thanks for any help you can give!

    I figured this one out on my own. Kind of a "Doh!" moment.
    What I ended up doing is this: I created a PredicateGroup, and populated this group with the predicates I needed. Then, I set the PredicateGroup allRequired to false. Voila! It works!
    Code:
    PredicateGroup tagPredicateGroup = new PredicateGroup();
    tagPredicateGroup.setAllRequired(false);
    Predicate tagPredicate = new Predicate("tags", "tagid");
    if (slingRequest.getParameter(GROUP1) != null) {
        tagPredicate = new Predicate("tags", "tagid");
        tagPredicate.set("property", "jcr:content/cq:tags");
        tagPredicate.set("tagid", GROUP1);
        tagPredicateGroup.add(tagPredicate);
        tagPredicate = new Predicate("damTags", "tagid");
        tagPredicate.set("property", "jcr:content/metadata/cq:tags");
        tagPredicate.set("tagid", GROUP1);
        tagPredicateGroup.add(tagPredicate);
    }
    return tagPredicateGroup;

  • Accessing the same data from multiple threads

    Hi
    In the following program the Task5 routine takes ~3 s to complete; when I uncomment the t2 lines it takes 11 s (this is on a quad-core x86/64 machine). Since there is no explicit synchronization, I was expecting 3 s in both cases.
    public static int sdata;

    public static void Task5()
    {
        int acc = 0;
        for (int i = 0; i < 1000000000; ++i)
        {
            sdata = i;
            acc += sdata;
        }
    }

    [STAThread]
    static void Main()
    {
        Stopwatch sw = new Stopwatch();
        sw.Start();
        Thread t1 = new Thread(new ThreadStart(Task5));
        // Thread t2 = new Thread(new ThreadStart(Task5));
        t1.Start();
        // t2.Start();
        t1.Join();
        // t2.Join();
        sw.Stop();
        System.Diagnostics.Debug.WriteLine(sw.ElapsedMilliseconds.ToString());
    }
    Why are these threads blocking each other?

    This is loosely a duplicate of https://social.msdn.microsoft.com/Forums/en-US/cd00284d-3da3-457e-8926-c490e7ca6d92/atomic-loadstore?forum=vclanguage
    I answered you in detail over at the other thread.
    But the short version is that the threads are competing for access to system memory, specifically at the memory location of sdata. This demonstrates how to spoil the benefits of not having to write through from your CPU cache to system memory. CPU caches are wonderful things. CPU cache memory is WAY faster than system memory.
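    The effect is not specific to .NET. As an illustration (my own Java sketch, not the poster's program), running the same loop on two threads is typically much slower when every iteration writes a shared static field than when the work stays in a thread-local variable:

    public class SharedCounterDemo {
        // Shared field: both threads write the same memory location every iteration,
        // so the cache line holding it bounces between cores (volatile forces each store out).
        static volatile int sdata;
        static volatile long sink;   // consumes the accumulators so the loops are not optimized away
        static final int N = 100_000_000;

        static void sharedLoop() {
            int acc = 0;
            for (int i = 0; i < N; i++) {
                sdata = i;
                acc += sdata;
            }
            sink += acc;
        }

        static void localLoop() {
            int local = 0, acc = 0;
            for (int i = 0; i < N; i++) {
                local = i;           // thread-private, no cross-core traffic in the hot loop
                acc += local;
            }
            sdata = local;           // publish once at the end
            sink += acc;
        }

        static long timeTwoThreads(Runnable body) throws InterruptedException {
            Thread t1 = new Thread(body), t2 = new Thread(body);
            long start = System.nanoTime();
            t1.start(); t2.start();
            t1.join();  t2.join();
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws InterruptedException {
            System.out.println("shared field:   " + timeTwoThreads(SharedCounterDemo::sharedLoop) + " ms");
            System.out.println("local variable: " + timeTwoThreads(SharedCounterDemo::localLoop) + " ms");
        }
    }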

  • Is there a way to work on the same project from multiple macs?

    I was hoping to save a project that I was working on to the cloud and then download it to my other Mac. Can't seem to figure it out.

    Not sure what your question has to do in respect to Apple networking, but if you're asking can you use your Xbox 360 on a wireless network (regardless of the router's manufacturer) to access different Xbox Live! accounts, then the answer is yes.

  • How to receive emails of the same inbox from in a Weblogic Server cluster

    Hi All,
    I have an application running on WebLogic Server with 6 instances. Many requests for the application come in by email. We have already set up an email account that all clients will use to send email to. The problem is that the email account's inbox can only be opened for reading by a single connection, unlike a typical database. Currently I can only deploy the email-reading service on a single server instance, which effectively creates a single point of failure and an unbalanced load. What's the best way to read from the same inbox from multiple servers? I am thinking of developing something using a database table, a sort of lease: whoever locks the table owns the lease and can connect to the email server. But this is pretty hard to implement correctly in all circumstances.
    I did an intensive search on the internet but couldn't find anything.
    I appreciate your help very much.
    Thanks
    Tao
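    One way to prototype the leasing idea mentioned above (a sketch under stated assumptions: a hypothetical single-row MAIL_LEASE table with columns LEASE_ID, OWNER and EXPIRES_AT, and a JDBC connection with auto-commit disabled; none of this comes from the original post):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    public class InboxLease {
        // Returns true if this node now holds (or has renewed) the lease and may poll the inbox.
        public static boolean tryAcquire(Connection con, String nodeName, int leaseSeconds) throws Exception {
            String sql =
                "UPDATE MAIL_LEASE " +
                "   SET OWNER = ?, EXPIRES_AT = ? " +
                " WHERE LEASE_ID = 1 " +
                "   AND (OWNER = ? OR EXPIRES_AT < CURRENT_TIMESTAMP)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, nodeName);
                ps.setTimestamp(2, new Timestamp(System.currentTimeMillis() + leaseSeconds * 1000L));
                ps.setString(3, nodeName);
                int updated = ps.executeUpdate();   // the row update is atomic, so only one node wins
                con.commit();
                return updated == 1;
            }
        }
    }

    Each instance would call this periodically; the winner keeps renewing its lease while it reads the inbox, and if it dies the lease expires and another instance takes over.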

    Hi Aditya
    By default, OPA 10.4 will collect the minimum amount of information needed to answer the goal of your interview and then display the answer and the reasons for the answer. You can change what appears on these screens within OPM itself (you don't need to change those settings).
    So I would:
    - add a Summary Screen to your project (if you haven't already)
    - add <your goal attribute> to your summary screen. This will display as a link if the goal is unknown and as a sentence showing the outcome and a Why? link when it is known. 
    - add a label to the summary screen, providing information about the outcome (eg "Congratulations you are eligible") or simply show the value of the outcome (eg Your total deduction is %deduction_amount%). 
    - use visibility rules to control which items to display at the start/end of your interview (in your case, maybe hide your goal once it is known and show your label once your goal is known). See OPA help topic "Tutorial: Hiding and displaying summary screen elements".
    The Inferred Brand Discount sample supplied in the OPM install provides an example of this (see Help topic Sample Rulebases for information on how to open this).
    Hope that helps,
    Fiona

  • Suddenly my event library in iMovie 11 is empty.  I have tried deleting the plist, importing clips from multiple locations (to see if they show up in the library), re-installing iMovie, and re-installing Mountain Lion. Help??

    Suddenly my event library in iMovie 11 is empty.  I have tried deleting the plist, importing clips from multiple locations (to see if they show up in the library), re-installing iMovie, and re-installing Mountain Lion.
    I was working on a project, when all of a sudden the event library went blank.  There are no devices showing in the left column, only "last import" and "aperture videos".  There are no video clips in the editing window.  I have "Show: All Clips" selected.  I've tried "group events by disk". 
    When I have a Project open, it plays just fine.  There are no yellow triangles saying "source clip is missing".  Yet, again...nothing is in the event library large editing window.  So, it seems the data is still there, but invisible.  This is true with all the projects, and all the external devices I've experimented by plugging in to see if their video clips show up in the event library. 
    I have tried to import new movies, and iMovie responds as normal... looking like it's importing, then "generating thumbnails", then it makes the "ding" signalling that the import is complete... but nothing shows in the event library. I've tried importing movies from the hard drive, from QuickTime, and from an external drive.
    I have searched the forums, and found many users with a similar, but not the same, problem.  For them, it seems the "go to users-library-preferences and delete iMovie plist" has solved their problem.  This didn't work for me. 
    I uninstalled/reinstalled iMovie, I even have re-installed Mountain Lion (from a last-ditch effort suggestion from Apple technician). 
    HERE'S AN INTERESTING DETAIL:  After almost 5 hours on the phone with Apple, I decided to cut my losses and take my project to another Mac I have.  I'm working on a project, for work, that is critical that it's completed by tonight (whoops), and all my video is on an external drive.  So, I plug in the external drive to Mac #2, open iMovie, and everything is looking fine.  I continue importing some files I was converting, through Wondershare, and suddenly (whether or not this has to do with the importing, I'm not sure), the SAME THING happened to Mac #2!!!!  I can't believe this. 
    Does anyone have any suggestions?  Have you ever heard of this happening?  Could it have to do with the files I'm importing??? 
    I apologize if my language is confusing.  Obviously, I'm not an Apple genius-person!  I hope I've provided all the information you need.
    Mac #1 is a 27" desktop, mid-2012, software up-to-date.  Mac #2 is a 24" desktop, about 4 years old, OSX 10.7.5

    I'm adding more information:
    in Finder, my iMovie folders are all visible and accessible.  When I click on a clip from iMovie Events folder, Quicktime opens and plays the clip.  ALL my video is not missing - it just isn't showing up in the Event Library!
    This happened all of a sudden, while I was working on iMovie project (on both computers).
    All my Projects are intact, and play when I open them. 

  • Is the raw output from JSP and XSQLServlet the same?

    Hi
    Is there a difference between the raw output from XSQLServlet and JSP? For example, assuming the same content is being generated, is there some additional header information emitted by XSQLServlet that is not emitted by JSP?
    I am using software that successfully consumes generated content from a JSP OK, but the same content (using the same XML/XSL) generated from XSQLServlet is being rejected. I am puzzled by this. Maybe this is due to some differences in servlet output or an encoding issue? The content is not HTML but XML-like with an "application" contentType, like Steve's SVG example.
    It seems that XSQLServlet is showing some data prior to emitting the actual content, i.e. HTTP version, responding server version, content type and date, e.g.
    HTTP/1.0 200 OK^M
    Server: Resin/2.0.5^M
    Content-type: application/x-sky; charset=UTF-8^M
    Date: Thu, 28 Mar 2002 06:45:34 GMT^M
    ^M
    (then the generated content)
    Is this preamble usually generated by a JSP also?
    If not, can this information be turned off, or put another way, can XSQLServlet's raw output be set to be exactly like JSP? If not, is there a workaround?
    I have tried setting the following XSQLConfig.xml
    <suppress-mime-charset> for the mime-type &
    <character-set-conversion>
    <none/>
    </character-set-conversion>
    also, but to no avail.
    Please help! I really want to use XSQLServlet!
    Thanks.
    Michael.

    Yes, just less fine control over the process but the same engine.
    Regards
    TD

  • Is there a way to delete multiple pictures at the same time from the iphone4s?

    Is there a way to delete multiple pictures at the same time from my iPhone 4S? I know how to delete one at a time. Thanks

    Open your Photos App > Camera Roll > At the top right corner you will see a rectangle with a right arrow, select that. Now you can select as many photos as you want and you can hit the red Delete button on the bottom right.

  • How can I remove multiple copies of the same song from the iTunes listing?

    How can I remove multiple copies of the same song from the iTunes listing? The program seems to be picking up the same songs from, for example, my user area and my public area on the C drive.

    As above, Apple's official advice is here... HT2905 - How to find and remove duplicate items in your iTunes library, however it is a manual process and the article fails to explain some of the potential pitfalls.
    Use Shift > View > Show Exact Duplicate Items to display duplicates as this is normally a more useful selection. You need to manually select all but one of each group to remove. Sorting the list by Date Added may make it easier to select the appropriate tracks, however this works best when performed immediately after the dupes have been created.  If you have multiple entries in iTunes connected to the same file on the hard drive then don't send to the recycle bin.
    Use my DeDuper script if you're not sure, don't want to do it by hand, or want to preserve/merge ratings, play counts and playlist membership. See this thread for background and please take note of the warning to backup your library before deduping.
    (If you don't see the menu bar press ALT to show it temporarily or CTRL+B to keep it displayed)
    tt2

  • Does anyone have experience with having multiple editors work on the same project from different computers?

    Does anyone have experience with having multiple editors work on the same project from different computers?

    As much as I hate to admit it, YOU ARE RIGHT!
    I will tread lightly on this project.
    Thanks for the sanity check,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
