Parallel ActiveX instances

Hello,
I am trying to run two Signal Recovery 3830 multiplexers from a single LabVIEW VI, but I have run into repeated difficulties.  The instrument manual says to simply open multiple instances of the instrument's ActiveX library (SR3830Comms) to communicate with multiple multiplexers.  However, the test VIs I wrote seem to connect to only one multiplexer at a time.  Using a simple Windows test program, I have independently verified that both multiplexers are connected properly to the computer and that I can connect to each one separately.  I have attached my test program, which attempts to connect to both instruments and then allow inputs to each of them.  Am I coding this (in particular, the use of multiple ActiveX instances) properly?  If not, how should I go about opening multiple ActiveX instances in parallel?  Thank you for your help.
Note: this was written using LabVIEW 2010.
Attachments:
two_mux_test.vi ‏20 KB

Most of us (like me) aren't going to have that ActiveX library installed, so it's hard to say. Other than your inputs being named the same, I see nothing glaringly wrong. I would name the inputs differently - for example, tack on the serial numbers.
Have you tried opening and setting up the two ActiveX sessions in series, then reading in parallel?
Richard

Similar Messages

  • Calling Multiple (and parallel) ActiveX instances

    I'm having a problem running multiple ActiveX instances from LabVIEW (apparently the problem occurs with more than 4 instances). This problem doesn't happen when I do the same thing in C (Visual Studio): there I can create as many instances as I wish. In LabVIEW, when I run methods that hang or run for a long period of time, only 4 are able to run at any moment. If I stop any one of the methods, the next one starts running.
    I attached an example (in LabVIEW 8.5.1) using the Excel ActiveX automation, but it happens with every ActiveX library I have tried so far. It even happens when using several different ActiveX libraries.
    Please note that the problem is not with creating the instances, but with running methods of the ActiveX objects in parallel at the same time (if you run short methods that finish quickly, you won't notice the problem).
    Attachments:
    Instances of Excel1.vi ‏23 KB

    I think you're running into the max number of threads LabVIEW allocates per execution system (4 by default). All of this code is running in one VI and hence one execution system. You can do a quick test to confirm this. Select one or more of your parallel instances and create separate subVIs from them. Then go into those subVI properties and go to the Execution category. Select various execution systems for the VIs. Make one DAQ, for instance, another Instrument I/O, and another Other 1 or whatever. When you rerun your test after that you'll see all six File IO dialogs pop up simultaneously.
    Even though LabVIEW is using only four threads in each execution system, it will still multitask between various things to achieve as much parallelism as possible. This is what you saw when you said that as soon as one method finished, another one would start up.
    I'm not too familiar with dealing with execution systems to achieve highly scalable applications, unfortunately. Most of the time LabVIEW gives you something really good out of the box without having to think about execution systems. And when you run applications on Timed Loops, that helps LabVIEW divide up application threads better as well.
    But you could start by seeing if you can divide up your ActiveX routines somehow and then duplicate the code into subVIs that run in different execution systems.
    Another option is to use the ThreadConfig VI that ships with LabVIEW. Check out the following VI:
    <LabVIEW>\vi.lib\Utility\sysinfo.llb\threadconfig.vi
    You can increase the number of threads LabVIEW will allocate for each execution system to up to 8.
    Here's a help topic with more info.
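    This scheduling behavior isn't specific to LabVIEW; any fixed-size thread pool shows it. Here is a minimal Java sketch (not LabVIEW code; the class name PoolDemo is made up, the pool size of 4 mirrors LabVIEW's default thread count per execution system, and the sleep stands in for a long-running ActiveX method):
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolDemo {
        public static void main(String[] args) {
            // A pool capped at 4 threads, like one LabVIEW execution system.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 6; i++) {
                final int id = i;
                pool.submit(() -> {
                    System.out.println("task " + id + " started");
                    try {
                        Thread.sleep(10_000); // stand-in for a long blocking call
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("task " + id + " finished");
                });
            }
            pool.shutdown();
        }
    }
    With six tasks submitted, only four "started" lines print at first; the fifth and sixth appear only as earlier tasks finish, which is exactly the symptom described above.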
    Message Edited by Jarrod S. on 05-12-2008 09:30 PM
    Jarrod S.
    National Instruments

  • Parallel workflow instances

    Hi Experts,
    I need help with the scenario I am working on.
    SAP System: ECC 5.0
    Scenario:
    1) The material master workflow is triggered through a customized event, and we pass the material number
    and sales org as the object key for the business object.
    2) Based on the sales org, plants are determined using a background task.
    3) After this we have to send tasks (for creating a view) to the agent, one for each plant and material combination,
    and these tasks should run in parallel. The number of plants can be more than 100.
    4) Once all these parallel tasks are completed, the workflow should continue.
    5) Other workflow tasks follow, based on the material and sales org combination.
    For sending parallel tasks for the material and plant combinations (point 3), I can trigger a separate workflow,
    so there will be n instances of this new workflow, one per material and plant combination.
    But my concern is: how will the main workflow know that all the instances of the new workflow have completed
    for the material for which the main workflow was triggered?
    Please suggest.
    Please let me know if there is some other way to send parallel tasks from the workflow.
    Thanks,
    V

    I will create a new step and pass the remaining plants using the same Table-Driven Dynamic Parallel Processing.
    Again I have an issue: there will be multiple parallel tasks created, but how will I know which particular task relates to which plant?
    I want to pass the plant name in the task description; how can I do that?
    I will create a new thread for this issue.
    Regards,
    Vargi

  • No parallel BPM instances

    Hello people, good morning!
    Could someone explain how I can avoid two instances of an integration process running at the same time?
    As I receive many starts of an integration process in a short period of time, I would like to enqueue the executions one at a time for performance reasons.
    Thank you,
    Alessandro.
    Message was edited by:
            Alessandro Reichert

    If you are on SP 10 on PI, this is possible by configuring just a single queue for your BPM instance.
    More in this guide,
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f0e73c7b-5301-2a10-f1ab-832f301b6c02
    Regards
    Bhavesh

  • How can I force TestStand to create multiple instances of an ActiveX server instead of sharing the same reference?

    Hi, All:
         I am trying to migrate an application coded in VC++ to TestStand + CVI. This application opens multiple serial COM ports, sends commands to test targets, and gets responses from those targets. The VC++ application creates multiple threads, and every thread creates an ActiveX server instance responsible for opening its serial port, sending test commands, and getting responses. It works fine: every thread has its own ActiveX instance, so data and commands are handled by the right serial port.
         Yet when I work in TestStand, I get problems. I chose the parallel model as the process model. I create an ActiveX object reference with the ActiveX/COM Adapter and store the reference in a sequence local variable. With only one test socket, it is OK. But with multiple test sockets, communication between the application and the test targets all goes directly to the last test target. I popped up a message within each execution to show the value of the ActiveX object reference, and all of them are identical. This is not the behavior I want.
        So, is it possible to force TestStand to create independent instances of an ActiveX server? How can I make this work?
    Thanks

    Thanks for your comment, Dan.
          Yet I do have problems, as none of us knows how to program an ActiveX server so that it can be forced to be single-instance or multiple-instance.
    As I mentioned earlier, an existing application written in VC++ can create multiple instances of this ActiveX server; all I have to do is create multiple threads and initiate an instance of the server in each thread to control multiple test targets.
         My colleague said he created the server with VC++ ATL and configured it to be dual-interface and STA. On the VC++ application (client) side, I use a smart pointer in each thread to create an instance of the server:
     IMySerialServerPtr pIMySerialServer;   // smart-pointer typedef generated by #import
     HRESULT hr = pIMySerialServer.CreateInstance("UUTCmd.MySerialServer");
     if (FAILED(hr))
         AfxMessageBox("Fail to create instance.");
    And then I get multiple instances of the ActiveX server. Every thread can have its own COM port and can send/receive commands independently. I have no idea how to make this work in TestStand. Would you show me some reference documents or sample code?
    Thank you
    Cipher

  • Adding parallel loops programmatically

    Hi!
    I'm building a system with some instruments, and I want to use the same instrument VI for all instruments.
    One way is to set it up as in 'Possible solution..' (see image).
    Is there some way I can loop through the 'hardware settings' array and create as many instances as necessary? Like in 'More what I want...' (see image)?
    Attachments:
    parallel loops.PNG ‏15 KB

    Hello again thread!
    So I'm back on the same problem...
    Summary:
    What I want to do is something like in the new picture 'parallel loops2.png': start X parallel loops that each handle the communication with a specific instrument. At the moment I have 9 instruments connected.
    If I use 'Configure Iteration Parallelism' I can set the 'Number of generated parallel loop instances' to the maximum (64 for me) and then use the P terminal with the 'Array Size' VI to get my 9 instrument loops running.
    Reading this white paper
    http://www.ni.com/white-paper/9393/en/ ('Improving Performance with Parallel For Loops')
    I get the feeling that the way I solved it is not the right way, since the P terminal should equal the number of cores in the computer.
    The Run! VI does not have any outputs wired to its connector pane. It is reentrant (Preallocate clones - no debugging allowed).
    I haven't looked at asynchronous calls yet. Is that the way to go?
    Attachments:
    parallel loops2.PNG ‏3 KB

  • How to create parallel tasks using parallel for loops

    Hi,
    I am setting up a program that communicates with six logic controllers and has to read the system status every 100 ms. We are using OPC DataSockets for this, and they appear a little slow.
    I have created a uniform communication method for all controllers, and now I find myself programming this method six times to communicate with each system. I am wondering if this could be done more elegantly using the parallel for loop, in which case I would program the exchange once and then have six workers running simultaneously. Since a picture is clearer than a thousand words, what I am asking is this:
    Is it possible to replace something like
    by
    and have this for loop run these tasks in parallel (on different cores / in different threads)?
    I have configured the loop to create 8 instances at compile time, so I would have 2 surplus instances available at runtime if I find I need an additional system.
    To me, the benefits of the method shown in the second picture are:
    * takes less space
    * modifications have to be made only once
    * fewer blocks, wires, and stuff makes it clearer what's going on
    * flexibility in the actual number of tasks running (8 instances available at runtime)
    * if more tasks are required, I need only update the maximum number of instances and recompile, i.e. no cutting and pasting required.
    Unfortunately, I don't have those systems available yet, so there's no way to test this. Still, I would like to know if the above works as I expect - unfortunately the LabVIEW help is not completely clear to me on this.
    Best regards,
    Frans 

    Dear mfletcher,
    First of all: thanks for confirming that my intuition was right in this case.
    As for your question on the help: below is a copy/paste from the help on the 'configure parallelism dialog box' 
    Number of generated parallel loop instances—Determines the number of For Loop instances LabVIEW generates at compile time. The Number of generated parallel loop instances should equal the number of logical processors on which you expect the VI to execute. If you plan to distribute the VI to multiple computers, Number of generated parallel loop instances should equal the maximum number of logical processors you expect any of those computers to ever contain. Use the parallel instances terminal on the For Loop to specify how many of the generated instances to use at run time. If you wire a larger number to the parallel instances terminal than you specify in this dialog box, LabVIEW only executes as many loop instances as you specify here.
    The reason I doubted that what I programmed would work the way I intended is that the help only mentions processors here, which would be read as actual cores. Thus on a dual-core machine, the number should be 2.
    I think it would be helpful to mention something about threads here, because in some cases one would like to have more parallel threads than there are cores in the system.
    In my case I would like to create six threads, which on my dual-core processor would be spread over only two cores; these six threads then run in parallel. I know that in the case of heavy math that would not help, but I am doing communications, which have timeouts and such, and that probably runs more smoothly in six parallel tasks even though I only have two cores.
    Hope this helps in improving the help of the for loop.
    Regards,
    Frans 

  • ORACLE PARALLEL SERVER (OPS)

    Product: ORACLE SERVER
    Date written: 2004-08-13
    ORACLE PARALLEL SERVER (OPS)
    ==============================
    PURPOSE
    The following describes the architecture of OPS (Oracle Parallel Server).
    SCOPE
    In Standard Edition, the Real Application Clusters feature is supported only from 10g (10.1.0) onward.
    Explanation
    1. Parallel Server Architecture
    OPS is a multi-processing configuration in which multiple users simultaneously access one database through each node, using the DLM (PCM) provided for smooth resource sharing among a number of loosely coupled systems, in order to maximize system availability and overall performance.
    (1) Loosely Coupled System
    In contrast to tightly coupled systems such as SMP machines, this is a shared-disk architecture for sharing resources such as data files and print queues among the systems; inter-node communication uses a common high-speed bus.
    (2) Distributed Lock Manager (DLM)
    The software that coordinates and manages resource sharing in a loosely coupled system. When applications request access to the same resource at the same time, it keeps them synchronized and ensures that no conflicts occur.
    The main services of the DLM are:
    - Maintaining the current "ownership" of each resource
    - Accepting resource requests from application processes
    - Notifying a process when the requested resource becomes available
    - Granting exclusive access to a resource
    (3) Parallel Cache Management (PCM)
    A distributed lock allocated to manage one or more data blocks within a data file is called a PCM lock. For an instance to access a particular resource, it must become the "owner" of the master copy of that resource, which in turn means becoming the owner of the distributed lock that covers it. This ownership is retained until another instance requests an update to the same data block, or to another data block covered by the same PCM lock.
    Before ownership passes from one instance to another, the modified data block is always written to disk, so cache coherency between the instances on each node is strictly guaranteed.
    2. Characteristics of Parallel Server
    - An Oracle instance can be started on each node in the network
    - Each instance consists of an SGA plus background processes
    - All instances share the control file and the datafiles
    - Each instance has its own redo log
    - The control file, datafiles, and redo log files reside on one or more disks
    - Multiple users can run transactions through each instance
    - Row locking mode is maintained
    3. Tuning Focus
    In an OPS environment, where the resources of one database are used simultaneously through different nodes, lock management between instances is unavoidable if data consistency and continuity are to be preserved. To minimize overhead such as the transfer of resource ownership between instances mentioned above (the pinging phenomenon), efficient application partitioning (distribution of jobs) is the most important practical factor. In other words, cross-access to the same resource through different nodes must be minimized.
    The following are tuning points at the database structure level in an OPS environment: the GC (Global Constant) parameters related to PCM locks, the options to apply to storage, and other necessary items.
    (1) Initial Parameters
    In an OPS environment, the GC (Global Constant) parameters that define the PCM locks have a decisive influence on lock management and must be set to the same values on every node (except gc_lck_procs).
    On a typical UNIX system, the total number of PCM locks defined through the GC parameters can be set within the "Number of Resources" range provided by the system's DLM configuration.
    - gc_db_locks
    The parameter that defines the total number of PCM locks (distributed locks); it must be larger than the sum of the locks defined in the gc_file_to_locks parameter.
    If it is set too small, each PCM lock covers relatively many data blocks, so the likelihood of pinging (and false pinging) grows accordingly, and the resulting overhead can severely degrade system performance. It should therefore be set as large as possible.
    - False Pinging
    Because one PCM lock covers many data blocks, pinging can occur through the influence of another block under the same PCM lock rather than the block itself; this is called "false pinging".
    The pinging count per database object can be checked as shown below; cases where sum(xnc) > 5 (V$PING) deserve particular attention.
    - gc_file_to_locks
    The total PCM locks defined by gc_db_locks are ultimately apportioned across the datafiles. This parameter allocates those locks to each datafile appropriately, based on the administrator's analysis.
    That analysis should cover details such as the nature of the objects in each datafile, the transaction types, and the access frequency; an appropriate and efficient distribution of data blocks relative to the total PCM locks is essential.
    After apportioning the PCM locks to each datafile with this parameter, the status can be checked in the following fixed tables.
    Sample: gc_db_locks = 1000
            gc_file_to_locks = "1=500:5=200"
    X$KCLFI ----> check the defined buckets
    Fileno  Bucket
    1       1
    2       0
    3       0
    4       0
    5       2
    X$KCLFH ----> check the locks allocated per bucket
    Bucket  Locks  Grouping  Start
    0       300    1         0
    1       500    1         300
    2       200    1         800
    The sum of the per-datafile PCM locks defined in gc_file_to_locks cannot, of course, exceed gc_db_locks.
    The following statement shows the number of data blocks allocated to each datafile.
    select e.file_id id,f.file_name name,sum(e.blocks) allocated,
    f.blocks "file size"
    from dba_extents e,dba_data_files f
    where e.file_id = f.file_id
    group by e.file_id,f.file_name,f.blocks
    order by e.file_id;
    - gc_rollback_segments
    Defines the total number of rollback segments created in the instances of each OPS node (as defined in rollback_segments in init.ora).
    Multiple instances can share rollback segments, but in an OPS environment the resulting contention overhead is enormous, so each instance must have its own rollback segments, and two instances cannot use the same segment name.
    select count(*) from dba_rollback_segs
    where status='ONLINE';
    Set the parameter to at least the value returned by the query above.
    - gc_rollback_locks
    Defines the number of distributed locks granted to the rollback segment blocks that are modified concurrently within one rollback segment.
    Total# of RBS Locks = gc_rollback_locks * (gc_rollback_segments + 1)
    The "1" added above accounts for the system rollback segment (for example, with gc_rollback_locks = 20 and gc_rollback_segments = 4, the total is 20 * (4 + 1) = 100 locks). Define an appropriate number of locks relative to the total number of rollback segment blocks.
    The following statement shows the number of blocks allocated to the rollback segments.
    select s.segment_name name,sum(r.blocks) blocks
    from dba_segments s,dba_extents r
    where s.segment_name = r.segment_name
    and s.segment_type = 'ROLLBACK'
    group by s.segment_name;
    - gc_save_rollback_locks
    A tablespace can be taken offline even while some transaction is accessing an object inside it. Undo generated after the tablespace goes offline is recorded in a "deferred rollback segment" in the SYSTEM tablespace to preserve read consistency. This parameter defines the locks allocated to the deferred rollback segments created at that time.
    In general, a value on the order of gc_rollback_locks is adequate.
    - gc_segments
    Defines the locks covering all segment header blocks. If this value is too small, the likelihood of pinging again grows, so set it to at least the number of segments defined in the database.
    select count(*) from dba_segments
    where segment_type in ('INDEX','TABLE','CLUSTER');
    - gc_tablespaces
    Defines the maximum number of tablespaces that can be switched simultaneously from offline to online or from online to offline in an OPS environment. To be safe, set it to the number of tablespaces defined in the database.
    select count(*) from dba_tablespaces;
    - gc_lck_procs
    Determines the number of background lock processes; up to 10 can be configured (LCK0-LCK9). One is started by default, but the number should be increased as needed.
    (2) Storage Options
    - Free Lists
    A free list is a list of available free blocks. When an insert or an update needs newly available space, the database always searches the common pool of blocks holding the free space list and related information; if enough blocks cannot be secured, Oracle allocates a new extent.
    When many transactions hit a particular object at the same time, multiple free lists reduce the contention for free space accordingly. Increasing the number of free lists appropriately for the object's nature and access pattern can therefore bring a large benefit.
    For example, for an object with frequent inserts, or updates that grow rows, increase it to around 3 - 5 depending on the access frequency.
    - freelist groups
    Defines the number of freelist groups; in an OPS environment this is typically set to the number of instances. By assigning a particular object's extents to a particular instance and maintaining freelist groups for that instance, free lists can also be managed per instance.
    (3) Other
    - Initrans
    The initial number of transaction entries needed for concurrent access to a data block; 23 bytes of space are pre-allocated per entry. The default is "1" for tables and "2" for indexes and clusters. For heavily accessed objects, set it appropriately with concurrent transactions in mind.
    4. Application Partition
    This is the most important part of OPS application design. The basic principles of partitioning are as follows:
    . Separate read-intensive data from write-intensive data into different tablespaces.
    . As far as possible, run each application on only one node. This effectively partitions the data accessed by the different applications.
    . Allocate temporary tablespaces per node.
    . Keep the rollback segments of each node independent.
    5. Backup & Recovery
    Sites running OPS generally operate online 24 * 365, so the database as a whole must be run in archive log mode with hot backups, and the key question when a failure occurs is how quickly the database can be recovered completely.
    Everything about general backup & recovery is the same as when operating the database in exclusive mode.
    (1) Backup
    - Hot Backup Internals
    With the DB operating normally in archive mode, the online data files are backed up, one tablespace at a time.
    When alter tablespace ... begin backup is executed, a checkpoint is issued for the datafiles of that tablespace: the dirty buffers in memory are written to those datafiles (disk), and at the same time the checkpoint SCN is updated in the header of every datafile in hot backup mode, which becomes an important point at recovery time.
    Also, until alter tablespace ... end backup is executed, i.e. while the hot backup is in progress, those datafiles produce "fuzzy" backup data, and when any record is modified the whole block is written to the redo log, so you can observe additional archive files being generated. Consequently, if the administrator backs up the datafiles but does not execute end backup, overall system performance is seriously affected, so take particular care.
    Whether a hot backup is in progress can be checked with the following statement:
    select * from v$backup; -> check status
    - Hot Backup Step (Recommended)
    ① alter system archive log current
    ② alter tablespace tablespacename begin backup
    ③ backup the datafiles,control files,redo log files
    ④ alter tablespace tablespacename end backup
    ⑤ alter database backup controlfile to 'filespec'
    ⑥ alter database backup controlfile to trace noresetlogs(safety)
    ⑦ alter system archive log current
    (2) Recovery
    - Instance failure
    In an OPS environment, when one instance fails, the SMON of another instance detects it immediately and automatically performs recovery for the failed instance. The surviving instance recovers using the redo log entries and rollback images generated by the failed instance. In a multi-node failure, the next instance to open takes over this role. Recovery of all online datafiles the failed instance was accessing proceeds as well; if, as part of this process, verification of the datafiles fails and instance recovery does not complete, execute the following SQL statement:
    alter system check datafiles global;
    - Media failure
    For media failures, which occur in many forms, restore the backup copy and then perform complete or incomplete media recovery; this is no different from operating the database in exclusive mode.
    All archived log files generated per node, i.e. per thread, are of course required, and the recovery can be carried out from any of the OPS nodes.
    - Parallel Recovery
    For instance or media failure, ORACLE 7.1 and later supports parallel recovery at the instance level (init.ora) or the command level (recover ...). Several recovery processes can read the redo log files simultaneously and apply the after images to the datafiles. The Recovery_Parallelism parameter sets the number of concurrent recovery processes and cannot exceed the value of the Parallel_Max_Servers parameter.
    (3) Errors that can occur during operation
    - ORA-1187
    ORA-1187: can not read from file name because it
    failed verification tests.
    (Situation) After create tablespace ... on one node and a period of normal operation, accessing a particular object through another node raises ORA-1187.
    (Cause) On the other node, the owner, group, mode, etc. of the raw disk were changed after the tablespace had been created. (An administrator fault.)
    (Action) SQL> alter system check datafiles global;

    hal lavender wrote:
    Hi,
    I am trying to achieve Load Balancing & Failover of Database requests to two of the nodes in 8i OPS.
    Both the nodes are located in the same data center.
    Here comes the config of one of the connection pools.
    <JDBCConnectionPool CapacityIncrement="5" ConnLeakProfilingEnabled="true"
    DriverName="oracle.jdbc.driver.OracleDriver" InactiveConnectionTimeoutSeconds="0"
    InitialCapacity="10" MaxCapacity="25" Name="db1Connection598011" PasswordEncrypted="{3DES}ARaEjYZ58HfKOKk41unCdQ=="
    Properties="user=ts2user" Targets="ngusCluster12,ngusCluster34" TestConnectionsOnCreate="false"
    TestConnectionsOnRelease="false" TestConnectionsOnReserve="true" TestFrequencySeconds="0"
    TestTableName="SQL SELECT 1 FROM DUAL" URL="jdbc:oracle:thin:@192.22.11.160:1421:dbinst01" />
    <JDBCConnectionPool CapacityIncrement="5" ConnLeakProfilingEnabled="true"
    DriverName="oracle.jdbc.driver.OracleDriver" InactiveConnectionTimeoutSeconds="0"
    InitialCapacity="10" MaxCapacity="25" Name="db2Connection598011" PasswordEncrypted="{3DES}ARaEjYZ58HfKOKk41unCdQ=="
    Properties="user=ts2user" Targets="ngusCluster12,ngusCluster34" TestConnectionsOnCreate="false"
    TestConnectionsOnRelease="false" TestConnectionsOnReserve="true" TestFrequencySeconds="0"
    TestTableName="SQL SELECT 1 FROM DUAL" URL="jdbc:oracle:thin:@192.22.11.161:1421:dbinst01" />
    <JDBCMultiPool AlgorithmType="Load-Balancing" Name="pooledConnection598011"
    PoolList="db1Connection598011,db2Connection598011" Targets="ngusCluster12,ngusCluster34" />
    Please let me know if you need further information.
    Hal

    Hi Hal. All that seems fine, as it should be. Tell me how you enact a failure such that you'd expect one pool to still be good when the other is bad.
    thanks,
    Joe

  • EntryProcessor invokeAll returning empty ConverterMap{}

    Hi All,
    I am trying to write a custom entry processor, and whatever I return from the invokeAll method of the entry processor, I always get back an empty ConverterMap{}. The code for the entry processor is as below:
    public class CustomEP implements PortableObject, EntryProcessor {

        public CustomEP() {
        }

        public Map processAll(Set entries) {
            Map results = new HashMap();
            results.put("1", "1");
            System.out.println("Inside processAll method");
            return results;
        }

        public Object process(Entry arg0) {
            Map results = new HashMap();
            results.put("1", "1");
            System.out.println("Inside process method");
            return results;
        }
    }
    The client code to invoke this entry processor is as below:
    Map results=cache.invokeAll(AlwaysFilter.INSTANCE, new CustomEP());
    The processAll method on the Coherence nodes is invoked, but when I print the results on the client side, it returns an empty ConverterMap{}.
    On the other hand, if I invoke the process method of CustomEP as below:
    Map results=(Map) cache.invoke(AlwaysFilter.INSTANCE, new CustomEP());
    I get the desired results. Please help me understand why it happens this way when the return type of processAll is a Map.
    Thanks a lot!
    Regards,
    S

    911767 wrote:
    Hi Robert and JK,
    Thank you for your reply and time!
    I could not find in any of the documentation the detail that the keys in the result map should be a subset of the keys passed to the processAll method. Anyway, my problem is to invoke server-side code (avoiding deserialization) by passing a filter, and then to build an entirely new map (whose keys and values differ from the entries extracted by the passed filter) by reading the data from the passed entries. How can I implement that?
    I am thinking of using an aggregator, as aggregators are read-only and faster, but again, how do I implement it using:
    public Object aggregate(Set entries){
    Again, I am getting an empty map, so is it necessary that the keys of the returned object match the set of entries passed to this method?
    Secondly, there are other methods, such as finalizeResult() and init(), if I extend AbstractAggregator; do I need to implement them, and if so, how? The entries set passed to the aggregate() method may not reside on the same node.
    Please advise!
    Regards,
    S

    Hi S,
    the process() return value object, or the entry value objects in the map returned by processAll(), can be arbitrary objects. So you just return a map from process(), and return a map as the entry value in the result map from processAll().
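    To illustrate, here is a minimal, hypothetical sketch of an entry processor whose processAll() keys its result map by the processed entries' keys, with a Map as each per-entry result value (extending Coherence's AbstractProcessor here is my choice, and serialization support such as PortableObject is omitted for brevity):
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    public class MapReturningEP extends AbstractProcessor {

        public Object process(InvocableMap.Entry entry) {
            // The per-entry result can be an arbitrary object, e.g. a Map.
            Map result = new HashMap();
            result.put("1", "1");
            return result;
        }

        public Map processAll(Set setEntries) {
            Map mapResults = new HashMap();
            for (Object o : setEntries) {
                InvocableMap.Entry entry = (InvocableMap.Entry) o;
                // Key the result map by the entry's key, not an arbitrary key,
                // so the client-side result map can relate results to entries.
                mapResults.put(entry.getKey(), process(entry));
            }
            return mapResults;
        }
    }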
    The AbstractAggregator has a fairly badly documented contract in the Javadoc (it does not properly cover the values received in the different invocation scenarios). You should probably read the section about it in the Coherence book, which explains leveraging AbstractAggregator in significantly more detail. It also happens to be in the sample chapter, but I recommend reading the entire book.
    I am not sure about the issues relating to posting links to PDFs on Packt's webpage, so I won't do that. Please go to Packt's webpage (http://www.packtpub.com ), look for the Coherence book there, and download the sample chapter (or order the book).
    In short, all 3 to-be-implemented methods (init(), process(), finalizeResult()) in AbstractAggregator are called both on the server and on the caller side. You can distinguish which side you are on by looking at both the passed-in fFinal boolean parameter and the m_fParallel attribute of the aggregator instance.
    There are 3 cases:
    - non-parallel aggregation processing extracted values (m_fParallel is false; I don't remember what fFinal is in this case),
    - parallel aggregation, storage side, processing extracted values (if I remember correctly, m_fParallel is true and fFinal is false),
    - parallel aggregation, caller side, processing parallel results (m_fParallel and fFinal are both true).
    Depending on which side you are on, the process method receives different object types (on the server side it receives the extracted value; on the caller side it receives a parallel result object instance).
    You SHOULD NOT override any of the other methods (e.g. aggregate() which you mentioned).
    The advantage of this approach is that the AbstractAggregator subclass instance can pass itself off as a parallel-aggregator instance.
    You should put together a temporary result in a member attribute of the AbstractAggregator subclass, which also means that it will likely not be thread-safe; but at the moment that is not a problem, as it is called only on a single thread.
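    Below is a minimal, hypothetical sketch of the pattern Robert describes, assuming the Coherence 3.x AbstractAggregator API (the class name and the summed attribute are made up for illustration). The same three methods run on both the storage and the caller side, with fFinal distinguishing extracted values from partial results:
    import com.tangosol.util.ValueExtractor;
    import com.tangosol.util.aggregator.AbstractAggregator;

    public class SimpleLongSum extends AbstractAggregator {

        // Temporary result held in a member attribute (single-threaded use).
        private transient long m_lSum;

        public SimpleLongSum() {
            // Default constructor required for deserialization.
        }

        public SimpleLongSum(ValueExtractor extractor) {
            super(extractor);
        }

        protected void init(boolean fFinal) {
            m_lSum = 0; // reset the accumulator on both sides
        }

        protected void process(Object o, boolean fFinal) {
            if (o != null) {
                // fFinal == true (caller side): o is a partial result from one
                // storage node; fFinal == false (storage side): o is an
                // extracted value.
                m_lSum += ((Number) o).longValue();
            }
        }

        protected Object finalizeResult(boolean fFinal) {
            // Storage side returns a partial result; caller side the final sum.
            return Long.valueOf(m_lSum);
        }
    }
    A caller would then run something like cache.aggregate(AlwaysFilter.INSTANCE, new SimpleLongSum(new ReflectionExtractor("getAmount"))), where ReflectionExtractor and the getAmount accessor are assumptions for this example.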
    Best regards,
    Robert
    Edited by: robvarga on Feb 3, 2012 10:38 AM

  • Vista Share Does Not Appear in Finder Shared Section

    I consider myself a fairly technical guy, but I gotta say, I'm extremely frustrated trying to get a Vista64 machine to show up in the Shared section of Finder on my MacBook Pro (10.5.6). This has been a problem for me since Leopard came out, upgrade or new installation.
    I can access the Vista shares via Finder > Go > Connect to Server, but I'm tired of this manual work-around. I'd like to see the PC available on the sidebar.
    What's given me hope is that I can see other PC shares on other people's networks. This weekend I was on an Airport Express-driven network and I could see a Vista32 and an XP machine in the sidebar. Having built and configured all 3 components of that setup, I know I did nothing special with the Vista NTLMv2 settings, or the Airport Express DNS server, or my own MacBook Pro's settings in order to see those PCs in the sidebar. We're all on the same workgroup setting, on their network and on mine at home, so no workgroup changes were made or required.
    Does anyone have suggestions to remedy this? Why could I see the shares in the Finder sidebar on that network, with standard-configuration PCs and router, but not on my own network at home?
    I see lots of related discussions that end with the OP saying "Thanks for your post, but it still doesn't work". Is this a known bug? What happened to the ease with which Tiger would display the PC shares?
    Thank you

    I'm using an Airport Extreme, and I'm not aware of UPnP settings in the Airport Utility.
    On the Mac, I set up a Parallels Vista instance in bridged mode and verified that I can see the other Vista PC share folders, so I know that the Vista PC has been set up correctly.
    Additionally, as mentioned before, I switched DNS Servers several times, but settled on an alternative Rogers DNS to avoid their awful failed lookup ad page (64.71.255.202 if anyone needs it).
    I also tried adding the Vista PC IP to my Mac hosts file, which oddly showed the Mac on the Vista side, but nothing on the Mac side.
    I also goofed around in the smb.conf file with some settings, but alas, nada.
    The only thing that might explain why the Airport Express network (Vista, XP) provided me with shares in the sidebar and my own Airport Extreme network (just Vista) did not could be a master browser issue. I saw a couple of posts about this.
    If the Vista machine is the master browser, nothing will show in the sidebar, but if XP or a Mac is the master browser, then the sidebar will show shares. I don't have a way to verify this, as I'm not sure how to force one machine to be the master browser over another. I'm also doubtful this will fix anything, as I've found other posts describing exactly the opposite scenario.
    Is there anyone out there that has had a network, possibly with Airport Extreme, one Mac (Leopard), one Vista PC, and Vista share is showing in the Finder sidebar?

  • Cases where a SQL statement switches from RULE to COST-BASED

    Product: ORACLE SERVER
    Date written: 2004-05-28
    Cases where a SQL statement switches from RULE to COST-BASED
    ==============================================
    PURPOSE
    This note looks at the cases in which a SQL statement is automatically switched to cost-based mode.
    Explanation
    Even when a SQL statement is executed in rule-based mode, the optimizer may switch it to cost-based mode.
    This can happen when the SQL involves any of the following:
    - Partitioned tables
    - Index-organized tables
    - Reverse key indexes
    - Function-based indexes
    - SAMPLE clauses in a SELECT statement
    - Parallel execution and parallel DML
    - Star transformations
    - Star joins
    - Extensible optimizer
    - Query rewrite (materialized views)
    - Progress meter
    - Hash joins
    - Bitmap indexes
    - Partition views (release 7.3)
    - A hint (any hint other than RULE or DRIVING_SITE)
    - The FIRST_ROWS or ALL_ROWS optimizer mode, which uses the CBO even without statistics
    - A parallel degree set on a table or index, or INSTANCES set (including DEFAULT)
    - A domain index (e.g. a Text index) created on the table

  • Performance issues in BPEL

    We have an integration wherein messages coming via B2B are passed on to a BPEL process for further processing.
    When we test with about 500 transactions, i.e. 500 separate files, each containing one transaction (one ST/SE transaction data block), only about 10 transactions are processed in parallel by the BPEL process, while the remaining transactions go into the manual recovery queue.
    So at any time only about 10 BPEL instances get created, and only once a few of those transactions are completely processed do transactions from the manual recovery queue slowly get assigned to new BPEL instances.
    We expect a load of about 16,000 transactions for the entire day, and the concern is that we need all of these B2B transactions processed by the BPEL process in about 5 to 6 hours. Is it possible to change the configuration settings so that more parallel BPEL instances are triggered at the same time, to increase the processing speed on the BPEL side?
    Could someone please help us with this slowness issue in BPEL.
    thanks

    Have a look at this doc
    http://www.oracle.com/technology/tech/soa/soa-suite-best-practices/soa_best_practices_1013x_drop1.pdf
    The section on threading should answer your question. You need to set the dspMaxThreads property to the appropriate level for your environment. Make sure your hardware is able to cope.
    cheers
    James

  • 3D graphs over remote desktop

    This possibly can't be solved at the LabVIEW end of the problem, but it's worth mentioning anyway.
    I'm running LabVIEW remotely from my home computer to a network. It all works, except it won't display the 3D graphs.
    Is this a common problem with ActiveX regions?
    Can it be solved somehow? (I'm currently doing a 'GetImage' from the method nodes and simply grabbing an image of the graph and saving it in order to see it.)
    Help!

    JamesC wrote:
    Hi John,
    This is an issue with how ActiveX instances are created, and it is documented in this KB. The easiest workaround would be to upgrade to LabVIEW 2009, as we have a new set of LVOOP-based 3D graphs, which will then work with the LabVIEW run-time engine.
    Regards
    JamesC
    NIUK and Ireland
    It only takes a second to rate an answer
    But you can't do everything a CW 3D graph can do with the LVOOP version.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • EOIO & BPE Message Sequencing

    Hi,
    I'm sure this is an absurdly simple thing I've overlooked but...
    Scenario:
    I have files being processed through file adapter in EOIO mode.
    Messages get thrown into one of two IP's (create and update).
    Each IP has 4 exit points, and for one (or all) of them the messages must appear in the same order as the messages arrived on the original queue (update processing).
    Problem:
    Files are loaded in alphanumeric order.... OK!
    Messages are in order in qRFC.... OK!
    Messages get loaded into many parallel BPE instances.... OK!
    BPE instances handle the mappings and then send messages as soon as a send step is reached, rather than completing the BPM and then firing in batches in the same sequence order in which they arrived in qRFC... NOT OK!
    Any ideas?
    I'm thinking correlations perhaps.
    Cheers
    James.

    Hi Michal
    No, it's also used to do error handling and alert raising. Changing it so that no IP is required isn't an option at this time.
    If I could change it so that there was only 1 BPE thread running for this business system or SWCV...
    UPDATE: Just noticed How to avoid second BPM instance
    So much for my searching yesterday. Seems I'm not the only person. Still, it doesn't help. Ah well!
    Cheers,
    James.

  • Subprocess Relation with Main Process when Invoked programmatically.

    Hi
    JDeveloper 11.1.1.6, WLS 10.3.6, BPM 11.1.1.6
    I have 2 processes A and B. From A I want to invoke B.
    I can think of 2 approaches to do this.
    Approach 1. Call subprocess B from A using Send and Receive activities.
    If we do it this way, when an instance of B is created, it is created as a child of process A (when seen in the EM console).
    Approach 2. Use a web service call in the main process. Let's say it is a Java web service, and we use the following API to invoke the process:
    IInstanceManagementService ims = Fixture.getBPMServiceClient().getInstanceManagementService();
    Task task = ims.createProcessInstanceTask(bpmContext, pms.getCompositeDN()+"/"+pms.getProcessName());
    But I think the instance will be created as a separate process.
    So my questions are as follows.
    a. In Approach 2, is there a way to make subprocess B a child of process A?
    b. Subprocess B might have to be called more than once in parallel, with different parameters each time.
    I want to understand which is the better approach:
    Use Approach 1 to call the subprocess from a loop with parallel multi-instance, using an array to pass the parameters for each subprocess call?
    Or
    Use Approach 2?
    Thanks for any help
    Sameer

    Hi Sameer,
    Your send and receive events would be the better of the two options you've listed. If the two processes are in the same composite project and send and receive events are used, there is less overhead in calling the subprocess.
    I think you were alluding to this, but you'll also retain the audit trail information if you take the first approach, not just in Enterprise Manager but also in the Workspace.
    Dan
