Parallel instances?

Hello, I have created another instance on a single machine.
When starting up the new one I got an error: "error starting up database in exclusive mode".
So I shut down the older database instance in order to start the new one.
Now, is it possible to run both instances in parallel, without shutting either one down?
Thanks

P. Forstmann wrote:
You must not use the same PFILE or SPFILE as an existing database.
Oops!! I have done that already, and the database is running. Thanks for the advice, but what effect would it have on the new database?
P. Forstmann wrote:
If you want to create a database manually, you need to create a PFILE by hand and make sure that any file name in this PFILE (like the CONTROL_FILES parameter) and any directory name are distinct from those of any existing database on the same machine.
From the Oracle docs I found a sample PFILE, and I changed only a few parameters like the DB name, ORACLE_HOME and ORACLE_BASE (I have kept the same ORACLE_HOME and ORACLE_BASE).
The database is running; what disadvantages could I run into?
P. Forstmann wrote:
It would be much easier if you used DBCA to create the new database, because DBCA will do this automatically for you (including the ORADIM steps).
I knew about DBCA, but I was trying to do it with the command-line interface only!
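For reference, a minimal PFILE sketch for a second instance on the same machine - every name and path below is illustrative, not taken from this thread. The point is that DB_NAME, the control files and the data/dump locations must not collide with the first database:

# initDB2.ora - hypothetical second instance on the same host
db_name = DB2
control_files = (C:\oracle\oradata\DB2\control01.ctl, C:\oracle\oradata\DB2\control02.ctl)
db_block_size = 8192
undo_tablespace = UNDOTBS1
# keep memory modest so both instances fit in RAM together
sga_target = 300M
pga_aggregate_target = 100M

With distinct settings like these, both instances can run in parallel; each one just has to be started under its own ORACLE_SID (and, on Windows, its own ORADIM-created service) before STARTUP. Sharing ORACLE_HOME between two databases is fine; reusing the same db_name, control files or SPFILE is what leads to clashes like the "exclusive mode" error above.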
Thanks
Edited by: newbieDBA on Feb 20, 2013 9:33 AM

Similar Messages

  • How to limit the number of parallel instances of a given event-based job?

    Hi,
    DB version : 11.2.0.3
    I'm using an event-based job with parallel_instances attribute set to true.
    It works perfectly fine, messages are enqueued then dequeued and processed simultaneously by separate instances of the same job.
    Now, I'd like to limit the number of parallel instances of this job running at the same time, say to an arbitrary number n.
    What would be the best way to do that?
    Maybe using a notification callback is a better option in this case?
    Thanks in advance for any input.
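    One possible approach - a hedged sketch, not something confirmed in this thread: leave parallel_instances enabled, but put the job in a Scheduler job class whose resource consumer group caps active sessions at n, so at most n instances of the job run concurrently and the rest wait in the pool. All object names below are made up:

    begin
      dbms_resource_manager.create_pending_area;
      dbms_resource_manager.create_consumer_group(
        consumer_group => 'EVT_JOB_GRP',
        comment        => 'caps concurrency of the event-based job');
      dbms_resource_manager.create_plan(
        plan    => 'EVT_JOB_PLAN',
        comment => 'active session pool limits job concurrency');
      dbms_resource_manager.create_plan_directive(
        plan                => 'EVT_JOB_PLAN',
        group_or_subplan    => 'EVT_JOB_GRP',
        comment             => 'at most n = 4 active sessions',
        active_sess_pool_p1 => 4);
      dbms_resource_manager.create_plan_directive(
        plan             => 'EVT_JOB_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'mandatory catch-all directive');
      dbms_resource_manager.submit_pending_area;
    end;
    /
    begin
      dbms_scheduler.create_job_class(
        job_class_name          => 'EVT_JOB_CLASS',
        resource_consumer_group => 'EVT_JOB_GRP');
      dbms_scheduler.set_attribute('MY_EVENT_JOB', 'job_class', 'EVT_JOB_CLASS');
    end;
    /

    The plan only takes effect while it is the active resource plan (alter system set resource_manager_plan = 'EVT_JOB_PLAN'); excess job instances then queue in the session pool rather than erroring out.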


  • Parallel instances using ONE Action Engine

    Hi Everyone.
    I just wanted to see what type of ideas this would bring to this issue.
    I have an AE that holds all the pointers (or references) that 20 instances of a reentrant VI use in parallel. Obviously the issue is execution speed, as all the instances are waiting to read the AE (and probably a heap of other issues I haven't encountered yet).
    I have attached an example of the AE that I am using, where I also show HOW I'm using it in the bigger picture.
    Please DO NOT RUN the example; just open it and look at the BD. That is where I put all the information, and it was the best way I could think of to explain the issue.
    Kas
    Attachments:
    Example.zip 63 KB

    johnsold wrote:
    1. Are items 1 and 2 in Example.vi or the image guaranteed to run before 3 and 4 start? Or can segments be added while the processing is occurring?
    2. How do the changes made by item 4 affect the data not yet processed by item 3?  How do they affect segments already processed?
    3. The only value on FGV.vi Segment: Out which is meaningful is the last one. The compiler is probably smart enough to not update it on earlier iterations but there might be a small speed improvement to moving the terminal outside the for loop.
    4. When are the Get Segments and Get Items commands used?
    5. Could a parallelized for loop do what you want rather than multiple reentrant VIs?
    Ok, sorry for not properly explaining things. My explanation of my problem made sense in my head, so I thought everyone else would get it.
    1. Items 1 and 2 are sequential, so they are guaranteed to run before 3 and 4. However, when 3 (the reentrant VI) is executed and suddenly gets an error (i.e. TCP timeout 56), then the segment function that was initially changed to "in process" when taken out (before the timeout happened) is set back to "waiting", so that until the error gets sorted, another reentrant VI can take the segment and do something with it. Also, if the reentrant VI in step 3 executes without errors, the output data is sent to item 4, which sets the function of the segment in the AE to either "Damaged" or "Missing". This happens while all the other reentrant VIs (in item 3) are still running.
    2. Item 4 will ONLY change the function of a segment AFTER item 3 has done something to it, and not before. And NO, after step 2 of my description, no segments can be ADDED or DELETED. Only the segment function changes. Based on this function, the segment is read. So, if the function says "waiting" then the data is sent out; otherwise, the reentrant VI concludes that all the segments are processed and exits (I do this through a "5003" error inside the AE).
    3. Ummm... maybe the compiler does what you say it does (no idea here), but I doubt it would give me any improvement that makes a difference.
    4. They are used in my step 3: "Get Segment" is used ONCE (if no error) or TWICE (if error); "Get Items" is used ONCE (if no error).
    5. It won't make a difference, since the "choke point" is the AE. How you launch the VI won't make a difference.
    Kas

  • Firefox4 freezes on startup after about 10 seconds. A second parallel instance of firefox works normally as long as the first instance is left alone frozen. What causes this and how can it be prevented?

    I am running Firefox 4 on Windows XP. When I start Firefox, it says it is trying to restore a previous session, which has only a blank tab. After about 10 seconds the whole instance of Firefox freezes - none of the buttons work, and the only way to close Firefox is to use Ctrl+Alt+Del and the Task Manager. If I open a second parallel session of Firefox, it works normally, as long as the first session is left alone (frozen). What could be causing the freeze of the first Firefox instance, and how can it be prevented?

    I've done some research on the SQLite database. Whenever Aperture hangs (like during auto-stack or opening the filter HUD) there are thousands of SQLite queries happening. These SQLite queries cause massive file I/O because the database is stored on disk as 1 KB pages. However, the OS is caching the database file; mine's only 12 MB. I'm trying to track down some performance numbers for SQLite on OS X but having trouble.
    It's starting to look like most of the speed problems are in the libraries that Aperture uses rather than the actual Aperture code. Of course, that doesn't completely let the developers off the hook, since they chose to use them in the first place.
    Oh, and if anyone is curious, the database is completely open to queries using the command line sqlite3 tool. Here's the language reference http://www.sqlite.org/lang.html
    Hmm, just found this. Looks like someone else has been playing around in the db http://www.majid.info/mylos/stories/2005/12/01/apertureInternals.html
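    For anyone wanting to poke around themselves, a hedged illustration of inspecting such a database with the stock sqlite3 tool - the path is hypothetical, and it is safest to work on a copy rather than the live library file:

    sqlite3 /path/to/copy/of/Library.db
    sqlite> .tables
    sqlite> select name from sqlite_master where type = 'table';
    sqlite> .quit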
    Dual 1.8 G5   Mac OS X (10.4.3)   1GB RAM, Sony Artisan Monitor, Sony HC-1 HD Camera

  • How can I make parallel Instances of a VI, avalible through webserver URL?

    Hello, I have a VI that I want multiple people to be able to access at the same time through the web. Each time a client accesses the URL, I want the VI to start a new instance in parallel for them. So if 5 people went to the URL, each would be using their own independent parallel instance. I don't want them to be able to see what the others are doing - kind of like if you start Solitaire and then click its .exe again, it starts another separate instance. I tried using reentrant execution, but it still showed the second client that the VI is controlled by someone else. I even attempted making it a sub-VI called by a main VI, but I would have to make the main one the .html, and it won't display the sub-VIs in the web page's embedded window.
    Any Ideas?
    Thanks In Advance
    I'm running :
    Windows XP Pro. SP3
    LabView 8.0

    I did this once using CGI in combination with a VIT. The CGI server would open this VIT and create a dynamic web page to show it to the user using remote panels. So each user had their own instance of a remote panel.
    To explain it in detail:
    The index.htm would have a POST command in it that triggers when the user opens the web page. This POST command informs the CGI server. The CGI server then creates a VIT in memory and generates dynamic HTML code that is sent back to the user (this would be the HTML code for a remote panel), where the VI name shown in the remote panel is the new name that the VIT instance got in memory.
    Remember also to shut down the VIT when the user closes the browser, otherwise you're going to have a lot of VIs in memory after a while!
    If you have any questions, just ask. You could actually also use DataSocket to retrieve the POST command.
    One tip: the CGI part of LabVIEW runs in its own application instance. You must link the VIT to your main application instance for it to be able to communicate and share data with your main application via VI Server.

  • Calling Multiple (and parallel) ActiveX instances

    I'm having a problem running multiple ActiveX instances from LabVIEW (apparently the problem occurs with more than 4 instances). This problem doesn't happen when I do the same thing in C (Visual Studio). I can create as many instances as I wish, but when I run methods that hang or run for a long period of time, only 4 are able to run at any moment. If I stop any one of the methods, the next one starts running.
    I attached an example (in LabVIEW 8.5.1) using the Excel ActiveX automation, but it happens with all of the ActiveX objects I have tried so far. It even happens when using several different ActiveX objects.
    Please notice that the problem is not with creating the instances, but with running methods of the ActiveX objects in parallel at the same time (if you run short methods that finish executing quickly, you won't notice the problem).
    Attachments:
    Instances of Excel1.vi 23 KB

    I think you're running into the max number of threads LabVIEW allocates per execution system (4 by default). All of this code is running in one VI and hence one execution system. You can do a quick test to confirm this. Select one or more of your parallel instances and create separate subVIs from them. Then go into those subVI properties and go to the Execution category. Select various execution systems for the VIs. Make one DAQ, for instance, another Instrument I/O, and another Other 1 or whatever. When you rerun your test after that you'll see all six File IO dialogs pop up simultaneously.
    Even though LabVIEW is using only four threads in each execution system, it will still multitask between various things to achieve as much parallelism as possible. This is what you saw when you said that as soon as one method finished, another one would start up.
    I'm not too familiar with dealing with execution systems to achieve highly scalable applications, unfortunately. Most of the time LabVIEW gives you something really good out of the box without having to think about execution systems. And when you run applications on Timed Loops, that helps LabVIEW divide up application threads better as well.
    But you could start by seeing if you can divide up your ActiveX routines somehow and then duplicate the code into subVIs that run in different execution systems.
    Another option is to manipulate the ThreadConfig VI that ships with LabVIEW. Check out the following VI:
    <LabVIEW>\vi.lib\Utility\sysinfo.llb\threadconfig.vi
    You can increase the number of threads LabVIEW will allocate for each execution system to up to 8.
    Here's a help topic with more info.
    Message Edited by Jarrod S. on 05-12-2008 09:30 PM
    Jarrod S.
    National Instruments

  • 10g: parallel pipelined table func. using table(cast(SQL collect.))?

    Hi,
    I am trying to distribute SQL data objects - stored in a SQL data type TABLE OF <object-type> - to multiple (parallel) instances of a table function,
    by passing a CURSOR(...) to the table function, which selects from the TABLE OF storage via "select * from TABLE(CAST(<storage> as <storage-type>))".
    But Oracle only ever uses a single table function instance :-(
    whatever hints I provide or settings I use for the parallel table function (parallel_enable ...).
    Could it be that this is because my data is not globally available, but exists only in the main session?
    Can someone confirm that it's not possible to start multiple parallel table function instances selecting from a SQL data type TABLE OF <object> storage?
    Here's an example SQL*Plus program to show the issue:
    -------------------- snip ---------------------------------------------
    set serveroutput on;
    drop table test_table;
    drop type ton_t;
    drop type test_list;
    drop type test_obj;
    create table test_table(
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create or replace type test_obj as object(
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    /
    create or replace type test_list as table of test_obj;
    /
    create or replace type ton_t as table of number;
    /
    create or replace package test_pkg
    as
         type test_rec is record (
              a number(19,0),
              b timestamp with time zone,
              c varchar2(256)
         );
         type test_tab is table of test_rec;
         type test_cur is ref cursor return test_rec;
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a));
    end;
    /
    create or replace package body test_pkg
    as
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a))
    is
              sid number;
              counter number(19,0) := 0;
              myrec test_rec;
              mytab test_tab;
              mytab2 test_list := test_list();
         begin
              select userenv('SID') into sid from dual;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
              loop
                   fetch mycur into myRec;
                   exit when mycur%NOTFOUND;
                   mytab2.extend;
                   mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
              end loop;
              for i in mytab2.first..mytab2.last loop
                   -- attention: saves own SID in test_obj.a for indication to caller
                   --     how many sids have been involved
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
                   counter := counter + 1;
              end loop;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
         end;
    end;
    /
    declare
         myList test_list := test_list();
         myList2 test_list := test_list();
         sids ton_t := ton_t();
    begin
         for i in 1..10000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || 'copy ''mylist'' to ''mylist2'' by streaming via table function...');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from table(cast (myList as test_list)) tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
         dbms_output.put_line(chr(10) || 'copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
    end;
    /
    -------------------- snap ---------------------------------------------
    Here's the output:
    -------------------- snip ---------------------------------------------
    copy 'mylist' to 'mylist2' by streaming via table function...
    test_pkg.TF( sid => '98' ): enter
    test_pkg.TF( sid => '98' ): exit, piped #10000 records
    ... saved #10000 records
    worker thread's sid list:
    sid #98 -- ONLY A SINGLE SID HERE!
    copy physical 'test_table' to 'mylist2' by streaming via table function:
    ... saved #10000 records
    worker thread's sid list:
    sid #128 -- A LIST OF SIDS HERE!
    sid #141
    sid #85
    sid #125
    sid #254
    sid #101
    sid #124
    sid #109
    sid #142
    sid #92
    PL/SQL procedure successfully completed.
    -------------------- snap ---------------------------------------------
    I posted it to the newsgroup comp.databases.oracle.server
    (summary: "10g: parallel pipelined table functions with cursor selecting from table(cast(SQL collection)) doesn't work"),
    but I didn't get a response.
    There I also wrote some background information about my application:
    -------------------- snip ---------------------------------------------
    My application selects data in two steps/stages.
    The first select fetches minimal context base data
    - mainly to work out which driving data records are due.
    The second select fetches all the "real" data to process a context
    (joining many more tables here, which I don't want to do for non-due records).
    So it does the stage-1 select first, then the stage-2 select, based on the stage-1 results.
    The first implementation of the application did the stage-1 select in the main session of the PL/SQL code.
    For the stage-2 select, the "real work" was dispatched to multiple parallel table functions (in multiple worker sessions).
    That worked.
    However, there was a flaw:
    Between records from the stage-1 selection and records from the stage-2 selection there is a 1:n relation (via a key / foreign-key relation).
    That means for one resulting record from stage 1, there are x records from stage 2.
    That forced me to use "cluster curStage2 by (theKey)",
    because a worker session needs to evaluate the overall status for a context of one stage-1 record together with its x stage-2 records
    (so it needs all x stage-2 records together).
    This resulted in a delay in starting up the worker sessions (I didn't find a way to get rid of it).
    So I wanted to shift the invocation of the worker sessions to the stage-1 selection.
    Then I wouldn't need the "cluster curStage2 by (theKey)" anymore!
    But I also need to update the primary driving data!
    So the stage-1 select is a 'select ... for update ...',
    and you can't use that in a CURSOR for table functions (which I can understand).
    So I have to do my stage-1 selection in two steps:
    1. 'select for update' in the main session, collecting the result in a SQL collection.
    2. Pass the collected data to parallel table functions.
    And for step 2 I noticed that it doesn't start multiple parallel table function instances.
    As a workaround
    - if it's just not possible to start multiple parallel pipelined table functions from 'select * from TABLE(CAST(... as ...))' -
    I would select again on the base tables, driven by the SQL collection data.
    But before I do so, I wanted to verify that it's really not possible.
    Maybe I'm just missing a special Oracle hint or whatever else you can get "out of another box" :-)
    -------------------- snap ---------------------------------------------
    - many thanks!
    rgds,
    Frank
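    For what it's worth, a hedged sketch of the workaround Frank describes at the end - re-driving the parallel cursor off the physical table and using the collected keys only as a filter, so the parallel row source is a real segment rather than a session-local collection (illustrative and untested, reusing the thread's own schema and variables):

    select test_obj(a, b, c) bulk collect into myList2
    from table(test_pkg.TF(CURSOR(
         select /*+ parallel(t,10) */ t.*
         from test_table t
         where t.a in (select x.a from table(cast(myList as test_list)) x))));

    Here myList plays the role of the keys collected by the 'select for update'; the parallel scan happens on test_table, which - as the second test in the output above shows - does fan out across multiple sessions.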


  • Thought Experiment: Variable Number of Parallel Time Dependent Systems

    Hi all, 
    I'm just going through a thought experiment before beginning a new major project and am trying to figure out the best way this code could work. We have a new project coming up that is going to be based on a single computer; this computer will then control a number of identical measurement devices. At present the number is 20, but this could change depending on mechanical failure etc. during trials. All of these devices need to be running at the same time and are relatively time dependent; each device has to:
    Record measurement data at 1 Hz (0.05 s active time)
    Respond to inputs and undertake longer tasks (0.5 s active time). Inputs occur at random intervals greater than ~10 s.
    As all of these systems must run in parallel, I am looking at the different ways LV handles parallel instances, to try to settle on a reasonable design pattern before beginning anything. The current options as I see them are:
    Standard multiple-VI parallelization
    Build a sub-VI, place multiple instances on the block diagram, run.
    Pros:
    Simple: build a sub-VI that can run one device and plop down X instances.
    Cons:
    Scalability: not able to easily change the number of instances without re-editing the code, rebuilding, moving things, etc.
    State machine parallelization
    Build a state machine for the device and run X instances in a parallelized For Loop; for longer tasks, start them and then check progress each time you come back into the state machine.
    Pros:
    Programmatically scalable
    Standard state machine architecture.
    Cons:
    Less control over long tasks
    More complex programming and control.
    I am looking forward to what the community has to say. I have a feeling I should probably be looking at OOP for this, but I have never done any in LV before, and I am a little worried about how much the learning curve would add to the project without resulting in a much-improved outcome.
    Let me know your thoughts on the best way to continue.

    As Mike said, you can launch multiple instances of a reentrant VI similar to launching a single instance of a VI - it's all in how you get your reference / what your reference actually is.
    If you use the Open VI Reference function with a parameter of 0x80 or 0x100 on a preallocated clone reentrant VI, a clone is allocated and the reference is a reference to that clone of the VI (you can check this in testing using a property node and checking the name, which will have the clone number after a colon, and/or Is Clone VI).  At that point, anything that you do with that reference is being done to that clone.
    What I did recently for a similar design was defined my class (or just a cluster if not using OOP) to have all of the information to define a single identity - TCP connection ID, queue refs, etc., and the reference to a clone.  A constructor would be called like a "regular" SubVI to create everything it needed, including allocating a clone for itself to run in, and return a Data Value Reference to the resulting object (cluster).  A launcher could then be used that would take the clone reference and Start Asynchronous Call on it, passing into one of its controls the DVR to the object.  Now the clone is running and has everything that it needs to identify itself and operate, including responding to connections like you might have for displaying detailed information about one process all on its own, and my main application still holds onto the reference to the object so that it can do things like be a common server to give new clients the list of TCP ports that each clone is available at.
    As for some items in the original post:
      An object-oriented approach is usually great for managing these kinds of designs.  If you have knowledge of object-oriented design and just aren't familiar with LabVIEW's OOP support, then the learning curve should be very shallow, but it is a good idea to start with a small project or a single module first.  You don't want to be in the, "It's supposed to work this way, but I've never actually done it, and that coercion dot on a child being passed into a parent terminal is making me nervous..." boat for a critical project, even when it does work how you think it's supposed to.
      I believe that the Actor Framework (which requires LabVIEW OOP to work with) is made for this type of stuff.
      I must say, I hadn't ever considered or seen using a parallel-enabled for loop with a SubVI in it to do this kind of thing, and I thought it was a cool idea.

  • Unable to put a timed structure in a parallel for loop, error -808

    If I try to place a Timed Structure in a Parallel For Loop, the TS needs a different structure name per instance to prevent collisions. However, setting the Structure Name node to something unique per instance doesn't work; I get error -808. Have I missed something?
    Thoric (CLA, CLED, CTD and LabVIEW Champion)

    That got me thinking - something to do with the number of compiled parallel instances.
    Indeed, the compiler creates a number of parallel instances of the for loop in the compiled code (in my case 8), and uses the number requested at the Parallel Instances terminal (just under the Iterations terminal), which is set to 2. Therefore the compiler has created 8 instances of the Timed Structure, each in its own for loop instance, but they all have the same defined Structure Name. When launched, there is an immediate conflict because they can't share the same name, and error -808 is raised before the Set Structure Name function can change it. Hence the error still shows when requesting only one instance, because the other seven still exist.
    I guess this means you can't have Timed Structures (or Timed Loops, for that matter) in a parallelized For Loop?
    Thoric (CLA, CLED, CTD and LabVIEW Champion)

  • How to create parallel tasks using parallel for loops

    Hi,
    I am setting up a program that communicates with six logic controllers and has to read the system status every 100 ms. We are using OPC datasockets for this, and they appear a little slow. 
    I have created a uniform comm. method for all controllers, and now I find myself programming this method six times to communicate with each system. I am wondering if this could be done more elegantly using the parallel for loop, in which case I would program the exchange once and then have six workers running simultaneously. Since a picture is clearer than a thousand words, what I am asking is this:
    Is it possible to replace something like
    by
    and have this for loop running these tasks in parallel (on different cores / in different threads)?
    I have configured the loop to create 8 instances at compile, so I would have 2 instances surplus available at runtime if I find I need an additional system.
    The benefits of the method show in the second picture to me are:
    * takes less space
    * modifications have to be made only once
    * less blocks, wires and stuff makes it more clear what's going on.
    * flexibility in the actual number of tasks running (8 instances available at runtime)
    * if more tasks are required, I need only to update the maximum number of instances and recompile, i.e. no cutting and pasting required. 
    Unfortunately, I don't have those systems available yet, so there's no way to test this. Still, I would like to know if the above works as I expect - unfortunately the LabVIEW help is not completely clear to me on this.
    Best regards,
    Frans 

    Dear mfletcher,
    First of all: thanks for confirming that my intuition was right in this case.
    As for your question on the help: below is a copy/paste from the help on the 'configure parallelism' dialog box.
    Number of generated parallel loop instances—Determines the number of For Loop instances LabVIEW generates at compile time. The Number of generated parallel loop instances should equal the number of logical processors on which you expect the VI to execute. If you plan to distribute the VI to multiple computers, Number of generated parallel loop instances should equal the maximum number of logical processors you expect any of those computers to ever contain. Use the parallel instances terminal on the For Loop to specify how many of the generated instances to use at run time. If you wire a larger number to the parallel instances terminal than you specify in this dialog box, LabVIEW only executes as many loop instances as you specify here.
    The reason for my doubting whether what I programmed would work as intended lies in the fact that the help only mentions processors, which I read as actual cores. Thus on a dual-core machine, the number should be 2.
    I think it would be helpful to mention something about threads here, because in some cases one would like to have more parallel threads than there are cores in the system.
    In my case I would like to create six threads, which on my dual-core processor would be spread over only two cores, and have these six threads run in parallel. I know that for heavy math that would not help, but I am doing communications, which involve timeouts and such, and that will probably run more smoothly as six parallel tasks even though I only have two cores.
    Hope this helps in improving the help of the for loop.
    Regards,
    Frans 

  • Mulitple instances of same VI on same machine

    Is this possible?
    If so, how?
    When I try to open the same VI on my machine, it just goes back to the
    previously opened one.

    D wrote:
    : is this possible?
    Yes but ...
    (1) If your VI has front panel controls
    One way is to make a physical copy with a different NAME. I believe
    shortcuts may work. You will need to be careful about sub-VIs, though.
    I have also fiddled with function nodes, but I am not quite sure whether those
    will work or not.
    Finally, the best way of doing this is to get the application builder
    option and compile the VI. Once that has happened, the VIs become
    independent and you should be able to run any number of instances of the
    executable.
    (2) If your VI does not have front panel controls and you just wish to
    use the same sub-VI in multiple VIs, or in multiple parallel instances in the
    same VI, there is an execution option "reentrant" that
    will allow this.
    Rudolf

  • Exception Handling inside a Multi-Instance Loop

    I would like to see a sample process that demonstrates exception handling inside an inline sub-process containing another sub-process with a multi-instance sub-process in parallel. The outer sub-process executes in sequence. Each instance of the outer loop depends upon the successful execution of the previous step. At each step, the inner inline sub-process activity can have more than one instance, all executed in parallel using multi-instance. If the outcome code of any one of these parallel instances is "REJECT", we simply raise a business exception and stop the outer sub-process from going through the next instance of the loop. The problem we are trying to solve is similar to the sample in chapter 5 of the book "Oracle BPM Suite 11g: Advanced BPMN Topics" by Mark Nelson - particularly the exception handling example shown on page 73 under the topic “Exception Handling with embedded Sub-processes”. The innermost multi-instance sub-process should raise a business exception and interrupt the outer sub-process.
    We would like to see a sample that demonstrates how exceptions are handled inside a multi-instance parallel sub-process. Could someone please provide a working sample that we can go through? We would like to raise a business exception as soon as a certain outcome of a Human Task is observed, break out of the loop, and continue thereafter. Thanks very much in advance for your help.
    Pankaj
    Edited by: 1001027 on 2-May-2013 10:09 AM


  • MAC PRO 2.66, XT1900 & 3GB RAM for Aperture?

    Hi,
    I have read a lot of posts about the performance of Aperture and CS2, and budget-wise I am thinking of buying a Mac Pro 2.66 with an X1900 XT and 3 GB of RAM.
    Will this configuration be OK for working with Aperture and CS2?
    I shoot primarily weddings, using 8 MP Canon RAW files.
    Thanks

    I have this exact same setup, powering a 24" Dell and a 23" Apple monitor - but with 4 GB of RAM, as the previous post suggests. It's awesome (I'm opening 10 MP Nikon NEFs on mine, FWIW).
    I initially started with 2 GB of RAM, and that was miserable in more ways than one. With 4 GB of RAM, Aperture ran much better... but still not snappy. Following the advice of others in this forum, I got the ATI card you are considering, and it has made a world of difference.
    This speed is addicting, however. Now I'm pondering, incredibly, "What ELSE can I add to the mix to make it even FASTER." I've read that some have success with faster drives. Others recommend even more RAM. I might try more RAM down the road, but then again I push the machine in other ways (lots of Rosetta apps, a Parallels instance of XP, etc). Still, nothing really ever feels slow.

  • Oracle backup to ftp

    Hello all,
    I have problems with Oracle backup to FTP through DB13.
    Job log
    Job started
    Step 001 started (program RSDBAJOB, variant &0000000000028, user ID USER)
    Execute logical command BRBACKUP On host sap1-ast
    Parameters:-u / -jid ALLOG20061117094801 -c force -t online -m all -p initTHD.sap -a -c force -p initTHD.sap -c
    ds
    BR0051I BRBACKUP 7.00 (16)
    BR0282E Directory '/sapbackup' not found
    BR0182E Checking parameter/option 'compress_dir' failed
    BR0056I End of database backup: bdtyppxk.log 2006-11-17 09.48.04
    BR0280I BRBACKUP time stamp: 2006-11-17 09.48.04
    BR0054I BRBACKUP terminated with errors
    External program terminated with exit code 3
    BRBACKUP returned error status E
    Job finished
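    A hedged aside, not from the thread: the BR0282E/BR0182E pair in the log says the 'compress_dir' check failed because '/sapbackup' was not found on the host, and the profile below mixes Unix-style paths (/sapbackup) with Windows paths (C:\usr\sap\THD\...). If the host is Windows, the local directory parameters would need to point at directories that actually exist - for example (paths purely illustrative):

    backup_root_dir = C:\oracle\THD\sapbackup
    compress_dir = C:\oracle\THD\sapbackup
    archive_copy_dir = C:\oracle\THD\sapbackup
    stage_root_dir = /sapbackup

    stage_root_dir (and archive_stage_dir) refer to the remote stage host reached via stage_copy_cmd = ftp, so a Unix-style path can be correct there even when the local paths are Windows ones.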
    This is my initTHD.sap file:
    @(#) $Id: //bas/700_REL/src/ccm/rsbr/initNT.sap#5 $ SAP
    SAP backup sample profile. #
    The parameter syntax is the same as for init.ora parameters. #
    Enclose parameter values which consist of more than one symbol in #
    double quotes. #
    After any symbol, parameter definition can be continued on the next #
    line. #
    A parameter value list should be enclosed in parentheses, the list #
    items should be delimited by commas. #
    There can be any number of white spaces (blanks, tabs and new lines) #
    between symbols in parameter definition. #
    backup mode [all | all_data | full | incr | sap_dir | ora_dir
    | all_dir | <tablespace_name> | <file_id> | <file_id1>-<file_id2>
    | <generic_path> | (<object_list>)]
    default: all
    backup_mode = all
    restore mode [all | all_data | full | incr | incr_only | incr_full
    | incr_all | <tablespace_name> | <file_id> | <file_id1>-<file_id2>
    | <generic_path> | (<object_list>) | partial | non_db
    redirection with '=' is not supported here - use option '-m' instead
    default: all
    restore_mode = all
    backup type [offline | offline_force | offline_standby | offline_split
    | offline_mirror | offline_stop | online | online_cons | online_split
    | online_mirror
    default: offline
    backup_type = online_cons
    backup device type
    [tape | tape_auto | tape_box | pipe | pipe_auto | pipe_box | disk
    | disk_copy | disk_standby | stage | stage_copy | stage_standby
    | util_file | util_file_online | rman_util | rman_disk | rman_stage
    | rman_prep]
    default: tape
    backup_dev_type = stage
    backup root directory [<path_name> | (<path_name_list>)]
    default: %SAPDATA_HOME%\sapbackup
    backup_root_dir = /sapbackup
    stage root directory [<path_name> | (<path_name_list>)]
    default: value of the backup_root_dir parameter
    stage_root_dir = /sapbackup
    compression flag [no | yes | hardware | only]
    default: no
    compress = no
    compress command
    first $-character is replaced by the source file name
    second $-character is replaced by the target file name
    <target_file_name> = <source_file_name>.Z
    for compress command the -c option must be set
    recommended setting for brbackup -k only run:
    "%SAPEXE%\mkszip -l 0 -c $ > $"
    no default
    compress_cmd = "C:\usr\sap\THD\SYS\exe\uc\NTI386\mkszip -c $ > $"
    uncompress command
    first $-character is replaced by the source file name
    second $-character is replaced by the target file name
    <source_file_name> = <target_file_name>.Z
    for uncompress command the -c option must be set
    no default
    uncompress_cmd = "C:\usr\sap\THD\SYS\exe\uc\NTI386\uncompress -c $ > $"
    directory for compression [<path_name> | (<path_name_list>)]
    default: value of the backup_root_dir parameter
    compress_dir = /sapbackup
    brarchive function [save | second_copy | double_save | save_delete
    | second_copy_delete | double_save_delete | copy_save
    | copy_delete_save | delete_saved | delete_copied]
    default: save
    archive_function = save
    directory for archive log copies to disk
    default: first value of the backup_root_dir parameter
    archive_copy_dir = /sapbackup
    directory for archive log copies to stage
    default: first value of the stage_root_dir parameter
    archive_stage_dir = /sapbackup
    delete archive logs from duplex destination [only | no | yes | check]
    default: only
    archive_dupl_del = only
    new sapdata home directory for disk_copy | disk_standby
    no default
    new_db_home = X:\oracle\C11
    stage sapdata home directory for stage_copy | stage_standby
    default: value of the new_db_home parameter
    #stage_db_home = /sapbackup
    original sapdata home directory for split mirror disk backup
    no default
    #orig_db_home = C:\oracle\THD
    remote host name
    no default
    remote_host = srv1
    remote user name
    default: current operating system user
    remote_user = "thdadm password"
    tape copy command [cpio | cpio_gnu | dd | dd_gnu | rman | rman_gnu
    rman_dd | rman_dd_gnu]
    default: cpio
    tape_copy_cmd = cpio
    disk copy command [copy | copy_gnu | dd | dd_gnu | rman | rman_gnu]
    default: copy
    disk_copy_cmd = copy
    stage copy command [rcp | scp | ftp]
    default: rcp
    stage_copy_cmd = ftp
    pipe copy command [rsh | ssh]
    default: rsh
    pipe_copy_cmd = rsh
    flags for cpio output command
    default: -ovB
    cpio_flags = -ovB
    flags for cpio input command
    default: -iuvB
    cpio_in_flags = -iuvB
    flags for cpio command for copy of directories to disk
    default: -pdcu
    use flags -pdu for gnu tools
    cpio_disk_flags = -pdcu
    flags for dd output command
    default: "obs=16k"
    caution: option "obs=" not supported for Windows
    recommended setting:
    Unix: "obs=nk bs=nk", example: "obs=64k bs=64k"
    Windows: "bs=nk", example: "bs=64k"
    dd_flags = "bs=64k"
    flags for dd input command
    default: "ibs=16k"
    caution: option "ibs=" not supported for Windows
    recommended setting:
    Unix: "ibs=nk bs=nk", example: "ibs=64k bs=64k"
    Windows: "bs=nk", example: "bs=64k"
    dd_in_flags = "bs=64k"
    number of members in RMAN save sets [ 1 | 2 | 3 | 4 | tsp | all ]
    default: 1
    saveset_members = 1
    additional parameters for RMAN
    rman_channels and rman_filesperset are only used when rman_util,
    rman_disk or rman_stage
    rman_channels defines the number of parallel sbt channel allocations
    rman_filesperset = 0 means:
    one file per save set - for non-incremental backups
    all files in one save set - for incremental backups
    the others have the same meaning as for native RMAN
    rman_channels = 1
    rman_filesperset = 0
    rman_maxpiecesize = 0 # in KB - former name rman_kbytes
    rman_rate = 0 # in KB - former name rman_readrate
    rman_maxopenfiles = 0
    rman_maxsetsize = 0 # in KB - former name rman_setsize
    additional parameters for RMAN version 8.1
    the parameters have the same meaning as for native RMAN
    rman_diskratio = 0 # deprecated in Oracle 10g
    rman_pool = 0
    rman_copies = 0 | 1 | 2 | 3 | 4 # former name rman_duplex
    rman_proxy = no | yes | only
    special parameters for an external backup library, example:
    rman_parms = "BLKSIZE=65536 ENV=(BACKUP_SERVER=HOSTNAME)"
    rman_send = "'<command>'"
    rman_send = ("channel sbt_1 '<command1>' parms='<parameters1>'",
    "channel sbt_2 '<command2>' parms='<parameters2>'")
    remote copy-out command (backup_dev_type = pipe)
    $-character is replaced by current device address
    no default
    copy_out_cmd = "dd ibs=8k obs=64k of=$"
    remote copy-in command (backup_dev_type = pipe)
    $-character is replaced by current device address
    no default
    copy_in_cmd = "dd ibs=64k obs=8k if=$"
    rewind command
    $-character is replaced by current device address
    no default
    operating system dependent, examples:
    HP-UX: "mt -f $ rew"
    TRU64: "mt -f $ rewind"
    AIX: "tctl -f $ rewind"
    Solaris: "mt -f $ rewind"
    Windows: "mt -f $ rewind"
    Linux: "mt -f $ rewind"
    rewind = "mt -f $ rewind"
    rewind and set offline command
    $-character is replaced by current device address
    default: value of the rewind parameter
    operating system dependent, examples:
    HP-UX: "mt -f $ offl"
    TRU64: "mt -f $ offline"
    AIX: "tctl -f $ offline"
    Solaris: "mt -f $ offline"
    Windows: "mt -f $ offline"
    Linux: "mt -f $ offline"
    rewind_offline = "mt -f $ offline"
    tape positioning command
    first $-character is replaced by current device address
    second $-character is replaced by number of files to be skipped
    no default
    operating system dependent, examples:
    HP-UX: "mt -f $ fsf $"
    TRU64: "mt -f $ fsf $"
    AIX: "tctl -f $ fsf $"
    Solaris: "mt -f $ fsf $"
    Windows: "mt -f $ fsf $"
    Linux: "mt -f $ fsf $"
    tape_pos_cmd = "mt -f $ fsf $"
    mount backup volume command in auto loader / juke box
    used if backup_dev_type = tape_box | pipe_box
    no default
    mount_cmd = "<mount_cmd> $ $ $ [$]"
    dismount backup volume command in auto loader / juke box
    used if backup_dev_type = tape_box | pipe_box
    no default
    dismount_cmd = "<dismount_cmd> $ $ [$]"
    split mirror disks command
    used if backup_type = offline_split | online_split | offline_mirror
    | online_mirror
    no default
    split_cmd = "<split_cmd> [$]"
    resynchronize mirror disks command
    used if backup_type = offline_split | online_split | offline_mirror
    | online_mirror
    no default
    resync_cmd = "<resync_cmd> [$]"
    additional options for SPLITINT interface program
    no default
    split_options = "<split_options>"
    resynchronize after backup flag [no | yes]
    default: no
    split_resync = no
    volume size in KB = K, MB = M or GB = G (backup device dependent)
    default: 1200M
    recommended values for tape devices without hardware compression:
    60 m 4 mm DAT DDS-1 tape: 1200M
    90 m 4 mm DAT DDS-1 tape: 1800M
    120 m 4 mm DAT DDS-2 tape: 3800M
    125 m 4 mm DAT DDS-3 tape: 11000M
    112 m 8 mm Video tape: 2000M
    112 m 8 mm high density: 4500M
    DLT 2000 10/20 GB: 10000M
    DLT 2000XT 15/30 GB: 15000M
    DLT 4000 20/40 GB: 20000M
    DLT 7000 35/70 GB: 35000M
    recommended values for tape devices with hardware compression:
    60 m 4 mm DAT DDS-1 tape: 1000M
    90 m 4 mm DAT DDS-1 tape: 1600M
    120 m 4 mm DAT DDS-2 tape: 3600M
    125 m 4 mm DAT DDS-3 tape: 10000M
    112 m 8 mm Video tape: 1800M
    112 m 8 mm high density: 4300M
    DLT 2000 10/20 GB: 9000M
    DLT 2000XT 15/30 GB: 14000M
    DLT 4000 20/40 GB: 18000M
    DLT 7000 35/70 GB: 30000M
    tape_size = 100G
    volume size in KB = K, MB = M or GB = G used by brarchive
    default: value of the tape_size parameter
    tape_size_arch = 100G
    level of parallel execution
    default: 0 - set to number of backup devices
    exec_parallel = 0
    address of backup device without rewind
    [<dev_address> | (<dev_address_list>)]
    no default
    operating system dependent, examples:
    HP-UX: /dev/rmt/0mn
    TRU64: /dev/nrmt0h
    AIX: /dev/rmt0.1
    Solaris: /dev/rmt/0mn
    Windows: /dev/nmt0 | /dev/nst0
    Linux: /dev/nst0
    tape_address = /dev/nmt0
    address of backup device without rewind used by brarchive
    default: value of the tape_address parameter
    operating system dependent
    tape_address_arch = /dev/nmt0
    address of backup device with rewind
    [<dev_address> | (<dev_address_list>)]
    no default
    operating system dependent, examples:
    HP-UX: /dev/rmt/0m
    TRU64: /dev/rmt0h
    AIX: /dev/rmt0
    Solaris: /dev/rmt/0m
    Windows: /dev/mt0 | /dev/st0
    Linux: /dev/st0
    tape_address_rew = /dev/mt0
    address of backup device with rewind used by brarchive
    default: value of the tape_address_rew parameter
    operating system dependent
    tape_address_rew_arch = /dev/mt0
    address of backup device with control for mount/dismount command
    [<dev_address> | (<dev_address_list>)]
    default: value of the tape_address_rew parameter
    operating system dependent
    tape_address_ctl = /dev/...
    address of backup device with control for mount/dismount command
    used by brarchive
    default: value of the tape_address_rew_arch parameter
    operating system dependent
    tape_address_ctl_arch = /dev/...
    volumes for brarchive
    [<volume_name> | (<volume_name_list>) | SCRATCH]
    no default
    volume_archive = (THDA01, THDA02, THDA03, THDA04, THDA05,
    THDA06, THDA07)
    volumes for brbackup
    [<volume_name> | (<volume_name_list>) | SCRATCH]
    no default
    volume_backup = (THDB01, THDB02, THDB03, THDB04, THDB05,
    THDB06, THDB07)
    expiration period for backup volumes in days
    default: 30
    expir_period = 30
    recommended usages of backup volumes
    default: 100
    tape_use_count = 100
    backup utility parameter file
    default: no parameter file
    util_par_file = initTHD.utl
    mount/dismount command parameter file
    default: no parameter file
    mount_par_file = initTHD.mnt
    Oracle instance string to the primary database
    [primary_db = <conn_name> | LOCAL]
    no default
    primary_db = <conn_name>
    description of parallel instances for Oracle RAC
    parallel_instances = <instance_desc> | (<instance_desc_list>)
    <instance_desc_list> -> <instance_desc>[,<instance_desc>...]
    <instance_desc> -> <Oracle_sid>:<Oracle_home>@<conn_name>
    <Oracle_sid> -> Oracle system id for parallel instance
    <Oracle_home> -> Oracle home for parallel instance
    <conn_name> -> Oracle instance string to parallel instance
    Please include the local instance in the parameter definition!
    default: no parallel instances
    example for initRAC001.sap:
    parallel_instances = (RAC001:/oracle/RAC/920_64@RAC001,
    RAC002:/oracle/RAC/920_64@RAC002, RAC003:/oracle/RAC/920_64@RAC003)
    database owner of objects to be checked
    <owner> | (<owner_list>)
    default: all SAP owners
    check_owner = sapr3
    database objects to be excluded from checks
    all_part | non_sap | [<owner>.]<table> | [<owner>.]<index>
    | [<owner>.]<prefix>* | <tablespace> | (<object_list>)
    default: no exclusion, example:
    check_exclude = (SDBAH, SAPR3.SDBAD)
    database owner of SDBAH, SDBAD and XDB tables for cleanup
    <owner> | (<owner_list>)
    default: all SAP owners
    cleanup_owner = sapr3
    retention period in days for brarchive log files
    default: 30
    cleanup_brarchive_log = 30
    retention period in days for brbackup log files
    default: 30
    cleanup_brbackup_log = 30
    retention period in days for brconnect log files
    default: 30
    cleanup_brconnect_log = 30
    retention period in days for brrestore log files
    default: 30
    cleanup_brrestore_log = 30
    retention period in days for brrecover log files
    default: 30
    cleanup_brrecover_log = 30
    retention period in days for brspace log files
    default: 30
    cleanup_brspace_log = 30
    retention period in days for archive log files saved on disk
    default: 30
    cleanup_disk_archive = 30
    retention period in days for database files backed up on disk
    default: 30
    cleanup_disk_backup = 30
    retention period in days for brspace export dumps and scripts
    default: 30
    cleanup_exp_dump = 30
    retention period in days for Oracle trace and audit files
    default: 30
    cleanup_ora_trace = 30
    retention period in days for records in SDBAH and SDBAD tables
    default: 100
    cleanup_db_log = 100
    retention period in days for records in XDB tables
    default: 100
    cleanup_xdb_log = 100
    retention period in days for database check messages
    default: 100
    cleanup_check_msg = 100
    database owner of objects to adapt next extents
    <owner> | (<owner_list>)
    default: all SAP owners
    next_owner = sapr3
    database objects to adapt next extents
    all | all_ind | special | [<owner>.]<table> | [<owner>.]<index>
    | [<owner>.]<prefix>* | <tablespace> | (<object_list>)
    default: all objects of selected owners, example:
    next_table = (SDBAH, SAPR3.SDBAD)
    database objects to be excluded from adapting next extents
    all_part | [<owner>.]<table> | [<owner>.]<index> | [<owner>.]<prefix>*
    | <tablespace> | (<object_list>)
    default: no exclusion, example:
    next_exclude = (SDBAH, SAPR3.SDBAD)
    database objects to get special next extent size
    all_sel:<size>[/<limit>] | [<owner>.]<table>:<size>[/<limit>]
    | [<owner>.]<index>:<size>[/<limit>]
    | [<owner>.]<prefix>*:<size>[/<limit>] | (<object_size_list>)
    default: according to table category, example:
    next_special = (SDBAH:100K, SAPR3.SDBAD:1M/200)
    maximum next extent size
    default: 2 GB - 5 * <database_block_size>
    next_max_size = 1G
    maximum number of next extents
    default: 0 - unlimited
    next_limit_count = 300
    database owner of objects to update statistics
    <owner> | (<owner_list>)
    default: all SAP owners
    stats_owner = sapr3
    database objects to update statistics
    all | all_ind | all_part | missing | info_cubes | dbstatc_tab
    | dbstatc_mon | dbstatc_mona | [<owner>.]<table> | [<owner>.]<index>
    | [<owner>.]<prefix>* | <tablespace> | (<object_list>) | harmful
    | locked | system_stats | oradict_stats
    default: all objects of selected owners, example:
    stats_table = (SDBAH, SAPR3.SDBAD)
    database objects to be excluded from updating statistics
    all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>
    | [<owner>.]<prefix>* | <tablespace> | (<object_list>)
    default: no exclusion, example:
    stats_exclude = (SDBAH, SAPR3.SDBAD)
    method for updating statistics for tables not in DBSTATC
    E | EH | EI | EX | C | CH | CI | CX | A | AH | AI | AX | E= | C= | =H
    | =I | =X | +H | +I
    default: according to internal rules
    stats_method = E
    sample size for updating statistics for tables not in DBSTATC
    P<percentage_of_rows> | R<thousands_of_rows>
    default: according to internal rules
    stats_sample_size = P10
    number of buckets for updating statistics with histograms
    default: 75
    stats_bucket_count = 75
    threshold for collecting statistics after checking
    default: 50%
    stats_change_threshold = 50
    number of parallel threads for updating statistics
    default: 1
    stats_parallel_degree = 1
    processing time limit in minutes for updating statistics
    default: 0 - no limit
    stats_limit_time = 0
    parameters for calling DBMS_STATS supplied package
    all:R|B:<degree> | all_part:R|B:<degree> | info_cubes:R|B:<degree>
    | [<owner>.]<table>:R|B:<degree> | [<owner>.]<prefix>*:R|B:<degree>
    | (<object_list>) | NO
    default: NULL - use ANALYZE statement
    stats_dbms_stats = ([ALL:R:1,][<owner>.]<table>:R|B:<degree>,...)
    definition of info cube tables
    default | rsnspace_tab | [<owner>.]<table> | [<owner>.]<prefix>*
    | (<object_list>) | null
    default: from RSNSPACE control table
    stats_info_cubes = (/BIC/D, /BI0/D, ...)
    recovery type [complete | dbpit | tspit | reset | restore | apply
    | disaster]
    default: complete
    recov_type = complete
    directory for brrecover file copies
    default: $SAPDATA_HOME/sapbackup
    recov_copy_dir = C:\oracle\THD\sapbackup
    time period for searching for backups
    0 - all available backups, >0 - backups from n last days
    default: 30
    recov_interval = 30
    degree of parallelism for applying archive log files
    0 - use Oracle default parallelism, 1 - serial, >1 - parallel
    default: Oracle default
    recov_degree = 0
    number of lines for scrolling in list menus
    0 - no scrolling, >0 - scroll n lines
    default: 20
    scroll_lines = 20
    time period for displaying profiles and logs
    0 - all available logs, >0 - logs from n last days
    default: 30
    show_period = 30
    directory for brspace file copies
    default: $SAPDATA_HOME/sapreorg
    space_copy_dir = C:\oracle\THD\sapreorg
    directory for table export dump files
    default: $SAPDATA_HOME/sapreorg
    exp_dump_dir = C:\oracle\THD\sapreorg
    database tables for reorganization
    [<owner>.]<table> | [<owner>.]<prefix>* | [<owner>.]<prefix>%
    | (<table_list>)
    no default
    reorg_table = (SDBAH, SAPR3.SDBAD)
    database indexes for rebuild
    [<owner>.]<index> | [<owner>.]<prefix>* | [<owner>.]<prefix>%
    | (<index_list>)
    no default
    rebuild_index = (SDBAH0, SAPR3.SDBAD0)
    database tables for export
    [<owner>.]<table> | [<owner>.]<prefix>* | [<owner>.]<prefix>%
    | (<table_list>)
    no default
    exp_table = (SDBAH, SAPR3.SDBAD)
    database tables for import
    [<owner>.]<table> | (<table_list>)
    no default
    imp_table = (SDBAH, SAPR3.SDBAD)
    I can log in to ftp from server
    C:\Documents and Settings\thdadm>ftp srv1
    Connected to srv1.
    220 Microsoft FTP Service
    User (srv1:(none)): thdadm
    331 Password required for thdadm.
    Password:
    230 User thdadm logged in.
    ftp> ls
    200 PORT command successful.
    150 Opening ASCII mode data connection for file list.
    sapbackup
    226 Transfer complete.
    ftp: 11 bytes received in 0,00Seconds 11000,00Kbytes/sec.
    ftp> mkdir sap
    257 "sap" directory created.
    please help
    best regards
    Olzhas Suleimenov


  • Is it possible to use a "for loop" structure to represent DAQ of 16 channels?

    Hi,
    I tried searching but could not find a similar problem, so I am posting it here.
    I have a temperature DAQ board which has 16 channels, and I am using all 16 channels to record and monitor temperature data.
    When I wrote blocks for all 16 channels, the result was a mess (as you can see in my attached program). I noticed that the VIs for all 16 channels have a similar structure, so I was wondering whether we can represent all 16 channels with a "for loop" structure, with an index i scanning from i=1 to 16 as we usually do in Matlab or Fortran.
    For instance, we may try to simplify the code like:
    for i=1:1:16
         acquire data from channel i
         show data from channel i in X-Y graph
         write data from channel i to excel in every 10 seconds
    end  
    I could not do it myself because I cannot find a "pointer" to represent each channel.
    Does anyone know how to do this in LabVIEW? Thank you.
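    In a text language, the pattern I have in mind would be roughly the following. This is only a sketch: read_channel is a made-up placeholder for the per-channel acquisition, not a real driver call.

    import time

    NUM_CHANNELS = 16

    def read_channel(i):
        # Placeholder: acquire one temperature sample from channel i.
        return 25.0  # dummy value; the real program would call the DAQ driver here

    data = [[] for _ in range(NUM_CHANNELS)]   # one history list per channel

    while True:
        for i in range(NUM_CHANNELS):
            data[i].append(read_channel(i))    # acquire and store channel i
        time.sleep(10)                         # log/update every 10 seconds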
    It is said that it is best to keep all of the code within one monitor screen by using subVIs to divide it into subroutines. But I found that quite difficult to do, as I have too many controls as inputs and indicators as outputs. Also, I use property nodes to read and write frequently, which makes things harder....
    The attached code does not work as-is because it requires the 9213 hardware; for simplicity I deleted irrelevant modules. If needed, I can upload a version that may work by modifying the read VI.
    Thank you.
    Attachments:
    ask for help.vi ‏132 KB

    Yup, instead of all these 1D arrays, use a single 2D array (see the sketch after the list below). Your code could be reduced to <10% of its current size. I am sure it would fit easily on a single screen.
    Also try to do some tutorials or take some courses. I really don't think you are ready to tackle a system of this magnitude at this time.
    Your code is just peppered with very poor choices. Some things you apparently haven't figured out:
    Index Array is resizable. There is no need to create dozens of parallel instances, all wired to the same array, each with a unique index diagram constant. If you want all the elements in order, the indices don't need to be wired at all.
    Replace all your value property nodes with local variables. Much more efficient. Then redesign the code so you don't even need most of the local variables.
    You can use Build Array to append elements to an existing array. Using "insert into array" is much more complicated and error prone.
    It is not a good idea to have deeply stacked while loops with the innermost loop having long delays and indeterminate stop conditions. That just gums up the gears.
    Learn about proper code architectures, state machines, etc.
    Get rid of that gigantic, all-encompassing stacked sequence. I am sure you can find a better way to reset three booleans when the code is about to stop. In fact, it is typically more reasonable to set things to a defined state when the program starts, to avoid problems due to accidental value changes at edit time.
    Don't use hidden controls/indicators as data storage and for interloop communication via local variables. There are better ways.
    You can wire the remainder of Quotient & Remainder directly to a case structure. No need for e.g. "=0".
    Don't use ambiguous labels, e.g. a boolean labeled "Dampers on or off". Nobody will be able to later figure out whether TRUE corresponds to ON or OFF. Could it be that TRUE means it is either ON or OFF, and FALSE that it is in some intermediate position?
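    In text-language terms, the 2D-array advice from the first paragraph boils down to something like this (a Python sketch purely for illustration; in LabVIEW the equivalent is a single 2D array wire feeding auto-indexing For Loops):

    # 16 channels x 100 samples in one 2D structure instead of 16 separate
    # 1D arrays, each wired to its own copy of the same logic.
    readings = [[0.0] * 100 for _ in range(16)]

    # One loop covers all channels; no per-channel duplication needed.
    averages = [sum(row) / len(row) for row in readings]
    print(averages)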
    LabVIEW Champion. Do more with less code and in less time.
