How to improve speed of data acquisition? Help needed urgently.

I want to convert analog signals to digital signals and simultaneously perform a search on the acquired data, and this whole process has to run continuously for a few hours.
So I tried writing two programs in MATLAB: one acquires the analog data, converts it to digital, and continuously saves it in small chunks on the hard disk (file1, file2, file3, ...). The other program continuously performs the search operation on those chunks. I run both programs at the same time by opening two MATLAB windows.
But the problem I am facing is that the data acquisition is slow. As a result I get an error message in the second program saying:
"??? Error using ==> load
Unable to read file file4.mat: No such file or directory."
I am unable to synchronize the two programs. I cannot use timers in the search program because I cannot add any delays.
I am using an NI PCI-6036E A/D board (16-bit resolution, 200 kS/s sampling rate).
Should I switch to another series, such as the M series, with sampling rates on the order of MS/s?
Can anyone please tell me how to improve the speed of data acquisition?
Thanks.
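One way to remove the race, sketched under stated assumptions: instead of assuming fileN.mat already exists, have the search program poll for each chunk and load it only once it appears. The file naming scheme and poll interval below are assumptions, and although the thread is about MATLAB (where the same loop can be built from the exist and pause built-ins), the pattern is sketched here in Java, the language used elsewhere on this page:

import java.io.File;

// Consumer-side sketch: wait for each chunk file to exist before reading
// it, so the search never races ahead of the acquisition program.
public class ChunkSearch {
    public static void main(String[] args) throws InterruptedException {
        int k = 1;
        while (true) {                                  // runs for hours, like the real job
            File chunk = new File("file" + k + ".mat"); // hypothetical naming scheme
            while (!chunk.exists()) {
                Thread.sleep(500);                      // wait for the producer; don't busy-wait
            }
            // ... load and search the chunk here ...
            k++;
        }
    }
}

One caveat: a file can exist on disk before it has been completely written, so having the acquisition program write each chunk to a temporary name and rename it once closed avoids loading a half-written file.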

Hi Gayathri,
Well, my email is [email protected].
If you're from India, mail me back.
Regards,
labview boy

Similar Messages

  • Some J2ME midlets doubts!! help needed urgently!!!

    Hi,
    I am currently working at a company that does wireless technology like WAP, and I have been assigned the task of creating a screensaver MIDlet. I have some doubts about MIDlets.
    1) How do I use MIDlet suites? From what I have heard from my colleagues and friends, a servlet is needed for MIDlets to interact with one another. Is that true?
    2) How do I get the starting MIDlet to notice that the phone is idle so that the screensaver MIDlet can be called?
    Help needed urgently... any source code for me to refer to would be better... Thanks...
    Leonard

    The documentation indicates that MIDlet suites are isolated (on purpose) from each other, so you can't write over another one's address space.
    Also, I believe (at least on cell phones) that you have to specifically enter the Java apps mode; unless you do, the app won't execute. If you are in Java apps mode and a call comes in, the cell's OS puts the currently executing Java app into "Pause" mode and switches back to phone mode.
    Not sure if you will be able to have a Java app do that automatically.
    BTW, why do you need a screensaver on an LCD display? Is it really intended to show an advertisement?
    Download and read all the docs you can from Sun; once you get over the generic Java deficiencies, MIDlets aren't that hard.
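    For reference, the pause behavior described above maps onto the MIDlet lifecycle callbacks. Here is a minimal skeleton; the class name and comments are illustrative, not from this thread:

    import javax.microedition.midlet.MIDlet;

    // Minimal lifecycle skeleton: the application manager calls pauseApp()
    // when, e.g., a voice call arrives, and startApp() again on resume.
    public class ScreenSaverMIDlet extends MIDlet {
        protected void startApp() {
            // (re)start drawing the screensaver
        }
        protected void pauseApp() {
            // stop drawing; the phone has switched back to phone mode
        }
        protected void destroyApp(boolean unconditional) {
            // release resources before the manager kills the MIDlet
        }
    }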

  • HT201210 i have an error of no 11. kindly help, needed urgently

    I have an error of no. 11. Kindly help, needed urgently.
    When I try to upgrade my iPhone 3GS from 4.1 to the latest 5.1, it gives the error 11. What does that mean? Reply as soon as you can!
    Thanks.

    Error -1 may indicate a hardware issue with your device. Follow Troubleshooting security software issues, and restore your device on a different known-good computer. If the errors persist on another computer, the device may need service.

  • Load bar at start up, then shut down. HELP NEEDED URGENTLY!!! plss....

    Load bar at start up, then shut down. HELP NEEDED URGENTLY!!! plss..

    The startup disk may need repairing.
    Start up your Mac while holding down the Command + R keys so you can access the built-in utilities to repair the startup disk if necessary, or restore OS X using OS X Recovery.

  • How do you cyclically trigger data acquisition after n pulses counted

    Hello all, please forgive my ignorance because I am very new to LabVIEW and data acquisition. I am working on a system which is going to scan an object and produce an image. The gimbal that I am scanning the object with is an X-Y type of gimbal with stepper motors on each axis. The stepper motor controller outputs pulses in real time to indicate the real-time position of the gimbal in each axis. What I need to be able to do is count pulses from the stepper motor controller, output a trigger pulse to start the data acquisition in buffered mode when N pulses have passed, and then generate another pulse to stop the acquisition after another N pulses have passed. The controller puts out 10,000 pulses per degree of travel. The velocity I am traveling at is 20 degrees per second, so timing here is really important. I need to be able to utilize the speed of the DAQ card and not so much the speed of the computer to iterate through a loop. I have tried using the count-down feature in the NI-DAQmx library but it does not appear to be useful to me. I set it up and it will count down, but once it hits zero it continues to count down. My expectation was that it would either restart the down count or stop. I was expecting some sort of trigger event to take place once the count reached zero, but I did not observe any event taking place. Once again my knowledge and background are really limited, so I could be missing something really fundamental here. I have tried using some of the legacy functions which would enable me to do exactly what I want, but they do not seem to work with my DAQ card. I have an NI PCI-6122, and if anyone has any knowledge on how to get this type of card to talk to some of the non-MX functions I would be more than happy to hear it. It seems to me, though, that I am limited to the MX functions, which I cannot really translate into what I have learned I can do with the legacy functions. I thank you all once again for taking the time to read this, and I will appreciate any and all responses that can be helpful.
    ~ Randy Brown

    I have run a few more tests and obtained some data per the request of a telephone support engineer. I have some scope screenshots that might shed some light on what is going on. I will provide a brief description of what I discovered before I show the resulting data.
    I discovered that using the number of up ticks and down ticks suggested does not yield the right timing for the clock pulses that I will need for triggering my data acquisition. When I use 55 low ticks and 2 high ticks as my settings, I end up getting a pulse every 32 pulses read on the PFI line. I get the same results when I interchange the numbers; for example, when I set the program up for 2 low ticks and 55 high ticks I get the same resulting one clock pulse per 32 pulses on the PFI line. I started playing with the numbers and came to find that I was able to generate a pulse every 57 pulses: I set the high ticks to 2 and the low ticks to 71, and once I did that it generated a pulse every 57 pulses in.
    The results are not ideal, though; a number of things happen within the first second of operation. In one mode of operation the clock output pulse latches after a few pulses are generated. In another mode it would generate N pulses and then just stop, even though the program was still running. The results I am getting are not reproducible when it comes to the long-term operation of the clock pulse generation, but the bottom line is that no matter what happens, the end result after 1 second is not what is expected. Below are screenshots of my program and also scope shots for the respective modes of operation.
    Front End interface
    Block Diagram
    55 High ticks and 2 low ticks results
    55 low ticks and 2 high ticks results
    77 Low ticks and 2 high ticks results
    Undesired Latch after 1 second of operation
    N number of pulses generated and stopped while program was still running
    It appears that the long-term operation (and by long term I mean after a second) is intermittent; it either latches high or low after a random number of pulses are generated on the clock output. I am not sure why this is happening. The one setup I came up with that generates a pulse every 57 pulses is not going to work for my setup; I think I would have to reduce the 71 to 69 in order to compensate for the two pulses that happen while the output pulse of the clock is high. To be honest I have no idea what is going on, and I am starting to wonder about my DAQ card. Since it is not reproducing the same results, I am starting to think maybe something is wrong with it. Another possibility is that it might be the BNC-2110 that I am using. I will try another one tomorrow and see if this problem persists. I am leaving now, so I won't be able to try that yet, but I wanted to pass this info and data along so that maybe you will notice something and be able to lead me in the right direction. Thank you again for all of your help.
    ~ Randy Brown

  • How to improve speed of laptop

    how to increase speed of laptop

    Hello CP.S.PATEL_30,
    Have you made any changes before you encountered this speed issue?
    Does this issue occur right after the system boots?
    Please take the following steps to troubleshoot:
    1. Open the Task Manager and check which program or process occupies a large amount of system resources.
    2. Use the Windows Performance Toolkit to analyze the performance
    http://msdn.microsoft.com/en-us/library/windows/hardware/hh162945.aspx
    Best regards,
    Fangzhou CHEN
    TechNet Community Support

  • Help needed urgently on a problem..plzzz

    hi.. this is a linear congruential generator. I have to implement it, and I need the execution time for the program.
    For your understanding I'm providing an example below.
    X[n] = ((a * X[n-1]) + b) mod m
    If X0 = 7; a = 7; b = 7; m = 10
    Then
    X0 = 7
    X1 = ((7 * 7) + 7) mod 10 = 6
    X2 = ((6 * 7) + 7) mod 10 = 9
    X3 = ((9 * 7) + 7) mod 10 = 0
    X4 = ((0 * 7) + 7) mod 10 = 7
    Now since the cycle repeats, i.e. 7 appears again, the period is 4 because there are 4 different numbers: 7, 6, 9, 0.
    Help required urgently... your help will be appreciated... thank you.

    Hi,
    I wrote the code so that it catches any cycle (not only the "big" one); otherwise it would enter an infinite loop.
    The time complexity is O(N*log N): it can do at most N iterations (here N is your 'm'), and each iteration does O(log N) comparisons (since I maintain a TreeSet).
    Interesting issue: is it possible to supply a tuple (x0, a, b, m) such that all possible values from 0 to m-1 will be output? I think no :)
    Here is the program:
    package recurr;
    import java.util.TreeSet;
    import java.util.Comparator;
    public class Recurrences {
         private static long x0, a, b, m;
         private static TreeSet theSet;
         public static void main(String[] args) {
              try {
                   x0 = Long.parseLong(args[0]);
                   a = Long.parseLong(args[1]);
                   b = Long.parseLong(args[2]);
                   m = Long.parseLong(args[3]);
              } catch (NumberFormatException nfe) {
                   nfe.printStackTrace();
                   return;
              }
              System.out.println("X[0]: " + x0 + "\n");
              long curr = x0;
              boolean cut = false;
              int i;
              // initialize the set
              theSet = new TreeSet(new LongComparator());
              // we can get at most m distinct values (from 0 to m-1) through recurrences
              for (i = 1; i <= m; ++i) {
                   // iterate until we find a duplicate
                   theSet.add(new Long(curr));
                   curr = recurrence(curr);
                   if (theSet.contains(new Long(curr))) {
                        cut = true;
                        break;
                   }
                   System.out.println("X[" + i + "]: " + curr + "\n");
              }
              if (cut) {
                   System.out.println("Cycle found: the next value would be " + curr + "\n");
              } else {
                   System.out.println("No cycle found!");
              }
              System.out.println("----------------------------------");
              System.out.println("Totally " + (i - 1) + " iterations");
         }
         private static long recurrence(long previous) {
              return (a * previous + b) % m;
         }
         // orders the boxed longs so the raw TreeSet can hold them
         static class LongComparator implements Comparator {
              public int compare(Object o1, Object o2) {
                   if (((Long) o1).longValue() < ((Long) o2).longValue()) {
                        return -1;
                   } else if (((Long) o1).longValue() > ((Long) o2).longValue()) {
                        return 1;
                   } else {
                        return 0;
                   }
              }
         }
    }

  • Help needed Urgently- Rebate based on collected amount

    Dear all,
    I came across a scenario while in discussion with a client: they require rebates based on collected amounts. Details of the requirement are given below:
    1. SAP rebates run on billed values and set the accrual in the rebate agreement at the rate specified in the agreement. The requirement is this: if I have billed $1000, my accrual value is $100 at a rate of 10%. If I collected $800 instead of $1000, then I need to pay the accrual on the basis of $800, not $1000. That means I have to adjust the accrual amount on the basis of $800. The conclusion is that I have to pay not $100 of accrual but less than $100, based on the $800 I collected.
    2. In month 1 I billed $5000, so my accrual amount is $500 at a rate of 10%. In month 2 I billed $1000 and gave a discount of $500, so my billed value is $500 and my accrual amount is $50 at 10%. In month 3 I again billed $500, and my accrual amount is $50 at 10%.
    The requirement is that when I pay the accrual to the client, I should pay the correct accrual they are entitled to. That means I should pay $100 of accrual, not $600, because I have already given a discount of $500. The $500 discount already given should be offset against the first month's accrual of $500, so the remaining accrual is $100.
    It would be great if somebody could help me out with a solution for the above requirements.

    Thanks Ivano,
    Somebody has started the conversation.
    Let me put my questions again.
    This requirement has nothing to do with the payment procedure in the agreement type.
    1. In any month, if I billed $1000, my accounts receivable would be $1000. My rebate for that month is $100 at a rate of 10%. If during customer receipt I collected $900 against my invoice instead of $1000, my accrual needs to be corrected to $90 instead of $100.
    I know this cannot be fulfilled by standard SAP, but any thoughts on this are welcome.
    2. I know a rebate can be settled partially or in full by the payment method (cheque, bank transfer, or credit memo) that we configure in the rebate agreement type. But here the requirement is totally different.
    Here, I need to pay the rebate as a discount instead of by cheque or credit memo. While doing the partial or full settlement, the system should take into account the accrual collected up to that day and apply it as a discount to the final bill.
    The scenario is that sometimes the customer asks us for a discount on the bill for whatever they have accrued so far.
    Again, this cannot be solved by standard SAP, but any thoughts from anybody are welcome. We have already concluded that we need to enhance the solution.
    Solution needed urgently.

  • Unable to allocate 27160 bytes.........Help needed urgently

    hi
    in my production database I'm getting this error:
    ORA-04031: unable to allocate 27160 bytes of shared memory ("shared
    pool","unknown object","sga heap(1,0)","session param values")
    help needed urgently

    If you have a program that does not use bind variables you can get this error.
    In such cases you do not want to increase the size of the shared pool, but reduce it, and flush regularly. This is a bug in the application and should be fixed to use bind variables.
    Another possible workaround is setting cursor_sharing = force, but this can cause other problems, so it should only be used as a last resort. If the app's connections can be distinguished by user account or machine, then a logon trigger could set cursor_sharing just for that application, to limit the damage until the vendor can fix it.
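    To illustrate the bind-variable point in code, here is a hedged JDBC sketch (the orders table and id column are made-up names): concatenating the value produces distinct SQL text for every call, so each one is hard-parsed into its own cursor, while the bound version keeps the text identical and lets one cursor be shared.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class BindVariableSketch {
        // Hard parse per distinct value: every orderId yields new SQL text.
        static ResultSet badLookup(Connection conn, int orderId) throws SQLException {
            return conn.createStatement()
                       .executeQuery("SELECT * FROM orders WHERE id = " + orderId);
        }
        // Bind variable: the SQL text is identical for every call, so the
        // database can reuse one shared cursor regardless of the value.
        static ResultSet goodLookup(Connection conn, int orderId) throws SQLException {
            PreparedStatement ps =
                conn.prepareStatement("SELECT * FROM orders WHERE id = ?");
            ps.setInt(1, orderId);
            return ps.executeQuery();
        }
    }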

  • My Iphone 3g is so slow, can anyone help by giving suggestions on how to improve speed?

    Can anyone help? My iPhone 3G is so slow; can you help by giving suggestions?

    hi guys... I have managed to do this but I think there is still something wrong... can any kind soul point out the mistakes? Thanks!
    class Lab_04 {
         public static int wrongans = 0;
         public static int score = 100;
         // probability of answering each of the eight questions wrongly
         public static double[] probabilities =
              {0.81, 0.3, 0.95, 0.13, 0.79, 0.83, 0.32, 0.68};
         public static void main(String[] args) {
              for (double p : probabilities) {
                   if (Math.random() >= p) {
                        System.out.print("O");   // correct
                   } else {
                        wrongans++;
                        System.out.print("X");   // wrong
                   }
              }
              System.out.println();
              System.out.println("Score (out of 100):" + (score - (20 * wrongans)));
         }
    }

  • How to improve speed of queries that use ORM one table per concrete class

    Hi,
    Many tools that do ORM (Object Relational Mapping), like Castor, Hibernate, TopLink, JPOX, etc., have a one-table-per-concrete-class feature that maps objects to the following structure:
    CREATE TABLE ABSTRACTPRODUCT (
        ID VARCHAR(8) NOT NULL,
        DESCRIPTION VARCHAR(60) NOT NULL,
        PRIMARY KEY(ID)
    );
    CREATE TABLE PRODUCT (
        ID VARCHAR(8) NOT NULL REFERENCES ABSTRACTPRODUCT(ID),
        CODE VARCHAR(10) NOT NULL,
        PRICE DECIMAL(12,2),
        PRIMARY KEY(ID)
    );
    CREATE UNIQUE INDEX iProduct ON Product(code);
    CREATE TABLE BOOK (
        ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
        AUTHOR VARCHAR(60) NOT NULL,
        PRIMARY KEY (ID)
    );
    CREATE TABLE COMPACTDISK (
        ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
        ARTIST VARCHAR(60) NOT NULL,
        PRIMARY KEY(ID)
    );
    Is there a way to improve queries like
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        pd.code like '101%'
    or like this:
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        abpd.description like '%STARS%' AND
        pd.price BETWEEN 1 AND 10
    Think of a table with many rows: is there something inside MaxDB to improve this type of query? Some annotations on the SQL? Or declaring tables that extend another by PK? On other databases I managed this using materialized views, but I think this could be faster just using the PK; am I wrong? Is it better to consolidate all the tables into one table? What is the impact on database size with such a consolidation?
    Note: with consolidation I will lose the NOT NULL constraints on the database side.
    Thanks for any insight.
    Clóvis
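    As a sketch of the consolidation trade-off in ORM terms (assuming a JPA-style mapping; Castor, Hibernate, TopLink, and JPOX have their own equivalents, and the class names simply mirror the tables above), switching the hierarchy to a single-table strategy removes the joins at the cost of the NOT NULL constraints already mentioned:

    import javax.persistence.DiscriminatorColumn;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Inheritance;
    import javax.persistence.InheritanceType;

    // Whole hierarchy in one table, distinguished by a discriminator column:
    // queries need no joins, but subclass columns must be nullable.
    @Entity
    @Inheritance(strategy = InheritanceType.SINGLE_TABLE)
    @DiscriminatorColumn(name = "PRODUCT_TYPE")
    abstract class AbstractProduct {
        @Id
        String id;
        String description;
    }

    @Entity
    class Book extends AbstractProduct {
        String author;   // nullable: CompactDisk rows share the same table
    }

    @Entity
    class CompactDisk extends AbstractProduct {
        String artist;   // likewise nullable
    }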

    Hi Lars,
    I don't understand why the optimizer picks that index for TM in the execution plan, and why it doesn't use the join via the KEY column. Note that the WHERE clause is "TM.OID = MF.MY_TIPO_MOVIMENTO", on the key column, yet the optimizer uses an index whose indexed column is ID_SYS, which isn't and can't be a primary key because it is not UNIQUE. The index columns follow:
    indexes of TipoMovimento
    INDEXNAME     COLUMNNAME          SORT     COLUMNNO     DATATYPE     LEN     INDEX_USED     FILESTATE     DISABLED
    ITIPOMOVIMENTO     TIPO               ASC     1          VARCHAR          2     220546          OK          NO
    ITIPOMOVIMENTO     ID_SYS               ASC     2          CHAR          6     220546          OK          NO
    ITIPOMOVIMENTO     MY_CONTA_DEBITO          ASC     3          CHAR          8     220546          OK          NO
    ITIPOMOVIMENTO     MY_CONTA_CREDITO     ASC     4          CHAR          8     220546          OK          NO
    ITIPOMOVIMENTO1     ID_SYS               ASC     1          CHAR          6     567358          OK          NO
    ITIPOMOVIMENTO2     DESCRICAO          ASC     1          VARCHAR          60     94692          OK          NO
    After I created the index iTituloCobrancaX7 on TituloCobranca(OID, DATA_VENCIMENTO) in a backup instance, I was surprised by the following explain plan:
    OWNER     TABLENAME     COLUMN_OR_INDEX          STRATEGY                    PAGECOUNT     
         TC          ITITULOCOBRANCA1     RANGE CONDITION FOR INDEX          5368     
                   DATA_VENCIMENTO               (USED INDEX COLUMN)          
         MF          OID               JOIN VIA KEY COLUMN               9427     
         TM          OID               JOIN VIA KEY COLUMN               22     
                                  TABLE HASHED          
         PS          OID               JOIN VIA KEY COLUMN               1350     
         BOL          OID               JOIN VIA KEY COLUMN               497     
                                       NO TEMPORARY RESULTS CREATED          
         JDBC_CURSOR_19                    RESULT IS COPIED   , COSTVALUE IS     988
    Note that now the optimizer picks the index ITITULOCOBRANCA1 as I expected. If I drop the new index iTituloCobrancaX7 the optimizer keeps this execution plan, and with it the query executes in 110 ms. With that great news I did the same thing on the production system, but the execution plan doesn't change, and I still get a long execution time, this time 413516 ms. Maybe the problem is how the optimizer measures my tables.
    I checked in DBAnalyser that the problem is the catalog cache hit rate (we discussed this at [catalog cache hit rate, how to increase?|;
    ) and the low selectivity of this SQL command. Because of this, to achieve better selectivity I would need an index on MF.MY_SACADO, MF.TIPO and TC.DATA_VENCIMENTO, as explained in previous posts. Since this type of index isn't possible inside MaxDB, I have no way to speed up this type of query without changing the table structure.
    Could the MaxDB developers implement this type of index? Or are there no plans for such a feature?
    If not, I must create another schema and consolidate tables to speed up the queries on my system, but with that consolidation I will get more overhead. I must solve the low selectivity, because I think that if the data in the tables grows the query will become impossible. I see that CREATE INDEX supports FUNCTION; maybe a FUNCTION that joins data from two tables could solve this?
    About the instance configuration, it is:
    Machine:
    Version:       '64BIT Kernel'
    Version:       'X64/LIX86 7.6.03   Build 007-123-157-515'
    Version:       'FAST'
    Machine:       'x86_64'
    Processors:    2 ( logical: 8, cores: 8 )
    data volumes:
    ID     MODE     CONFIGUREDSIZE     USABLESIZE     USEDSIZE     USEDSIZEPERCENTAGE     DROPVOLUME     TOTALCLUSTERAREASIZE     RESERVEDCLUSTERAREASIZE     USEDCLUSTERAREASIZE     PATH     
    1     NORMAL     4194304          4194288          379464          9               NO          0               0               0               /db/SPDT/data/data01.dat     
    2     NORMAL     4194304          4194288          380432          9               NO          0               0               0               /db/SPDT/data/data02.dat     
    3     NORMAL     4194304          4194288          379184          9               NO          0               0               0               /db/SPDT/data/data03.dat     
    4     NORMAL     4194304          4194288          379624          9               NO          0               0               0               /db/SPDT/data/data04.dat     
    5     NORMAL     4194304          4194288          380024          9               NO          0               0               0               /db/SPDT/data/data05.dat
    log volumes:
    ID     CONFIGUREDSIZE     USABLESIZE     PATH               MIRRORPATH
    1     51200          51176          /db/SPDT/log/log01.dat     ?
    parameters:
    KERNELVERSION                         KERNEL    7.6.03   BUILD 007-123-157-515
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    _SERVERDB_FOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      ISO
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         2
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   /db/SPDT/log/log01.dat
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   6400
    DATA_VOLUME_NAME_0005                 /db/SPDT/data/data05.dat
    DATA_VOLUME_NAME_0004                 /db/SPDT/data/data04.dat
    DATA_VOLUME_NAME_0003                 /db/SPDT/data/data03.dat
    DATA_VOLUME_NAME_0002                 /db/SPDT/data/data02.dat
    DATA_VOLUME_NAME_0001                 /db/SPDT/data/data01.dat
    DATA_VOLUME_TYPE_0005                 F
    DATA_VOLUME_TYPE_0004                 F
    DATA_VOLUME_TYPE_0003                 F
    DATA_VOLUME_TYPE_0002                 F
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0005                 524288
    DATA_VOLUME_SIZE_0004                 524288
    DATA_VOLUME_SIZE_0003                 524288
    DATA_VOLUME_SIZE_0002                 524288
    DATA_VOLUME_SIZE_0001                 524288
    DATA_VOLUME_MODE_0005                 NORMAL
    DATA_VOLUME_MODE_0004                 NORMAL
    DATA_VOLUME_MODE_0003                 NORMAL
    DATA_VOLUME_MODE_0002                 NORMAL
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    LOG_MIRRORED                          NO
    MAXVOLUMES                            14
    LOG_IO_BLOCK_COUNT                    8
    DATA_IO_BLOCK_COUNT                   64
    BACKUP_BLOCK_CNT                      64
    _DELAY_LOGWRITER                      0
    LOG_IO_QUEUE                          50
    _RESTART_TIME                         600
    MAXCPU                                8
    MAX_LOG_QUEUE_COUNT                   0
    USED_MAX_LOG_QUEUE_COUNT              8
    LOG_QUEUE_COUNT                       1
    MAXUSERTASKS                          500
    _TRANS_RGNS                           8
    _TAB_RGNS                             8
    _OMS_REGIONS                          0
    _OMS_RGNS                             7
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        8
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    _ROW_RGNS                             8
    RESERVEDSERVERTASKS                   16
    MINSERVERTASKS                        28
    MAXSERVERTASKS                        28
    _MAXGARBAGE_COLL                      1
    _MAXTRANS                             4008
    MAXLOCKS                              120080
    _LOCK_SUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       180
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    _IOPROCS_PER_DEV                      2
    _IOPROCS_FOR_PRIO                     0
    _IOPROCS_FOR_READER                   0
    _USE_IOPROCS_ONLY                     NO
    _IOPROCS_SWITCH                       2
    LRU_FOR_SCAN                          NO
    _PAGE_SIZE                            8192
    _PACKET_SIZE                          131072
    _MINREPLY_SIZE                        4096
    _MBLOCK_DATA_SIZE                     32768
    _MBLOCK_QUAL_SIZE                     32768
    _MBLOCK_STACK_SIZE                    32768
    _MBLOCK_STRAT_SIZE                    16384
    _WORKSTACK_SIZE                       8192
    _WORKDATA_SIZE                        8192
    _CAT_CACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      131072
    INIT_ALLOCATORSIZE                    262144
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    _TASKCLUSTER_01                       tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
    _TASKCLUSTER_02                       ti,100*dw;63*us;
    _TASKCLUSTER_03                       equalize
    _DYN_TASK_STACK                       NO
    _MP_RGN_QUEUE                         YES
    _MP_RGN_DIRTY_READ                    DEFAULT
    _MP_RGN_BUSY_WAIT                     DEFAULT
    _MP_DISP_LOOPS                        2
    _MP_DISP_PRIO                         DEFAULT
    MP_RGN_LOOP                           -1
    _MP_RGN_PRIO                          DEFAULT
    MAXRGN_REQUEST                        -1
    _PRIO_BASE_U2U                        100
    _PRIO_BASE_IOC                        80
    _PRIO_BASE_RAV                        80
    _PRIO_BASE_REX                        40
    _PRIO_BASE_COM                        10
    _PRIO_FACTOR                          80
    _DELAY_COMMIT                         NO
    _MAXTASK_STACK                        512
    MAX_SERVERTASK_STACK                  500
    MAX_SPECIALTASK_STACK                 500
    _DW_IO_AREA_SIZE                      50
    _DW_IO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    _FBM_LOW_IO_RATE                      10
    CACHE_SIZE                            262144
    _DW_LRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    _DATA_CACHE_RGNS                      64
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              64
    SEQUENCE_CACHE                        1
    _IDXFILE_LIST_SIZE                    2048
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    _READAHEAD_BLOBS                      32
    CLUSTER_WRITE_THRESHOLD               80
    CLUSTERED_LOBS                        NO
    RUNDIRECTORY                          /var/opt/sdb/data/wrk/SPDT
    OPMSG1                                /dev/console
    OPMSG2                                /dev/null
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        2
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        20
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       5369
    EXTERNAL_DUMP_REQUEST                 NO
    _AK_DUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    _UTILITY_PROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    _BACKUP_HISTFILE                      dbm.knl
    _BACKUP_MED_DEF                       dbm.mdf
    _MAX_MESSAGE_FILES                    0
    _SHMKERNEL                            44601
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-03 23:12:55
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     /var/opt/sdb/data/wrk/SPDT/DIAGHISTORY
    _DIAG_SEM                             1
    SHOW_MAX_STACK_USE                    NO
    SHOW_MAX_KB_STACK_USE                 NO
    LOG_SEGMENT_SIZE                      2133
    _COMMENT                              
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    OFFICIAL_NODE                         
    UKT_CPU_RELATIONSHIP                  NONE
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    30
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       YES
    USE_OPEN_DIRECT_FOR_BACKUP            NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    JOIN_TABLEBUFFER                      128
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             YES
    SHAREDSQL_CLEANUPTHRESHOLD            25
    SHAREDSQL_COMMANDCACHESIZE            262144
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    FORBID_LOAD_BALANCING                 YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    ENABLE_CHECK_INSTANCE                 YES
    RTE_TEST_REGIONS                      0
    HASHED_RESULTSET                      YES
    HASHED_RESULTSET_CACHESIZE            262144
    CHECK_HASHED_RESULTSET                0
    AUTO_RECREATE_BAD_INDEXES             NO
    AUTHENTICATION_ALLOW                  
    AUTHENTICATION_DENY                   
    TRACE_AK                              NO
    TRACE_DEFAULT                         NO
    TRACE_DELETE                          NO
    TRACE_INDEX                           NO
    TRACE_INSERT                          NO
    TRACE_LOCK                            NO
    TRACE_LONG                            NO
    TRACE_OBJECT                          NO
    TRACE_OBJECT_ADD                      NO
    TRACE_OBJECT_ALTER                    NO
    TRACE_OBJECT_FREE                     NO
    TRACE_OBJECT_GET                      NO
    TRACE_OPTIMIZE                        NO
    TRACE_ORDER                           NO
    TRACE_ORDER_STANDARD                  NO
    TRACE_PAGES                           NO
    TRACE_PRIMARY_TREE                    NO
    TRACE_SELECT                          NO
    TRACE_TIME                            NO
    TRACE_UPDATE                          NO
    TRACE_STOP_ERRORCODE                  0
    TRACE_ALLOCATOR                       0
    TRACE_CATALOG                         0
    TRACE_CLIENTKERNELCOM                 0
    TRACE_COMMON                          0
    TRACE_COMMUNICATION                   0
    TRACE_CONVERTER                       0
    TRACE_DATACHAIN                       0
    TRACE_DATACACHE                       0
    TRACE_DATAPAM                         0
    TRACE_DATATREE                        0
    TRACE_DATAINDEX                       0
    TRACE_DBPROC                          0
    TRACE_FBM                             0
    TRACE_FILEDIR                         0
    TRACE_FRAMECTRL                       0
    TRACE_IOMAN                           0
    TRACE_IPC                             0
    TRACE_JOIN                            0
    TRACE_KSQL                            0
    TRACE_LOGACTION                       0
    TRACE_LOGHISTORY                      0
    TRACE_LOGPAGE                         0
    TRACE_LOGTRANS                        0
    TRACE_LOGVOLUME                       0
    TRACE_MEMORY                          0
    TRACE_MESSAGES                        0
    TRACE_OBJECTCONTAINER                 0
    TRACE_OMS_CONTAINERDIR                0
    TRACE_OMS_CONTEXT                     0
    TRACE_OMS_ERROR                       0
    TRACE_OMS_FLUSHCACHE                  0
    TRACE_OMS_INTERFACE                   0
    TRACE_OMS_KEY                         0
    TRACE_OMS_KEYRANGE                    0
    TRACE_OMS_LOCK                        0
    TRACE_OMS_MEMORY                      0
    TRACE_OMS_NEWOBJ                      0
    TRACE_OMS_SESSION                     0
    TRACE_OMS_STREAM                      0
    TRACE_OMS_VAROBJECT                   0
    TRACE_OMS_VERSION                     0
    TRACE_PAGER                           0
    TRACE_RUNTIME                         0
    TRACE_SHAREDSQL                       0
    TRACE_SQLMANAGER                      0
    TRACE_SRVTASKS                        0
    TRACE_SYNCHRONISATION                 0
    TRACE_SYSVIEW                         0
    TRACE_TABLE                           0
    TRACE_VOLUME                          0
    CHECK_BACKUP                          NO
    CHECK_DATACACHE                       NO
    CHECK_KB_REGIONS                      NO
    CHECK_LOCK                            NO
    CHECK_LOCK_SUPPLY                     NO
    CHECK_REGIONS                         NO
    CHECK_TASK_SPECIFIC_CATALOGCACHE      NO
    CHECK_TRANSLIST                       NO
    CHECK_TREE                            NO
    CHECK_TREE_LOCKS                      NO
    CHECK_COMMON                          0
    CHECK_CONVERTER                       0
    CHECK_DATAPAGELOG                     0
    CHECK_DATAINDEX                       0
    CHECK_FBM                             0
    CHECK_IOMAN                           0
    CHECK_LOGHISTORY                      0
    CHECK_LOGPAGE                         0
    CHECK_LOGTRANS                        0
    CHECK_LOGVOLUME                       0
    CHECK_SRVTASKS                        0
    OPTIMIZE_AGGREGATION                  YES
    OPTIMIZE_FETCH_REVERSE                YES
    OPTIMIZE_STAR_JOIN                    YES
    OPTIMIZE_JOIN_ONEPHASE                YES
    OPTIMIZE_JOIN_OUTER                   YES
    OPTIMIZE_MIN_MAX                      YES
    OPTIMIZE_FIRST_ROWS                   YES
    OPTIMIZE_OPERATOR_JOIN                YES
    OPTIMIZE_JOIN_HASHTABLE               YES
    OPTIMIZE_JOIN_HASH_MINIMAL_RATIO      1
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_MINSIZE        1000000
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_QUAL_ON_INDEX                YES
    DDLTRIGGER                            YES
    SUBTREE_LOCKS                         NO
    MONITOR_READ                          2147483647
    MONITOR_TIME                          2147483647
    MONITOR_SELECTIVITY                   0
    MONITOR_ROWNO                         0
    CALLSTACKLEVEL                        0
    OMS_RUN_IN_UDE_SERVER                 NO
    OPTIMIZE_QUERYREWRITE                 OPERATOR
    TRACE_QUERYREWRITE                    0
    CHECK_QUERYREWRITE                    0
    PROTECT_DATACACHE_MEMORY              NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FILEDIR_SPINLOCKPOOL_SIZE             10
    TRANS_HISTORY_SIZE                    0
    TRANS_THRESHOLD_VALUE                 60
    ENABLE_SYSTEM_TRIGGERS                YES
    DBFILLINGABOVELIMIT                   70L80M85M90H95H96H97H98H99H
    DBFILLINGBELOWLIMIT                   70L80L85L90L95L
    LOGABOVELIMIT                         50L75L90M95M96H97H98H99H
    AUTOSAVE                              1
    BACKUPRESULT                          1
    CHECKDATA                             1
    EVENT                                 1
    ADMIN                                 1
    ONLINE                                1
    UPDSTATWANTED                         1
    OUTOFSESSIONS                         3
    ERROR                                 3
    SYSTEMERROR                           3
    DATABASEFULL                          1
    LOGFULL                               1
    LOGSEGMENTFULL                        1
    STANDBY                               1
    USESELECTFETCH                        YES
    USEVARIABLEINPUT                      NO
    UPDATESTAT_PARALLEL_SERVERS           0
    UPDATESTAT_SAMPLE_ALGO                1
    SIMULATE_VECTORIO                     IF_OPEN_DIRECT_OR_RAW_DEVICE
    COLUMNCOMPRESSION                     YES
    TIME_MEASUREMENT                      NO
    CHECK_TABLE_WIDTH                     NO
    MAX_MESSAGE_LIST_LENGTH               100
    SYMBOL_RESOLUTION                     YES
    PREALLOCATE_IOWORKER                  NO
    CACHE_IN_SHARED_MEMORY                NO
    INDEX_LEAF_CACHING                    2
    NO_SYNC_TO_DISK_WANTED                NO
    SPINLOCK_LOOP_COUNT                   30000
    SPINLOCK_BACKOFF_BASE                 1
    SPINLOCK_BACKOFF_FACTOR               2
    SPINLOCK_BACKOFF_MAXIMUM              64
    ROW_LOCKS_PER_TRANSACTION             50
    USEUNICODECOLUMNCOMPRESSION           NO
    About sending you the data from the tables: I don't have permission to do that. Since all the data is in a production system, the customer doesn't give me the rights to send any information. Sorry about that.
    best regards
    Clóvis

  • How to track the Modified Data-Please help

    Hi Gurus,
    Background about the issue: we have Customers and Customer Tier in Siebel On Demand. Every month, after bookings are done, the customer tier is modified depending on the revenue generated by the customer. Say, for example, Customer XXX was under 'A', but due to bad bookings for the month they are now downgraded to 'B'. In this way we get a list of customer tiers for the month for all customers, and it is uploaded into Siebel.
    Now the challenge for me is, while reporting in Analytics, how to retain the previous value. For example, for Customer XXX the customer tier was A in August and is modified to B in September. I want to retain the previous value in my reports.
    In one of the Siebel query documents I saw that the PRE <'Field Reference'> syntax holds the previous value of the field, but it did not work in my case.
    Any suggestions on how to retain the old value after modifications?
    Thanks for the help.
    BK

    BK,
    My suggestion would be to write workflows that capture the data into the task description when it changes, e.g.
    when PRE(CustomerTier) <> CustomerTier, create a task that puts the Customer Tier into the description,
    then, using the task created date, reflect that in the activity report.
    cheers
    Alex

  • Agilent E4980A Measurement Speed and Data Acquisition Speed

    I am using an E4980A LCR meter. I need very fast data acquisition, which is unfortunately limited by the device's own speed of 5.6 ms per measurement, i.e. nearly 178 Hz. I am using the USB interface with the software provided by NI; in that program I made some modifications, putting the reading side of the program in a while loop.
    To test my speed I start the program, count 5 s, and stop, saving the results to an Excel file. If I plot the results in a graph while the program is working I get 60 Hz. If I don't, it is 80 Hz, which is far below the potential maximum speed of 178 Hz. If my laptop battery is very low my speed is worse. But with no battery problem, my Celeron laptop performs the same as an i7 laptop.
    Here is my program; it is the same one provided by NI but with some modifications. What can I do to get a higher data acquisition speed?
    Probably USB's speed is not enough, but I cannot believe that an interface that can save gigabytes of data in a few minutes can't take 200 data points in 1 s. Why can't I get 178 Hz now, and how can I reach this limit?
    Regards.
    Attachments:
    Read Measurement.PNG 44 KB

    Instrument communications via serial, USB, Ethernet, GPIB, etc. just tend to be slow. The instrument has to interpret the data, react to it, and then send data back. That takes time. A few ms per measurement is quite normal. One option you might have is to tell the instrument to take several measurements and then request all of them once it is done. I haven't looked into the E4980A yet to see if it can do that.
    What exactly are you trying to measure? There might be better ways to get "fast" readings.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions

  • HT5957 How to improve speed in Iphone 4 using ios 7

    I am using an iPhone 4 and updated to iOS 7, but the speed is very low compared to the last version. Please advise me on how to increase the speed.
    Opening a contact is very slow.
    Calling any number is very slow.

    Hi edumaravilla,
    Welcome to the Support Communities!
    The following information will explain how Airplay has changed in iOS 7:
    iOS: Using AirPlay
    http://support.apple.com/kb/HT4437
    Here are some troubleshooting steps for Airplay:
    iOS: Troubleshooting AirPlay and AirPlay Mirroring
    http://support.apple.com/kb/TS4215
    Cheers,
    - Judy

  • Duplicate Data Fields Help Needed

    We have a 9-page Word form that has some duplicate information. We have converted it to LiveCycle, but I am stuck on finding out how to have the data from one field also show in another so the user does not have to enter it twice. Eventually we will remove the duplicate fields, but for many reasons they must remain for now. I don't want a contractor to have to enter their company name twice, etc. I have my main data collection form at the front of the 9 pages, and when they get to the last page with signatures, etc., I would like what they've already entered for their company information to show.
    I am very new to Designer ES, and when I did some forms in Adobe 9.0 Forms a while back, all I needed to do was name the fields the same, and if I entered info in one it would show in the other. ES does not seem to do that, and I can't seem to find out how to add any code to the field. I don't know why, but a lot of my options are grayed out.
    Anyway, I would very much appreciate help. Thank you.

    Giving them the same name is only one part of the solution... you have to change the binding (on the Object palette, Binding tab) to Global.
    Paul
