I need timed data logging with continuous graphing

I am graphing continuous temperature measurements, but I only want to log measurements to a spreadsheet once every hour. How do I do that?

Hello,
In order to log the measurements every hour, you will want to implement a "Wait (ms)" function. This function can be added to the block diagram by [right-clicking] and selecting the following:
[All Functions] >> [Time & Dialog] >> [Wait (ms)]
You will want to place this wait function inside your code and wire a constant to the wait function with a value of 3,600,000 (60min/hr x 60sec/min x 1000ms/s). To wire a constant to the wait function, [right-click] on the input terminal of the "wait (ms)" icon and select [Create]>>[Constant].
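For reference, here is the same timing idea expressed outside LabVIEW: a minimal Python sketch (not the actual block diagram) in which the loop keeps updating the graph every iteration but appends a row to the spreadsheet file only once per hour. The read_temperature() function, the file name, and the elapsed-time check are illustrative assumptions.

import time

LOG_INTERVAL_S = 60 * 60          # one hour, i.e. the 3,600,000 ms constant above

def read_temperature():
    """Placeholder for the actual temperature measurement."""
    return 25.0

last_log = time.monotonic()
with open("temperature_log.csv", "a") as log_file:
    while True:
        temp = read_temperature()
        # update the continuous graph with temp here (placeholder)
        if time.monotonic() - last_log >= LOG_INTERVAL_S:
            log_file.write(f"{time.time()},{temp}\n")   # one spreadsheet row per hour
            log_file.flush()
            last_log = time.monotonic()
        time.sleep(1.0)                                  # fast loop so the graph stays live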
I hope this helps. Please let me know if I can further assist you.
Kind Regards,
Joe Des Rosier
National Instruments

Similar Messages

  • Multi-task data logging with DAQmx

    I was wondering if it is possible to use the 'DAQmx Configure Logging' VI and 'DAQmx Start New File' VI for multiple tasks.  I'm doing synchronized high-speed DAQ with NI PXI-6133 cards.  Each card (there are 16) must have its own task.  Although the DAQ is continuous, the user (software trigger) determines when data is saved to disk and for how long.
    In my scenario the test length could be up to an hour, with various test events scattered throughout.  The users want to display the data during the entire test length; however, they only want to write the data to disk during an event.  An event could last from 10 sec to 1 min.  That is why the users want to control when data is written to disk.
    DAQmx Logging seems to work for a single task only, but I need to do multiple tasks.

    I've attempted to implement your suggestion, but I still do not acquire data for all channels for all tasks.  I've enclosed my VI.
    Attachments:
    TDMS Logging with Pause LoggingFSPR.vi ‏55 KB
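    For reference, the display-always / log-only-during-an-event pattern described above might be sketched like this in Python, purely for illustration (the task IDs, read_block(), and file format are placeholders, not DAQmx calls):

    import struct
    import time

    def read_block(task_id):
        """Placeholder for one block of samples from one DAQ task."""
        return [0.0] * 1000

    tasks = range(16)            # one task per PXI-6133 card (placeholder IDs)
    logging_enabled = False      # flipped by the software trigger / operator

    log_files = {t: open(f"task_{t}.bin", "ab") for t in tasks}
    while True:
        for t in tasks:
            data = read_block(t)
            # display(data)  -- data is always shown to the user (placeholder)
            if logging_enabled:
                log_files[t].write(struct.pack(f"{len(data)}d", *data))
        time.sleep(0.01)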

  • Pulsewidth data logging with counter channel

    Hello everyone.
    I have a question about logging pulse durations, measured with a counter input, into a .lvm file.
    The pulse signal has a frequency of 100 Hz, with the pulse duration varying from 2 to 8 ms.
    I want to log the pulse duration in a .lvm file.
    When running the VI (screenshot below), I find the data is logged with duplicate time instances.
    I want to log the pulse-duration data on every rising or falling edge of the pulse,
    so if I run the VI for 20 sec I should have 2000 readings of pulse duration in my .lvm file, with the appropriate time in the time column.
    Attachments:
    pulsewidth using counter.jpg ‏659 KB

    Hello Karthik,
    I can think of two ways I would approach this. One would use software timing, and the other would use hardware timing.
    1. Software timing: This might be the easiest approach. The idea is you would use your example much as it is written, then use a loop with a delay as the "Time between samples" to call your counting code over and over. In this case, I have a few suggestions for your code. First, instead of the loop, you could start the counter, use a delay in a sequence structure (use "Wait (ms)", not "Wait Until Next ms Multiple") for your "Sampling time", then read the counter and stop. After the stop you'll need another "Wait (ms)" in a sequence structure for your "Time between samples". Finally, wrap a loop around all of this to repeat it. One advantage of this approach is the user can change the between time on the fly.
    2. Hardware timing: The idea behind this would be to use two counters. The first would count; the output of the second would be wired to the gate of the first to control when the first counts. The second counter would be programmed to output a pulse train where the positive portion of the pulse would be your "Sampling time" and the negative portion would be your "Time between samples". The creative part for this approach would be figuring out when to read the count. One way to do this would be to also connect the gate of the second counter to an analog input, then read the count whenever the input goes low (say, below 1 volt). This approach might be the more accurate of the two. However, you would always be getting a total count, so you would have to subtract the previous count each time. Also, you would not be able to change your sampling time or between time once you start.
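    As a rough illustration of the software-timed approach (option 1), here is the same start / wait / read / stop / wait structure sketched in Python; the counter functions are placeholders, not DAQmx calls, and the two delay values stand in for "Sampling time" and "Time between samples":

    import time

    def start_counter():
        """Placeholder: arm the counter input."""

    def read_counter():
        """Placeholder: read the accumulated count / pulse width."""
        return 0

    def stop_counter():
        """Placeholder: stop the counter input."""

    SAMPLING_TIME_S = 1.0          # "Sampling time"
    TIME_BETWEEN_SAMPLES_S = 4.0   # "Time between samples" (can change on the fly)

    while True:
        start_counter()
        time.sleep(SAMPLING_TIME_S)         # first Wait (ms) in a sequence structure
        value = read_counter()
        stop_counter()
        # append value plus a timestamp to the .lvm file here (placeholder)
        time.sleep(TIME_BETWEEN_SAMPLES_S)  # second Wait (ms) between samples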
    Finally, I'd suggest looking at some examples - you can usually find code to help you.
    Best Regards,
    Doug Norman

  • Continuous Data Logging with NI 9236 an cRIO 9076 with FPGA

    Hey all,
    I'm a beginner in LabVIEW/FPGA. My goal is to continuously acquire and log data. I have a 9236 (CH0 is connected to a strain gauge) and a cRIO 9076.
    I've written some code, and I can see the incoming data in the FPGA VI.
    On the host VI, however, no data comes out of the FIFO.
    There are no error messages and no errors during compilation.
    Do I have a timing problem? Where is the big mistake?
    Thank you!
    Attachments:
    1.jpg ‏98 KB
    2.jpg ‏71 KB
    4.jpg ‏337 KB

    In your first image one problem is that you are starting the module on each iteration of the loop.  I can't tell how your FIFO is configured, but take a look at the example "Hardware Input and Output->CompactRIO->Module Specific IO->Analog Input->NI 923x Continuous DMA.lvproj".  I don't know which LabVIEW version you are using, but I found this example in 2012.
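    To make the structural point explicit, here is an illustration-only sketch (placeholder functions, not actual FPGA or host code): the module/FIFO is started once, outside the loop, and the loop body only reads and logs.

    def start_module():
        """Placeholder: start the NI 9236 module / DMA FIFO once."""

    def read_sample():
        """Placeholder: read one element from the module / FIFO."""
        return 0.0

    def log_and_display(value):
        """Placeholder: update the chart and write to the log."""

    start_module()                        # done once, before the loop
    while True:
        log_and_display(read_sample())    # the loop only reads and logs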

  • 64 channel 1450 Hz data logging with PCI-6033E card, on-line averaging and viewing the raw and averaged signal in CWGraphs.

    I'm acquiring 64 channels at a sampling rate of about 1450 Hz, continuously, and showing the data second by second in a CWGraph. Now I would like to add on-line averaging for all 64 channels and show it in another CWGraph. The averaging should occur with respect to an external trigger, and each sweep should consist of about a 100 ms prestimulus and a 500 ms poststimulus period. Has anybody handled this sort of thing? How is it solved with ComponentWorks and Visual Basic?
    I'm using dual Pentium 700 MHz processors and an NI-DAQ PCI-6033E card.

    1. To get started with external triggering check out the examples in the Triggering folder that install with CW under MeasurementStudio\VB\Samples\DAQ\Triggering.
    2. Create a prestimulus buffer that always contains the latest 100 ms of data. Just modify the CWAI1_AcquiredData event to transfer ScaledData (the data acquired) to your prestimulus buffer. See how there will always be a prestimulus buffer of data regardless of trigger state?
    3. On the trigger, you can use the same CWAI1_AcquiredData event to route the data into poststimulus buffer and average the 2.
    I see something like this:
    if no trigger
    100 ms of data to prestimulus buffer
    if trigger
    500 ms of data to poststimulus buffer
    averaging and display function
    reset trigger and buffers
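    A rough single-channel sketch of that buffering logic, in Python purely for illustration (the handler name and data shapes mirror the description above but are assumptions; the real code would live in the CWAI1_AcquiredData handler in Visual Basic):

    from collections import deque

    SAMPLE_RATE_HZ = 1450
    PRE_SAMPLES = int(0.100 * SAMPLE_RATE_HZ)    # ~100 ms prestimulus
    POST_SAMPLES = int(0.500 * SAMPLE_RATE_HZ)   # ~500 ms poststimulus

    pre_buffer = deque(maxlen=PRE_SAMPLES)       # always holds the latest 100 ms
    post_buffer = []
    triggered = False

    def on_acquired_data(scaled_data, trigger_seen):
        """Called once per acquired block (one channel shown for simplicity)."""
        global triggered
        if not triggered:
            pre_buffer.extend(scaled_data)       # keep the rolling prestimulus window
            triggered = trigger_seen
        else:
            post_buffer.extend(scaled_data)
            if len(post_buffer) >= POST_SAMPLES:
                sweep = list(pre_buffer) + post_buffer[:POST_SAMPLES]
                # average this sweep into the running average and display it
                post_buffer.clear()
                triggered = False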
    good luck
    ben schulte
    application engineer
    national instruments
    www.ni.com/ask

  • Data logging with gpib device

    I have a lock-in amp controlled by labview via gpib. My goal is to use the
    "advanced data logger" supplied with labview to log data from a daq card
    and the output from the lock-in amp. I have the lock-in vi working correctly
    and have the output on the screen. I don't know how to get the pre-packaged
    "advanced data logger" to read in data from the gpib device. I can only
    read in data from my daq card.
    Any assistance in this matter would be appreciated.
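    For reference, the general pattern being asked about - reading the lock-in over GPIB and writing it next to the DAQ value in one log row - might be sketched like this (Python with PyVISA, illustration only; the GPIB address, the query command, and read_daq_value() are assumptions, not the internals of the "advanced data logger"):

    import time
    import pyvisa

    def read_daq_value():
        """Placeholder for the DAQ-card reading the logger already handles."""
        return 0.0

    rm = pyvisa.ResourceManager()
    lockin = rm.open_resource("GPIB0::8::INSTR")           # assumed GPIB address

    with open("log.txt", "a") as f:
        for _ in range(10):
            daq_value = read_daq_value()
            lockin_value = float(lockin.query("OUTP? 1"))  # assumed SRS-style query
            f.write(f"{time.time()}\t{daq_value}\t{lockin_value}\n")
            time.sleep(1.0)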

    1.  An OR function should work just fine for you.  I don't understand what you mean that the functionality changed.  If the reset button is true OR the Trip button is true, then a True gets put in the notifier and the Trip file functions are executed.
    I don't understand your "Scale".  Right now you have a zero if the button is true and 100 if it is false.  You don't have anything else going on with it.  I don't understand the stay-at-zero part.  How would it ever get to the point where it isn't zero?
    2.  For the main log file, you could detect when 43200 iterations of the loop have passed and at that time, just reset the writing to the beginning of the file.
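    A minimal illustration of that reset-to-start idea (values and format are placeholders; 43200 iterations is roughly 12 hours at one loop pass per second):

    RESET_AFTER = 43200          # loop iterations before going back to the start

    with open("main_log.txt", "w") as f:
        for i in range(200000):
            if i > 0 and i % RESET_AFTER == 0:
                f.seek(0)        # reset writing to the beginning of the file
                f.truncate()
            f.write(f"iteration {i}\n")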
    See attached.
    Attachments:
    Save_Previous_10_SecondsMOD3.vi ‏27 KB

  • Need ASA DHCPD log with client hostname

    I recently switched from a Linux DHCP server to using DHCPD configuration on Cisco ASA 8.4 code.  With the Linux DHCP servers, the logs showed the hostname of the requesting DHCP client.  Unfortunately, I'm not seeing the hostname information in the DHCPD logs from the ASA.  How can I get the ASA to log the clients' hostname?
    Thanks

    I've got the Cisco VPN client 5.x setup with connection profile to Tunnel Group name and pre-shared key.
    Client is communicating with the ASA and is getting prompted for user login.  I have the ASA configured for AAA RADIUS authentication to MS IAS on a Windows 2003 server.   Experimenting on the IAS side between the IAS "connection policies" config and the AD user profile, I can now assign a static IP address to the remote VPN client, which is nice!  This can be done two ways... either in the IAS connection profile or in the AD user profile.  What I'm working on next is having the IAS server pass back to the ASA (RADIUS client) an ACL number (filter.id = 80.id) where I have an access-list 80 statement defined.  Not finished with the setup yet.  Any advice/input on this piece would be helpful.
    The basic goals of this exercise/project include:
    1.  Remote Cisco VPN users authenticating with AD.
    2.  Pre-configured .pcf file created and deployed to remote users.
    3.  Unique static IP's assigned to all VPN users for audit purposes (or troubleshooting).
    4.  Apply ACL's to VPN users based on their assigned static IP so I can control what subnet's/IP's they can reach.
    So far so good... We are a month or so away from implementing our first Windows 2008 server, so I'm fine with getting this to work for our 20-30 remote users with IAS in a Win2K server environment while I get educated on NPS.
    Joe

  • Data Logging with Fluke 189 Multimeter

    I am trying to get LabVIEW to read a signal coming off a Fluke 189 multimeter, and I keep encountering the error code -1073807341. I installed the fl18x driver, and LabVIEW still will not talk to the multimeter. I have been able to use the Fluke 189 multimeter before, but on a different computer and a newer version of LabVIEW. I was able to get the Fluke to work with LabVIEW 7.1.0, but not with LabVIEW 7.0.0 Express. Should I be able to use the Fluke with LabVIEW 7.0.0 Express?

    Hi,
    Yes, you should be able to use LabVIEW 7.0 with the Fluke 189.  The fl18x instrument drivers are available on the Instrument Driver Network, and there is a version for LabVIEW 7.0.  You may need to repair or reinstall the NI-VISA and NI-VISA Run-time Engine.  Make sure that you select support for LabVIEW 7.0 when installing the NI-VISA driver.  Also make sure that you installed LabVIEW 7.0 before installing LabVIEW 7.1, and that you installed the drivers after installing LabVIEW.
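    As a quick sanity check (illustration only, outside LabVIEW), you can ask VISA which resources it can see at all; if the meter's serial port is not listed, the problem is at the NI-VISA/driver level rather than in the fl18x VIs:

    import pyvisa

    rm = pyvisa.ResourceManager()
    print(rm.list_resources())   # e.g. ('ASRL1::INSTR', ...) if the COM port is visible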
    Regards,
    Rima
    Rima H.
    Web Product Manager

  • Help In keithley 2400 VI!!(Problem with the data logging and graph plotting)

    Hi, I need help badly =(.
    My program works fine when I run it, and I tested it out with a simple diode. The expected start current steps up nicely to the stop current. The only problem is that when it ends, I cannot get the data log and the graph, even though I have already written code for it. Can someone help me see what's wrong with the code? I've attached the necessary file below, and I'm working with LabVIEW 7.1.
    Thanks in advance!!!
    Attachments:
    24xx Swp-I Meas-V gpib.llb ‏687 KB

    Good morning,
    Without the instrument it might be hard for others to help troubleshoot the problem. Was there a specific LabVIEW programming question you had, are you having problems with the instrument communication, are there errors? I'd like to help, but could you provide some more specific information on what problems you are encountering, and maybe accompany that with a simple example which demonstrates the behavior? In general we will be unable to open specific code and debug it, but I'd be happy to help with specific questions.
    I did notice, though, that in your logging VI you have at least one section of code which appears to not do anything. It could be that a small section of code, or a wire, was removed and the data is not being updated correctly (see pic below). Is your file being opened properly? Is the data being passed to the file properly? What are some of the things you have examined so far?
    Sorry I could not provide the 'fix', but I'm confident that we can help. Thanks for posting, and have a great day-
    Message Edited by Travis M. on 07-11-2006 08:51 AM
    Travis M
    LabVIEW R&D
    National Instruments
    Attachments:
    untitled.JPG ‏88 KB

  • How do I control a data log session with period and sample time?

    I need a data logging system where the operator can select 2 logging parameters: Log Period and Sample Time. I also need a START and STOP button to control the logging session. For example, set the log period for 1 hour and the sampling time for 1 second. (I may be using the wrong jargon here.) In this case when the START button is clicked, the system starts logging for 1 second. An hour later, it logs data for another second, and so on until the operator clicks the STOP button. (I will also include a time limit so the logging session will automatically stop after a certain amount of time has elapsed.)
    It’s important that when the STOP button is clicked, that the system promptly stops logging. I cannot have the operator wait for up to an hour.
    Note that a logging session could last for several days. The application here involves a ship towing a barge at sea where they want to monitor and data log tow line tension. While the system is logging, I need the graph X-axis (autoscaled) to show the date and time. (I’m having trouble getting the graph to show the correct date and time.) For this application, I also need the system to promptly start data logging at a continuous high rate during alarm conditions.
    Of course I need to archive the data and retrieve it later for analysis. I think this part I can handle.
    Please make a recommendation for program control and provide sample code if you can. It’s the program control concepts that I think I mostly need help here. I also wish to use the Strip Chart Update Mode so the operator can easily view the entire logging session.
    DAQ Hardware: Not Selected Yet
    LabVIEW Version: 6.1 (Feel free to recommend a v7 solution because I need to soon get it anyway.)
    Operating System: Win 2000
    In summary:
    How do I control a graphing (data log) session for both period and sample time?
    How do I stop the session without having to wait for the period to end?
    How do I automatically interrupt and control a session during alarm conditions?
    Does it make a difference if there is more than one graph (or chart) involved where there are variable sample rates?
    Thanks,
    Dave
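    For reference, the control logic being described might be sketched like this in Python (not LabVIEW code): the loop wakes up every 100 ms so the STOP button is honored promptly, and it logs for "Sample Time" once every "Log Period". read_tension(), stop_requested(), and the file name are placeholders.

    import time

    LOG_PERIOD_S = 60 * 60     # "Log Period" (e.g. 1 hour)
    SAMPLE_TIME_S = 1.0        # "Sample Time" (e.g. 1 second)

    def read_tension():
        """Placeholder for one tow-line tension reading."""
        return 0.0

    def stop_requested():
        """Placeholder for polling the STOP button."""
        return False

    next_log = time.monotonic()
    logging_until = 0.0
    with open("tow_log.csv", "a") as f:
        while not stop_requested():            # checked every 100 ms, so STOP is prompt
            now = time.monotonic()
            if now >= next_log:
                logging_until = now + SAMPLE_TIME_S
                next_log = now + LOG_PERIOD_S
            if now < logging_until:            # inside the short logging window
                f.write(f"{time.time()},{read_tension()}\n")
            time.sleep(0.1)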

    Hello Dave,
    Sounds like you have quite the system to set up here. It doesn't look like you are doing anything terribly complicated. You should be able to modify different examples for the different parts of your application. Examples are always the best place to start.
    For analog input, the "Cont Acq&Chart (buffered).vi" example is a great place to start. You can set the scan rate (scans/second) and how many different input channels you want to acquire. This example has its own stop button; it should be a simple matter to add a manual start button. To manually set how long the application runs, you could add a 100 ms delay to each iteration of the while loop (recommended anyway to allow the processor to multi-task) and add a control that sets the number of iterations of the while loop.
    For logging data, a great example is the "Cont Acq to File (binary).vi" example.
    For different sample rates on different input lines, you could use two parallel loops, both running the first example mentioned above. The data would not be able to be displayed on the same graph, however.
    If you have more specific questions about any of the different parts of your application, let me know and I'll be happy to look further into it.
    Have a nice day!
    Robert M
    Applications Engineer
    National Instruments
    Robert Mortensen
    Software Engineer
    National Instruments

  • I need to graph data and stack the plots to create 3 graphs, how do I plot more than 1 data line on each graph

    I currently use a code-heavy solution that's clunky, and I need to refine the graphing part. For simplicity I want to use the waveform chart and its "Stack Plots" option. However, I cannot see how to collect 2 or 3 data streams and display them on one of the stacked plots.
    The final version will have 3 stacked plots: the top plot needs 3 data streams, and the middle & bottom plots require 2 data streams each.
    Help Appreciated

    Hi,
    Sometimes the Synchronous Display option makes a difference (right
    mouse > Advanced > Synchronous Display). Or, while running, right-click > Smooth
    Updates. Fiddle around with them; sometimes unexpected settings have the best
    performance (only four combinations anyway).
    Regards,
    Wiebe.
    "CB" wrote in message
    news:50650000000500000005810100-1079395200000@exchange.ni.com...
    > Dennis,
    > Thanks for that, I had tried your solution earlier while developing
    > the package. It's a bit disappointing the stacked plots won't work. It'd
    > save lots of code and simplify the whole graphing part of my package.
    > I am reluctant to overlay transparent graphs etc on top of others
    > because I am aware it could hinder performance, I am doing a lot in my
    > package and need to get the most out of the machine. My current
    > solution is nearly ok and consists of 3 waveform charts contained in a
    > cluster. I need the 3 charts to line up accurately and be of identical
    > size which involved plugging into the various sizing/xy line up params
    > which was a bit fiddly. The main problem is the bottom graph seems to
    > end up lagging at some point, which doesn't look too good. When
    > the PC gets loaded up with too much work, its best effort means the
    > bottom graph gets missed out (or something like that) occasionally. I
    > have noticed this on all the machines it has run on even my new 3Ghz
    > machine.
    > I am sure this could work ok but I cannot figure how to force the
    > graphs to only continue once all 3 have their data.
    > Thanks in advance, Regards, Chris

  • Unable to take online backup including logs with HP Data Protector - DB2 9.7 fixpack 8

    Dear All,
    Infrastructure : -
    Database - IBM DB2 LUW 9.7 fixpack 8
    OS - HP-UX 11.31
    Backup solution - Data protector v7
    The OEM of the tape library (HP) asked me to change the following DB parameters:
    --> LOGRETAIN to RECOVERY
    --> USEREXIT to ON
    --> LOGARCHMETH1 to USEREXIT
    These parameters are used for activating online backup of Database (DB2 LUW 9.7 fixpack 8).
    When the backup is initiated from DP (Data Protector) with the option to exclude archive logs, it completes successfully, whereas when we execute the backup from DP with the option to include archive logs, the backup terminates with an error:
    SQL2428N The BACKUP did not complete because one or more of the requested log files could not be retrieved
    Dear all, I need your expert help.
    Thanks in advance.
    Neeraj

    Hi Deepak,
    Thanks for your quick reply. I have checked that the directories log_dir and log_archive have enough space available.
    Please find the db2diag.log file.
           Database Configuration for Database
    Database configuration release level                    = 0x0d00
    Database release level                                  = 0x0d00
    Database territory                                      = en_US
    Database code page                                      = 1208
    Database code set                                       = UTF-8
    Database country/region code                            = 1
    Database collating sequence                             = IDENTITY_16BIT
    Alternate collating sequence              (ALT_COLLATE) =
    Number compatibility                                    = OFF
    Varchar2 compatibility                                  = OFF
    Date compatibility                                      = OFF
    Database page size                                      = 16384
    Dynamic SQL Query management           (DYN_QUERY_MGMT) = DISABLE
    Statement concentrator                      (STMT_CONC) = OFF
    Discovery support for this database       (DISCOVER_DB) = ENABLE
    Restrict access                                         = NO
    Default query optimization class         (DFT_QUERYOPT) = 5
    Degree of parallelism                      (DFT_DEGREE) = 1
    Continue upon arithmetic exceptions   (DFT_SQLMATHWARN) = NO
    Default refresh age                   (DFT_REFRESH_AGE) = 0
    Default maintained table types for opt (DFT_MTTB_TYPES) = SYSTEM
    Number of frequent values retained     (NUM_FREQVALUES) = 10
    Number of quantiles retained            (NUM_QUANTILES) = 20
    Decimal floating point rounding mode  (DECFLT_ROUNDING) = ROUND_HALF_EVEN
    Backup pending                                          = NO
    All committed transactions have been written to disk    = NO
    Rollforward pending                                     = NO
    Restore pending                                         = NO
    Multi-page file allocation enabled                      = YES
    Log retain for recovery status                          = RECOVERY
    User exit for logging status                            = YES
    Self tuning memory                    (SELF_TUNING_MEM) = ON
    Size of database shared memory (4KB)  (DATABASE_MEMORY) = AUTOMATIC(3455670)
    Database memory threshold               (DB_MEM_THRESH) = 10
    Max storage for lock list (4KB)              (LOCKLIST) = AUTOMATIC(20000)
    Percent. of lock lists per application       (MAXLOCKS) = AUTOMATIC(90)
    Package cache size (4KB)                   (PCKCACHESZ) = AUTOMATIC(161633)
    Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = AUTOMATIC(480362)
    Sort list heap (4KB)                         (SORTHEAP) = AUTOMATIC(50000)
    Database heap (4KB)                            (DBHEAP) = AUTOMATIC(3102)
    Catalog cache size (4KB)              (CATALOGCACHE_SZ) = 2560
    Log buffer size (4KB)                        (LOGBUFSZ) = 1024
    Utilities heap size (4KB)                (UTIL_HEAP_SZ) = 10000
    Buffer pool size (pages)                     (BUFFPAGE) = 10000
    SQL statement heap (4KB)                     (STMTHEAP) = AUTOMATIC(8192)
    Default application heap (4KB)             (APPLHEAPSZ) = AUTOMATIC(256)
    Application Memory Size (4KB)             (APPL_MEMORY) = AUTOMATIC(40000)
    Statistics heap size (4KB)               (STAT_HEAP_SZ) = AUTOMATIC(4384)
    Interval for checking deadlock (ms)         (DLCHKTIME) = 10000
    Lock timeout (sec)                        (LOCKTIMEOUT) = 3600
    Changed pages threshold                (CHNGPGS_THRESH) = 20
    Number of asynchronous page cleaners   (NUM_IOCLEANERS) = AUTOMATIC(2)
    Number of I/O servers                   (NUM_IOSERVERS) = AUTOMATIC(5)
    Index sort flag                             (INDEXSORT) = YES
    Sequential detect flag                      (SEQDETECT) = YES
    Default prefetch size (pages)         (DFT_PREFETCH_SZ) = AUTOMATIC
    Track modified pages                         (TRACKMOD) = YES
    Default number of containers                            = 1
    Default tablespace extentsize (pages)   (DFT_EXTENT_SZ) = 2
    Max number of active applications            (MAXAPPLS) = AUTOMATIC(125)
    Average number of active applications       (AVG_APPLS) = AUTOMATIC(3)
    Max DB files open per application            (MAXFILOP) = 61440
    Log file size (4KB)                         (LOGFILSIZ) = 16380
    Number of primary log files                (LOGPRIMARY) = 60
    Number of secondary log files               (LOGSECOND) = 0
    Changed path to log files                  (NEWLOGPATH) =
    Path to log files                                       = /db2/ECP/log_dir/NODE0000/
    Overflow log path                     (OVERFLOWLOGPATH) =
    Mirror log path                         (MIRRORLOGPATH) =
    First active log file                                   = S0000187.LOG
    Block log on disk full                (BLK_LOG_DSK_FUL) = YES
    Block non logged operations            (BLOCKNONLOGGED) = NO
    Percent max primary log space by transaction  (MAX_LOG) = 0
    Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0
    Group commit count                          (MINCOMMIT) = 1
    Percent log file reclaimed before soft chckpt (SOFTMAX) = 300
    Log retain for recovery enabled             (LOGRETAIN) = RECOVERY
    User exit for logging enabled                (USEREXIT) = ON
    HADR database role                                      = STANDARD
    HADR local host name                  (HADR_LOCAL_HOST) =
    HADR local service name                (HADR_LOCAL_SVC) =
    HADR remote host name                (HADR_REMOTE_HOST) =
    HADR remote service name              (HADR_REMOTE_SVC) =
    HADR instance name of remote server  (HADR_REMOTE_INST) =
    HADR timeout value                       (HADR_TIMEOUT) = 120
    HADR log write synchronization mode     (HADR_SYNCMODE) = NEARSYNC
    HADR peer window duration (seconds)  (HADR_PEER_WINDOW) = 0
    First log archive method                 (LOGARCHMETH1) = USEREXIT
    Options for logarchmeth1                  (LOGARCHOPT1) =
    Second log archive method                (LOGARCHMETH2) = OFF
    Options for logarchmeth2                  (LOGARCHOPT2) =
    Failover log archive path                (FAILARCHPATH) =
    Number of log archive retries on error   (NUMARCHRETRY) = 5
    Log archive retry Delay (secs)         (ARCHRETRYDELAY) = 20
    Vendor options                              (VENDOROPT) =
    Auto restart enabled                      (AUTORESTART) = ON
    Index re-creation time and redo index build  (INDEXREC) = SYSTEM (RESTART)
    Log pages during index build            (LOGINDEXBUILD) = OFF
    Default number of loadrec sessions    (DFT_LOADREC_SES) = 1
    Number of database backups to retain   (NUM_DB_BACKUPS) = 12
    Recovery history retention (days)     (REC_HIS_RETENTN) = 60
    Auto deletion of recovery objects    (AUTO_DEL_REC_OBJ) = OFF
    TSM management class                    (TSM_MGMTCLASS) =
    TSM node name                            (TSM_NODENAME) =
    TSM owner                                   (TSM_OWNER) =
    TSM password                             (TSM_PASSWORD) =
    Automatic maintenance                      (AUTO_MAINT) = ON
       Automatic database backup            (AUTO_DB_BACKUP) = OFF
       Automatic table maintenance          (AUTO_TBL_MAINT) = ON
         Automatic runstats                  (AUTO_RUNSTATS) = ON
           Automatic statement statistics  (AUTO_STMT_STATS) = ON
         Automatic statistics profiling    (AUTO_STATS_PROF) = ON
           Automatic profile updates         (AUTO_PROF_UPD) = ON
         Automatic reorganization               (AUTO_REORG) = ON
    Auto-Revalidation                          (AUTO_REVAL) = DEFERRED
    Currently Committed                        (CUR_COMMIT) = DISABLED
    CHAR output with DECIMAL input        (DEC_TO_CHAR_FMT) = NEW
    Enable XML Character operations        (ENABLE_XMLCHAR) = YES
    WLM Collection Interval (minutes)     (WLM_COLLECT_INT) = 0
    Monitor Collect Settings
    Request metrics                       (MON_REQ_METRICS) = BASE
    Activity metrics                      (MON_ACT_METRICS) = BASE
    Object metrics                        (MON_OBJ_METRICS) = BASE
    Unit of work events                      (MON_UOW_DATA) = NONE
    Lock timeout events                   (MON_LOCKTIMEOUT) = WITHOUT_HIST
    Deadlock events                          (MON_DEADLOCK) = WITHOUT_HIST
    Lock wait events                         (MON_LOCKWAIT) = NONE
    Lock wait event threshold               (MON_LW_THRESH) = 5000000
    Number of package list entries         (MON_PKGLIST_SZ) = 32
    Lock event notification level         (MON_LCK_MSG_LVL) = 1
    SMTP Server                               (SMTP_SERVER) =
    SQL conditional compilation flags         (SQL_CCFLAGS) =
    Section actuals setting               (SECTION_ACTUALS) = NONE
    Connect procedure                        (CONNECT_PROC) =
    Regards
    Neeraj

  • Need to be logged in to get help with logging in? ...

    While this is obviously a vent, I really would welcome suggestions for password help in the future. Here goes...
    We don't use Skype often, but when we want to use it, we want to use it NOW. Because we don't use it often, we forget the Skype name and password between times. Yes, we should have a way to remember it, but this is 2015 and we expect a tool to help us out a little bit in our busy lives.
    The first step is to remember our Skype name. Since we had to pick a unique one, it's not something we use for anything else and we don't remember what we had to come up with on the fly when we signed up for Skype. Fortunately, we did keep the welcome email from when we signed up and it has our Skype name in it. Great!
    Ok, next we need our password. Nothing we type is working. We click to request a password reset and an email is supposed to be sent to us.  One would think it would arrive immediately, but no. We requested it more than 2 hours ago and have not received anything. And no, it is not in the Spam/Junk folder either. We know the code will only be good for 3 hours. Where the heck is the email to tell us what it is?? If this were a one-time circumstance where it didn't arrive right away, that would be forgivable, but this has happened before. No email, or at least not until several hours later.
    Ok, with no reset email, we keep trying different passwords and none work. Now we're notified that we're locked out and need to wait to try again. Did I mention we want to use Skype NOW? We don't want to wait over 2 hours for an email (which still hasn't arrived). We don't want to be locked out for 24 hours. We want to Skype... NOW.
    Ok, we need customer support help. Where is the customer support number? Doesn't seem to be one. Where is the live chat button? Need to pay for that. Well, email takes forever, but at least it gets a message straight to Skype that we need help so we'll go that route. Guess what, you need to be logged in to be able to send an email for support. Are you kidding me?? I need to be logged in to tell you I can't get logged in.
    Ok, the only other option for help is the community. The existing posts aren’t helping me, and I can't post anything new unless I'm... get this... logged in. I cannot believe there is no way to get a message to anyone connected with Skype unless I'm logged in.
    Out of total desperation, I created a new Skype account. And going against all best practices for security, we have written down the odd user name we needed to choose, as well as the new password. Out of the 300 million Skype accounts that exist, I can't help but think some 200 million of them are extra accounts people needed to create because they couldn't get their password reset.
    I'll end my venting there. If anyone has any insights on more I could've/should've tried to get my password reset IN A TIMELY MANNER, please share. I would not be surprised if the reset code email eventually arrives, but if it's going to take 2+ hours to get it to me, don't even bother. 

    In case anyone needs it, this seems to be the thread that MS is following the closest:
    http://answers.microsoft.com/en-us/windows/forum/windows_tp-winipp/build-9879-windows-feedback-app-doesnt-recognize/6fc9b35b-8141-4045-b17a-f53ecd5ca6ae

  • How to data log graphs using front panel data logging?

    Hello I have a VI that collects data from DAQmx thermocouple readings and graphs the temperature vs time using a while loop to collect data and graph. I have 9 control operators that define the correction factor of the thermocouples.
    I want to create a data log using the option under Operate > Data Logging.
    When I retrieve the data, the only information present is the control operators' correction factors that I defined. The graphed data that was created is not retrieved.
    Is there a solution to show the graphed data plots that were created on the front panel? They remain unchanged from the last run of the VI, or blank if I open the VI without having run the program.
    Thank you.

    This is expected for the Data Logging in LabVIEW. If you want to record the signal data, use the Write to Measurement File Express VI.  Here's a link with a walk-through:
    http://www.ni.com/academic/students/learn-daq/data-logging/
    The Data Logging from the Operate Menu is for recording front panel control(s), as you have observed.
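    As a rough, non-LabVIEW illustration of what file-based logging gives you that front-panel data logging does not: each reading is written as a timestamped row that can be retrieved later, independent of what the graph currently shows. read_thermocouples() and the file name are placeholders.

    import time

    def read_thermocouples():
        """Placeholder for the DAQmx thermocouple read (9 channels, corrected)."""
        return [25.0] * 9

    with open("temperatures.lvm", "a") as f:
        for _ in range(5):
            values = read_thermocouples()
            row = "\t".join(f"{v:.3f}" for v in values)
            f.write(f"{time.time():.3f}\t{row}\n")
            time.sleep(1.0)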
    Mark P.
    Applications Engineer
    National Instruments
    www.ni.com/support

  • Oracle8i Data Guard with log shipping

    Is it true that :
    in Oracle8i, with Data Guard, there will be zero data loss if the online redo logs have been mirrored at the DR site, and in the event of a DR, the last unfinished redo log can be used to recover the database.
    What product is used to apply the redolog ?
    I know Oracle9i claims this is possible, but when will Oracle9i be available for the Sun platform?

    >
    Thomas Schulz wrote:
    > Here are my questions:
    >
    > 1. Is it correct, that I have to restore the last successful restored log (if not the first) from the previous session with "recover_start", before I can restore the next log with "recover_replace" in a new session?
    Yes, that's correct. As soon as you leave the restore session, you have to provide an 'overlap' of log information so that the recovery can continue.
    > 2. Can't I mix the restoring of incremental and log backups in this way: log001, incremental data, log002, ...? In my tests I was able to restore the incremental data direct after the complete data, but not between the log backups.
    No, that's not possible. After you've recovered some log information, the incremental backup cannot be applied as a "delta" to the data area anymore, as the data area has already changed.
    > 3. Can I avoid the state change to OFFLINE after a log restore?
    Of course - don't use recover_cancel
    As soon as you stop the recovery, the database is stopped - no way around this.
    There are some 3rd party tools available for this, like LIBELLE.
    KR Lars
