Alarms stop logging

I am running Lookout 5.0 on Windows XP Pro. My system has stopped logging alarms and I don't know why. I have searched this forum and found a few others who have had similar trouble, but their threads were never resolved. Is this a problem with Citadel, or an inherent problem with Lookout?
I need to get this fixed and need some help. Thanks.

Your relay wants 12V for the coil, which is more than one of your counter outputs can drive directly. The coil also looks like it will want around 37mA to turn on (12V / 320 Ohms = 37.5mA). What I have typically done in these situations is use a 2N2222 transistor as a low-side switch: the 12V source feeds the coil, and the other end of the coil goes through the transistor to ground. When you apply 5V to the base, current can flow; when you apply 0V, the transistor is off and current cannot flow. When current flows through the coil the relay turns on, and when it does not the relay is off. You will also want a protection (flyback) diode across the coil, oriented so current can return from the negative side of the coil to your 12V supply - the coil is an inductor and will try to keep current flowing the instant the transistor switches off.
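As a rough, hedged sizing example (the gain and resistor values here are illustrative, not something from the original question): with about 37.5mA of coil current and a conservative forced gain of 10, the 2N2222 base needs roughly 3.75mA. Driving that from a 5V output through a base resistor of about (5V - 0.7V) / 3.75mA, which works out to roughly 1.1k Ohms, means a standard 1k resistor between the counter output and the base is a reasonable starting point.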

Similar Messages

  • Lookout 6.0 - alarms stopped logging

    Our alarms in Lookout 6.0 stopped logging to the database approximately a month ago. I have tried to view them in MAX, but only alarms older than about a month can be seen. Data traces are current.

    Alarms, events, and logging to the historical database for HyperTrends can make the database folder too big too fast.
    Try cleaning out old events and alarms (or avoid logging unnecessary events and alarm records as much as possible). Also try creating the database folder outside the Lookout folder (e.g. C:\Data).
    Detaching, deleting, and creating databases can also fragment the hard drive over time. As a rule of thumb, check once in a while whether the hard drive is asking for a defragmentation and how big the database has grown. I have run into that before.
    Good luck this time.

  • Failed to query Alarms & Events, alarms not logging

    Hi all,
    I've recently run into an issue with the DSC Alarm & Event logging. I have a number of shared variable libraries, all set up to log both data and alarms to Citadel. I've double-checked that each library itself is configured for logging data and alarms, as well as all the variables. Using the DSM, I can see all of the active alarms in the system across all libraries, and can receive all alarm change events programmatically.
    For some reason, only one of the libraries' alarms are logging, and attempting to query the historical Alarms & Events through MAX throws an error roughly 14 out of 15 times (with no error code or indication of why it fails). I've also attempted to archive the Citadel database, which failed with an unspecified error (presumably on the alarms side of things; the data archive seemed to work).
    Can anyone offer some insight on how to resolve the problem, preferably without data loss? Further, how can I prevent the issue in future?

    Thanks for your reply Joey.
    Libraries are being deployed using the DSC Deploy Libraries VI. The libraries also have data and alarm logging enabled programmatically. At the moment they're both being logged to the same database.
    I can't find the "Read Historical Trend.vi" through the DSC Examples. The article was written in 2005 and I'm using DSC 2012 SP1, so I don't know if things have changed since then. I can view the data through MAX without issue if that's any consolation.
    The mdf file is about 60MB, with the entire Citadel database at around 6GB.
    The Citadel service is definitely running. I've also attempted restarting it (and the PC itself) but the issue is still present.
    It's entirely possible there was a power failure, as this has been during a period of plant commissioning. When I can query the alarms through MAX there's a lot of data up to a certain point (end of July), then it's very sporadic after that, with only certain variables' alarms being logged.
    I attempted another database archive and this time it managed to get all the way through, though the archived database also has the same problems querying alarms and events. So at this point I suspect the mdf/mds files are corrupt in some way.
    Using the DSM, I've tried manually pointing the libraries to log alarms to a newly created database but this doesn't seem to work. No alarms or events are logged to the new database, and checking the library properties shows that the alarm database which I configured the library to log to had reverted to the original database. Am I missing something obvious here? Do I need to pre-create some traces in the new database before alarms and events will get logged there?
    Given that I have managed to make a 'good' archive of the database, I'd like to try deleting just the alarms and events portion of the database and starting clean. Is there a correct way to do this? I did find this article, but am unsure whether it will work with a potentially corrupted database: http://digital.ni.com/public.nsf/allkb/9D3A81218264C68686256DE1005D794F If I stop all of the Citadel and SQL Server related services, can I simply delete the mdf/mds files?
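    If it does come to removing those files by hand, one possibility (a sketch only; this assumes the alarms side lives in a local SQL Server/MSDE instance, which the .mdf file mentioned above suggests, and the database name below is purely illustrative) is to detach the database cleanly first, so its files are no longer held open, rather than deleting them out from under a running service:
    -- illustrative sketch: replace CitadelAlarms with the actual alarms database name
    USE master
    -- detach so SQL Server releases its lock on the database files
    EXEC sp_detach_db 'CitadelAlarms'
    -- the detached mdf/ldf files can then be moved or deleted from disk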

  • CITADEL and RELATIONAL DATABASE Alarms & Events Logging Time Resolution

    Hello,
    I would like to know how I can set up the logging time resolution when logging Alarms & Events to Citadel or to a relational database.
    I tried using the Logging:Time Resolution property (class: Variable Properties) without success.
    In my application I need the SetTime, AckTime and ClearTime timestamps to be logged with one-second resolution, in other words with the fractional-seconds part equal to zero.
    I am using an Event Structure to get the SetTime, AckTime and ClearTime events, and I want to UPDATE the Area and Ack Comment fields through Database Connectivity. However, when I use the SetTime timestamp supplied by the Event Structure in the WHERE clause, I cannot get the right alarm record, because the LabVIEW SetTime timestamp and the timestamp value logged in the database have different time resolutions.
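    One possible workaround for the WHERE clause (a sketch only; the table name below is illustrative, and the Area/Ack Comment column names are taken from the description above) is to stop matching the timestamp exactly and instead accept any record whose SetTime falls inside the one-second window around the value supplied by the Event Structure:
    -- illustrative sketch: match a one-second window instead of exact equality,
    -- so the fractional seconds stored in the database no longer matter
    UPDATE AlarmEvents
    SET    Area = 'Process Area 1',
           AckComment = 'Acknowledged by operator'
    WHERE  SetTime >= '2013-08-01 10:15:42.000'
      AND  SetTime <  '2013-08-01 10:15:43.000'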
    Eduardo Condemarin
    Attachments:
    Logging Time Resolution.jpg ‏271 KB

    I'm configuring the variables to be logged in the same way that appears in the file you sent, but it doesn't work... I don't know what else to do.
    I'm sending you the configuration image file, the error message image and a simple VI that creates the database. After that, values are logged; I generate several values for the variable that are above the HI limit of the previously configured acceptance value, so alarms are generated. When I push the STOP button, the system stops logging values to the database and performs a query against the alarms database, and the corresponding error is generated... (file attached)
    The result: with the aid of MAX I can see that the data is logged correctly in the DATA database (I can view the trace), but the alarm that is generated is not logged in the alarms database created programmatically...
    The same VI is used but creating another database manually with the aid of MAX and configuring the library to log the alarms to that database... same result.
    I tried these same conditions on three different PCs with the same result, and I completely reinstalled LabVIEW (development and DSC) (uff!) and it still doesn't work... What else can I do?
    I'd appreciate very much your help.
    Ignacio
    Attachments:
    error.jpg ‏56 KB
    test_db.vi ‏38 KB
    config.jpg ‏150 KB

  • Exception.log and mail.log stopped logging (MX7)

    Hi all - we have been experiencing intermittent problems with our MX 7.02 server and checked the log files to help diagnose the problem.
    However, both exception.log and mail.log appear to have stopped logging information in June 2010.
    The log files are only 178 KB and 34 KB in size.
    Does anyone know why they've stopped and how to restart the logging?
    Many thanks
    cf_rog

    On CF7, as far as I know, you need to restart the CF server. In CF9 you can selectively enable/disable logging for those files and thus attempt to restart logging.
    Mack

  • My iPhone 4 alarm has stopped working for the last two days. Is it the same issue which cropped up on Jan 1, 2011? What is the way out?

    My iPhone 4 alarm stopped working two days ago (31.12.2011). Is it the same issue that cropped up last year on 1.1.2011? What is the way out, and who should I turn to for support?

    This issue was fixed in iOS 5 - what version is yours running?

  • When changing database directory in lk6.5, no alarms are logged anymore.

    When changing the database directory in Lookout 6.5, no alarms are logged anymore. Traces are running OK, but any query in MAX concerning alarms results in an empty response.

    Hi Ryan,
    I tried many ways. First, as you described. Second, delete all databases (with MAX) and start Lookout, hoping Lookout would make all the necessary settings.
    I tried a short path name and a long path name. All with the same result: the database is created (either manually with MAX or automatically by Lookout), all traces work, but NOT the alarm logging. (The database isn't corrupted, since a query done in MAX doesn't result in an error, just an empty result.)
    In the meantime I found something:
    C:\Documents and Settings\my name\my documents\Database --> traces work, alarms do not
    C:\Program Files\National Instruments\Lookout 6.5\Database --> everything works
    So it seems that specifically "My Documents" doesn't work for alarms, just for traces (????)
    I also found the following message when trying to export the alarms: "The given Citadel database is not currently configured to log alarms to a relational database." Does this ring a bell?

  • Stimulus profile stops logging immediately (Custom Steps)

    Greetings again, all! I'm using 2012 versions of everything here (VS, TS, LV, and DIAdem), and I'm also using the VS Custom Steps for TS. I have a main TS sequence that does all of the configuration and setup tasks via subsequences and VS Custom Steps. The only thing I'm using the stimulus profile for at the moment is logging, and for some reason the resulting TDMS file only holds six data points (logging at 100 Hz). It looks like it starts and stops immediately, and I can't figure out why.
    The only steps I have in the stimulus profile are Start Logging and Channel Group steps. There's no Stop Log step, so the logging shouldn't stop until the TS sequence undeploys the project, as far as I understand it. The Triggered Logging Trigger Condition is set to none, and File Segmenting is set to Do Not Segment.
    Can anyone shed any light on what's happening here? 

    The best option would be to control data logging directly from the TestStand sequence, but I'm not very familiar with which steps are available. Newer versions of the steps seem to have a DataLogging palette available (just based on screenshots posted online), but they might not support VeriStand 2012.
    You could do this by calling a LabVIEW VI from TestStand that sets up the data logging session, and then calling another at the end of the TestStand sequence to stop it. We have example VIs that ship with VeriStand to show you how to start and stop logging.
    Jarrod S.
    National Instruments

  • How to stop logging in stored procedure?

    In an interactive isql session, there is a command "stop logging" to stop ASE logging for the current session.
    How do I do the same thing from a stored procedure?

    No, each select/into will need to create a new table.
    With some planning/designing you could do something like:
    ==========================
    select ... into mytab1 ...
    select ... into mytab2 ... union all select * from mytab1
    --drop any indexes from mytab1
    truncate table mytab1
    drop table mytab1
    select ... into mytab3 ... union all select * from mytab2
    --drop any indexes from mytab2
    truncate table mytab2
    drop table mytab2
    ==========================
    Obviously you have to make the final decision ... minimize logging with more complicated coding *vs* accept the logging overhead with less complicated coding.
    I just saw Kevin's response ... which is a more efficient method, assuming each insert can stand on its own.
    My example assumes some intermediate processing between the INSERTs (e.g., insert mytab1, run some queries against mytab1, insert mytab2, run some queries against mytab2, insert mytab3) ... then again, perhaps you could use the #temp tables for the intermediate processing too, as sketched below.
    There are a lot of ways to reduce your logging; which one is best ultimately depends on your exact requirements.
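    For completeness, here is a rough sketch of that #temp variant (table and column names are made up for illustration; it assumes tempdb allows select/into, which is normally the default), with each intermediate result living in a temporary table instead of a permanent one:
    ==========================
    -- stage 1: select/into a temp table is minimally logged
    select id, amount into #work1 from sales_raw where region = 'EU'
    -- run whatever intermediate queries you need against #work1 here
    -- stage 2: build the next intermediate set from the first
    select id, amount into #work2 from #work1 where amount > 100
    drop table #work1
    -- final step: select/into the permanent table also avoids most logging,
    -- provided the target database has 'select into/bulkcopy' enabled
    select id, amount into sales_summary from #work2
    drop table #work2
    ==========================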

  • SLG1 Transfer logs stopped logging X-fer activities

    Hello Experts,
    We are on GTS 10.0 and have come across an issue where the transfer log in transaction SLG1 has stopped logging activities for the orders flowing from ECC.
    Could this be because the log limit has been exceeded, and is there a way to make it work without having to delete the historical log that has accumulated over the last 5 months since go-live?
    Any help is much appreciated..
    Regards,
    Prashant.

    Hi Prashant,
    Use SLG1 to display the last logs available.  In the top pane of the results screen, you can see the Log Number.  Does it look as though the Number Range is exhausted?  If it is, then all other logging should also have stopped at that time - did you check the logs for other objects?
    Regards,
    Dave

  • Any way to get alarms at login if they came due when the machine was off

    I am new to iCal. I decided to try it this time instead of the shareware I have used previously. A big negative is that if an alarm is due when the machine is off, it never runs.
    Is there any way to make an alarm run if the machine was off when it was due -- say, on login?
    My old shareware kept showing the notices for an event or a to-do item with an alarm at login until it was marked completed. This was a useful, almost essential feature.
    Is there any way to fix this using a script? I don't know how to write scripts yet, but if this could be my fix, I would appreciate instructions.

    Send a message to your cellphone.
    First add your mobile phone and/or pager email addresses to your card in Address Book:
    Open the Address Book application program.
    Choose Show My Card from the Card menu.
    Click Edit to edit your card.
    Click the "+" button next to the email field to add an email address.
    Choose Custom from the pop-up menu.
    Type a label for the address such as "pager".
    Click OK.
    Enter the email address of your pager or mobile phone in the Email field.
    Click Edit to save the changes.
    To add an email alarm to an event:
    Create or select the event in the iCal window.
    Choose Get Info from the Edit menu.
    Click the Alarm tab (bell icon) in the Event Info window.
    Click "Send e-mail to me at" to enable that option.
    Select your pager or mobile phone email address from the pop-up menu.
    Set the alarm time using the time field and pop-up menu.
    Close the Event Info window.
    iMac, iBook - Mac OS X (10.4.6)

  • Toplink Warning - how to stop logging?

    I'm just getting started with JPA and TopLink, so I need a little help here.
    Here's the situation. I want to build a simple rich client application and have it use JPA for persistence. I am developing in NetBeans and using the built-in Derby database for testing.
    So I've got my entities all set up and I've started writing a class that uses an EntityManager to handle the persistence. The first test I've done works. However, every time I retrieve the EntityManager from my EntityManagerFactory and TopLink wants to create the tables for all the entities (and realizes that they're already there), the TopLink logger prints this warning for every table:
    [TopLink Warning]: 2010.04.25 03:36:16.187--ServerSession(16795115)--Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.DatabaseException
    Internal Exception: java.sql.SQLException: Table/View '<table-name>' ist bereits in Schema '<schema-name>' vorhanden.
    Error Code: -1
    Call: CREATE TABLE <table-name> (ID BIGINT NOT NULL, MESSAGE VARCHAR(255), PRIMARY KEY (ID))
    Query: DataModifyQuery()
    I don't know why the warning is in German, but it means: Table/View ... already exists in ...
    I've already tried to disable TopLink logging via persistence.xml, which I've posted below. The warnings still show, though.
    I've also tried to stop TopLink from wanting to create the tables by deleting the <property name="toplink.ddl-generation" value="create-tables"/> line.
    persistence.xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
      <persistence-unit name="MovieDBv2.0PU" transaction-type="RESOURCE_LOCAL">
        <provider>oracle.toplink.essentials.PersistenceProvider</provider>
        <properties>
          <property name="toplink.logging.level" value="OFF"/>
          <property name="toplink.jdbc.user" value="admin"/>
          <property name="toplink.jdbc.password" value="adminadmin"/>
          <property name="toplink.jdbc.url" value="jdbc:derby://localhost:1527/movie_db1"/>
          <property name="toplink.jdbc.driver" value="org.apache.derby.jdbc.ClientDriver"/>
          <property name="toplink.ddl-generation" value="create-tables"/>
        </properties>
      </persistence-unit>
    </persistence>
    This is my persistence handling class, DatabaseLink.java:
    package moviedbv20;
    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;
    import moviedbv20.elements.DatabaseEntry;
    public class DatabaseLink {
        EntityManager em;
        EntityManagerFactory emf;
        public DatabaseLink() {
            this.getEntityManager();
        }
        public void getEntityManager() {
            if (this.emf == null) {
                this.emf = Persistence.createEntityManagerFactory("MovieDBv2.0PU");
            }
            if (this.em == null) {
                this.em = emf.createEntityManager();
            }
        }
        protected void createDatabaseEntry(DatabaseEntry entry) {
            em.getTransaction().begin();
            em.persist(entry);
            em.getTransaction().commit();
        }
        protected void editDatabaseEntry(DatabaseEntry entry) {
            em.merge(entry);
        }
        protected void removeDatabaseEntry(DatabaseEntry entry) {
            em.remove(em.merge(entry));
        }
        protected DatabaseEntry findDatabaseEntry(Object id) {
            return em.find(DatabaseEntry.class, id);
        }
        protected List<DatabaseEntry> findAllDatabaseEntries() {
            return em.createQuery("select object(o) from DatabaseEntry as o").getResultList();
        }
    }

    Hi,
    I tried with the persistence.xml below, using TopLink Essentials but with an Oracle database, and everything seems to be working as expected.
    <?xml version="1.0" encoding="Cp1252" ?>
    <persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
    version="1.0">
    <persistence-unit name="Project1">
    <provider>oracle.toplink.essentials.PersistenceProvider</provider>
    <jta-data-source>java:/app/jdbc/jdbc/Connection1DS</jta-data-source>
    <class>project1.Dept</class>
    <class>project1.Emp</class>
    <properties>
    <property name="toplink.target-server"
    value="project1.WebLogicTransactionController"/>
    <property name="toplink.logging.level" value="OFF"/>
    <property name="toplink.logging.file" value="c:\jpatoplink.log"/>
    <property name="toplink.target-database" value="Oracle"/>
    <property name="toplink.ddl-generation" value="none"/>
    </properties>
    </persistence-unit>
    </persistence>
    As the EclipseLink persistence provider is the latest and preferred one, it is recommended to move from TopLink Essentials to the EclipseLink persistence provider.
    Regards,
    Vinay

  • Alarms stopped coming up

    Just recently (in the last couple of weeks), my alarms have stopped coming up. They are set as a message with sound, and they do not come up when iCal is not open. I realised this today and started iCal; when I did, they all came up together, even though my preferences are set for alarms to come up even when iCal is not open. Any ideas?

    Yes. This is done automatically (though you could manage it manually). All you have to do is set your iCal preferences appropriately.
    When you check "Turn off alarms when iCal is not open" in your Advanced iCal preferences, "iCalAlarmScheduler" is removed from the Login Items list in Account System Preferences.
    When you uncheck "Turn off alarms when iCal is not open" in your Advanced iCal preferences, "iCalAlarmScheduler" is added to the Login Items list in Account System Preferences.

  • DSC stopped logging

    I'm running LabVIEW 7.1 and, until very recently, have been successfully logging data using the DSC module. There is no obvious cause for this, hence the posting.
    I have the DSC module installed on a PC with an RS-Linx OPC server.  Many tags are configured and have been logging for over a year.  The tagengine is configured and launched upon PC startup.  Database is Citadel 5.
    Basically, I cannot see any data beyond 8 Aug 2007 when using MAX 4.1. I have tried stopping and restarting the DSC Engine. I also renamed the .SCF file and reloaded it into the tag engine, and I have added a new tag to see if that would log. The Tag Monitor shows all the tags are there and are updating. However, when I try to add the new tag to a view in MAX, it is not present. I'm at a bit of a loss to understand how MAX cannot see it, and suspect this is the root of my problem. Any help appreciated.
    Mark

    OK, I tried this. The Read Historical Data VIs behaved exactly like MAX, i.e. no data beyond a certain point. All the tags appeared to be there, etc.
    I had a couple of other thoughts which I tried, also to no avail. I looked at the services which were running, thinking the Citadel server might not be running. I found I had two services related to Citadel running. Starting and stopping these seemed to make no difference, although I did not reboot the PC, as it is running a pretty critical application - although it can be rebooted if absolutely necessary.
    One thing: the Citadel service said it depended on something called NIPALK, which I did not see in the services list. Any thoughts on what this is?

  • Stop logging out when logging in

    When I log in and start using my Mac, all the applications start quitting and attempting to automatically log out. How does one stop this completely pointless Yosemite 'feature'?

    I suspect it is a Yosemite bug (I was being ironic about 'feature' :-). It's set to automatically log out after an hour. On older OS versions, this would do a 'fast switch' log out. Yosemite seems to try to do a full log out (really annoying and pointless). When you fast log in, it seems to think the hour is up and tries to do a full log out.
    Ideally, I'd like the fast-switching log out to work again, and this 'feature' log out not to exist at all, as it has no useful function.
