No historical data in database (UCCE)

Hi all. We are running UCCE 9.
Today we hit a problem: neither the AWDB nor the HDS database shows any data in the _half_hour or _interval tables.
The real-time tables work fine. CUIC historical reports also show no data.
What I've checked:
The HDS/AW databases have enough space for the data.
Reporting in Configuration Manager > PG Explorer > Agent Distribution is enabled for HDS and real-time (the site name is correct).
Test calls and other activity were performed.
What have I forgotten to configure here?
Any hints appreciated. TY

Check that the updateaw process and all the other ICM processes are running fine. Please share the updateaw logs captured while making a change that should show up in reporting (such as creating an agent or making test calls). In short, recreate the issue and send the updateaw, rtc, and rpl logs, or troubleshoot with the AW logs yourself. A quick database sanity check is sketched below.
Hope this helps. :)
Thanks & Regards,
Hardik B Kansara
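
To make the advice above concrete, here is a minimal sanity check you could run to see where the historical rows stop flowing. This is a sketch, not an official procedure; the table and column names assume the standard ICM reporting schema, so verify them for your release.

    -- Run against the Logger database first, then against the HDS/AWDB.
    -- If the Logger has recent rows but the HDS does not, suspect the
    -- replication (rpl) / UpdateAW path; if neither has recent rows,
    -- look at the Logger/PG side instead.
    SELECT MAX(DateTime) AS newest_interval_row
    FROM   Agent_Skill_Group_Interval;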

Similar Messages

  • Options to edit historical data in Citadel database (Lookout 6.0)

    We are running a new installation of Lookout 6.0.
    I am looking for ways to edit the historical data that is found in the Citadel database. For example, if an alarm clears before I can enter comments, I would like to go to the database after the fact and enter comments regarding the alarm.
    We are running this on the XP Pro operating system.
    Thanks,
    Alan

    Hi Alan,
    I am afraid this will be difficult, if it is possible at all.  Citadel data, by design, can be written to (and edited?) only by the product using it, i.e., Lookout, LabVIEW-DSC, etc.  Outside of these products we can only retrieve the data -- not edit or add to it.  As you are probably aware, some of the options for writing user data (as opposed to IO/system data) from within these products are to use the Logger Object in Lookout and the VI-Server approach in DSC (http://zone.ni.com/devzone/conceptd.nsf/webmain/5a921a403438390f86256b9700809a53).
    So, I guess one option is to "annotate" / write additional data separately using the Logger Object, referencing the alarms somehow.
    Having said that, I believe Lookout 6.x (Citadel 5) uses MSDE for storing alarm data (other data is still stored in the native Citadel database).  You could explore this -- try opening the MSDE database from Query Analyzer, for instance, and see if it can be edited.  I haven't tried this; a rough sketch of that exploration follows below.
    Hope this gives you some ideas. 
    -Khalid
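
    If you try the MSDE route, the exploration from Query Analyzer might look like the sketch below. The database and table names are pure guesses, since the alarm store's layout isn't documented; look before you touch anything.

        -- List the databases on the MSDE instance to find the alarm store:
        SELECT name FROM master..sysdatabases;
        -- In the candidate database, list its user tables (SQL Server 2000 era):
        SELECT name FROM sysobjects WHERE xtype = 'U';
        -- Inspect a candidate table before attempting any UPDATE:
        -- SELECT TOP 10 * FROM <some_alarm_table>;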

  • Can't extract historical data from Citadel database

    When using the simplest VIs to extract historical data from the Citadel database, I get error messages:
    using "Get Historical Tag List.vi"
    CIT_OpenDatabase.vi
    error code - 0x8abc0010
    Using "Read Historical Trends.vi"
    CIT_ReadTrace.vi
    error code - 0x8abc0010
    I am using DSC version 6.0, and have already tried to upgrade to version 6.0.2. That did not go well, and I had to return to version 6.0 (reinstalling NT and everything else) to make my system run again.

    Download and install the latest release of Logos from ftp://ftp.ni.com/lookout/logos. Logos is the backbone driver set for the Citadel database. The most recent releases of Logos address issues similar to this.

  • Remote historical data is not retrieved completely when viewing it in MAX 4

    Hi,
    since I installed LabVIEW 8 I have had some problems retrieving historical data from another computer. Sometimes not all data is retrieved (if I zoom in or out or move back in time), and the missing data is never retrieved.
    I already deleted the Citadel cache once, but after that even less data was retrieved... What's really weird is that for channels which weren't retrieved correctly, the data is not updated anymore!
    On the remote computer I have a LabVIEW DSC Run-Time 7.1 running (MAX 3.1.1.3003); on my local computer, MAX 4.0.0.3010 and LabVIEW 8 DSC (development system) are installed in parallel with LV DSC 7.1.1 (dev system). LV 8 is installed for testing purposes (I doubt we'll switch soon) and overall I like MAX 4. The HyperTrend.dll on my local computer is version 3.2.1017.
    This is really a quite annoying bug!
    So long,
        Carsten

    Hi,
    > We've been unable to reproduce this issue. If you could provide some additional information, it might help us out.
    I feared as much, as even on my computer it happens only sometimes...
    > 1) How many traces are you viewing?
    The views I observed this in had 2 to 13 traces.
    > 2) How often are the traces being updated?
    For some it's pretty often (about once a second); for some it's very infrequent (no change in data, meaning they update only because of the max time between logs). I see this more often for traces that are updated very infrequently, but I think I've seen it for frequent traces as well (for those it currently works).
    > 3) Are the traces being updated by a tag value change, or by the "maximum time between logs" setting in the engine?
    It happened for both types.
    > 4) What is the frequency of the "maximum time between logs" setting?
    Max time between logs is 10 minutes.
    > 5) Is the Hypertrend running in live mode when you zoom out/pan?
    I think it happened in both modes, but it definitely did in live mode.
    > 6) If you disable/re-enable live mode in the Hypertrend, does the data re-appear?
    I couldn't trigger the loading of the data. All I did was wait and work with MAX (zooming, panning, looking at data), and after quite a while (some hours) the data appeared.
    I just tested this on a view where data has been missing (for some days now!), and it didn't trigger data reloading. Zooming and panning don't either. There's a gap of up to 3 days now for some traces. 7 of the 13 traces of this view are shown incompletely, all stopping at the same time but reappearing at different times.
    AFAIR from the laboratory computer (these are temperatures, and it's very plausible that they didn't change), there wasn't any change in these traces, so they all got logged because of the max time...
    I just created a new view and added these traces: the gap is there as well.
    (Sorry to put this all in this entry even though it relates to your other questions, but I started this live test with disable/re-enable live mode.)
    > 7) Are the clocks on the client and server computers synchronized? If not synchronized, how far apart are the times on the two computers?
    They should be (Windows 2000 Domain synchronized to ADS), but are 5 seconds apart.
    One thing I remember now: I have DIAdem 10 beta 2 installed (10.0.0b2530, USI + DataFinder 1.3.0.2526). There I had (and reported) some problems with data loading from a Citadel database on a remote machine as well. That was attributed to a cache problem. Maybe a component is interfering?
    Thanks for investigating.
    Cheers,
        Carsten

  • Historical Data

    Hi experts, I am stuck on one query and hope you can help me. I need to extract historical data for employees from PA0000 and PA0001.
    Basically I'm reading an input file from a flat file using GUI_UPLOAD and storing it in the internal table t_txt_upload.
    Then, for all the personnel numbers in this internal table, I need to extract the historical data.
    When I fetch the data using a SELECT query it returns the historical records, but after the READ statement only a single record is displayed.
    Can you explain how to fetch all historical records for a personnel number?
    Could you please guide me? I am attaching the code for clarity; please check it and let me know the changes required.
    if not t_txt_upload[] is initial.   "check that data was uploaded
      sort t_txt_upload by pernr.       "sort uploaded table by pernr
    * Get the data from table PA0000.
      select pernr
             begda
             endda
             massn
        from pa0000
        into table it_p0000
        for all entries in t_txt_upload
        where pernr = t_txt_upload-pernr.
      if sy-subrc ne 0.
        message e063.
      endif.
      sort it_p0000 by pernr.
    * Read the data from PA0001.
      select pernr
             begda
             endda
             persk
             stell
             plans
             werks
             btrtl
             persg
        from pa0001
        into table it_p0001
        for all entries in t_txt_upload
        where pernr = t_txt_upload-pernr.
      if sy-subrc ne 0.
        message e063.
      endif.
      sort it_p0001 by pernr.
    endif.
    loop at t_txt_upload.
      read table it_p0000 with key pernr = t_txt_upload-pernr binary search.
      read table it_p0001 with key pernr = t_txt_upload-pernr binary search.
      if sy-subrc eq 0.
        it_final-pernr = t_txt_upload-pernr.
    *   Reformat BEGDA (YYYYMMDD) as DD/MM/YYYY.
        concatenate it_p0001-begda+6(2) it_p0001-begda+4(2)
                    it_p0001-begda+0(4) into v_begin_date
                    separated by '/'.
        it_final-begda = v_begin_date.
    *   Reformat ENDDA (YYYYMMDD) as DD/MM/YYYY.
        concatenate it_p0001-endda+6(2) it_p0001-endda+4(2)
                    it_p0001-endda+0(4) into v_end_date
                    separated by '/'.
        it_final-endda = v_end_date.
        append it_final.
        clear it_p0000.
        clear it_p0001.
      endif.
    endloop.

    Hi user_jedi,
    Yes, the data is persisted in the database and is permanent until deleted. The recommended practice is to use the Delete Data Alert Action to periodically purge historical data.
    So an overall approach that you could use is:
    * Incoming data is written both to BAM for the real time dashboards, and also to a system of record (database, datamart, data warehouse, ...).
    * Use a Delete Data Alert Action to periodically purge historical data from BAM.
    * If you want to access the historical data from BAM on occasion, you can use an External Data Object, or use another reporting tool that's optimized for non-real-time data.
    Regards, Stephen

  • Best way to store historical data

    Hello:
    I'm currently developing an application that monitors physical variables such as temperature and differential pressure in different locations. I also must do historical datalogging of the values read, so the user can generate reports of the historical data. What is the best way to save the data? I was thinking about using an SQL-based database with a table for each variable, and doing some maintenance on the database after a certain time to avoid it getting too big, but I'm open to any suggestions.
    Thanks in advance!
    Robst
    Robst - CLD
    Using LabVIEW since version 7.0
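
    A minimal sketch of the SQL-based layout described above: one narrow readings table keyed by variable and timestamp instead of a table per variable, plus a periodic purge. The names and interval syntax are illustrative and vary by database engine.

        -- Sketch: one narrow table for all monitored variables.
        CREATE TABLE readings (
          variable_id INTEGER          NOT NULL,
          read_at     TIMESTAMP        NOT NULL,
          value       DOUBLE PRECISION NOT NULL
        );
        CREATE INDEX readings_var_time ON readings (variable_id, read_at);

        -- Periodic maintenance: purge rows older than one year.
        DELETE FROM readings
        WHERE  read_at < CURRENT_TIMESTAMP - INTERVAL '1' YEAR;

    A single table keeps both the purge and the report queries simple; per-variable tables rarely buy anything that the (variable_id, read_at) index doesn't already provide.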

    Hello:
    I've been looking for a way to install just Citadel, with no success. I've read that it only ships with Lookout and DSC, so I guess it can't be installed without one of them. So, if this is true, do you know another option for doing datalogging?
    Thanks in advance
    Robst.
    Robst - CLD
    Using LabVIEW since version 7.0

  • Retrieving historical data from new ST04

    In the old ST04, you could get a nice three-month daily overview of key measures just by hitting "Previous Days".  I use that in my performance analyses.  With the new ST04, I have no idea how that's done.  From my understanding, the new ST04 should give you historical data if you give an initial snapshot date that's far enough back.  But I see no way to define the initial date as anything but "Database Start".  SAP_COLLECTOR_FOR_PERFMONITOR has been running consistently for months, and programs RSORAHCL and RSORAVSH are scheduled to run hourly every day.  So the history should still be there, and should be accessible.  However, the documentation on the new DBACOCKPIT is very sketchy, as far as I've seen.
    Can anyone either point me to some good documentation on this topic, or provide some hints? I'd very much appreciate it.
    Thanks very much.
                                                       Gordon

    Stefan,
    The system I'm looking at, L6P, is on Basis 7.00 SP 13.  For troubleshooting, I compared L6P to our G8P system , which is on Basis 7.00 SP 15.
    One thing I found is that I can select some dates under the "Database Start" and "Up To Now" buttons on G8P, but not on L6P.  I further found that in table TCOLL, RSORAHCL has all 7 days marked in G8P, but no days marked in L6P.  That would explain why I see the dates in G8P, but not in L6P -- I need to flag the days for RSORAHCL in TCOLL.
    So now I know what I need to do to pick dates going back as far as what's in AWR, according to dba_hist_snapshot.  I also see how I can change the snapshot interval and retention periods, for example:
    begin
       dbms_workload_repository.modify_snapshot_settings (
          interval  => 20,
          retention => 2*24*60);
    end;
    The values are in minutes, so I'd want interval = 1440 and retention = 90*24*60 for 3 months of daily snapshot data.
    However, I still don't see how I can actually see the history.  I go into Statistical Information --> System Summary Metrics, select Metrics Datasource dba-view, and put in the dates I want.  I get a lot of metrics, but I don't see a way to limit them to the specific ones I want (for the data buffer hit rate, I think I'll need to get the physical and logical reads).  How can I clean this up to show only what I need?
    Thanks very much.
                                                                    Gordon
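
    For the hit-rate question, one way to get at the raw counters is to read them per snapshot from AWR. A sketch, assuming the AWR retention covers the period; the values are cumulative, so you still have to diff consecutive snapshots:

        -- Cumulative read counters per AWR snapshot; diff consecutive
        -- snapshots to compute a buffer cache hit rate per interval.
        SELECT s.snap_id,
               TO_CHAR(s.end_interval_time, 'YYYY-MM-DD HH24:MI') AS snap_time,
               st.stat_name,
               st.value
        FROM   dba_hist_sysstat st
               JOIN dba_hist_snapshot s
                 ON  s.snap_id         = st.snap_id
                 AND s.dbid            = st.dbid
                 AND s.instance_number = st.instance_number
        WHERE  st.stat_name IN ('physical reads', 'session logical reads')
        ORDER  BY s.snap_id, st.stat_name;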

  • Looking for a manual for the Historical Data Viewer (Lookout 6.1)

    Is there a manual on the Historical Data Viewer (Measurement and Automation Explorer)?
    I have never logged any analog data, but want to log (to historical data) the battery voltage at a remote site during a power outage.
    I have the object established and have a display showing the real-time battery voltage. I have selected logging and have set the resolution and the deviation.
    I am stuck when it comes to the Historical Data Viewer. I believe that I need to set up a trace to view the data, but cannot find a manual for the Historical Data Viewer (Measurement and Automation Explorer) so that I can read up on it.
    Does anyone have a link to a manual?
    Thanks, Alan

    You have to configure the Scale parameter of the Pot in Lookout. The Historical Data Viewer just reads data from the database and shows it; it can't scale it.
    But I can't find a way to add a reference line to the view. Maybe this could be a new feature.
    Ryan Shi
    National Instruments

  • Historical usage of database

    Hi,
    Can anybody help me find out the database usage for the last year?
    I just need to know whether it is possible to find the historical usage of the database over the last year.
    For example:
    jan 100G
    feb 150G
    dec 450G
    Thanks

    Hi,
    Which database version are you using?
    If you are using Oracle 11g Release 2, then:
    Querying historical data:
    Flashback Data Archive provides seamless access to historical data using the 'AS OF' or 'VERSIONS BETWEEN' SQL constructs. You can query the state of any row in a tracked table as far back as your specified retention period.
    The following is an example of querying the salary details for the employee with id = 193 on June 1, 2007:
    SELECT last_name, first_name, salary
    FROM employees
      AS OF TIMESTAMP TO_TIMESTAMP('2007-06-01 00:00:00','YYYY-MM-DD HH24:MI:SS')
    WHERE employee_id = 193;
    • The FLASHBACK ARCHIVE ADMINISTER system privilege is required to create a new flashback data archive.
    • The FLASHBACK ARCHIVE object privilege is required to enable Flashback Data Archive on a table.
    • The following static data dictionary views are available:
      • DBA/USER_FLASHBACK_ARCHIVE – displays information about flashback data archives
      • DBA/USER_FLASHBACK_ARCHIVE_TS – displays tablespaces and their mapping to flashback data archives
      • DBA/USER_FLASHBACK_ARCHIVE_TABLES – displays information about tables that are enabled for Flashback Data Archive
    Best regards,
    Rafi.
    http://rafioracledba.blogspot.com/
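
    Flashback Data Archive only covers tables it was enabled on, so for the original question (month-by-month database size) another possible source is the AWR tablespace usage history, provided the retention reaches back far enough and the Diagnostics Pack is licensed. A sketch; the views are standard AWR, but the sizes are recorded in blocks, so the 8 KB block size below is an assumption to adjust:

        -- Approximate used space per month from AWR tablespace history.
        SELECT month, MAX(used_gb) AS used_gb
        FROM  (SELECT TO_CHAR(s.end_interval_time, 'YYYY-MM') AS month,
                      u.snap_id,
                      ROUND(SUM(u.tablespace_usedsize) * 8192
                            / 1024 / 1024 / 1024, 1)          AS used_gb
               FROM   dba_hist_tbspc_space_usage u
                      JOIN dba_hist_snapshot s
                        ON s.snap_id = u.snap_id AND s.dbid = u.dbid
               GROUP  BY TO_CHAR(s.end_interval_time, 'YYYY-MM'), u.snap_id)
        GROUP  BY month
        ORDER  BY month;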

  • Historical Data Maintenance

    Dear Members,
    This is my second post in the forum. Let me explain the scenario first,
    "We have 2 Tools -Tool1 and Tool2 which points to 2 different databases - db1 and db2 respectively. currently, the db1 performance is very poor due to huge data. we want to have only latest 18 months data in db1. The oldest data beyond 18 months should remain in db2 (in read only mode)which Tool 2 connects to. So, whenever i need historical data, i`ll use tool2. At regular intervals the data from db1 should move to db2."
    My idea is to use partitioning and logical standby. At the end of each month, the oldest one month data will be moved to db2. But please let me know whether this will be feasible to the above concept. If so, how to implement this and if not, what would be the right solution for this?
    Regards,
    Mani
    TCS

    Partitioning is great on the source side (assuming you partition by date, of course).
    I am not sure how logical standby would help on the destination. The point of logical standby is to keep the standby database up to date with the primary, so the standby database would not be read only, it would be constantly applying transactions from the primary. And when you drop a partition on the primary, you would drop the partition on the standby, so the standby wouldn't maintain history.
    Instead of logical standby, you could use Streams to replicate transactions and configure Streams to ignore certain DDL operations like partition drops. That would allow you to retain history on db2 but wouldn't give you a read-only db2 database.
    You could potentially do a partition exchange in db1 at a regular interval, moving the data you want to remove into a non-partitioned staging table, move that table to db2 (via export/import, transportable tablespaces, etc.), and do a partition exchange to load the data into the partitioned table on db2. That gives you a read-only db2 and lets you retain history, but requires some work to move the data around every month; a rough sketch follows below.
    Of course, if you decide to partition db1, assuming you did it correctly, I would tend to expect that the performance problems would go away (or at least that archiving the old data wouldn't affect performance any longer). One of the points of partitioning is that Oracle can then do partition elimination for your queries so that it only needs to look at the current partition if that's all tool1 is interested in. So perhaps all you need to do is partition db1 and you don't need db2 at all.
    Justin
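
    For illustration, the monthly exchange step might look like this, with invented table and partition names:

        -- On db1: swap the oldest month into an empty staging table
        -- (the partition ends up empty; the staging table holds the data).
        ALTER TABLE orders
          EXCHANGE PARTITION orders_2007_01 WITH TABLE orders_stage
          INCLUDING INDEXES WITHOUT VALIDATION;

        -- Ship orders_stage to db2 (Data Pump, transportable tablespace, ...),
        -- then on db2 exchange it into the history table's matching partition:
        ALTER TABLE orders_hist
          EXCHANGE PARTITION orders_2007_01 WITH TABLE orders_stage
          INCLUDING INDEXES WITHOUT VALIDATION;

        -- Back on db1: drop the now-empty partition.
        ALTER TABLE orders DROP PARTITION orders_2007_01;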

  • Historical data move.

    Hi,
    In my database I have various main transactional tables which contain 8 to 9 years of old data, but my users hardly ever fetch data older than 2 years. Sometimes this much data in my tables impacts application performance. Now I want to move the historical data out of my main transactional tables to keep them as small as possible. I have written down the following options. Could you please explain which option is better and why? Specifically, what's the advantage of keeping historical data in a different tablespace?
    1. Create history tables in the same schema and keep the respective data older than 2 years in the same tablespace.
    2. Create a separate history schema and keep the respective data older than 2 years in a different tablespace.
    3. Don't move the data; create range partitions by date.
    Regards,
    JM

    Your first two options seem to imply that you think there is some inherent correlation between schemas and tablespaces, or that one choice might have better performance than another. Not so.
    If you create a separate table for the historical data, it's a separate table. Period. As far as performance goes, it doesn't matter if it is in a separate schema or not. It doesn't matter if it is in a separate tablespace or not. The simple fact that you have moved it to a separate table (removing it from the table with the 'current' data) means that queries on the 'current' table won't have to wade through the historical data.
    But even before reading your proposed solutions, I was thinking of option 3. Your situation is exactly what partitioning is most often used for. The beauty of partitioning is that Oracle can figure out that it doesn't have to wade through the 'historical' partitions when it can tell from the SELECT predicates that it doesn't need what's there, PLUS if you do need the historical data, you don't have to write a join of the two tables. In short, your app doesn't have to be concerned with what's historical vs. what's current; a sketch follows below.
    Take note that partitioning is an extra cost option.
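
    For illustration, option 3 might look like the following sketch (names invented; Oracle range partitioning syntax):

        -- Range-partition the transactional table by date so queries on
        -- recent data prune the historical partitions automatically.
        CREATE TABLE transactions (
          txn_id   NUMBER NOT NULL,
          txn_date DATE   NOT NULL,
          amount   NUMBER
        )
        PARTITION BY RANGE (txn_date) (
          PARTITION p_hist VALUES LESS THAN (DATE '2010-01-01'),
          PARTITION p_2010 VALUES LESS THAN (DATE '2011-01-01'),
          PARTITION p_curr VALUES LESS THAN (MAXVALUE)
        );

        -- A predicate on txn_date lets the optimizer skip older partitions:
        SELECT SUM(amount) FROM transactions
        WHERE  txn_date >= DATE '2011-01-01';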

  • Historical Data Models

    I'm looking for case studies on historical data models, because I'd like to know how other analysts have approached this subject.
    Please, can anyone help me? Thanks

    There is a book by Oracle Press about "UML and Oracle 8" (I don't know the exact title) where UML vs. ER modeling is presented.
    In the book you can find a chapter about historic data modelling. Try a search on Amazon; you'll find the book for sure.
    There is also the book "Developing Time-Oriented Database Applications in SQL" by Richard T. Snodgrass.
    Greetings from Croatia,
    Drago Ganic

  • SCEP Historical Data

     
    1. Is SCEP historical data (threat history) stored in a log on the server, or does SCCM 2012 read the data from the SCEP logs on the client?
    2. If I want to keep/pull SCEP historical data for 1 year, do I have to change the "Delete Aged Threat Data" maintenance task from the default 30 days to 365?
    It just seems strange that the default is 30 days. I know auditors require tracking historical data for a year or so.
    Andrew Marcos

    Yes, I know this is an old post, but I’m trying to clean them up. Did you solve this problem? If so, what was the solution?
    There will be no problem doing this, except that there will be an increase in the database size.
    Garth Jones | My blogs: Enhansoft and
    Old Blog site | Twitter:
    @GarthMJ

  • Error -1967390704 when trying to read historical data from Citadel

    I am trying to read historical data for 8 tags in a for loop. If I run the historical VI while my data collection VI is running, I get the error -1967390704 (CIT_ReadTrace.vi, error code 0x8abc010). I have installed the DSC 6.1 fixes and also updated the Logos version to 4.4. Is it generally good practice not to access historical data while new data could still be written to Citadel? Is there a VI or anything that could check the versions of all the installed NI software against the latest versions available?

    I had some major problems when I installed the latest updates. I unzipped to the wrong directory and had multiple copies of some LabVIEW files; it really made for some strange behavior. Right now I am running LabVIEW 6.1 with the Feb 26, 2003 updates from the web, Logos version 4.4.0.17. It sounds like your database may be corrupted. They also sent me a file to work around a read issue with the historical VIs.
    Attachments:
    BenchReadHistTrend.llb ‏104 KB

  • Exporting historical data to text file with MAX misses columns of data?

    I am using LabVIEW 7.1 with the DSC module 7.1 and want to export data to either Excel format or .txt.
    I have tried this with the historical data export in MAX, and also programmatically with the "write traces to spreadsheet file.vi" available in the DSC module. All the tags in my tag engine file (*.scf) are defined to log data and events. Both the update tag engine deadband and the update database deadband are set to 0%.
    My exported Excel or text file seems reasonable, except that some columns of data are missing. I don't understand why data from these tags is not in the exported files, since they have the same setup in the .scf as other tags which are exported okay.
    All defined tags can be seen using the NI HyperTrend or MAX -- including the ones that are not correctly exported to file.
    Appreciate comments on this.
    Best regards,
    Ingvald Bardsen

    I am using LV and DSC 7.1 with a PCI-6251 and MAX 4.2. In fact, just one column of values does not make sense. The exported Excel file is attached; the last column, called ...V-002, is the problem. I put probes in to check the values and they show correct, but when the file is exported it contains wrong values.
    The problem of missing values in a column I solved by putting 0% in the deadband field.
    Thank you for your help
    Attachments:
    qui, 2 de ago de 2007 - 132736.xls ‏21 KB
