Log data for 40+ channels

I want to log the acquired channel data to a database every 6 hours.
The application continuously acquires data 24/7 and writes it to a file.
The file data then needs to be committed every 6 hours, without closing the file and without deleting the previously logged (already committed) data in the file. Can the open file be accessed to commit the data to the database while the application writes the acquired data?
Or should the data be committed to the database after every acquisition? The aim is to have minimal data loss while acquiring values.
Thanks!

Hi
they say a perfect picture is an empty canvas
Maybe that's because we all see the picture as we believe it to be composed!
With regard to your question:
1) What is your sampling speed?
2) What instrumentation are you using?
3) Windows 2000/XP/Vista?
4) LabVIEW version?
Using ballpark figures:
If you have 16 channels and sample at 16 kHz for 60 seconds, then a 3 MB data file is generated!
If you do not write directly to disk, then you have over 1 GB of data to transfer every 6 hours. Hopefully your power supply is reliable, and your PC is capable of handling the load.
I would suggest that you consider a real-time solution, CompactRIO, with the appropriate hardware.
Check out NI products and services.
xseadog
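
A minimal sketch of the "write to file continuously, commit to the database in batches" idea, in Python rather than LabVIEW: read_channels() is a hypothetical stand-in for the real acquisition, and SQLite is assumed purely for illustration. The flushed log file keeps the window of possible data loss to roughly one scan (and a separate reader can typically open the still-open file), while batching the database commits amortizes the transaction overhead.

import sqlite3
import time

COMMIT_INTERVAL = 6 * 3600   # seconds between database commits (6 hours)

def read_channels():
    """Hypothetical placeholder for one scan of 40+ channel values."""
    return [0.0] * 40

db = sqlite3.connect("channels.db")
db.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, ch INTEGER, value REAL)")

buffer = []                  # rows acquired since the last commit
last_commit = time.time()

with open("channels.log", "a") as logfile:
    while True:
        ts = time.time()
        values = read_channels()
        # Write to the log file immediately, so a crash loses at most one scan.
        logfile.write(f"{ts}," + ",".join(map(str, values)) + "\n")
        logfile.flush()
        buffer.extend((ts, ch, v) for ch, v in enumerate(values))
        # Every 6 hours, push the buffered rows to the database in one batch.
        if ts - last_commit >= COMMIT_INTERVAL:
            db.executemany("INSERT INTO samples VALUES (?, ?, ?)", buffer)
            db.commit()
            buffer.clear()
            last_commit = ts
        time.sleep(1.0)      # acquisition period (1 S/s assumed here)

Committing after every acquisition is the safest option but pays the transaction overhead on every scan; a batch interval of minutes rather than 6 hours is a common compromise between the loss window and throughput.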

Similar Messages

  • Log data for infotypes

    Hi folks,
    I have a task that I am trying to resolve, related to log data for infotypes. I had posed this question earlier too. I did some research to find an answer, but in vain. Any help is really appreciated.
    The task is: the infotype is 0167 and there are changes being made to health plan records (like inserting new plans or terminating plans). I do not know what the end users are doing, but log data is not being created for these changes.
    The program's logic is developed in such a way that it goes after these log records from PCL4. I believe it is not a programming issue; it might be something else.
    I need to find that out to resolve it.
    What might be the problem?
    Thanks,
    SK

    It is a standard setting. The infotypes, their field groups, and field group characteristics are defined in V_T585A, V_T585B and V_T585C. I believe the end users are missing some process, because it started happening about a week ago. The same program was picking up the records fine earlier, and it has not changed.
    I do not know what kind of process they follow, or what changed now. Since I am the only SAP guy out here, I have got to find it out.
    They are using PA30/PA40 to enroll, as well as the web application. Neither of these created the log data; the records went through to SAP.
    Could there be any step they might be missing?
    Thanks for the quick reply,
    SK

  • How to store call log data for one month?

    Hi,
    I am using an iPhone 4S with iOS 7.1.2. I would like to know whether there is any setting for storing call log data for the latest month.

    That's not how it works. Recents is limited to exactly 100 calls, not a time frame. If you need your call history for a specific time frame, look on your carrier's website. Most carriers will permit you to log in to your account and view call history.

  • 10 kS/s data for 28 channels

    Hi All,
    From the field I have 28 channels of sensor data; the data rate is 10 kS/s per channel.
    I have to store the field data for up to 3 months to generate historical reports and to analyze the field conditions.
    The data volume is very large, so I would like suggestions from you all on a technique or preferred way to store this much data.
    Thanks and Regards
    Himanshu Goyal | LabVIEW Engineer- Power System Automation
    Values that steer us ahead: Passion | Innovation | Ambition | Diligence | Teamwork
    It Only gets BETTER!!!

    Thanks for your suggestions.
    The project requirements are already defined by the customer. The customer wants the complete 10 kS/s data for all 28 channels. He can't accept a lower sample rate or data loss.
    I tried storing the 10 kS/s (SGL format) data for 28 channels (sine wave) in a TDMS file; the file size is around 3.5 GB for 1 hour, so a complete day comes to around 84 GB. I am not sure TDMS supports files that large; if not, I will have to save each day's data as a set of TDMS files. Also, if the customer wants to see a complete 1-hour data report in graphical or tabular format, the operation takes around 2-3 minutes, or sometimes fails with an error like "Not Enough Memory". So at the end of the day, if the user wants the complete day's report, how can I provide a report over that much data?
    I am using a PXI RT system with a local hard disk. The PXI always keeps 7 days of data as a backup in case of connection failure with the server.
    So my question remains: what database or file format should I choose to store the complete data?
    If there is any technique to compress the data, please suggest it.
    Thanks and Regards
    Himanshu Goyal | LabVIEW Engineer- Power System Automation
    Values that steer us ahead: Passion | Innovation | Ambition | Diligence | Teamwork
    It Only gets BETTER!!!
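
    One way around the "Not Enough Memory" report problem is to avoid loading the raw data at all: read the file in bounded chunks and reduce each run of samples to a min/max pair before plotting. A minimal sketch in Python/NumPy, assuming a flat binary file of float32 (SGL) samples for one channel; the filename and reduction factor are illustrative:

    import numpy as np

    def decimated_minmax(path, factor=10_000, chunk_samples=1_000_000):
        """Reduce a float32 sample stream to min/max pairs, one pair per
        `factor` samples, reading the file with bounded memory."""
        mins, maxs = [], []
        with open(path, "rb") as f:
            while True:
                chunk = np.fromfile(f, dtype=np.float32, count=chunk_samples)
                if chunk.size == 0:
                    break
                n = (chunk.size // factor) * factor
                blocks = chunk[:n].reshape(-1, factor)   # leftover samples at the
                mins.extend(blocks.min(axis=1))          # chunk edge are dropped
                maxs.extend(blocks.max(axis=1))          # for brevity
        return np.array(mins), np.array(maxs)

    # One hour at 10 kS/s is 36 million samples per channel; with
    # factor=10_000 a report draws 3,600 min/max pairs instead.
    lo, hi = decimated_minmax("channel_00.bin")

    Keeping one min/max pair per display bucket preserves peaks that plain sub-sampling would miss, which matters when the report is used to spot transients.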

  • I need to start and stop logging based on a digital input event (or analog if necessary), log data for several seconds prior to the event, and have the data file close at the end of the event and increment the filename for the next logging event.

    I don't know if this can be done with VI Logger, or whether I need to use LabVIEW 7.1.

    After browsing through the VI Logger User Manual, it looks like the triggering that you are hoping to accomplish is possible. However, incrementing the filename for the next logging event is not going to be possible. VI Logger does exactly what its name says: logs data. I don't think the automation that you are hoping to accomplish is possible.
    For help with setting up your application, if you do choose to stay with VI Logger, make sure to check out the Getting Started with VI Logger manual.
    Best of luck.
    Jared A
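
    For what the title asks, the pattern itself is simple to express in code even though VI Logger can't do it: a ring buffer holds the pre-trigger history, and each event opens a new, auto-incremented file. A sketch under stated assumptions (read_sample() and read_trigger() are hypothetical stand-ins for the real DAQ calls):

    from collections import deque
    import itertools
    import random

    PRETRIGGER_SAMPLES = 5000            # "several seconds prior to the event"

    def read_sample():
        return random.random()           # hypothetical stand-in: one scan of data

    def read_trigger():
        return random.random() < 0.001   # hypothetical stand-in: digital line state

    pre = deque(maxlen=PRETRIGGER_SAMPLES)   # ring buffer: old samples fall off
    file_no = itertools.count(1)

    while True:
        if not read_trigger():
            pre.append(read_sample())        # idle: keep refilling the ring buffer
        else:
            # Trigger asserted: dump the pre-trigger history, log until the
            # trigger de-asserts, then close the file and increment the name.
            with open(f"event_{next(file_no):04d}.csv", "w") as f:
                f.writelines(f"{s}\n" for s in pre)
                while read_trigger():
                    f.write(f"{read_sample()}\n")
            pre.clear()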

  • I want to acquire data from multiple channels using a PCI-6120 that works with Traditional DAQ. I cannot access more than one channel. Can someone help me, or if someone has a data acquisition VI for the PCI-6120, please send it over. Thanks

    I have a PCI-6120 card and I want to acquire data from more than one channel. I'm using Traditional DAQ, but it does not work for more than one channel. If someone has a data acquisition VI for the PCI-6120, or a suggestion on how to acquire the data, please let me know.
    Thanks

    Hello DSPGUY1,
    You can definitely acquire from several channels. For your convenience, I have appended below the content from the help that tells you how to configure it:
    "channels specifies the set of analog input channels. The order of the channels in the scan list defines the order in which the channels are scanned during an acquisition. channels is an array of strings. You can use one channel entry per element or specify the entire scan list in a single element, or use any combination of these two methods. If x, y, and z refer to channels, you can specify a list of channels in a single element by separating the individual channels by commas, for example, x,y,z. If x refers to the first channel in a consecutive channel range and y refers to the last channel, yo
    u can specify the range by separating the first and last channels by a colon, for example, x:y."
    Hope this helps.
    Serges Lemo
    Applications Engineer
    National Instruments
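
    The quoted scan-list syntax is only a string convention, so it is easy to illustrate with a small parser. This sketch just demonstrates the "x,y,z" and "x:y" forms described above; it is not an NI API:

    def expand_scan_list(spec):
        """Expand a scan-list string such as "0,2,4:6" into [0, 2, 4, 5, 6]."""
        channels = []
        for part in spec.split(","):
            if ":" in part:                       # "x:y" means a consecutive range
                first, last = (int(x) for x in part.split(":"))
                channels.extend(range(first, last + 1))
            else:                                 # a single channel entry
                channels.append(int(part))
        return channels

    print(expand_scan_list("0,2,4:6"))   # [0, 2, 4, 5, 6]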

  • Log data for 5 min

    Hi.
    I use LabVIEW 7.1. I am acquiring some data using the DAQ Assistant and saving it as an LVM file using the Write LVM Express VI. Both these blocks are placed inside a while loop which runs continuously until the user presses stop.
    Now the question is that I want to log only 5 minutes of data each time I start the VI.
    How do I do this?
    Thanks.

    Dear pilo,
    I think you can make a loop with two time counters:
    one in the loop, one outside of the loop.
    In the example I gave you, the loop runs until the elapsed time exceeds 5 seconds.
    Attachments:
    time.vi ‏15 KB
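
    The same two-counter idea in text form (Python): capture a start time outside the loop and compare the elapsed time inside it. The acquire_and_log() function is a hypothetical stand-in for the DAQ Assistant and Write LVM Express VI pair; 300 seconds gives the 5-minute requirement:

    import time

    DURATION = 5 * 60                # stop after 5 minutes of logging

    def acquire_and_log():
        pass                         # hypothetical: acquire one scan, append to LVM file

    start = time.time()              # time counter outside the loop
    while time.time() - start < DURATION:   # time counter checked inside the loop
        acquire_and_log()
        time.sleep(0.1)              # loop period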

  • How to delete Change log data from a DSO?

    Hello Experts,
    I am trying to delete the change log data for a DSO which has some 80 crore (800 million) records in it.
    I am trying to follow the standard procedure, using the process chain variant and giving the number of days, but somehow the data is not getting deleted.
    However, the process chain is completing successfully with a green (G) status.
    Please let me know if there are any other ways to delete the data.
    Thanks in Advance.
    Thanks & Regards,
    Anil.

    Hi,
    Then there might be something wrong with your change log deletion variant.
    Can you recreate the change log deletion variant and set it up again?
    Try to check the settings below with the new variant.
    Red mark: do not select it.
    Provide the DSO name and InfoArea, fill in "older than", and select the blue mark.
    Blue mark: it will delete only successfully loaded requests which are older than N days.
    Have you tested this change log deletion process type before moving it to production, as per your data flow?
    Thanks

  • How can I control an NI 6115 to collect data from 2 channels and save it as 2 files?

    I want to program the NI 6115 card to collect data from 2 channels and save the two data streams under two different filenames that I specify.
    How do I write this in LabVIEW code?

    Calibur,
    LabVIEW includes a number of examples that demonstrate how to acquire analog input data and write it to disk. Depending on the type of file you would like to use, I would suggest that you examine one of the following examples:
    Cont Acq to File (binary).vi
    Cont Acq to File (scaled).vi
    Cont Acq to Spreadsheet File.vi
    With regard to writing each channel's data to a separate file, you will need to use the Index Array function to generate two 1-D arrays, each containing data for one channel. These arrays can then be written to separate files using two Write File functions.
    Good luck with your application.
    Spencer S.
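
    The "Index Array, then two Write File calls" pattern maps directly onto array slicing. A minimal sketch with NumPy, using random data as a stand-in for the acquired samples (filenames are illustrative):

    import numpy as np

    data = np.random.rand(1000, 2)        # stand-in for acquired samples x channels

    ch0 = data[:, 0]                      # "Index Array" equivalent: column 0
    ch1 = data[:, 1]                      # column 1

    np.savetxt("channel_0.txt", ch0)      # two separate "Write File" calls
    np.savetxt("channel_1.txt", ch1)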

  • Problem calculating the coherence (using the Network Functions VIs) with only 1 row of data each for the stimulus and response inputs

    Hello,
    I am trying to calculate the coherence of a stimulus and response signal using the Network Functions (avg) VIs. The problem is that I always get a coherence of "1" at all frequencies. This problem is already known (see the KnowledgeBase document: Why is the Network Functions (avg) VI's Coherence Function Output "1"?).
    My trouble is that the described solution (the stimulus and response input matrices need to have at least two rows to get non-unity coherence values) doesn't help me much, because I only have one array of stimulus data and one array of response values.
    Thus, how can I fulfill the coherence criterion of inputting at least two rows of data each when I just have one row of data each?
    Any hint or idea is very much appreciated. Thanks!
    Horst

    With this weird board layout, I'm not sure whether you were asking me, but, on the assumption that you were, here goes:
    I found no need to use the cross-power spectrum and power spectrum blocks
    1... I was looking for speed.
    2... I already had the component spectral data there, for other purposes. From that, it's nothing but addition and multiplication.
    3... The "easy" VIs, assume a time wave input, as I recall. Which means they would take the same spectrum of the same timewave several times, where I only do it once.
    I have attached PNGs of my code.
    The PROCESS CHANNEL vi accepts the time wave and:
    1... Removes DC value.
    2... Integrates (optional, used for certain sensors).
    3... Windows (Hanning, etc. - optional)
    4... Finds spectrum.
    5... Removes spectral mirrors.
    6... Scales into Eng. units.
    7... From there, you COULD use COMPLEX-TO-POLAR, but I don't care about the phase data, and I need the MAG^2 data anyway, so I rolled my own COMPLEX-TO-MAG code.
    The above is done on each channel. The PROCESS DATA vi calls the above with data for each channel. The 1st channel in the list is required to be the reference (stimulus) channel.
    After looping over each channel, we have the Sxx, Syy, and Sxy terms. This code contains some averaging and peak-picking stuff that's not relevant.
    From there, it's straightforward to get XFER = Sxy/Sxx and COHERENCE = |Sxy|^2 / (Sxx * Syy).
    Note that it uses the MAGNITUDE SQUARED of Sxy. Again, if you use the "easy" stuff, it will do a square-root operation that you just have to reverse; the magnitude squared is obtained faster as the sum of the squares of the real and imaginary parts.
    Hope this helps.
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks
    Attachments:
    ProcessChannel.png ‏25 KB
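
    Back to Horst's original question: the usual trick is to split each single long record into several segments and let the segments play the role of the required rows, averaging Sxx, Syy, and Sxy across them; with a single segment, the formula above is identically 1 at every frequency. As a sketch in Python, scipy.signal.coherence does this segmenting and averaging (Welch's method) internally via nperseg:

    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    stimulus = np.random.randn(t.size)                     # one row of stimulus data
    response = np.convolve(stimulus, np.ones(5) / 5, "same") \
               + 0.5 * np.random.randn(t.size)             # one row of response data

    # nperseg splits each single record into many (overlapping) segments,
    # which plays the role of the "at least two rows" the KB article asks for.
    f, Cxy = coherence(stimulus, response, fs=fs, nperseg=256)
    # Cxy = |Sxy|^2 / (Sxx * Syy), averaged over segments -> values in [0, 1]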

  • How to extract audit log data from every document library in site collection using powershell?

    Hi All,
    I have a number of document libraries in one site collection.
    My query is: how can I extract the audit log data from every document library in a site collection using PowerShell?
    Please give a solution as soon as possible.

    Hi inguru,
    SharePoint audit log data is combined together at the site collection level, so there is no easy way to extract the audit log data for a single document library.
    As a workaround, you can export the site collection audit log data to a CSV file using a PowerShell command, and then filter the document library audit log data in Excel.
    More information:
    SharePoint 2007 \ 2010 – PowerShell script to get SharePoint audit information:
    http://sharepointhivehints.wordpress.com/2014/04/30/sharepoint-2007-2010-powershell-script-to-get-sharepoint-audit-information/
    Best Regards
    Zhengyu Guo
    TechNet Community Support
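
    If the Excel step becomes tedious, the same per-library filtering of the exported CSV can be scripted. A sketch with Python/pandas; the "DocLocation" column name and its "LibraryName/folder/file" shape are assumptions about the exported file, not a guaranteed schema:

    import pandas as pd

    # Load the site-collection audit export produced by the PowerShell script.
    audit = pd.read_csv("site_audit_log.csv")

    # Grouping on the first path segment yields one frame per document library.
    library = audit["DocLocation"].str.split("/").str[0]
    for name, group in audit.groupby(library):
        group.to_csv(f"audit_{name}.csv", index=False)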

  • I need the log report for the data which I am uploading from SAP R/3.

    Hi All,
    I am on the BI 7.0 platform with Support Package 20.
    I need the log report for the data which I am uploading from SAP R/3.
    I extract the data from R/3 into a BI 7.0 DSO, where I map the GL accounts to the FS items. In the transformation I have written a routine on the FS item InfoObject: I look up the GL code in a Z table to find the FS item.
    I read the FS item from the Z table and then update the FS item InfoObject with it.
    Now I need to stop the data upload if I do not find the GL code in the Z table, and generate a report of all GL codes for which the FS item is not maintained in the Z table.
    Please suggest.
    Regards
    nilesh

    Hi.
    Add a field that you will use to identify whether the GL account of the record was found in the Z table or not. For example, create ZFOUND with length 1 and no text.
    In your routine, when you do the lookup, populate ZFOUND with X when you find a match (sy-subrc = 0) and leave it blank if you don't. Now create a report filtering on ZFOUND = <blank> and output the GL accounts. Those will be the ones not existing in the Z table, but coming in from your transactions.
    Regards
    Jacob
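
    Jacob's flag-and-filter logic is language-independent; here is the same idea sketched in Python, with illustrative table contents and field names rather than the actual ABAP objects:

    # GL account -> FS item lookup, standing in for the Z table.
    z_table = {"100100": "FS001", "100200": "FS002"}

    records = [{"gl": "100100"}, {"gl": "999999"}]
    for rec in records:
        rec["fs_item"] = z_table.get(rec["gl"], "")
        rec["zfound"] = "X" if rec["gl"] in z_table else ""   # sy-subrc = 0 analogue

    # The report: GL accounts coming in from transactions but missing in the Z table.
    missing = [r["gl"] for r in records if r["zfound"] == ""]
    print("GL accounts not maintained in Z table:", missing)  # ['999999']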

  • Not able to get data in the F4 help for distribution channel in IC Web

    Hi Experts,
    Need some help.
    1. I have an IC Web screen for complaints. In the header (form view) there is an F4 help for distribution channel. The item list is a table view containing one column; this column contains checkboxes.
    Issue:
    Scenario 1: if two checkboxes are checked in the items and I then click the F4 for the header's distribution channel, the F4 returns one record. It works fine.
    Scenario 2: but if more than two checkboxes are checked and I then click the F4 for the header's distribution channel, the F4 returns no records. How can this be corrected?
    Analysis:
    I found that in the method GET_HELP_VALUES of class CL_CRM_IC_F4HELP, there is a statement as below.
    infields = request->get_form_field( 'InFields' )."#EC NOTEXT
    For scenario 1, the above statement returns  the input fields with some values...
    For scenario 2, the above statement returns nothing; it is completely blank. As a result, no records are shown in the F4.
    I don't understand why this is happening. Could anyone please help in this regard?
    Version: CRM 5.0. Standard F4 help (ic_base -> f4_help).
    Thanks
    Sudhansu

    Hi,
    Check whether the characteristics used in the query are direct objects from the cube or navigation attributes of other characteristics. In the case of navigation attributes, you need to have the master data maintained.
    And as mentioned by Vamsi, check the text data maintained for 0CUSTOMER. For checking the data you can mark the Key and Text option.
    Regards,
    Durgesh.

  • DSC 8.6.1 wrong timestamps for logged data with Intel dual core

    Problem description:
    Our LV/DSC 8.6.1 application uses shared variables to log data to Citadel. It is running on many similar computers at many companies just fine, but on one particular Intel dual-core computer, the data in the Citadel DB has strange shifting timestamps. Changing the BIOS to start up using a single CPU fixes the problem. Could we possibly set only certain NI process(es) to single-CPU instead (but which?)? The old DSCEngine.exe in LV/DSC 7 had to be run single-CPU... hadn't these kinds of issues been fixed by LV 8.6.1 yet? What about LV 2009, anybody know? Or is it a problem in the OS or hardware, below the NI line?
    This seems similar to an old issue with time synch server problems for AMD processors (Knowledge Base Document ID 4BFBEIQA):
    http://digital.ni.com/public.nsf/allkb/1EFFBED34FFE66C2862573D30073C329 
    Computer info:
    - Dell desktop
    - Win XP Pro sp3
    - 2 G RAM
    - 1.58 GHz Core 2 Duo
    - LV/DSC 8.6.1 (Pro dev)
    - DAQmx, standard instrument control device drivers, serial i/o
    (Nothing else installed; OS and LV/DSC were re-installed to try to fix the problem, no luck)
    Details: 
    A test logged data at 1 Hz, with these results: for 10-30 seconds or so, the timestamps were correct. Then the timestamps were compressed/shifted, with multiple points each second. At perfectly regular 1-minute intervals, the timestamps would be correct again. This pattern repeats, and when the data is graphed, it looks like regular 1-second-interval points, then denser points, then no points until the next minute (not ON the minute, e.g. 12:35:00, but after a minute, e.g. 12:35:24, 12:36:24, 12:37:24...). Occasionally (but rarely), restarting the PC would produce accurate timestamps for several minutes running, but then the pattern would reappear in the middle of logging, with no changes made.
    Test info: 
    - shared variable configured with logging enabled
    - data changing by much more than the deadband
    - new value written by Datasocket Write at a steady 1 Hz
    - historic data retrieved by Read Traces
    - Distributed System Manager shows correct and changing values continuously as they are written

    Meg K. B.,
    It sounds like you are experiencing Time Stamp Counter (TSC) drift, as mentioned in the KBs for the AMD multi-core processors. However, according to this Wikipedia article on TSCs, the Intel Core 2 Duo's "time-stamp counter increments at a constant rate... Constant TSC behavior ensures that the duration of each clock tick is uniform and supports the use of the TSC as a wall clock timer even if the processor core changes frequency." This seems to suggest that it would not be the case that you are seeing the issue mentioned in the KBs.
    Can you provide the exact model of the Core 2 Duo processor that you are using?
    Ben Sisney
    FlexRIO V&V Engineer
    National Instruments

  • How to see data for a particular date in an alert log file

    Hi Experts,
    I would like to know how I can see the data for a particular date from alert_db.log in a Unix environment. I'm using Oracle 9i on Unix.
    Right now I'm using tail -500 alert_db.log > alert.txt and then viewing the whole thing. But is there any easier way to look at a particular date or time?
    Thanks
    Shaan

    Hi Jaffar,
    Here I have to pass the exact date and time. Is there any way to see the records for, let's say, Nov 23 2007? Because when I used
    tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
    it's not working. Here is a sample of the log file:
    Mon Nov 26 21:42:43 2007
    Thread 1 advanced to log sequence 138
    Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Mon Nov 26 21:42:43 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 137
    Mon Nov 26 21:42:43 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 137
    ARC1: Unable to archive log 1 thread 1 sequence 137
    Log actively being archived by another process
    Mon Nov 26 21:42:43 2007
    ARCH: Beginning to archive log 1 thread 1 sequence 137
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137
    .dbf'
    ARCH: Completed archiving log 1 thread 1 sequence 137
    Mon Nov 26 21:42:44 2007
    Thread 1 advanced to log sequence 139
    Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Mon Nov 26 21:42:44 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 138
    ARC0: Beginning to archive log 3 thread 1 sequence 138
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138
    .dbf'
    Mon Nov 26 21:42:44 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 138
    ARCH: Unable to archive log 3 thread 1 sequence 138
    Log actively being archived by another process
    Mon Nov 26 21:42:45 2007
    ARC0: Completed archiving log 3 thread 1 sequence 138
    Mon Nov 26 21:45:12 2007
    Starting control autobackup
    Mon Nov 26 21:45:56 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0033'
    handle 'c-2861328927-20071126-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Tue Nov 27 21:23:50 2007
    Starting control autobackup
    Tue Nov 27 21:30:49 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071127-00'
    Tue Nov 27 21:30:57 2007
    ARC1: Evaluating archive log 2 thread 1 sequence 139
    ARC1: Beginning to archive log 2 thread 1 sequence 139
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139
    .dbf'
    Tue Nov 27 21:30:57 2007
    Thread 1 advanced to log sequence 140
    Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Tue Nov 27 21:30:57 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 139
    ARCH: Unable to archive log 2 thread 1 sequence 139
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARC1: Completed archiving log 2 thread 1 sequence 139
    Tue Nov 27 21:30:58 2007
    Thread 1 advanced to log sequence 141
    Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Tue Nov 27 21:30:58 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 140
    ARCH: Beginning to archive log 1 thread 1 sequence 140
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140
    .dbf'
    Tue Nov 27 21:30:58 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 140
    ARC1: Unable to archive log 1 thread 1 sequence 140
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARCH: Completed archiving log 1 thread 1 sequence 140
    Tue Nov 27 21:33:16 2007
    Starting control autobackup
    Tue Nov 27 21:34:29 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0205'
    handle 'c-2861328927-20071127-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Wed Nov 28 21:43:31 2007
    Starting control autobackup
    Wed Nov 28 21:43:59 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-00'
    Wed Nov 28 21:44:08 2007
    Thread 1 advanced to log sequence 142
    Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Wed Nov 28 21:44:08 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 141
    ARCH: Beginning to archive log 3 thread 1 sequence 141
    Wed Nov 28 21:44:08 2007
    ARC1: Evaluating archive log 3 thread 1 sequence 141
    ARC1: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141
    .dbf'
    Wed Nov 28 21:44:08 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 141
    ARC0: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    ARCH: Completed archiving log 3 thread 1 sequence 141
    Wed Nov 28 21:44:09 2007
    Thread 1 advanced to log sequence 143
    Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Wed Nov 28 21:44:09 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 142
    ARCH: Beginning to archive log 2 thread 1 sequence 142
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142
    .dbf'
    Wed Nov 28 21:44:09 2007
    ARC0: Evaluating archive log 2 thread 1 sequence 142
    ARC0: Unable to archive log 2 thread 1 sequence 142
    Log actively being archived by another process
    Wed Nov 28 21:44:09 2007
    ARCH: Completed archiving log 2 thread 1 sequence 142
    Wed Nov 28 21:44:36 2007
    Starting control autobackup
    Wed Nov 28 21:45:00 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Thu Nov 29 21:36:44 2007
    Starting control autobackup
    Thu Nov 29 21:42:53 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0206'
    handle 'c-2861328927-20071129-00'
    Thu Nov 29 21:43:01 2007
    Thread 1 advanced to log sequence 144
    Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Thu Nov 29 21:43:01 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 143
    ARCH: Beginning to archive log 1 thread 1 sequence 143
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143
    .dbf'
    Thu Nov 29 21:43:01 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 143
    ARC1: Unable to archive log 1 thread 1 sequence 143
    Log actively being archived by another process
    Thu Nov 29 21:43:02 2007
    ARCH: Completed archiving log 1 thread 1 sequence 143
    Thu Nov 29 21:43:03 2007
    Thread 1 advanced to log sequence 145
    Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Thu Nov 29 21:43:03 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 144
    ARCH: Beginning to archive log 3 thread 1 sequence 144
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144
    .dbf'
    Thu Nov 29 21:43:03 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 144
    ARC0: Unable to archive log 3 thread 1 sequence 144
    Log actively being archived by another process
    Thu Nov 29 21:43:03 2007
    ARCH: Completed archiving log 3 thread 1 sequence 144
    Thu Nov 29 21:49:00 2007
    Starting control autobackup
    Thu Nov 29 21:50:14 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071129-01'
    Thanks
    Shaan
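
    For the record, the sample above shows why the grep fails: timestamp lines look like "Mon Nov 26 21:42:43 2007", so the pattern " Nov 23 2007" can never match, because the time sits between the day and the year. A pattern such as grep "Nov 23 .* 2007" matches the timestamp lines themselves, but to pull out the whole block of messages under a date, a small script helps. A sketch in Python (the date and filename are illustrative):

    import re

    TARGET = ("Nov", "27", "2007")   # month, day, year to extract (from the sample)

    # Matches timestamp lines such as "Mon Nov 26 21:42:43 2007"
    # (the extra "+" tolerates double-spaced single-digit days).
    stamp = re.compile(r"^\w{3} (\w{3}) +(\d{1,2}) [\d:]+ (\d{4})$")

    emit = False
    with open("alert_db.log") as log:
        for line in log:
            m = stamp.match(line.strip())
            if m:
                # Each timestamp line decides whether the block that
                # follows belongs to the date we want.
                emit = m.groups() == TARGET
            if emit:
                print(line, end="")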
