Log data for 5 min

Hi.
I use LabVIEW 7.1. I am acquiring data with the DAQ Assistant and saving it as an LVM file using the Write LVM Express VI. Both blocks are placed inside a while loop that runs continuously until the user presses the stop button.
Now the question is: I want to log only 5 minutes of data each time I start the VI.
How do I do this?
Thanks.

Dear pilo,
I think you can make a loop with two time counters: one inside the loop and one outside it.
In the example I gave you, the loop runs until the elapsed time exceeds 5 seconds.
Attachments:
time.vi ‏15 KB
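The same elapsed-time pattern, sketched in Python since LabVIEW is graphical (read_daq is a hypothetical stand-in for the DAQ Assistant; in LabVIEW you would compare Get Date/Time In Seconds against a start time captured before the loop):

import random
import time

LOG_DURATION_S = 5 * 60            # log for 5 minutes per run

def read_daq():
    # Hypothetical stand-in for the DAQ Assistant read.
    return random.random()

start = time.time()                # captured once, before the loop
with open("data.lvm", "w") as f:
    while time.time() - start < LOG_DURATION_S:   # compared inside the loop
        f.write(f"{read_daq()}\n")
        time.sleep(0.01)           # pacing only; the DAQ read would set the real rate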

Similar Messages

  • Log data for infotypes

    Hi folks,
    I have a task I am trying to resolve, related to log data for infotypes. I posed this question earlier too, and I did some research to find an answer, but in vain. Any help is really appreciated.
    The task: the infotype is 0167, and changes are being made to health plan records (like inserting new plans or terminating plans). I do not know what the end users are doing, but log data is not being created for these changes.
    The program's logic goes after these log records in PCL4. I believe it is not a programming issue; it might be something else.
    I need to find out what it is in order to resolve it.
    What might be the problem?
    Thanks,
    SK

    It is a standard setting. The infotypes, their field groups, and the field group characteristics are defined in V_T585A, V_T585B and V_T585C. I believe the end users are missing some process, because this started happening only a week or so ago: the same program was picking up the records fine earlier, and it has not changed.
    I do not know what kind of process they follow, or what changed now. Since I am the only SAP guy out here, I have got to find out.
    They are using PA30/PA40 to enroll, as well as the web application. Neither path created the log data, although the records went through to SAP.
    Could there be a step they might be missing?
    Thanks for the quick reply,
    SK

  • How to store call log data for one month?

    Hi,
    I am using an iPhone 4S with iOS 7.1.2. Is there any setting for storing call log data for the latest month?

    That's not how it works. Recents is limited to exactly 100 calls, not a time frame. If you need your call history for a specific time frame, look on your carrier's website. Most carriers will let you log in to your account and view call history.

  • I need to start and stop logging based on a digital input event(or analog if necessary), log data for several seconds prior to the event, and have the data file close at the end of event and increment the filename for the next logging event.

    I don't know if this can be done with VI Logger or whether I need to use LabVIEW 7.1.

    After browsing through the VI Logger User Manual, it looks like the triggering you hope to accomplish is possible. However, incrementing the filename for the next logging event is not going to be possible. VI Logger does exactly what its name says: it logs data. I don't think the automation you are hoping for is possible there.
    If you do choose to stay with VI Logger, make sure to check out the Getting Started with VI Logger manual for help setting up your application.
    Best of luck.
    Jared A
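
    If the poster moves to LabVIEW, the requested behavior (pre-trigger history, close on event end, incremented filename) is straightforward to build. A minimal sketch of the logic in Python, assuming a hypothetical on_sample callback invoked once per acquired sample with the digital trigger line's state:

    from collections import deque

    SAMPLE_RATE = 100                       # Hz, assumed
    PRETRIGGER_S = 5                        # seconds of history to keep

    pretrigger = deque(maxlen=PRETRIGGER_S * SAMPLE_RATE)   # ring buffer
    file_index = 0
    outfile = None

    def on_sample(value, trigger_high):
        # Hypothetical callback: called once per sample with the trigger state.
        global file_index, outfile
        if outfile is not None:             # currently logging an event
            outfile.write(f"{value}\n")
            if not trigger_high:            # event ended: close file, re-arm
                outfile.close()
                outfile = None
        else:
            pretrigger.append(value)        # includes the triggering sample
            if trigger_high:                # event started: new file, flush history
                file_index += 1
                outfile = open(f"log_{file_index:04d}.txt", "w")
                outfile.writelines(f"{v}\n" for v in pretrigger)
                pretrigger.clear()

    # Example: a short burst with a two-sample trigger pulse.
    for i, trig in enumerate([False, False, True, True, False]):
        on_sample(float(i), trig)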

  • Log data for 40+ channels

    I want to log the acquired channel data to a database every 6 hours.
    The application continuously acquires data 24/7 and writes it to a file.
    The file data then needs to be committed every 6 hours, without closing the file and without deleting the previously logged (already committed) data in it. Can the open file be accessed to commit the data to the database while the application writes the acquired data?
    Or should the data be committed to the database after every acquisition? The aim is to have minimal data loss while acquiring values.
    thanks!

    Hi,
    they say a perfect picture is an empty canvas; maybe that's because we all see the picture as we believe it to be composed!
    With regard to your question:
    1) What is your sampling speed?
    2) What instrumentation are you using?
    3) Windows 2000/XP/Vista?
    4) LabVIEW version?
    Using ballpark figures: if you have 16 channels and sample at 16 kHz for 60 seconds, a 3 MB data file is generated! If you do not write directly to disk, then you have over 1 GB of data to transfer every 6 hours. Hopefully your power supply is reliable and your PC is capable of the load.
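    A hedged back-of-envelope check of those figures, assuming 16-bit (2-byte) samples: 16 channels × 16,000 S/s × 2 B = 512 KB/s, which is roughly 30 MB per minute and on the order of 11 GB per 6-hour window. Buffering everything in memory between database commits is therefore not realistic; stream to disk and commit incrementally.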
    I would suggest that you consider a real-time solution, CompactRIO, with the appropriate hardware.
    Check out NI products and services.
    xseadog

  • How to delete Change log data from a DSO?

    Hello Experts,
    I am trying to delete the change log data for a DSO which has some 80 crore (800 million) records in it.
    I am trying to follow the standard procedure, using the process chain variant and giving the number of days, but somehow the data is not getting deleted.
    However, the process chain completes successfully with a green (G) state.
    Please let me know if there are any other ways to delete the data.
    Thanks in Advance.
    Thanks & Regards,
    Anil.

    Hi,
    Then there might be something wrong with your change log deletion variant.
    Can you recreate the change log deletion variant and set it up again?
    Try to check the settings below with the new variant:
    Red mark: do not select.
    Provide the DSO name and InfoArea, fill in "older than", and select the blue mark.
    Blue mark: it deletes only successfully loaded requests that are older than N days.
    Have you tested this change log deletion process type before moving it to production, as per your data flow?
    Thanks

  • How to extract audit log data from every document library in site collection using powershell?

    Hi All,
    I have n document libraries in one site collection.
    My query is: how do I extract audit log data from every document library in a site collection using PowerShell?
    Please provide a solution as soon as possible.

    Hi inguru,
    SharePoint audit log data is aggregated at the site collection level, so there is no easy way to extract audit log data per document library.
    As a workaround, you can export the site collection audit log data to a CSV file using a PowerShell command, and then filter the document library audit log data in Excel.
    More information:
    SharePoint 2007 \ 2010 – PowerShell script to get SharePoint audit information:
    http://sharepointhivehints.wordpress.com/2014/04/30/sharepoint-2007-2010-powershell-script-to-get-sharepoint-audit-information/
    Best Regards
    Zhengyu Guo
    TechNet Community Support
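
    The filtering step can also be scripted instead of done in Excel. A hedged sketch in Python, assuming the exported CSV has Occurred, Event and DocLocation columns (the actual column names depend on the export script used):

    import csv

    TARGET_LIBRARY = "/sites/TeamSite/Shared Documents/"   # hypothetical library path

    with open("audit_export.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Keep only events whose item location falls under the target library.
            if row.get("DocLocation", "").startswith(TARGET_LIBRARY):
                print(row["Occurred"], row["Event"], row["DocLocation"])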

  • I need the Log Report for the Data which I am uploading from SAP R/3.

    Hi All,
    I am on BI 7.0 with Support Package 20.
    I need a log report for the data I am uploading from SAP R/3.
    I extract the data from R/3 into a BI 7.0 DSO, where I map the GL accounts to FS items. In the transformation I have written a routine on the FS Item InfoObject: I check the GL code against a Z table for the FS item.
    I capture the FS item from the Z table and update the FS Item InfoObject with it.
    Now I need to stop the data upload if I do not find the GL code in the Z table, and generate a report of all GL codes for which the FS item is not maintained in the Z table.
    Please suggest.
    Regards
    nilesh

    Hi.
    Add a field that you will use to identify whether the GL account of the record was found in the Z table or not. For example, create ZFOUND with length 1 and no text.
    In your routine, when you do the lookup, populate ZFOUND with X when you find a match (sy-subrc = 0) and leave it blank if you don't. Now create a report filtering on ZFOUND = <blank> and output the GL accounts. Those will be the ones not existing in the Z table, but coming in from your transactions.
    Regards
    Jacob
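
    A language-neutral sketch of the logic Jacob describes (the real implementation would be an ABAP transformation routine; the table contents and field names here are hypothetical):

    # Z table mapping GL account -> FS item, as loaded from the lookup table.
    z_table = {"0001000100": "FS_ITEM_A", "0001000200": "FS_ITEM_B"}

    def transform(record):
        fs_item = z_table.get(record["gl_account"])
        record["fs_item"] = fs_item or ""
        record["zfound"] = "X" if fs_item else ""   # blank ZFOUND feeds the error report
        return record

    records = [{"gl_account": "0001000100"}, {"gl_account": "0009999999"}]
    missing = [r["gl_account"] for r in map(transform, records) if r["zfound"] != "X"]
    print("GL accounts with no FS item maintained:", missing)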

  • DSC 8.6.1 wrong timestamps for logged data with Intel dual core

    Problem Description :
    Our LV/DSC 8.6.1 application uses shared variables to log data to Citadel. It runs fine on many similar computers at many companies, but on one particular Intel dual-core computer the data in the Citadel DB has strangely shifting timestamps. Changing the BIOS to start up using a single CPU fixes the problem. Could we possibly set only certain NI process(es) to single-CPU instead (but which?)? The old DSCEngine.exe in LV/DSC 7 had to be run single-CPU; hadn't these kinds of issues been fixed by LV 8.6.1? What about LV 2009, anybody know? Or is it a problem in the OS or hardware, below the NI line?
    This seems similar to an old issue with time synch server problems for AMD processors (Knowledge Base Document ID 4BFBEIQA):
    http://digital.ni.com/public.nsf/allkb/1EFFBED34FFE66C2862573D30073C329 
    Computer info:
    - Dell desktop
    - Win XP Pro sp3
    - 2 G RAM
    - 1.58 GHz Core 2 Duo
    - LV/DSC 8.6.1 (Pro dev)
    - DAQmx, standard instrument control device drivers, serial i/o
    (Nothing else installed; OS and LV/DSC were re-installed to try to fix the problem, no luck)
    Details: 
    A test logged data at 1 Hz, with these results: for 10-30 seconds or so, the timestamps were correct. Then the timestamps were compressed/shifted, with multiple points each second. At perfectly regular 1-minute intervals, the timestamps would be correct again. This pattern repeats, and when the data is graphed it looks like regular 1-second-interval points, then denser points, then no points until the next minute (not ON the minute, e.g. 12:35:00, but after a minute, e.g. 12:35:24, 12:36:24, 12:37:24...). Occasionally (but rarely), restarting the PC would produce accurate timestamps for several minutes running, but then the pattern would reappear in the middle of logging, with no changes made.
    Test info: 
    - shared variable configured with logging enabled
    - data changing by much more than the deadband
    - new value written by Datasocket Write at a steady 1 Hz
    - historic data retrieved by Read Traces
    - Distributed System Manager shows correct and changing values continuously as they are written

    Meg K. B. , 
    It sounds like you are experiencing Time Stamp Counter (TSC) drift, as mentioned in the KBs for the AMD multi-core processors. However, according to the Wikipedia article on TSCs, on the Intel Core 2 Duo the "time-stamp counter increments at a constant rate... Constant TSC behavior ensures that the duration of each clock tick is uniform and supports the use of the TSC as a wall clock timer even if the processor core changes frequency." This seems to suggest that you are not seeing the issue mentioned in the KBs.
    Can you provide the exact model of the Core 2 Duo processor that you are using?
    Ben Sisney
    FlexRIO V&V Engineer
    National Instruments
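
    A small sketch of the kind of check Meg's test implies. Given a list of timestamps (in seconds) retrieved from the logged trace, flag intervals that deviate from the known 1 Hz write rate:

    def flag_irregular_intervals(timestamps, expected_dt=1.0, tol=0.1):
        # Compare consecutive timestamps against the expected write interval.
        for a, b in zip(timestamps, timestamps[1:]):
            dt = b - a
            if abs(dt - expected_dt) > tol:
                print(f"irregular interval ending at t={b:.3f}: dt={dt:.3f} s")

    # Example: correct for 3 s, then compressed stamps as described in the post.
    flag_irregular_intervals([0.0, 1.0, 2.0, 3.0, 3.2, 3.4, 3.6])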

  • How to see data for a particular date from an alert log file

    Hi Experts,
    I would like to know how I can see data for a particular date from alert_db.log in a Unix environment. I'm using Oracle 9i on Unix.
    Right now I'm using tail -500 alert_db.log > alert.txt and then viewing the whole thing. Is there an easier way to look at a particular date or time?
    Thanks
    Shaan

    Hi Jaffar,
    Here I have to pass the exact date and time. Is there any way to see records for, let's say, Nov 23 2007? Because when I used this:
    tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
    it's not working. Here is a sample of the log file:
    Mon Nov 26 21:42:43 2007
    Thread 1 advanced to log sequence 138
    Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Mon Nov 26 21:42:43 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 137
    Mon Nov 26 21:42:43 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 137
    ARC1: Unable to archive log 1 thread 1 sequence 137
    Log actively being archived by another process
    Mon Nov 26 21:42:43 2007
    ARCH: Beginning to archive log 1 thread 1 sequence 137
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137
    .dbf'
    ARCH: Completed archiving log 1 thread 1 sequence 137
    Mon Nov 26 21:42:44 2007
    Thread 1 advanced to log sequence 139
    Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Mon Nov 26 21:42:44 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 138
    ARC0: Beginning to archive log 3 thread 1 sequence 138
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138
    .dbf'
    Mon Nov 26 21:42:44 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 138
    ARCH: Unable to archive log 3 thread 1 sequence 138
    Log actively being archived by another process
    Mon Nov 26 21:42:45 2007
    ARC0: Completed archiving log 3 thread 1 sequence 138
    Mon Nov 26 21:45:12 2007
    Starting control autobackup
    Mon Nov 26 21:45:56 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0033'
    handle 'c-2861328927-20071126-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Tue Nov 27 21:23:50 2007
    Starting control autobackup
    Tue Nov 27 21:30:49 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071127-00'
    Tue Nov 27 21:30:57 2007
    ARC1: Evaluating archive log 2 thread 1 sequence 139
    ARC1: Beginning to archive log 2 thread 1 sequence 139
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139
    .dbf'
    Tue Nov 27 21:30:57 2007
    Thread 1 advanced to log sequence 140
    Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Tue Nov 27 21:30:57 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 139
    ARCH: Unable to archive log 2 thread 1 sequence 139
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARC1: Completed archiving log 2 thread 1 sequence 139
    Tue Nov 27 21:30:58 2007
    Thread 1 advanced to log sequence 141
    Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Tue Nov 27 21:30:58 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 140
    ARCH: Beginning to archive log 1 thread 1 sequence 140
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140
    .dbf'
    Tue Nov 27 21:30:58 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 140
    ARC1: Unable to archive log 1 thread 1 sequence 140
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARCH: Completed archiving log 1 thread 1 sequence 140
    Tue Nov 27 21:33:16 2007
    Starting control autobackup
    Tue Nov 27 21:34:29 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0205'
    handle 'c-2861328927-20071127-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Wed Nov 28 21:43:31 2007
    Starting control autobackup
    Wed Nov 28 21:43:59 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-00'
    Wed Nov 28 21:44:08 2007
    Thread 1 advanced to log sequence 142
    Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Wed Nov 28 21:44:08 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 141
    ARCH: Beginning to archive log 3 thread 1 sequence 141
    Wed Nov 28 21:44:08 2007
    ARC1: Evaluating archive log 3 thread 1 sequence 141
    ARC1: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141
    .dbf'
    Wed Nov 28 21:44:08 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 141
    ARC0: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    ARCH: Completed archiving log 3 thread 1 sequence 141
    Wed Nov 28 21:44:09 2007
    Thread 1 advanced to log sequence 143
    Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Wed Nov 28 21:44:09 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 142
    ARCH: Beginning to archive log 2 thread 1 sequence 142
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142
    .dbf'
    Wed Nov 28 21:44:09 2007
    ARC0: Evaluating archive log 2 thread 1 sequence 142
    ARC0: Unable to archive log 2 thread 1 sequence 142
    Log actively being archived by another process
    Wed Nov 28 21:44:09 2007
    ARCH: Completed archiving log 2 thread 1 sequence 142
    Wed Nov 28 21:44:36 2007
    Starting control autobackup
    Wed Nov 28 21:45:00 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Thu Nov 29 21:36:44 2007
    Starting control autobackup
    Thu Nov 29 21:42:53 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0206'
    handle 'c-2861328927-20071129-00'
    Thu Nov 29 21:43:01 2007
    Thread 1 advanced to log sequence 144
    Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Thu Nov 29 21:43:01 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 143
    ARCH: Beginning to archive log 1 thread 1 sequence 143
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143
    .dbf'
    Thu Nov 29 21:43:01 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 143
    ARC1: Unable to archive log 1 thread 1 sequence 143
    Log actively being archived by another process
    Thu Nov 29 21:43:02 2007
    ARCH: Completed archiving log 1 thread 1 sequence 143
    Thu Nov 29 21:43:03 2007
    Thread 1 advanced to log sequence 145
    Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Thu Nov 29 21:43:03 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 144
    ARCH: Beginning to archive log 3 thread 1 sequence 144
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144
    .dbf'
    Thu Nov 29 21:43:03 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 144
    ARC0: Unable to archive log 3 thread 1 sequence 144
    Log actively being archived by another process
    Thu Nov 29 21:43:03 2007
    ARCH: Completed archiving log 3 thread 1 sequence 144
    Thu Nov 29 21:49:00 2007
    Starting control autobackup
    Thu Nov 29 21:50:14 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071129-01'
    Thanks
    Shaan
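
    (Note that in the sample above the timestamp lines read like "Mon Nov 26 21:42:43 2007": the time sits between the day and the year, so the literal string " Nov 23 2007" can never match. grep "Nov 23" alone would match the timestamp lines, but not the untimestamped detail lines that follow them. A hedged sketch that prints every entry for a chosen date, carrying each timestamp's state forward over the lines after it:)

    target_day = "Nov 23"          # day to extract
    target_year = "2007"
    printing = False

    with open("alert_db.log") as log:
        for line in log:
            # A timestamp line starts with a weekday abbreviation, e.g.
            # "Mon Nov 26 21:42:43 2007"; it sets the state for what follows.
            if line[:3] in ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"):
                printing = target_day in line and target_year in line
            if printing:
                print(line, end="")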

  • I can't figure out how to log off of my daughter's iTunes account that has been loaded to my PC.  When I want to sync my iPhone, I get her data, not mine.


    Hi, Abril_Perez17.
    This may be related to a new feature embedded in iOS7 that shows all purchased music by default.  Go to Settings > Music, then turn off Show All Music.  See if the issue ceases once the feature has been disabled.  This information is located on page 63 of the user guide below. 
    iPhone User Guide
    Regards,
    Jason H. 

  • View RSOP data for logged on user that is not administrator

    When troubleshooting group policies I use GPResult and RSOP.msc a LOT! Since we started deploying Windows 7, I've been having the worst time trying to use these utilities.
    Normally, when a user is not getting policies, I can just run rsop.msc and see any error information as well as which policies have and have not applied. In Windows 7 I am prompted for an admin password when I run RSOP. That would be fine, but RSOP then gathers data for the administrator; I need to see the data for the logged-on user. The only way I've been able to work around this so far is to add the user to the local admin group, run rsop and gpresult, and then remove them from the admin group when I'm done.
    This seems silly to me. Can anyone tell me how to see RSOP and GPResult data as the USER instead of the admin?
    Also, please do not chime in telling me to run RSOP in planning mode, as that only tells me what is supposed to happen, not what is actually happening on the system.

    Hi,
    Based on my tests and research, it is not possible to use RSOP.msc as a standard user, and using "run as administrator" while logged in as the user still doesn't work.
    It is necessary to log in as an administrator and run rsop.msc. It's a by-design behavior.
    Thank you for your understanding.
    Regards,
    Leo Huang
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • How to show specific data for user on redirected page once they logged in

    I am fairly new, but have a general understanding of Dreamweaver. Using CS3 or CS4, how do you get the page the user is redirected to after logging in to show only that user's data? I have created the login page and it works fine, but I don't know what needs to be on the redirected page for it to show only the data for the user who just logged in.
    I am not very good with the coding part, so any help with that would also be appreciated.
    Thank you all.

    I should be able to understand it if explained.
    As for scripting, I believe it's PHP; at least that's what I created the login page with.
    As for user-specific information: basically their account information.
    As for the database, I have Dreamweaver linked to a MySQL database. It pulls from 2 tables:
    1st table: ID, user name, password, account number
    2nd table: ID, account number, name, address, etc.
    So basically, when a user logs in, I want it to redirect them to a page which then shows only that user's data. If I can link the data it pulls up by the account number, that would be ideal.
    Thank you, Murry
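
    A hedged sketch of the pattern in Python (the actual Dreamweaver page would be PHP, but the idea is the same: store the account number in the session at login, then filter every query on the redirected page by it; table and column names follow Murry's description, and the plain-text password check is for illustration only):

    import sqlite3

    def login(conn, username, password):
        # Look up the account number at login and keep it in the "session".
        row = conn.execute(
            "SELECT account_number FROM users WHERE username = ? AND password = ?",
            (username, password),
        ).fetchone()
        return {"account_number": row[0]} if row else None

    def account_page(conn, session):
        # The redirected page selects only rows for the session's account number.
        return conn.execute(
            "SELECT name, address FROM accounts WHERE account_number = ?",
            (session["account_number"],),
        ).fetchall()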

  • Create min(date) for Customer in BMM layer

    Hi guys,
    I need help in creating a first-order date for a customer. I want to do this in the BMM layer and use it directly in reports.
    How should I create this?
    Any help Appreciated.

    I think it was you who sent me an email with a similar question.
    If you are doing this in the BMM, you need to understand the schema. Assuming you have fact, day, and customer tables, joined as a star:
    You need to create a metric on the fact as min(day.date) and set its content tab to the customer level. For this you might have to map the day dimension to the fact: in the fact source properties, add the day table.
    Once you have done that, just pull the min-date metric into Answers, run it, and check the physical SQL, which should look like:
    select min(day.date), customer.cust_name
    from fact, day, customer
    where fact.day_key = day.day_key          -- join columns are illustrative
      and fact.cust_key = customer.cust_key
    group by customer.cust_name
    Hope this helps; if it does, please mark the answer.

  • Min date for each month from list

    Hi all,
    I need to select only the minimal date for each month from this sample query:
    select date '2011-01-04' as adate from dual union all
    select date '2011-01-05' as adate from dual union all
    select date '2011-01-06' as adate from dual union all
    select date '2011-02-01' as adate from dual union all
    select date '2011-02-02' as adate from dual union all
    select date '2011-02-03' as adate from dual union all
    select date '2011-10-03' as adate from dual union all
    select date '2011-10-04' as adate from dual union all
    select date '2011-10-05' as adate from dual
    So the result should be:
    04.01.2011
    01.02.2011
    03.10.2011
    How do I perform it?

    WITH dates
         AS (SELECT DATE '2011-01-04' AS adate FROM DUAL
             UNION ALL
             SELECT DATE '2011-01-05' AS adate FROM DUAL
             UNION ALL
             SELECT DATE '2011-01-06' AS adate FROM DUAL
             UNION ALL
             SELECT DATE '2011-02-01' AS adate FROM DUAL
             UNION ALL
             SELECT DATE '2011-02-02' AS adate FROM DUAL
             UNION ALL
             SELECT DATE '2011-02-03' AS adate FROM DUAL
             UNION ALL
             SELECT DATE '2011-10-03' AS adate FROM DUAL
             UNION ALL
             SELECT DATE '2011-10-04' AS adate FROM DUAL
             UNION ALL
             SELECT DATE '2011-10-05' AS adate FROM DUAL)
    SELECT TO_CHAR (MIN (adate), 'DD.MM.YYYY') adate
      FROM dates
      GROUP BY to_char(adate, 'YYYY.MM')
    which returns:
    ADATE
    03.10.2011
    01.02.2011
    04.01.2011
