CHECKPOINT_TIME column in V$DATAFILE

Hello,
What does the CHECKPOINT_TIME column in v$datafile really mean? I understand that a checkpoint records the SCN at the time it occurs. Does this time represent the time when DBWR launches the write process on the datafiles?
Please confirm or correct.
R
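For what it's worth, the relationship is easy to observe with a sketch like the following (needs a privileged account; best tried on a quiet test instance):

```sql
-- CHECKPOINT_TIME / CHECKPOINT_CHANGE# record the last checkpoint
-- stamped into the controlfile entry for each datafile.
SELECT file#, checkpoint_change#, checkpoint_time FROM v$datafile;

ALTER SYSTEM CHECKPOINT;   -- force a full checkpoint

-- Both columns should now have advanced for every online datafile.
SELECT file#, checkpoint_change#, checkpoint_time FROM v$datafile;
```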

I thought aggregating BYTES from v$datafile would give me accurate info. I was basing that on the following link, which says v$datafile holds the current size.
http://download.oracle.com/docs/cd/B12037_01/server.101/b10755/dynviews_1057.htm#sthref3063
If that is not correct and, as you said, it includes allocated empty space, then I would have to use the query below, but my first preference is v$datafile because of its simplicity.
SELECT x$index_history.tablespace_name, x$index_history.owner,
       x$index_history.index_name, x$index_history.clustering_factor,
       x$index_history.leaf_blocks, x$index_history.blevel,
       y$segment_index.next_extent, y$segment_index.extents,
       y$segment_index.segment_type, y$segment_index.bytes index_bytes,
       e$table_history.tablespace_name table_space_name,
       e$table_history.owner table_owner, e$table_history.table_name,
       z$segment_table.next_extent table_next_extent,
       z$segment_table.extents table_extents,
       z$segment_table.segment_type segment_type,
       z$segment_table.bytes table_bytes, SYSDATE
FROM   dba_indexes  x$index_history,
       dba_segments y$segment_index,
       dba_segments z$segment_table,
       dba_tables   e$table_history
WHERE  x$index_history.table_name = e$table_history.table_name
AND    y$segment_index.segment_name = x$index_history.index_name
AND    y$segment_index.tablespace_name = x$index_history.tablespace_name
AND    y$segment_index.owner = x$index_history.owner
AND    y$segment_index.segment_type = 'INDEX'
AND    x$index_history.owner NOT IN ('SYS', 'SYSTEM')
AND    z$segment_table.segment_name = e$table_history.table_name
AND    z$segment_table.tablespace_name = e$table_history.tablespace_name
AND    z$segment_table.owner = e$table_history.owner
AND    z$segment_table.segment_type = 'TABLE'
AND    e$table_history.owner NOT IN ('SYS', 'SYSTEM')
AND    e$table_history.num_rows > 500000;
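If the underlying goal is allocated segment space rather than physical file size, a much simpler aggregate over DBA_SEGMENTS may be enough (a sketch; it counts only extents allocated to segments, unlike V$DATAFILE.BYTES, which is the whole file including free space):

```sql
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS used_mb
FROM   dba_segments
GROUP  BY tablespace_name;
```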
Thanks,
Rich

Similar Messages

  • Looking for datafile creation date

    DB version: 11.2 / Solaris 10
    We use OMF for our datafiles stored in ASM.
    I was asked to create a 20gb tablespace. We don't create datafiles above 10g. So, I did this.
    CREATE TABLESPACE FMT_DATA_UAT DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K SEGMENT SPACE MANAGEMENT AUTO;
ALTER TABLESPACE FMT_DATA_UAT ADD DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off;
Later it turned out that the schema will hold only 7gb worth of data, so I wanted to reduce the size of the second file using the ALTER DATABASE DATAFILE ... RESIZE command. But I don't want to resize (reduce) the first datafile, created by the CREATE TABLESPACE command. Since OMF in ASM does not use descriptive names like
+DATA/orcl/datafile/fmt_data_uat01.dbf
+DATA/orcl/datafile/fmt_data_uat02.dbf
it is difficult to find which file was created first.
And there is no create_date column in DBA_DATA_FILES. There isn't a create_date column in v$datafile either.
    SQL > select file_name from dba_data_Files where tablespace_name = 'FMT_DATA_UAT';
    FILE_NAME
    +DATA/orcl/datafile/fmt_data_uat.1415.792422709
    +DATA/orcl/datafile/fmt_data_uat.636.792422811
    SQL > select name, CHECKPOINT_TIME, LAST_TIME, FIRST_NONLOGGED_TIME, FOREIGN_CREATION_TIME
         from v$datafile where name like '+DATA/orcl/datafile/fmt_data_uat%';
    NAME                                                    CHECKPOINT_TIME      LAST_TIME            FIRST_NONL FOREIGN_CREATION_TIM
    +DATA/orcl/datafile/fmt_data_uat.1415.792422709         27 Aug 2012 18:55:06
    +DATA/orcl/datafile/fmt_data_uat.636.792422811          27 Aug 2012 18:55:06
SQL >
The alert log doesn't show the file names either.
    CREATE TABLESPACE FMT_DATA_UAT DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K SEGMENT SPACE MANAGEMENT AUTO
    Mon Aug 27 13:25:37 2012
    Completed: CREATE TABLESPACE FMT_DATA_UAT DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K SEGMENT SPACE MANAGEMENT AUTO
    Mon Aug 27 13:26:51 2012
    ALTER TABLESPACE FMT_DATA_UAT ADD DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off
    Mon Aug 27 13:27:10 2012
    Thread 1 advanced to log sequence 70745 (LGWR switch)
      Current log# 8 seq# 70745 mem# 0: +DATA/orcl/onlinelog/group_8.1410.787080847
      Current log# 8 seq# 70745 mem# 1: +FRA/orcl/onlinelog/group_8.821.787080871
    Mon Aug 27 13:27:13 2012
    Archived Log entry 123950 added for thread 1 sequence 70744 ID 0x769b5f42 dest 1:
    Mon Aug 27 13:27:21 2012
    Completed: ALTER TABLESPACE FMT_DATA_UAT ADD DATAFILE '+DATA' SIZE 10g AUTOEXTEND Off
    Mon Aug 27 13:28:16 2012

There isn't a create_date column in v$datafile either.
Did you check CREATION_TIME?
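A sketch of that check (the file-name pattern is taken from the post above; CREATION_TIME and CREATION_CHANGE# are documented columns of V$DATAFILE):

```sql
-- The latest CREATION_TIME identifies the file added by the later
-- ALTER TABLESPACE ... ADD DATAFILE.
SELECT file#, name, creation_time, creation_change#
FROM   v$datafile
WHERE  name LIKE '+DATA/orcl/datafile/fmt_data_uat%'
ORDER  BY creation_time;
```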

  • Loading only interested columns to table

    Hi,
I created an external table that includes all of the columns in the data file. I want to load only the columns I am interested in instead of all of them. I looked at the OTN documents and saw that the position of columns can be chosen. But I want, for example, to load columns 2, 56 and 67 of the data file into the external table.
    Example data file is :
    12,23,4324,32,1,DSA,23F,DSF32,FD,32DF
    112,23,432H4,3HJ2,1HR,DRSA,23F,D4SF32,FD,323DF
    125,Y23,46324,32MM,1H,DSA,2R3F,D3SF32,F2D,312DF
    External table should be :
    ID ( first col of datafile ) NAME ( 7th col of datafile ) LOCATION ( 8 th col of datafile )
    12 23F DSF32
    112 23F D4SF32
    125 2R3F D3SF32
How can I do this? Can you suggest any tactic or clue?
    Thanks for responses.

    Hello,
Thanks for the fast response. But I have a question. I have not tried it yet, but I am wondering what POSITION (1:4) and POSITION (6:25) mean, and why you didn't use this clause in the other column specifications. Can you explain in detail? Also, I had not mentioned that column lengths in the datafile are not fixed; for example, in line 1 the first column's length is 8, and in line 2 the first column's length is 20. Can your example handle this situation?
POSITION is for fixed-length data, and in this example I wanted to load only 2 fields, so I specified POSITION for only those 2 fields. In your case the data is not fixed length, so using POSITION won't help and would result in an error or an ambiguous data load. So we will use variable-length fields, but the data should not be bigger than the table column length.
The other option is to use sqlldr, where you can use FILLER if you want to skip a column.
    #emp.dat
    1232,AAA,OOOOOOOOOOOOOOOOOOOO,AB
    12,23232232323232323,232,B
DROP TABLE ext_tab PURGE;
CREATE TABLE ext_tab (
   empno  CHAR (4),
   ename  CHAR (20),
   job    CHAR (20),
   deptno CHAR (2)
)
ORGANIZATION EXTERNAL
   ( TYPE oracle_loader DEFAULT DIRECTORY EXPORT_DIR ACCESS PARAMETERS (
   RECORDS DELIMITED BY NEWLINE BADFILE 'bad_%a_%p.bad' LOGFILE
   'log_%a_%p.log' FIELDS TERMINATED BY ',' MISSING FIELD VALUES ARE NULL
   REJECT ROWS WITH ALL NULL FIELDS ( empno, ename,
    job, deptno ) ) LOCATION ( 'emp.dat' ) );
SELECT *
FROM ext_tab;
Regards

  • How to load selected column with sql loader

    Hi all
I want to load only a few columns from a datafile, not all of them, and I don't know how to do that with SQL*Loader.
I know we can use POSITION, but the data is not fixed length.
    I'm working with Oracle 11g and Linux OS.
    Here is an example of my data file and table.
    Data file is and the field is separated by | :
    3418483|VOU|20120609090114|555208363|0|2858185502059|1000|0||
    3418484|SR|20120609090124|551261956|0|4146314127759|200000|0||
    SQL> desc TBL1
    Name                                      Null?    Type
    CTYPE                                              VARCHAR2(5)
    BDATE                                              DATE
    PARTNUM                                             VARCHAR2(60)
    SERIALNO                                           NUMBER
    FVALUE                                             NUMBER
    I want to have:
    SQL> select * from TBL1
    CTYPE     BDATE          PARTNUM          SERIALNO          FVALUE
    VOU     09/06/2012     555208363     2858185502059          1000
SR     09/06/2012     551261956     4146314127759          200000
Thank you.

    look at FILLER
    http://www.orafaq.com/wiki/SQL*Loader_FAQ#Can_one_skip_certain_columns_while_loading_data.3F
--add sample: a complete control file would look like this
-- (the INFILE name is illustrative; the fields are separated by |)
LOAD DATA
INFILE 'sample.dat'
INTO TABLE tbl1
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
(
  num1 FILLER,
  ctype,
  bdate "to_date(:bdate, 'YYYYMMDDHH24MISS')",
  PARTNUM,
  num2 FILLER,
  SERIALNO,
  FVALUE,
  num3 FILLER
)
Edited by: AlexAnd on Jun 9, 2012 4:29 AM

  • Initial Size of Datafile

    Hi
How do I find the size of a datafile, its free bytes and its used bytes? Also, I created a datafile long back and would like to know from which view I can find its initial size, as it is now autoextending.
    Thanks

    Navneet wrote:
    Hi Aman,
When is the "Create_bytes" column in V$Datafile populated?
Usually it is 0.
Regards,
Navneet
Navneet,
I am not sure how it can be 0. It's a dynamic view and is populated immediately. Let's do the same test again; I shall drop and recreate the tablespace from scratch.
    SQL> select * from V$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    SQL> drop tablespace test_ts including contents and datafiles;
    Tablespace dropped.
    SQL> select * from V$tablespace;
           TS# NAME                           INC BIG FLA ENC
             0 SYSTEM                         YES NO  YES
             1 SYSAUX                         YES NO  YES
             2 UNDOTBS1                       YES NO  YES
             4 USERS                          YES NO  YES
             3 TEMP                           NO  NO  YES
             6 EXAMPLE                        YES NO  YES
             7 FLBTS                          YES NO  YES
    7 rows selected.
    SQL> create tablespace test_ts datafile 'd:\test.dbf' size 2m ;
    Tablespace created.
    SQL> select bytes,blocks,autoextensible , user_bytes,user_blocks from dba_data_files where tablespace_name='TEST_TS';
         BYTES     BLOCKS AUT USER_BYTES USER_BLOCKS
       2097152        256 NO     2031616         248
    SQL> select create_bytes from V$datafile;
    CREATE_BYTES
               0
               0
               0
               0
       104857600
        20971520
         2097152
    7 rows selected.
    SQL> select create_bytes from V$datafile where
      2
    SQL> select * from V$tablespace;
           TS# NAME                           INC BIG FLA ENC
             0 SYSTEM                         YES NO  YES
             1 SYSAUX                         YES NO  YES
             2 UNDOTBS1                       YES NO  YES
             4 USERS                          YES NO  YES
             3 TEMP                           NO  NO  YES
             6 EXAMPLE                        YES NO  YES
             7 FLBTS                          YES NO  YES
             9 TEST_TS                        YES NO  YES
    8 rows selected.
    SQL> select create_bytes from V$datafile where ts#=9;
    CREATE_BYTES
         2097152
SQL>
You can see that it got populated immediately. I don't have any other version with me at the moment, but I guess it will be the same. If that's not coming up and the dynamic view is not updated, it's probably a bug or something I am not aware of.
    HTH
    Aman....

  • Is it possible to alter IDENTITY column?

Dear colleagues,
    I am running SAP HANA 1.00.72.00.388670 in HANA Cloud. I am trying to migrate the data from SQL Server to SAP HANA.
    The IDENTITY value support is enabled. I am trying to import data from csv file to a table.
    The table has such structure:
         CREATE COLUMN TABLE "TEST_IMPORT" ("ID" integer  NOT NULL primary key  generated always as IDENTITY, "VALUE" real NULL);
The piece of data looks like this:
    ID,VALUE
    1,1
    2,2
    3,3
    4,4
    5,5
    6,6
    7,7
    8,8
When I start an import process, it asks me to define the dependencies, as it requires mapping the columns from my datafile to the columns in the table. Certainly I cannot map the "ID" field, because it is generated automatically.
I've tried another example.
    Created a table:
         CREATE COLUMN TABLE "TEST_IMPORT" ("ID" integer  NOT NULL primary key, "VALUE" real NULL);
    Imported data from previous example.
After that I tried to alter the table to assign the IDENTITY property. However, I've got an error:
         "Could not execute 'alter table "TEST_IMPORT" ALTER ("ID" integer NOT NULL primary key generated always as IDENTITY)' in 45 ms 753 µs .
         SAP DBTech JDBC: [7] (at 34): feature not supported: cannot modify column to identity column: ID: line 1 col 35 (at pos 34) "
    How can I make that import?

    Hi there
    IDENTITY columns are supported as of Rev. 74. Check my blog post about them Quick note on IDENTITY column in SAP HANA.
    - Lars
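Since an existing column cannot be altered into an IDENTITY column, a common workaround is to recreate the table and copy the data across (a sketch; assumes Rev. 74+, and uses GENERATED BY DEFAULT so the imported IDs can be supplied explicitly):

```sql
CREATE COLUMN TABLE "TEST_IMPORT_NEW" (
  "ID" INTEGER NOT NULL PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
  "VALUE" REAL NULL);

-- BY DEFAULT (unlike ALWAYS) lets the INSERT supply the existing IDs
INSERT INTO "TEST_IMPORT_NEW" ("ID", "VALUE")
  SELECT "ID", "VALUE" FROM "TEST_IMPORT";

-- then swap the tables:
-- DROP TABLE "TEST_IMPORT";
-- RENAME TABLE "TEST_IMPORT_NEW" TO "TEST_IMPORT";
```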

  • RMAN Backup Restore from tape

    Hello Gurus
We have a lvl0 RMAN database backup restored from tape, but due to some reasons we could not apply recovery on it. On checking the information in the controlfile, it identified only the system file and no other file. So we recreated the controlfile and decided to recover using the backup controlfile option (until cancel, until change or until time). However, to my surprise, when I listed the checkpoint_change# and checkpoint_time columns of v$datafile_header, I found differences in timestamps ranging up to 24 hrs -- which meant we had to apply 24 hrs of archive logs to bring all dbfs to a consistent state (before it allowed us to open the db) -- so far so good, it worked that way.
My question is this: if I trigger a lvl0 RMAN backup at, say, 4PM on 27th July and it finishes at 4PM on 28th July (as I am pushing around 1 TB of data directly to tape), what is the consistency timestamp of this backup? In other words, is this backup available for a restore to 4PM 27th July? Or is it available only for a restore to 4PM 28th July and later?
Isn't it that RMAN works at block level, so when I fire a backup it should give me all the dbfs as of that timestamp?
    Appreciate your inputs/comments.

When taking an online backup with RMAN, the datafile backups are always inconsistent, because the archived redo logs generated during the backup are needed to make them consistent: http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/intro005.htm#sthref70.
A full online backup can only be used to restore the database to a point in time after the backup end: if a backup starts at 4PM on 27th July and finishes at 4PM on 28th July, you can use it to restore the database to any point in time after 28-JUL 4PM.
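A sketch of what that means in practice (the timestamp is illustrative): to use the 27-28 July backup, the SET UNTIL time has to fall after the backup completed, and the archived logs spanning the backup window must be available:

```sql
RUN {
  SET UNTIL TIME "TO_DATE('28-JUL-2012 18:00:00', 'DD-MON-YYYY HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;   -- applies the archive logs generated while the backup ran
}
ALTER DATABASE OPEN RESETLOGS;
```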

  • Handling "NOT NULL" in forms(4.5)

Hi everybody,
I am working in Forms 4.5, Oracle 7.3 on Windows NT. I had some columns which were defined as NOT NULL in the database. When I tried loading data into them I got an error: there were no values for those columns in the datafile. So I removed the NOT NULL constraints from the columns and loaded the data. That was the old data. Now I should change the columns back to NOT NULL, as the new data has to follow them. How will I handle the old data in the form? I want the NOT NULL columns to be displayed as blanks instead of showing an error, i.e. when a primary key value is entered (if it is already present in the old data) then a blank should be displayed in those NOT NULL columns. How will I do this?
Any help is appreciated.
Thanks in advance,
satish

If you genuinely have some rows in your table where there are no values for particular columns, then you should leave the columns null-enabled. This sounds to be the case here - a column does not have to be defined as NOT NULL in order for data to be entered; you will be able to enter your new data for these columns even though they are not defined as NOT NULL.
In any case, while you have existing rows in the table with null values for these columns, the SQL engine will not allow you to change the columns to NOT NULL.
Hope this helps,
Stuart Housden
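The last point is easy to demonstrate; a sketch (the table and column names are hypothetical):

```sql
-- With NULLs present, enabling NOT NULL fails:
ALTER TABLE emp MODIFY (comm NOT NULL);
-- ORA-02296: cannot enable (...) - null values found

-- Backfill a placeholder value first; then the constraint can be added:
UPDATE emp SET comm = 0 WHERE comm IS NULL;
ALTER TABLE emp MODIFY (comm NOT NULL);
```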

  • How to write control file below data

    Hi,
    Here is my table
    create table sample1 (name varchar2(5), num number(2));
    sample.txt(Datafile)
    vikram12
    sharma13
    sonu 14
    Here is my control file
    load data
    infile 'C:\sample.txt'
    into table sample1
    fields terminated by "," optionally enclosed by '"'          
    ( name char(5), num char(2) )
But it is rejecting all the records. Please let me know where I went wrong.
    Thanks in advance

fields terminated by "," optionally enclosed by '"'
As per your control file, the fields have to be separated by commas, but in your datafile they are separated by a space. Either change the delimiter in your control file or change your datafile.
    Using Comma Seperated file,
    sample.txt(Datafile)
    vikram,12
    sharma,13
    sonu, 14
    load data
    infile 'C:\sample.txt'
    into table sample1
    fields terminated by ","
    optionally enclosed by '"'
    ( name char(5),
    num char(2)
)
Note:
1. You have to increase the length of the name column, as your datafile has values longer than 5 characters. The first and second records will be rejected with NAME VARCHAR2(5).

  • User parameters in Sqlloader

Hi, I need to pass some dynamic values to my ctl file, which I would either register as a concurrent program in Apps or call from a shell script.
And I need that value entered into some column while the data is loading.
Please let me know the way.

Unfortunately, if those parameters are not a column of your datafile, you cannot do that directly.
What you can do instead, in your shell script (I assume you will be creating custom shell scripts, since you need to pass parameters), is call an update procedure which updates your custom table based on the request id just after your SQL*Loader run is done, i.e. after the SQLLDR command.
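One common way to implement the "custom shell script" part is to generate the control file from a template before each run, so the dynamic value lands in a CONSTANT field (a sketch; all file, table and column names are hypothetical):

```shell
#!/bin/sh
# The template contains a placeholder; in a real setup this file already exists.
printf 'batch_id CONSTANT "@BATCH_ID@"\n' > load_template.ctl

# The dynamic value -- e.g. the concurrent request id, normally passed as $1.
BATCH_ID="REQ-12345"

# Substitute the placeholder to produce the control file for this run.
sed "s/@BATCH_ID@/${BATCH_ID}/" load_template.ctl > load_run.ctl

# sqlldr userid=apps/pwd control=load_run.ctl data=mydata.dat log=load_run.log
cat load_run.ctl
```

CONSTANT is a standard SQL*Loader field clause, so every loaded row gets the substituted value without it appearing in the datafile.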

  • SqlLoader Help!

I'm trying to load a comma-delimited file through SQL*Loader. The problem is with the date columns. The datafile has yyyy-mm-dd hh:mi:ss, but the time part is always 00:00:00. In the SQL*Loader control file this is what I have given:
EFFECTIVE_DATE "to_date(:EFFECTIVE_DATE, 'YYYY-MM-DD HH:MI:SS')",
But I get an error that HH has to be between 1 and 12. I tried omitting the time part in the script, but it didn't help.
    Any suggestions,
    Thx

I tried changing the syntax as per the manual,
EFFECTIVE_DATE DATE "YYYY-MM-DD HH:MI:SS"
but am getting the same error - hour must be between 1 and 12.
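For the record: HH in an Oracle date mask is the 12-hour format, which rejects an hour of 00; the 24-hour mask HH24 accepts it. A sketch of the corrected field definitions:

```sql
EFFECTIVE_DATE "to_date(:EFFECTIVE_DATE, 'YYYY-MM-DD HH24:MI:SS')",
-- or, using the DATE field type:
-- EFFECTIVE_DATE DATE "YYYY-MM-DD HH24:MI:SS",
```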

  • Unknown command beginning "recover da..." - rest of line ignored

    Hi.
I have one corrupted datafile (the status column in v$datafile has the value 'RECOVER' for it, and I get ORA-01113, ORA-01110 when I try to bring it online). It seems I need to manually recover the datafile via the 'RECOVER DATAFILE' command. However, when I issue it in SQL*Plus I get: 'unknown command beginning "recover da..." - rest of line ignored.'.
    I have Oracle 8.0.5 Database.
    Thx

Is this a production server? If yes: "Don't touch it and get away from that keyboard", in Tom's words.
Get Oracle Support involved if you are not familiar with the recover command.
In case it is your testing environment, you can take the corrupt datafile offline and restore it from the latest backup; for this you need all archive logs since the date of the last backup.
After restoring the datafile, use the recover command.
Go through the following link for step-by-step recovery:
http://www.iselfschooling.com/mc4articles/mc4recovery.htm
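A likely cause of the "unknown command" message itself: SQL*Plus did not support the RECOVER command until Oracle 8i, so on 8.0.5 recovery was usually run from Server Manager. A sketch of such a session (the datafile path is hypothetical):

```
$ svrmgrl
SVRMGR> connect internal
SVRMGR> recover datafile '/u01/oradata/PROD/users01.dbf';
SVRMGR> alter database datafile '/u01/oradata/PROD/users01.dbf' online;
```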

  • Does RMAN trigger checkpoint before a full backup?

    Hello,
    My test as follows:
    1 insert some data to a table
    2 backup full database using rman
    3 shutdown abort
    4 delete all datafiles(including redo logs)
    5 restore and recover database
    6 the inserted data can still be found.
    I think RMAN triggers checkpoint before a full backup,but I can't verify it,can someone help me?
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Thanks.

    Dear user!
As Mr. Baker already stated in his very friendly way, RMAN fires a checkpoint before a backup. You may verify it like this:
1.) Use v$datafile_header to get the checkpoint_time of an arbitrary datafile. In my example I've used datafile number 8.
SELECT file#, name, TO_CHAR(checkpoint_time, 'dd.mm.yyyy HH24:MI:SS') as checkpoint_time
from   v$datafile_header
where  file# = 8;
FILE#  NAME                                          CHECKPOINT_TIME
    8  /u01/app/oracle/oradata/ORCL/myts1000_01.dbf  17.06.2009 06:58:31
2.) Export your NLS_DATE_FORMAT environment variable like this:
set NLS_DATE_FORMAT='DD.MM.YYYY HH24:MI:SS'
3.) Start RMAN and back up datafile number 8:
BACKUP DATAFILE 8;
Starting backup at 17.06.2009 06:58:29
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
RMAN will show the time when it started the backup.
4.) Query v$datafile_header again like you did in step 1:
SELECT file#, name, TO_CHAR(checkpoint_time, 'dd.mm.yyyy HH24:MI:SS') as checkpoint_time
from   v$datafile_header
where  file# = 8;
FILE#  NAME                                          CHECKPOINT_TIME
    8  /u01/app/oracle/oradata/ORCL/myts1000_01.dbf  17.06.2009 06:58:31
As you can see, the last checkpoint_time of datafile 8 is close to the start time of the backup.
I hope that's what you wanted to know.
Yours sincerely
Florian W.

  • Doubts abt understanding units

    Hi everyone,
    In what unit is this output?
    select sum(bytes)/1024/1024 from dba_data_files;
    Please guide me
    Thanks.

In what unit is this output?
select sum(bytes)/1024/1024 from dba_data_files;
In dba_data_files there are three size columns:
1) bytes - the current allocated size, i.e. the size you gave when creating the datafile
2) user_bytes - the portion of bytes usable for user data
3) maxbytes - the limit the file can autoextend to
Your query sums BYTES, so it gives you the total size of the datafiles in MB.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_3083.htm
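The three columns can be reported side by side, already converted to MB (a sketch using only documented DBA_DATA_FILES columns):

```sql
SELECT tablespace_name,
       ROUND(SUM(bytes)      / 1024 / 1024) AS allocated_mb,
       ROUND(SUM(user_bytes) / 1024 / 1024) AS usable_mb,
       ROUND(SUM(maxbytes)   / 1024 / 1024) AS autoextend_limit_mb
FROM   dba_data_files
GROUP  BY tablespace_name;
```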

  • Checkpoint time differs from file listing timestamp

    Hi,
I just came across the timestamps of the datafiles of an Oracle 10.2.0.4 database. Some of the files continuously reflect the current time when I do an "ls -l".
    15:44:20 SQL> select name,checkpoint_time from v$datafile;
    NAME CHECKPOINT_TIME
    /aora2/oradata/aPRD/system01.dbf 01-JUN-2011 15:22:02
    */aora5/oradata/aPRD/undotbs01.dbf 01-JUN-2011 15:22:02*
    */aora2/oradata/aPRD/sysaux01.dbf 01-JUN-2011 15:22:02*
    /aora2/oradata/aPRD/users01.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/d_sm01a.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/d_sm02a.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/d_md01a.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/d_lg01a.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/d_lg02a.dbf 01-JUN-2011 15:22:02
    /aora3/oradata/aPRD/x_lg01a.dbf 01-JUN-2011 15:22:02
    /aora3/oradata/aPRD/x_lg02a.dbf 01-JUN-2011 15:22:02
    /aora3/oradata/aPRD/x_sm01a.dbf 01-JUN-2011 15:22:02
    /aora3/oradata/aPRD/x_sm02a.dbf 01-JUN-2011 15:22:02
    /aora3/oradata/aPRD/x_md01a.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/tools01a.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/d_lg03a.dbf 01-JUN-2011 15:22:02
    /aora3/oradata/aPRD/x_lg03a.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/d_lg04a.dbf 01-JUN-2011 15:22:02
    /aora3/oradata/aPRD/x_lg04a.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/d_lg05a.dbf 01-JUN-2011 15:22:02
    /aora2/oradata/aPRD/x_lg05a.dbf 01-JUN-2011 15:22:02
    21 rows selected.
    db $ ls -lrt /aor*/oradata/aPRD/*
    -rw-r----- 1 oracle dba 2097160192 May 31 22:00 /aora4/oradata/aPRD/temp01.dbf
    -rw-r----- 1 oracle dba 104858112 Jun 01 12:32 /aora5/oradata/aPRD/redo03a.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 12:32 /aora5/oradata/aPRD/redo02b.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 12:32 /aora4/oradata/aPRD/redo03c.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 12:32 /aora4/oradata/aPRD/redo02a.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 12:32 /aora1/oradata/aPRD/redo03b.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 12:32 /aora1/oradata/aPRD/redo02c.rdo
    -rw-r----- 1 oracle dba 1258299392 Jun 01 15:22 /aora3/oradata/aPRD/x_sm02a.dbf
    -rw-r----- 1 oracle dba 157294592 Jun 01 15:22 /aora3/oradata/aPRD/x_sm01a.dbf
    -rw-r----- 1 oracle dba 147456 Jun 01 15:22 /aora3/oradata/aPRD/x_md01a.dbf
    -rw-r----- 1 oracle dba 314679296 Jun 01 15:22 /aora3/oradata/aPRD/x_lg04a.dbf
    -rw-r----- 1 oracle dba 106496 Jun 01 15:22 /aora3/oradata/aPRD/x_lg03a.dbf
    -rw-r----- 1 oracle dba 524296192 Jun 01 15:22 /aora3/oradata/aPRD/x_lg02a.dbf
    -rw-r----- 1 oracle dba 838868992 Jun 01 15:22 /aora3/oradata/aPRD/x_lg01a.dbf
    -rw-r----- 1 oracle dba 104996864 Jun 01 15:22 /aora2/oradata/aPRD/x_lg05a.dbf
    -rw-r----- 1 oracle dba 26222592 Jun 01 15:22 /aora2/oradata/aPRD/users01.dbf
    -rw-r----- 1 oracle dba 2097291264 Jun 01 15:22 /aora2/oradata/aPRD/tools01a.dbf
    -rw-r----- 1 oracle dba 1887444992 Jun 01 15:22 /aora2/oradata/aPRD/d_sm02a.dbf
    -rw-r----- 1 oracle dba 314580992 Jun 01 15:22 /aora2/oradata/aPRD/d_sm01a.dbf
    -rw-r----- 1 oracle dba 147456 Jun 01 15:22 /aora2/oradata/aPRD/d_md01a.dbf
    -rw-r----- 1 oracle dba 104996864 Jun 01 15:22 /aora2/oradata/aPRD/d_lg05a.dbf
    -rw-r----- 1 oracle dba 209821696 Jun 01 15:22 /aora2/oradata/aPRD/d_lg04a.dbf
    -rw-r----- 1 oracle dba 106496 Jun 01 15:22 /aora2/oradata/aPRD/d_lg03a.dbf
    -rw-r----- 1 oracle dba 1048584192 Jun 01 15:22 /aora2/oradata/aPRD/d_lg02a.dbf
    -rw-r----- 1 oracle dba 1363156992 Jun 01 15:22 /aora2/oradata/aPRD/d_lg01a.dbf
    -rw-r-----    1 oracle   dba       545267712 Jun 01 15:39 /aora2/oradata/aPRD/system01.dbf
    -rw-r-----    1 oracle   dba      1598038016 Jun 01 15:43 /aora5/oradata/aPRD/undotbs01.dbf
    -rw-r-----    1 oracle   dba      1468014592 Jun 01 15:43 /aora2/oradata/aPRD/sysaux01.dbf
    -rw-r-----    1 oracle   dba       104858112 Jun 01 15:44 /aora5/oradata/aPRD/redo01c.rdo
    -rw-r-----    1 oracle   dba       104858112 Jun 01 15:44 /aora4/oradata/aPRD/redo01b.rdo
    -rw-r-----    1 oracle   dba       104858112 Jun 01 15:44 /aora1/oradata/aPRD/redo01a.rdo
    -rw-r-----    1 oracle   dba         5586944 Jun 01 15:44 /aora5/oradata/aPRD/control03.ctl
    -rw-r-----    1 oracle   dba         5586944 Jun 01 15:44 /aora4/oradata/aPRD/control02.ctl
    -rw-r-----    1 oracle   dba         5586944 Jun 01 15:44 /aora1/oradata/aPRD/control01.ctl
Why do the same few files constantly change their timestamps, even though I'm just doing an "ls -l"? No DMLs or DDLs are being run on the database at the time.

    hi ckpt,
It is the same as the checkpoint_time in v$datafile. I just bounced the database.
    SQL> select name,checkpoint_time from v$datafile;
    NAME CHECKPOINT_TIME
    /aora2/oradata/aPRD/system01.dbf 01-JUN-2011 16:10:00
    /aora5/oradata/aPRD/undotbs01.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/sysaux01.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/users01.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/d_sm01a.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/d_sm02a.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/d_md01a.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/d_lg01a.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/d_lg02a.dbf 01-JUN-2011 16:10:00
    /aora3/oradata/aPRD/x_lg01a.dbf 01-JUN-2011 16:10:00
    /aora3/oradata/aPRD/x_lg02a.dbf 01-JUN-2011 16:10:00
    /aora3/oradata/aPRD/x_sm01a.dbf 01-JUN-2011 16:10:00
    /aora3/oradata/aPRD/x_sm02a.dbf 01-JUN-2011 16:10:00
    /aora3/oradata/aPRD/x_md01a.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/tools01a.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/d_lg03a.dbf 01-JUN-2011 16:10:00
    /aora3/oradata/aPRD/x_lg03a.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/d_lg04a.dbf 01-JUN-2011 16:10:00
    /aora3/oradata/aPRD/x_lg04a.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/d_lg05a.dbf 01-JUN-2011 16:10:00
    /aora2/oradata/aPRD/x_lg05a.dbf 01-JUN-2011 16:10:00
    21 rows selected.
    SQL> select checkpoint_time from v$datafile_header;
    CHECKPOINT_TIME
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    01-JUN-2011 16:10:00
    21 rows selected.
    $ ls -lrt /aora*/oradata/aPRD/*
    -rw-r----- 1 oracle dba 2097160192 May 31 22:00 /aora4/oradata/aPRD/temp01.dbf
    -rw-r----- 1 oracle dba 104858112 Jun 01 16:09 /aora5/oradata/aPRD/redo03a.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 16:09 /aora5/oradata/aPRD/redo02b.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 16:09 /aora4/oradata/aPRD/redo03c.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 16:09 /aora4/oradata/aPRD/redo02a.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 16:09 /aora1/oradata/aPRD/redo03b.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 16:09 /aora1/oradata/aPRD/redo02c.rdo
    -rw-r----- 1 oracle dba 1258299392 Jun 01 16:10 /aora3/oradata/aPRD/x_sm02a.dbf
    -rw-r----- 1 oracle dba 157294592 Jun 01 16:10 /aora3/oradata/aPRD/x_sm01a.dbf
    -rw-r----- 1 oracle dba 147456 Jun 01 16:10 /aora3/oradata/aPRD/x_md01a.dbf
    -rw-r----- 1 oracle dba 314679296 Jun 01 16:10 /aora3/oradata/aPRD/x_lg04a.dbf
    -rw-r----- 1 oracle dba 106496 Jun 01 16:10 /aora3/oradata/aPRD/x_lg03a.dbf
    -rw-r----- 1 oracle dba 524296192 Jun 01 16:10 /aora3/oradata/aPRD/x_lg02a.dbf
    -rw-r----- 1 oracle dba 838868992 Jun 01 16:10 /aora3/oradata/aPRD/x_lg01a.dbf
    -rw-r----- 1 oracle dba 104996864 Jun 01 16:10 /aora2/oradata/aPRD/x_lg05a.dbf
    -rw-r----- 1 oracle dba 26222592 Jun 01 16:10 /aora2/oradata/aPRD/users01.dbf
    -rw-r----- 1 oracle dba 2097291264 Jun 01 16:10 /aora2/oradata/aPRD/tools01a.dbf
    -rw-r----- 1 oracle dba 1887444992 Jun 01 16:10 /aora2/oradata/aPRD/d_sm02a.dbf
    -rw-r----- 1 oracle dba 314580992 Jun 01 16:10 /aora2/oradata/aPRD/d_sm01a.dbf
    -rw-r----- 1 oracle dba 147456 Jun 01 16:10 /aora2/oradata/aPRD/d_md01a.dbf
    -rw-r----- 1 oracle dba 104996864 Jun 01 16:10 /aora2/oradata/aPRD/d_lg05a.dbf
    -rw-r----- 1 oracle dba 209821696 Jun 01 16:10 /aora2/oradata/aPRD/d_lg04a.dbf
    -rw-r----- 1 oracle dba 106496 Jun 01 16:10 /aora2/oradata/aPRD/d_lg03a.dbf
    -rw-r----- 1 oracle dba 1048584192 Jun 01 16:10 /aora2/oradata/aPRD/d_lg02a.dbf
    -rw-r----- 1 oracle dba 1363156992 Jun 01 16:10 /aora2/oradata/aPRD/d_lg01a.dbf
    -rw-r----- 1 oracle dba 1468014592 Jun 01 16:21 /aora2/oradata/aPRD/sysaux01.dbf
    -rw-r----- 1 oracle dba 1598038016 Jun 01 16:24 /aora5/oradata/aPRD/undotbs01.dbf
    -rw-r----- 1 oracle dba 104858112 Jun 01 16:25 /aora5/oradata/aPRD/redo01c.rdo
    -rw-r----- 1 oracle dba 104858112 Jun 01 16:25 /aora4/oradata/aPRD/redo01b.rdo
    -rw-r----- 1 oracle dba 545267712 Jun 01 16:25 /aora2/oradata/aPRD/system01.dbf
    -rw-r----- 1 oracle dba 104858112 Jun 01 16:25 /aora1/oradata/aPRD/redo01a.rdo
    -rw-r----- 1 oracle dba 5586944 Jun 01 16:25 /aora5/oradata/aPRD/control03.ctl
    -rw-r----- 1 oracle dba 5586944 Jun 01 16:25 /aora4/oradata/aPRD/control02.ctl
    -rw-r----- 1 oracle dba 5586944 Jun 01 16:25 /aora1/oradata/aPRD/control01.ctl
