Alert log message and RMAN

Hi,
Following are the messages from the alert log file and the trace files.
This error was encountered during an RMAN backup, but the backup itself completed fine.
I didn't get any errors in the RMAN log file.
Can anybody tell me what the problem is and what the solution would be?
----------- alert log file --------------------
Thu Mar 9 02:14:13 2006
Errors in file /app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_950328.trc:
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
Thu Mar 9 02:14:13 2006
Errors in file /app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_1769540.trc:
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
Thu Mar 9 02:14:13 2006
Errors in file /app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_1392642.trc:
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
----------- trace files -------------------
/app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_950328.trc
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
ORACLE_HOME = /app/oracle/product/9.2.0
System name: AIX
Node name: sof016
Release: 1
Version: 5
Machine: 000CEF9C4C00
Instance name: SOFPDWB4
Redo thread mounted by this instance: 1
Oracle process number: 24
Unix process pid: 950328, image: oracle@sof016 (TNS V1-V3)
*** 2006-03-09 02:14:13.053
*** SESSION ID:(77.60408) 2006-03-09 02:14:13.040
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
/app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_1769540.trc
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
ORACLE_HOME = /app/oracle/product/9.2.0
System name: AIX
Node name: sof016
Release: 1
Version: 5
Machine: 000CEF9C4C00
Instance name: SOFPDWB4
Redo thread mounted by this instance: 1
Oracle process number: 25
Unix process pid: 1769540, image: oracle@sof016 (TNS V1-V3)
*** 2006-03-09 02:14:13.311
*** SESSION ID:(53.5721) 2006-03-09 02:14:13.310
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4
/app/oracle/admin/SOFPDWB4/udump/sofpdwb4_ora_1392642.trc
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
ORACLE_HOME = /app/oracle/product/9.2.0
System name: AIX
Node name: sof016
Release: 1
Version: 5
Machine: 000CEF9C4C00
Instance name: SOFPDWB4
Redo thread mounted by this instance: 1
Oracle process number: 27
Unix process pid: 1392642, image: oracle@sof016 (TNS V1-V3)
*** 2006-03-09 02:14:13.549
*** SESSION ID:(59.52476) 2006-03-09 02:14:13.548
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 4

Hello, it looks like RMAN could not find some of the archived logs when backing up the database, hence the errors. Try running "crosscheck archivelog all" and see.
-Sri
<< Symptoms >>
An archivelog backup using RMAN failed with the following errors:
RMAN-03002: failure of backup command at 08/18/2005 14:51:16
RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
ORA-19625: error identifying file /arch/arch2/1_266_563489673.dbf
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: A file or directory in the path name does not exist.
Additional information: 3
<<Cause>>
RMAN gets the list of archivelog files that need to be backed up from the v$archived_log view.
RMAN cannot find the archivelog file in the archivelog destination, so it cannot continue the backup because the file does not exist.
<<Solution>>
Check whether the archivelog exists in the log archive destination. If the file was moved to some other location, restore it to its original location and then run the RMAN backup.
Otherwise:
RMAN> crosscheck archivelog all;
After the crosscheck, take the RMAN backup.
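For reference, a minimal RMAN sequence along those lines might look like the following (a sketch only; "delete expired" removes the repository records for logs that crosscheck marked as missing, so run it only once you are sure those logs are genuinely gone or are backed up elsewhere):
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
RMAN> backup archivelog all;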

Similar Messages

  • RMAN Alert Log Message: ALTER SYSTEM ARCHIVE LOG

    I created a new database on Oracle 10.2.0.4 and am now seeing "ALTER SYSTEM ARCHIVE LOG" in the alert log, but only when the online RMAN backup runs:
    Wed Aug 26 21:52:03 2009
    ALTER SYSTEM ARCHIVE LOG
    Wed Aug 26 21:52:03 2009
    Thread 1 advanced to log sequence 35 (LGWR switch)
    Current log# 2 seq# 35 mem# 0: /u01/app/oracle/oradata/aatest/redo02.log
    Current log# 2 seq# 35 mem# 1: /u03/oradata/aatest/redo02a.log
    Wed Aug 26 21:53:37 2009
    ALTER SYSTEM ARCHIVE LOG
    Wed Aug 26 21:53:37 2009
    Thread 1 advanced to log sequence 36 (LGWR switch)
    Current log# 3 seq# 36 mem# 0: /u01/app/oracle/oradata/aatest/redo03.log
    Current log# 3 seq# 36 mem# 1: /u03/oradata/aatest/redo03a.log
    Wed Aug 26 21:53:40 2009
    Starting control autobackup
    Control autobackup written to DISK device
         handle '/u03/exports/backups/aatest/c-2538018370-20090826-00'
    I am not issuing a log switch command. The RMAN commands I am running are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/exports/backups/aatest/%F';
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u03/exports/backups/aatest/%d_%U';
    BACKUP DATABASE PLUS ARCHIVELOG;
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-2';
    I do not see this message on any other 10.2.0.4 instances. Has anyone seen this, and if so, why is it showing in the log?
    Thank you,
    Curt Swartzlander

    There's no problem with the log switches. Please refer to the documentation for more information on the "PLUS ARCHIVELOG" syntax:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup003.htm#sthref377
    Adding BACKUP ... PLUS ARCHIVELOG causes RMAN to do the following:
    1. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.
    2. Runs BACKUP ARCHIVELOG ALL. Note that if backup optimization is enabled, then RMAN skips logs that it has already backed up to the specified device.
    3. Backs up the rest of the files specified in the BACKUP command.
    4. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.
    5. Backs up any remaining archived logs generated during the backup.
    This guarantees that datafile backups taken during the command are recoverable to a consistent state.
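    In other words, BACKUP DATABASE PLUS ARCHIVELOG behaves roughly like the following manual sequence (a sketch, not the exact internal implementation):
    RMAN> sql 'alter system archive log current';
    RMAN> backup archivelog all;
    RMAN> backup database;
    RMAN> sql 'alter system archive log current';
    RMAN> backup archivelog all;
    That is why the extra ALTER SYSTEM ARCHIVE LOG entries and log switches show up in the alert log whenever the RMAN job runs.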

  • Defer log shipping and RMAN-08137 message when doing archivelog backup

    Hello,
    In a Primary & Data Guard scenario in which we set log shipping to the standby to DEFER during a high-impact process on the primary, we receive the message "RMAN-08137: WARNING: archive log not deleted as it is still needed" when doing the archivelog backup with RMAN.
    Is this the expected behaviour due to the DEFER status towards the Data Guard standby? Or should we be able to delete the logs even in this scenario, meaning we have to look deeper to find our problem?
    Thanks in advance for your help.

    Hello,
    Had a look at v$archived_log and found the archivelogs that were not deleted; their state is:
    DEST_ID=1 
    STANDBY_DEST=NO
    ARCHIVED=YES
    APPLIED=NO
    As http://docs.oracle.com/cd/B12037_01/server.101/b10755/dynviews_1015.htm says, they are all local, they were archived, and no apply is needed for them since they are not destined for the standby database.
    So why can they still not be deleted? How can we check whether they have been processed by a Streams process?
    Regards
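    For reference, a quick way to see which archived logs the database still considers unapplied per destination is v$archived_log (a sketch; filter further on the sequences in question):
    select dest_id, standby_dest, sequence#, archived, applied, deleted, name
      from v$archived_log
     where applied = 'NO'
     order by dest_id, sequence#;
    With log shipping deferred, the logs never reach the standby, so no standby-destination row can become APPLIED=YES; that is one common reason a deletion policy based on "applied on standby" keeps holding them.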

  • 10.2.0.4 Streams Alert log message

    Good afternoon everyone,
    I wanted to get some input on a message that is occasionally logged to the alert log of the owning queue table (QT) instance in our RAC environment. In summary, we have two RAC environments and perform bi-directional replication between them. The message in question is usually logged in the busier environment.
    krvxenq: Failed to acquire logminer dictionary lock
    During that time it seems that the LogMiner process is not mining for changes, which could lead to latency between the two environments.
    I have looked at AWR reports covering the times these errors occur, and it is common to see the following procedure running:
    BEGIN dbms_capture_adm_internal.enforce_checkpoint_retention(:1, :2, :3); END;
    This procedure, which I assume purges capture checkpoints that exceed the retention duration, takes between 10 and 20 minutes to run. The table that stores the checkpoints is logmnr_restart_ckpt$. I suspect the issue could be caused by the size of logmnr_restart_ckpt$, which is 12 GB in our environment. A purge job needs to be scheduled to shrink the table.
    If anyone has seen anything similar in her or his environment please offer any additional knowledge you may have about the topic.
    Thank you,

    There are two possibilities: either there really is too much load on the table LOGMNR_RESTART_CKPT$
    due to the heavy system load, or there is a bad Streams query.
    At this stage I would not like to accept the 'we-can't-cope-with-the-load' explanation without having investigated the more common issues.
    Let's assume optimistically that one or more Streams internal SQL statements do not run as intended. This is our best bet.
    The fact that the table LOGMNR_RESTART_CKPT$ is big makes it a perfect suspect.
    If there is a problem with one of the queries involving this table, we can find it using sys.col_usage$:
    identify which columns are used and from there jump to the SQL_IDs, checking the plans:
    set linesize 132 head on pagesize 33
    col obj format a35 head "Table name"
    col col1 format a26 head "Column"
    col equijoin_preds format 9999999 head "equijoin|Preds" justify c
    col nonequijoin_preds format 9999999 head "non|equijoin|Preds" justify c
    col range_preds format 999999 head "Range|Pred" justify c
    col equality_preds format 9999999 head "Equality|Preds" justify c
    col like_preds format 999999 head "Like|Preds" justify c
    col null_preds format 999999 head "Null|Preds" justify c
    select r.name ||'.'|| o.name "obj" , c.name "col1",
          equality_preds, equijoin_preds, nonequijoin_preds, range_preds,
          like_preds, null_preds, to_char(timestamp,'DD-MM-YY HH24:MI:SS') "Date"
    from sys.col_usage$ u, sys.obj$ o, sys.col$ c, sys.user$ r
      where o.obj# = u.obj#    and o.name = 'LOGMNR_RESTART_CKPT$'    -- and $AND_OWNER
        and c.obj# = u.obj#    and
            c.col# = u.intcol# and
            o.owner# = r.user# and
           (u.equijoin_preds > 0 or u.nonequijoin_preds > 0)
       order by 4 desc;
    For each column predicate, check for full table scans:
    define COL_NAME='col to check'
    col PLAN_HASH_VALUE for 999999999999 head 'Plan hash |value' justify c
         col id for 999 head 'Id'
         col child for 99 head 'Ch|ld'
         col cost for 999999 head 'Oper|Cost'
         col tot_cost for 999999 head 'Plan|cost' justify c
         col est_car for 999999999 head 'Estimed| card' justify c
         col cur_car for 999999999 head 'Avg seen| card' justify c
         col ACC for A3 head 'Acc|ess'
         col FIL for A3 head 'Fil|ter'
         col OTHER for A3 head 'Oth|er'
         col ope for a30 head 'Operation'
         col exec for 999999 head 'Execution'
         break on PLAN_HASH_VALUE on sql_id on child
         select distinct
           a.PLAN_HASH_VALUE, a.id , a.sql_id, a.CHILD_NUMBER child , a.cost, c.cost tot_cost,
           a.cardinality est_car,  b.output_rows/decode(b.EXECUTIONS,0,1,b.EXECUTIONS) cur_car,
           b.EXECUTIONS exec,
           case when length(a.ACCESS_PREDICATES) > 0 then ' Y' else ' N' end ACC,
           case when length(a.FILTER_PREDICATES) > 0 then ' Y' else ' N' end FIL,
           case when length(a.projection) > 0 then ' Y' else ' N' end OTHER,
            a.operation||' '|| a.options ope
    from
        v$sql_plan  a,
        v$sql_plan_statistics_all b ,
        v$sql_plan_statistics_all c
    where
            a.PLAN_HASH_VALUE =  b.PLAN_HASH_VALUE
        and a.sql_id = b.sql_id
        and a.child_number = b.child_number
        and a.id = b.id
        and a.PLAN_HASH_VALUE=  c.PLAN_HASH_VALUE (+)
         and a.sql_id = c.sql_id
         and a.child_number = c.child_number and c.id=0
        and  a.OBJECT_NAME = 'LOGMNR_RESTART_CKPT$'       -- $AND_A_OWNER
        and   (instr(a.FILTER_PREDICATES,'&&COL_NAME') > 0
            or instr(a.ACCESS_PREDICATES,'&&COL_NAME') > 0
            or instr(a.PROJECTION, '&&COL_NAME') > 0)
    order by sql_id, PLAN_HASH_VALUE, id;
    Now, for each query with a FULL table scan, check the predicate and
    see whether adding an index would help. Another possibility:
    One of the structures associated with Streams, which you don't necessarily see, may have remained too big.
    Many of these structures are only accessed by Streams maintenance using full table scans.
    They are supposed to remain small tables. But after a big Streams crash they inflate, and even once
    the problem is solved, these supposed-to-be-small tables remain as the crash left them: too big.
    As a consequence, the full table scans intended to run on small tables suddenly loop over stretches of empty blocks.
    This is very frequent in big Streams environments. You can find these structures, if any exist, using this query.
    Note that a long run time for this query implies big structures. You will have to assess whether the number of rows is realistic given the number of blocks.
    break on parent_table
    col type format a30
    col owner format a16
    col index_name head 'Related object'
    col parent_table format a30
    prompt
    prompt To shrink a lob associated to a queue type : alter table AQ$_<queue_table>_P modify lob(USER_DATA) ( shrink space  ) cascade ;
    prompt
    select a.owner,a.table_name parent_table,index_name ,
           decode(index_type,'LOB','LOB INDEX',index_type) type,
          (select blocks from dba_segments where segment_name=index_name and owner=b.owner) blocks
       from
          dba_indexes  a,
          ( select owner, queue_table table_name from dba_queue_tables
               where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
             union
             select owner, queue_table table_name from dba_queue_tables
                    where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
         ) b
       where   a.owner=b.owner
           and a.table_name = b.table_name
           and a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
    union
    -- LOB Segment  for QT
    select a.owner,a.segment_name parent_table,l.segment_name index_name, 'LOB SEG('||l.column_name||')' type,
                      (select sum(blocks) from dba_segments where segment_name = l.segment_name ) blob_blocks
                 from dba_segments  a,
                      dba_lobs l,
                      ( select owner, queue_table table_name from dba_queue_tables
                               where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
                        union
                        select owner, queue_table table_name from dba_queue_tables
                                where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
                      ) b
                 where a.owner=b.owner and
                       a.SEGMENT_name = b.table_name  and
                       l.table_name = a.segment_name and
                       a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
    union
    -- LOB Segment of QT.._P
    select a.owner,a.segment_name parent_table,l.segment_name index_name, 'LOB SEG('||l.column_name||')',
           (select sum(blocks) from dba_segments where segment_name = l.segment_name ) blob_blocks
       from dba_segments  a,
              dba_lobs l,
              ( select owner, queue_table table_name from dba_queue_tables
                       where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
                union
                select owner, queue_table table_name from dba_queue_tables
                        where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%')
              ) b
       where a.owner=b.owner and
               a.SEGMENT_name = 'AQ$_'||b.table_name||'_P'  and
               l.table_name = a.segment_name and
               a.owner not like 'SYS%' and a.owner not like 'WMSYS%'
    union
    -- Related QT
    select a2.owner, a2.table_name parent_table,  '-' index_name , decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') type,
              case
                   when decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') = 'IOT TABLE'
                        then ( select sum(leaf_blocks) from dba_indexes where table_name=a2.table_name and owner=a2.owner)
                   when decode(nvl(a2.initial_extent,-1), -1, 'IOT TABLE','NORMAL') = 'NORMAL'
                        then (select blocks from dba_segments where segment_name=a2.table_name and owner=a2.owner)
               end blocks
       from dba_tables a2,
           ( select owner, queue_table table_name from dba_queue_tables
                     where recipients='SINGLE' and owner NOT IN ('SYSTEM') and (compatible LIKE '8.%' or compatible LIKE '10.%')
              union all
              select owner, queue_table table_name from dba_queue_tables
                     where recipients='MULTIPLE' and (compatible LIKE '8.1%' or compatible LIKE '10.%' )
           ) b2
       where
             a2.table_name in ( 'AQ$_'||b2.table_name ||'_T' , 'AQ$_'||b2.table_name ||'_S', 'AQ$_'||b2.table_name ||'_H' , 'AQ$_'||b2.table_name ||'_G' ,
                                'AQ$_'|| b2.table_name ||'_I'  , 'AQ$_'||b2.table_name ||'_C', 'AQ$_'||b2.table_name ||'_D', 'AQ$_'||b2.table_name ||'_P')
             and a2.owner not like 'SYS%' and a2.owner not like 'WMSYS%'
    union
    -- IOT Table normal
    select
             u.name owner , o.name parent_table, c.table_name index_name, 'RELATED IOT' type,
             (select blocks from dba_segments where segment_name=c.table_name and owner=c.owner) blocks
       from sys.obj$ o,
            user$ u,
            (select table_name, to_number(substr(table_name,14)) as object_id  , owner
                    from dba_tables where table_name like 'SYS_IOT_OVER_%'  and owner not like '%SYS') c
      where
              o.obj#=c.object_id
          and o.owner#=u.user#
          and obj# in (
               select to_number(substr(table_name,14)) as object_id from dba_tables where table_name like 'SYS_IOT_OVER_%'  and owner not like '%SYS')
    order by parent_table , index_name desc;
    "I hope it is one of the above case, otherwise thing may become more complicates.

  • Db_writer_processes alert log message

    Oracle 11.2.0.3 running on HP-UX Itanium RX8640 with 16 CPUs and OS B.11.31
    Upgraded to 11.2.0.3 last night and now I am receiving the following message in alert.log when starting the database:
    "NOTE: db_writer_processes has been changed from 4 to 1 due to NUMA requirements"
    Any thoughts on what this means?

    Is your system NUMA-enabled?
    In a NUMA-enabled box, the minimum number of DBWR processes is the number of processor groups; Oracle MUST start at least that many DBWRs no matter what value you set for the parameter.
    You seem to have the opposite case, in that Oracle is forcing it to 1. In this case, I'm led to believe that Oracle is perhaps mistakenly identifying your system as NUMA.
    Upload the results for this:
    select  a.ksppinm  "Parameter",
    b.ksppstvl "Session Value",
    c.ksppstvl "Instance Value"
    from x$ksppi a, x$ksppcv b, x$ksppsv c
    where a.indx = b.indx and a.indx = c.indx
    and a.ksppinm = '_db_block_numa';
    You may try to do this:
    - Set the following OS variable in your database OS owner user profile: DISABLE_NUMA = true
    - Set the DBWR count to 4 in the SPFILE and bounce the database (see the sketch below).
    - Verify whether the issue continues. At the OS level, run "ps -ef | grep dbw" to see how many DBWR processes the instance spawned.
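    For the second step, a minimal sketch of the parameter change (db_writer_processes is a static parameter, so it only takes effect after the bounce):
    SQL> alter system set db_writer_processes = 4 scope = spfile;
    SQL> shutdown immediate
    SQL> startup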

  • Production database Alert log Message

    Hi,
    I am using Oracle 10gR1 on Windows. Recently I created a physical standby database (sid=smtm) for my production database (sid=mtm). My production alert log file displays something like the following; please suggest what it means.
    Mon Mar 02 00:18:31 2009
    Private_strands 7 at log switch
    Thread 1 advanced to log sequence 35722
    Current log# 4 seq# 35722 mem# 0: D:\ORACLE\PRODUCT\10.1.0\ORADATA\MTM\REDO04.LOG
    Current log# 4 seq# 35722 mem# 1: D:\ORACLE\PRODUCT\10.1.0\ORADATA\MTM\REDO04_A.LOG
    Mon Mar 02 00:18:31 2009
    ARC1: Evaluating archive   log 2 thread 1 sequence 35721
    ARC1: Destination LOG_ARCHIVE_DEST_2 archival not expedited
    Committing creation of archivelog 'E:\ORACLE\MTM\ARCHIVES\MTM_945F37AC_1_35721_500044525.ARC'
    Invoking non-expedited destination LOG_ARCHIVE_DEST_2 thread 1 sequence 35721 host SMTM
    FAL[server, ARC1]: Begin FAL noexpedite archive (branch 500044525 thread 1 sequence 35721 dest SMTM)
    FAL[server, ARC1]: Complete FAL noexpedite archive (thread 1 sequence 35721 destination SMTM)
    Mon Mar 02 00:29:42 2009
    Private_strands 7 at log switch
    Thread 1 advanced to log sequence 35723
    Current log# 3 seq# 35723 mem# 0: D:\ORACLE\PRODUCT\10.1.0\ORADATA\MTM\REDO03.LOG
    Current log# 3 seq# 35723 mem# 1: D:\ORACLE\PRODUCT\10.1.0\ORADATA\MTM\REDO03_A.LOG
    Mon Mar 02 00:29:42 2009
    ARC1: Evaluating archive   log 4 thread 1 sequence 35722
    ARC1: Destination LOG_ARCHIVE_DEST_2 archival not expedited
    Committing creation of archivelog 'E:\ORACLE\MTM\ARCHIVES\MTM_945F37AC_1_35722_500044525.ARC'
    Invoking non-expedited destination LOG_ARCHIVE_DEST_2 thread 1 sequence 35722 host SMTM
    FAL[server, ARC1]: Begin FAL noexpedite archive (branch 500044525 thread 1 sequence 35722 dest SMTM)
    FAL[server, ARC1]: Complete FAL noexpedite archive (thread 1 sequence 35722 destination SMTM)
    Thanks

    Sorry, I have no Metalink account.
    On the production database there are no ORA- errors; the standby database alert log shows:
    Mon Mar 02 03:40:38 2009
    RFS[1]: No standby redo logfiles created
    RFS[1]: Archived Log: 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35728_500044525.ARC'
    Committing creation of archivelog 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35728_500044525.ARC'
    RFS[1]: No standby redo logfiles created
    RFS[1]: Archived Log: 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35729_500044525.ARC'
    Committing creation of archivelog 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35729_500044525.ARC'
    RFS[1]: No standby redo logfiles created
    RFS[1]: Archived Log: 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35730_500044525.ARC'
    Committing creation of archivelog 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35730_500044525.ARC'
    Mon Mar 02 04:29:14 2009
    RFS[1]: No standby redo logfiles created
    RFS[1]: Archived Log: 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35731_500044525.ARC'
    Committing creation of archivelog 'E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35731_500044525.ARC'
    Media Recovery Log
    ORA-279 signalled during: ALTER DATABASE RECOVER  standby database  ...
    Mon Mar 02 11:01:57 2009
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Mon Mar 02 11:01:57 2009
    Media Recovery Log E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35553_500044525.ARC
    ORA-279 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
    Mon Mar 02 11:02:05 2009
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Media Recovery Log E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35554_500044525.ARC
    ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
    Mon Mar 02 11:02:14 2009
    ALTER DATABASE RECOVER CONTINUE DEFAULT
    Media Recovery Log E:\ORACLE\PRODUCT\10.1.0\ARCHIVES\MTM_945F37AC_1_35555_500044525.ARC
    ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
    Regards
    Thanks for reply.

  • Question regarding alert log file and trace files

    What should the alert log file size be? When should it be deleted? And for how many days should user trace files be kept?
    Also, will anyone please tell me the importance of these files?
    Thanks

    This may help: http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/manproc.htm#sthref729
    There are a few discussions on it here:
    Re: Alert Log File
    alert log file contents viewing
    Re: how to read alert log file? is there any tool available?
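    As a side note, if you just want to see where the alert log and user trace files are written on a 9i/10g instance, a quick check (a sketch; these are the pre-11g parameter names):
    select name, value
      from v$parameter
     where name in ('background_dump_dest', 'user_dump_dest');
    The alert log lives under background_dump_dest and user trace files under user_dump_dest; the alert log can be archived or removed while the database is up, since Oracle recreates it on the next write.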

  • RAC Alert log message

    Dear All
    My database is running in a RAC configuration and the version is 10.1.0.5.
    Every day I come across this error in the alert log of both instances:
    Unable to restore resource manager plan to '':
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-00439: feature not enabled: Database resource manager
    Please help me if you have come across this error.

    Check metalink note 735798.1
    HTH...
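    As a side note, it is usually the scheduler maintenance windows that try to restore a resource manager plan when they open or close, which is what produces the "Unable to restore resource manager plan" message. A quick diagnostic sketch to see which plan each window references (assumes access to the DBA views):
    select window_name, resource_plan, enabled
      from dba_scheduler_windows;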

  • Updated WLCs show weird log messages and most APs do not associate

    Hi, I recently updated my 4402 WLC to the latest software version (7.0.98.0).
    At first this seemed to have worked fine. The WLCs rebooted fine, then the APs rebooted and upgraded their software images.
    All fine, or so it seemed.
    Then I went on to also upgrade to the latest emergency image version (5.2.157.0).
    After rebooting the WLCs, most APs won't associate again.
    Logs from WLCs shows a lot of messages like:
    Oct  7 20:11:38 wlc-1 WLC-1: *mmListen: Oct 07 22:11:38.857: %MM-3-INVALID_PKT_RECVD: mm_listen.c:6691 Received an invalid packet from 192.168.128.18. Source member:0.0.0.0. source member unknown.
    Oct  7 20:11:38 wlc-1 WLC-1: *mmListen: Oct 07 22:11:38.857: %OSAPI-5-OSAPI_INVALID_TIMER: timerlib.c:542 Failed to retrive timer.
    Oct  7 20:11:38 wlc-1 WLC-1: -Traceback:  105fbe18 102cb318 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:38 wlc-1 WLC-1:
    Oct  7 20:11:38 wlc-1 WLC-1: *mmListen: Oct 07 22:11:38.857: %MM-3-INVALID_PKT_RECVD: mm_listen.c:6691 Received an invalid packet from 192.168.128.18. Source member:0.0.0.0. source member unknown.
    Oct  7 20:11:38 wlc-1 WLC-1: *mmListen: Oct 07 22:11:38.857: %OSAPI-5-OSAPI_INVALID_TIMER: timerlib.c:542 Failed to retrive timer.
    Oct  7 20:11:38 wlc-1 WLC-1: -Traceback:  105fbe18 102cb318 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:38 wlc-1 WLC-1:
    Oct  7 20:11:39 wlc-1 WLC-1: *mmListen: Oct 07 22:11:39.749: %MM-3-INVALID_PKT_RECVD: mm_listen.c:6691 Received an invalid packet from 192.168.128.18. Source member:0.0.0.0. source member unknown.
    Oct  7 20:11:39 wlc-1 WLC-1: *mmListen: Oct 07 22:11:39.749: %OSAPI-5-OSAPI_INVALID_TIMER: timerlib.c:542 Failed to retrive timer.
    Oct  7 20:11:39 wlc-1 WLC-1: -Traceback:  105fbe18 102cb318 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:39 wlc-1 WLC-1:
    Oct  7 20:11:39 wlc-1 WLC-1: *mmListen: Oct 07 22:11:39.749: %MM-3-INVALID_PKT_RECVD: mm_listen.c:6691 Received an invalid packet from 192.168.128.18. Source member:0.0.0.0. source member unknown.
    Oct  7 20:11:39 wlc-1 WLC-1: *mmListen: Oct 07 22:11:39.749: %OSAPI-5-OSAPI_INVALID_TIMER: timerlib.c:542 Failed to retrive timer.
    Oct  7 20:11:39 wlc-1 WLC-1: -Traceback:  105fbe18 102cb318 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:39 wlc-1 WLC-1:
    Oct  7 20:11:40 wlc-1 WLC-1: *mmListen: Oct 07 22:11:40.749: %MM-3-INVALID_PKT_RECVD: mm_listen.c:6691 Received an invalid packet from 192.168.128.18. Source member:0.0.0.0. source member unknown.
    Oct  7 20:11:40 wlc-1 WLC-1: *mmListen: Oct 07 22:11:40.749: %OSAPI-5-OSAPI_INVALID_TIMER: timerlib.c:542 Failed to retrive timer.
    Oct  7 20:11:40 wlc-1 WLC-1: -Traceback:  105fbe18 102cb318 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:40 wlc-1 WLC-1:
    Oct  7 20:11:40 wlc-1 WLC-1: *mmListen: Oct 07 22:11:40.749: %MM-3-INVALID_PKT_RECVD: mm_listen.c:6691 Received an invalid packet from 192.168.128.18. Source member:0.0.0.0. source member unknown.
    Oct  7 20:11:40 wlc-1 WLC-1: *mmListen: Oct 07 22:11:40.749: %OSAPI-5-OSAPI_INVALID_TIMER: timerlib.c:542 Failed to retrive timer.
    Oct  7 20:11:40 wlc-1 WLC-1: -Traceback:  105fbe18 102cb318 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:40 wlc-1 WLC-1:
    Oct  7 20:11:40 wlc-1 WLC-1: *osapiReaper: Oct 07 22:11:40.905: %OSAPI-6-FILE_DOES_NOT_EXIST: osapi_file.c:348 File : /proc/755/stat does not exist.(errno 2)
    Oct  7 20:11:40 wlc-1 WLC-1: -Traceback:  105eaae4 105f4d44 105f7848 105fa648 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:40 wlc-1 WLC-1:
    Oct  7 20:11:43 wlc-1 WLC-1: *mmMobility: Oct 07 22:11:43.210: %OSAPI-5-OSAPI_INVALID_TIMER: timerlib.c:542 Failed to retrive timer.
    Oct  7 20:11:43 wlc-1 WLC-1: -Traceback:  105fbe18 102d8be0 102bc81c 102d5d20 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:43 wlc-1 WLC-1:
    Oct  7 20:11:43 wlc-1 WLC-1: *mmListen: Oct 07 22:11:43.210: %MM-3-INVALID_PKT_RECVD: mm_listen.c:6691 Received an invalid packet from 192.168.128.18. Source member:0.0.0.0. source member unknown.
    Oct  7 20:11:43 wlc-1 WLC-1: *mmListen: Oct 07 22:11:43.211: %OSAPI-5-OSAPI_INVALID_TIMER: timerlib.c:542 Failed to retrive timer.
    Oct  7 20:11:43 wlc-1 WLC-1: -Traceback:  105fbe18 102cb318 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:43 wlc-1 WLC-1:
    Oct  7 20:11:50 wlc-1 WLC-1: *osapiReaper: Oct 07 22:11:50.913: %OSAPI-6-FILE_DOES_NOT_EXIST: osapi_file.c:348 File : /proc/755/stat does not exist.(errno 2)
    Oct  7 20:11:50 wlc-1 WLC-1: -Traceback:  105eaae4 105f4d44 105f7848 105fa648 105f3ab0 10c0d250 111cd0cc
    Looking back a bit in the logs, it looks like this started after upgrading the software version. But after that first reload the APs came back and worked. Now they don't.
    The case seems to be the same with both my WLCs.
    What could have gone wrong?
    Please advise.

    Not sure which messages are concerning you the most...
    Regarding the message:
    Oct  7 20:11:39 wlc-1 WLC-1: *mmListen: Oct 07 22:11:39.749: %OSAPI-5-OSAPI_INVALID_TIMER: timerlib.c:542 Failed to retrive timer.
    Oct  7 20:11:39 wlc-1 WLC-1: -Traceback:  105fbe18 102cb318 105f3ab0 10c0d250 111cd0cc
    Oct  7 20:11:39 wlc-1 WLC-1:
    There is already a bug for it: CSCth64522
    And for:
    Oct  7 20:11:50 wlc-1 WLC-1: *osapiReaper: Oct 07 22:11:50.913: %OSAPI-6-FILE_DOES_NOT_EXIST: osapi_file.c:348 File : /proc/755/stat does not exist.(errno 2)
    Oct  7 20:11:50 wlc-1 WLC-1: -Traceback:  105eaae4 105f4d44 105f7848 105fa648 105f3ab0 10c0d250 111cd0cc
    Looks like it's matching CSCtf39550
    Both bug fixes should be included in the next 7.0 release and should not impact the WLC behavior.
    Hope this helps...

  • Alert log message : Load Indicator not supported by OS

    I am running Oracle 8.1.5 on Red Hat Linux 6.1 and I am getting the alert message
    "Load Indicator not supported by OS"
    Kindly help me urgently on the below email address
    [email protected]
    Thanks

    Try turning off the Multi-Threaded Server option (i.e. comment out the "mts" references in the init[SID].ora file). This appears to be an MTS-related bug.
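    For reference, a rough sketch of what the commented-out section of init[SID].ora might look like (the parameter values shown are placeholders, not your actual settings):
    # MTS disabled while investigating "Load Indicator not supported by OS"
    # mts_dispatchers = "(protocol=tcp)"
    # mts_servers     = 2
    # mts_max_servers = 10
    Restart the instance after the change so the parameters are re-read.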
    HTH

  • Strange error in /var/log/messages and /var/log/warn

    Hi,
    I get the following errors continuously:
    kernel: sg_write: data in/out 404/404 bytes for SCSI command 0xd--guessing data in;
    kernel: program java not setting count and/or reply_len properly
    Any idea?
    Ty
    Bye

    Hi,
    #h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
    Commenting out the above line in the inittab file would stop the messages, but would this have any adverse effect on the database? As it is a production server, I am taking my time to resolve it. Your suggestions are welcome. If there is no harm in commenting out the above line, then I will go ahead and comment it out.
    Thanks
    Jafar

  • Need to find the way to get the actual error message in the alert log.

    Hi,
    I have configured OEM 11g, and the monitored target versions range from 9i to 11g. My problem is that I have defined the metrics for monitoring the alert log contents, and OEM sends an alert if there is any error in the alert log, but it does not show the actual error message; it just shows the following:
    ============================
    Target Name=IDMPRD
    Target type=Database Instance
    Host=oidmprd01.ho.abc.com
    Occurred At=Dec 21, 2011 12:05:21 AM GMT+03:00
    Message=1 distinct types of ORA- errors have been found in the alert log.
    Metric=Generic Alert Log Error Status
    Metric value=1
    Severity=Warning
    Acknowledged=No
    Notification Rule Name=RULE_4_PROD_DATABASES
    Notification Rule Owner=SYSMAN
    ============================
    Is there any way to get the complete error details in the OEM alert itself?
    Regards
    DBA.

    You need to look at the Alert Log error messages, not the "status" messages. See doc http://docs.oracle.com/cd/E11857_01/em.111/e16285/oracle_database.htm#autoId2

  • Customize alerts on ALERT.log file? And another question

    We just set up Grid Control and have started to use it. So far, I have been very pleased with what I have seen.
    Just some questions though.
    Is there a way to customize the "Generic Alert Log Error Status"?
    Can we have exclusions for that alert? We are getting alerts for a bug that exists in one of our DBs. If we can somehow exclude this particular error, that would be great. Is that possible?
    Secondly, can you schedule checks on a database to check the state of the database? This is something I would like to do if possible.
    THanks.

    Go to targets - databases - <database>
    On this page on the left side, you'll see a heading "Diagnostic Summary"
    Then you'll see something like: Alert Log <date>
    Click on the date, go to the bottom of the page and click on "Generic Alert Log Error Monitoring Configuration"
    Here you can configure exactly what alert log messages you want.
    Yes, depending on what it is you want to do. Look at User Defined Metrics, Reports and possibly Jobs to see what fits best with what you want to do.

  • Errors appeared in alert log file should send an email

    Hi,
    I have one requirement to implement; as I am a new DBA, I don't know how to do this.
    Sendmail is configured on my Unix machine, but I don't know how to make the errors that appear in the alert log file get sent to the administrator by email.
    Daily, it has to check the alert log file for errors, and if any errors occur (ORA- errors or WARNING errors), it should send an email to the administrator.
    Please help me with how to do it.

    Hi,
    There are many methods for interrogating the alert log and sending e-mail. Here are my notes:
    http://www.dba-oracle.com/t_alert_log_monitoring_errors.htm
    - PL/SQL or Java with e-mail alert: http://www.dba-village.com/village/dvp_papers.PaperDetails?PaperIdA=2383
    - Shell script - Database-independent, and at the same level as the alert log file and the e-mail facility.
    - OEM - Too inflexible for complex alert log analysis rules.
    - SQL against the alert log - You can define the alert log file as an external table, detect messages with SQL, and then e-mail them (a sketch of this follows at the end of this reply).
    Because the alert log is a server-side flat file and because e-mail is also handled at the OS level, I like to use a server-side shell script. It's also far more robust than OEM, especially when combining and evaluating multiple alert log events. Jon Emmons has great notes on this:
    http://www.lifeaftercoffee.com/2007/12/04/when-to-use-shell-scripts/
    If you are on Windows, see here:
    http://www.dba-oracle.com/t_windows_alert_log_script.htm
    For UNIX, Linux, Jon Emmons has a great alert log e-mail script.
    Hope this helps . . . .
    Donald K. Burleson
    Oracle Press author
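    For reference, here is a minimal sketch of the external-table approach mentioned in the list above (the directory path and log file name are placeholders for your environment):
    create or replace directory bdump_dir as '/u01/app/oracle/admin/ORCL/bdump';
    create table alert_log_ext ( msg varchar2(4000) )
    organization external (
      type oracle_loader
      default directory bdump_dir
      access parameters (
        records delimited by newline
        nobadfile nodiscardfile nologfile
        fields terminated by '~'   -- a character unlikely to appear in the log, so each whole line lands in msg
        missing field values are null
        ( msg char(4000) )
      )
      location ('alert_ORCL.log')
    )
    reject limit unlimited;
    -- lines with Oracle errors, ready to be mailed by a scheduled job or shell script
    select msg from alert_log_ext where msg like '%ORA-%';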

  • DG Observer triggering SIGSEGV Address not mapped to object errors in alert log

    Hi,
    I've got a Data Guard configuration using two 11.2.0.3 single instance databases.  The configuration has been configured for automatic failover and I have an observer running on a separate box.
    This fast-start failover configuration has been in place for about a month, and in the last week numerous SIGSEGV (address not mapped to object) errors have been reported in the alert log. This is happening quite frequently (every 4-5 minutes or so).
    The corresponding trace files show the process triggering the error coming from the observer.
    Has anyone experienced this problem?  I'm at my wits end trying to figure out how to fix the configuration to eliminate this error.
    I must also note that even though this error is occurring a lot, it doesn't seem to be affecting any of the database functionality.
    Help?
    Thanks in advance.
    Beth

    Hi. The following is the alert log message, the trace file generated, and the current values of the Data Guard configuration. In addition, as part of my research, I attempted to apply patch 12615660, which did not take care of the issue. I also set the inbound_connection_timeout parameter to 0 and that didn't help either. I'm still researching, but any pointer in the right direction is very much appreciated.
    Error in Alert Log
    Thu Apr 09 10:28:59 2015
    Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x9] [PC:0x85CE503, nstimexp()+71] [flags: 0x0, count: 1]
    Errors in file /u01/app/oracle/diag/rdbms/<db_unq_name>/<SID>/trace/<SID>_ora_29902.trc  (incident=69298):
    ORA-07445: exception encountered: core dump [nstimexp()+71] [SIGSEGV] [ADDR:0x9] [PC:0x85CE503] [Address not mapped to object] []
    Use ADRCI or Support Workbench to package the incident.
    See Note 411.1 at My Oracle Support for error and packaging details.
    Thu Apr 09 10:29:02 2015
    Sweep [inc][69298]: completed
    Trace file:
    Trace file /u01/app/oracle/diag/rdbms/<db_unq_name>/<SID>/trace/<SID>_ora_29902.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning and Oracle Label Security options
    ORACLE_HOME = /u01/app/oracle/product/11.2.0.3/dbhome_1
    System name:    Linux
    Node name:      <host name>
    Release:        2.6.32-431.17.1.el6.x86_64
    Version:        #1 SMP Wed May 7 14:14:17 CDT 2014
    Machine:        x86_64
    Instance name: <SID>
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Unix process pid: 29902, image: oracle@<host name>
    *** 2015-04-09 10:28:59.966
    *** SESSION ID:(416.127) 2015-04-09 10:28:59.966
    *** CLIENT ID:() 2015-04-09 10:28:59.966
    *** SERVICE NAME:(<db_unq_name>) 2015-04-09 10:28:59.966
    *** MODULE NAME:(dgmgrl@<observer host> (TNS V1-V3)) 2015-04-09 10:28:59.966
    *** ACTION NAME:() 2015-04-09 10:28:59.966
    Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x9] [PC:0x85CE503, nstimexp()+71] [flags: 0x0, count: 1]
    DDE: Problem Key 'ORA 7445 [nstimexp()+71]' was flood controlled (0x6) (incident: 69298)
    ORA-07445: exception encountered: core dump [nstimexp()+71] [SIGSEGV] [ADDR:0x9] [PC:0x85CE503] [Address not mapped to object] []
    ssexhd: crashing the process...
    Shadow_Core_Dump = PARTIAL
    ksdbgcra: writing core file to directory '/u01/app/oracle/diag/rdbms/<db_unq_name>/<SID>/cdump'
    Data Guard Configuration
    DGMGRL> show configuration verbose;
    Configuration - dg_config
      Protection Mode: MaxPerformance
      Databases:
        dbprim - Primary database
        dbstby - (*) Physical standby database
      (*) Fast-Start Failover target
      Properties:
        FastStartFailoverThreshold      = '30'
        OperationTimeout                = '30'
        FastStartFailoverLagLimit       = '180'
        CommunicationTimeout            = '180'
        FastStartFailoverAutoReinstate  = 'TRUE'
        FastStartFailoverPmyShutdown    = 'TRUE'
        BystandersFollowRoleChange      = 'ALL'
    Fast-Start Failover: ENABLED
      Threshold:        30 seconds
      Target:           dbstby
      Observer:         observer_host
      Lag Limit:        180 seconds
      Shutdown Primary: TRUE
      Auto-reinstate:   TRUE
    Configuration Status:
    SUCCESS
    DGMGRL> show database verbose dbprim
    Database - dbprim
      Role:            PRIMARY
      Intended State:  TRANSPORT-ON
      Instance(s):
        DG_CONFIG
      Properties:
        DGConnectIdentifier             = 'dbprim'
        ObserverConnectIdentifier       = ''
        LogXptMode                      = 'ASYNC'
        DelayMins                       = '0'
        Binding                         = 'optional'
        MaxFailure                      = '0'
        MaxConnections                  = '1'
        ReopenSecs                      = '300'
        NetTimeout                      = '30'
        RedoCompression                 = 'DISABLE'
        LogShipping                     = 'ON'
        PreferredApplyInstance          = ''
        ApplyInstanceTimeout            = '0'
        ApplyParallel                   = 'AUTO'
        StandbyFileManagement           = 'MANUAL'
        ArchiveLagTarget                = '0'
        LogArchiveMaxProcesses          = '4'
        LogArchiveMinSucceedDest        = '1'
        DbFileNameConvert               = ''
        LogFileNameConvert              = ''
        FastStartFailoverTarget         = 'dbstby'
        InconsistentProperties          = '(monitor)'
        InconsistentLogXptProps         = '(monitor)'
        SendQEntries                    = '(monitor)'
        LogXptStatus                    = '(monitor)'
        RecvQEntries                    = '(monitor)'
        SidName                         = '<sid>'
        StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<db host name>)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=<service_name>)(INSTANCE_NAME=<sid>)(SERVER=DEDICATED)))'
        StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
        AlternateLocation               = ''
        LogArchiveTrace                 = '0'
        LogArchiveFormat                = '%t_%s_%r.dbf'
        TopWaitEvents                   = '(monitor)'
    Database Status:
    SUCCESS
    DGMGRL> show database verbose dbstby
    Database - dbstby
      Role:            PHYSICAL STANDBY
      Intended State:  APPLY-ON
      Transport Lag:   0 seconds
      Apply Lag:       0 seconds
      Real Time Query: ON
      Instance(s):
        DG_CONFIG
      Properties:
        DGConnectIdentifier             = 'dbstby'
        ObserverConnectIdentifier       = ''
        LogXptMode                      = 'ASYNC'
        DelayMins                       = '0'
        Binding                         = 'optional'
        MaxFailure                      = '0'
        MaxConnections                  = '1'
        ReopenSecs                      = '300'
        NetTimeout                      = '30'
        RedoCompression                 = 'DISABLE'
        LogShipping                     = 'ON'
        PreferredApplyInstance          = ''
        ApplyInstanceTimeout            = '0'
        ApplyParallel                   = 'AUTO'
        StandbyFileManagement           = 'AUTO'
        ArchiveLagTarget                = '0'
        LogArchiveMaxProcesses          = '4'
        LogArchiveMinSucceedDest        = '1'
        DbFileNameConvert               = ''
        LogFileNameConvert              = ''
        FastStartFailoverTarget         = 'dbprim'
        InconsistentProperties          = '(monitor)'
        InconsistentLogXptProps         = '(monitor)'
        SendQEntries                    = '(monitor)'
        LogXptStatus                    = '(monitor)'
        RecvQEntries                    = '(monitor)'
        SidName                         = '<sid>'
        StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<db host name>)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=<service_name>)(INSTANCE_NAME=<sid>)(SERVER=DEDICATED)))'
        StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
        AlternateLocation               = ''
        LogArchiveTrace                 = '0'
        LogArchiveFormat                = '%t_%s_%r.dbf'
        TopWaitEvents                   = '(monitor)'
    Database Status:
    SUCCESS
