Unable to Cancel Failed Open Archive Process

Weeks ago, I installed AJA Macintosh QuickTime Codec v4.0 from the AJA website ( http://www.aja.com/products/software/ ), per their instructions. I was expecting to be able to import AJA QuickTime media files, particularly AJA Kona 10-bit RGB (R10k), and convert to ProRes upon import. The 10-bit RGB media files were provided by a film transfer service.
However, using Import File… always results in a message saying "No importable files… None of the selected files or folders can be imported. Change the selection and try again."
So I decided to try importing the AJA QuickTime media files using Import from Camera, Open Archives… This has resulted in Camera Import showing a spinning wheel nonstop for several hours now. I have been unable to cancel the process. When I quit and restart FCPX, a "Loading Compressor Support" message gets stuck on top of everything. Even restarting my computer doesn't stop it.
I don't have Compressor yet. For the time being, how do I stop Camera Import from ceaselessly trying to import?

I tried something else before trashing preferences, and it seems to have worked.  With FCPX closed, I moved the folder with the media that Open Archives was pointing to into a different folder.  Symptoms no longer occurred upon reopening FCPX.
Then, with FCPX closed, I moved the media back to where it was.  Symptoms have not reoccurred upon opening FCPX.
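For anyone who prefers Terminal, here is a minimal sketch of the same workaround; the path is a hypothetical placeholder for wherever the archive media actually lives:
# with FCPX closed, park the folder that Open Archives was pointing at
mv ~/Movies/AJA_Archive ~/Movies/AJA_Archive_parked
# launch FCPX once (the stuck import should be gone), quit it, then move the media back
mv ~/Movies/AJA_Archive_parked ~/Movies/AJA_Archive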
Tom, thanks for the tip about using MPEG StreamClip going forward.

Similar Messages

  • Unable to Cancel/Suspend few processes in NWA

    Hi All,
    I have a few processes in which the Context Data is missing, and I am unable to Cancel/Suspend them.
    I have removed the task from the users' worklist for the processes I am trying to cancel. Can anyone please let me know how to do this? Thank you so much in advance.

    Usually you cannot do too much in the event of this kind of error.
    You may find an SAP Note if you are lucky.
    If not, just open a ticket with SAP.

  • PR_Accept() failed, error -5971 (Process open FD table is full.)

    I am working on a web app which uses the SunONE directory server for some authorization. Sometimes the webserver just hangs with no errors in the webserver log. In the slapd error logs I do see the following exception:
    PR_Accept() failed, error -5971 (Process open FD table is full.)
    I am not sure why this is happening. What could be the problem? I am assuming FD means file descriptor? One bug we found in the app is that it tries to add a new user to the LDAP even if it's already there. I do get "add value to attribute type nsRoleDN in entry .....: duplicate value" exceptions, but thought they were harmless. Could this exception be causing something?

    Hi,
    I had exactly the same error message. I did not find the cause after spending a lot of time looking around. I only know it is a file descriptor table problem. The Sun ONE directory server access log, however, did not show the file descriptor count reaching its maximum. I am very much puzzled by this. Did you find out why yet?
    u4me2

  • Problem with DS5.1 patch1: PR_Accept() failed, error -5971 (Process open FD table is full.)

    Hi all,
    I installed iPlanet Directory Server 5.1p1 on Solaris 8. These errors fill up my error log files:
    "PR_Accept() failed, error -5971 (Process open FD table is full.)"
    I used the idsktune tool and found that I have a file descriptor problem. I have increased the limit to 4096 (in the /etc/system file and with ulimit -n 4096), but these errors still appear in my error logs.
    If anyone knows this problem, please give us a solution to fix these errors.
    Thanks a lot
    Best Regards

    I suggest you try the Directory Server forum.

  • Shutting down archive processes after opening the upgraded database.

    Hi,
    I upgraded the database from 9.2.0.5.0 to 10.2.0.2.0. After upgrading, I opened the database.
    I am getting the following information in my alert log file. Will it cause any problem?
    ARC0: STARTING ARCH PROCESSES COMPLETE
    ARC0: Becoming the heartbeat ARCH
    ARC2 started with pid=16, OS id=18763
    Sun Nov 9 16:53:20 2008
    Successfully onlined Undo Tablespace 1.
    Sun Nov 9 16:53:20 2008
    SMON: enabling tx recovery
    Sun Nov 9 16:53:20 2008
    Database Characterset is UTF8
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=17, OS id=18770
    Sun Nov 9 16:53:21 2008
    Completed: ALTER DATABASE OPEN
    Sun Nov 9 16:54:19 2008
    Shutting down archive processes
    Sun Nov 9 16:54:25 2008
    ARCH shutting down
    ARC2: Archival stopped
    Sun Nov 9 20:58:04 2008
    Thanks,

    Hi Mohammed,
    is this a cut-out from the alert.log DURING the upgrade?
    How did you do the upgrade - DBUA or with catupgrd.sql?
    Was archivelog mode switched off during the upgrade?
    Does the archiver start up again (do an 'archive log list' in SQL*Plus)?
    Thanks and regards
    Mike
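    For reference, a minimal SQL*Plus sketch of the checks Mike suggests (run as SYSDBA on the upgraded database); v$database and v$archive_processes are standard 10.2 views:
    SQL> archive log list                                            -- log mode, archive destination, oldest/next sequences
    SQL> select log_mode from v$database;                            -- ARCHIVELOG or NOARCHIVELOG
    SQL> select * from v$archive_processes where status = 'ACTIVE';  -- ARCn processes currently running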

  • Pls Help. DS5.1p1 errors PR_Accept() failed, error -5971 (Process open FD table is full.)

    Hi all,
    I installed iPlanet Directory Server 5.1p1 on Solaris 8. These errors fill up my error log files:
    "PR_Accept() failed, error -5971 (Process open FD table is full.)"
    I used the idsktune tool and found that I have a file descriptor problem. I have increased the limit to 4096 (in the /etc/system file and with ulimit -n 4096), but these errors still appear in my error logs.
    If anyone knows this problem, please give us a solution to fix these errors.
    Thanks a lot
    Best Regards

    Hi!
    To increase the FDs available for DS, edit Configuration --> Performance Settings and increase the number of FDs available for DS. Please check whether you have already done this.
    I faced a similar problem. I increased the system limit (/etc/system) and the DS setting, but was still receiving the error message from time to time; it disappears automatically when FDs get freed. The server is quite a busy LDAP server.
    Can someone comment on how I can see the actual FD utilisation on a Solaris 8 system?
    I have been checking with:
    ls -l /proc/<pid-of-slapd>/fd | wc -l
    but I always find the FDs in use to be well below the specified limit of 4096.
    Thanks for sharing.
    Cheers!
    Vivek
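    In case it helps, a hedged sketch of how FD utilisation can be checked on Solaris 8 with the stock proc tools; <pid-of-slapd> is a placeholder for the real slapd process id:
    pgrep slapd                          # find the slapd pid
    pfiles <pid-of-slapd> | head -2      # second line reports "Current rlimit: N file descriptors"
    ls /proc/<pid-of-slapd>/fd | wc -l   # number of descriptors slapd currently has open
    plimit <pid-of-slapd>                # per-process soft/hard limits, including nofiles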

  • Unable to cancel - open purchase order where all items are removed

    Hi all,
    When we try to cancel/close an open purchase order where all the items have been removed from the item master,
    it shows the error message
    "No matching records found  'Items' (OITM)"
    The purchase order was created in May 2007.
    The purchase order is showing in the open item list.
    Is there any solution for this problem?
    Jeyakanthan

    SAP does allow items to be removed from the item master with open documents, but only if the documents have not created journals.
    It does create headaches though, and this situation is one of them.
    If there are not many items, I suggest you recreate the items, close the PO and just cancel the items rather than remove them.
    The other workaround is to change the items on the PO to something different that is in the item master, then close it.

  • Unable to open data process control

    Hi,
    When I open Data Process Control, it prompts the error below. How can I fix it?
    An error occurred while initializing POV data. Confirm that metadata has been loaded and that you have sufficient security rights.

    Hi
    According to Oracle Support, the cause of the issue is:
    The file HsvWebSessionWSP.dll was not registered properly. This file will be in HFM Webserver under the location <MiddlewareHome>\EPMSystem11R1\products\FinancialManagement\WebServices.
    And the solution is:
    Re-registering the HsvWebSessionWSP.dll file should resolve the issue, by using this command: regsvr32 "<MiddlewareHome>\EPMSystem11R1\products\FinancialManagement\WebServices\HsvWebSessionWSP.dll"
    Hope this helps.
    Cheers,
    Lu

  • ORA-00308: cannot open archived log '+DATA'

    Hello all,
    I created a new physical standby, but I am facing a problem with shipping archived files between the primary and the standby.
    Primary: RAC (4 nodes)
    Standby: single node with ASM
    When I run:
    alter database recover managed standby database disconnect from session;
    in alert log file :
    Managed Standby Recovery not using Real Time Apply
    Parallel Media Recovery started with 24 slaves
    Waiting for all non-current ORLs to be archived...
    All non-current ORLs have been archived.
    Media Recovery Waiting for thread 1 sequence 25738
    Tue Mar 03 12:21:13 2015
    Completed: alter database recover managed standby database disconnect from session
    and when I checked the archived files with
    select max(sequence#) from v$archived_log;
    it returned null.
    I understood that nothing was being shipped between the primary and the standby, so at that point I decided to try manual recovery with:
    alter database recover automatic standby database;
    But I get this error in the alert log file:
    alter database recover automatic standby database
    Media Recovery Start
    started logmerger process
    Tue Mar 03 12:38:38 2015
    Managed Standby Recovery not using Real Time Apply
    Parallel Media Recovery started with 24 slaves
    Media Recovery Log +DATA
    Errors with log +DATA
    Errors in file /u01/app/oracle/diag/rdbms/oracledrs/oracledrs/trace/oracledrs_pr00_4989.trc:
    ORA-00308: cannot open archived log '+DATA'
    ORA-17503: ksfdopn:2 Failed to open file +DATA
    ORA-15045: ASM file name '+DATA' is not in reference form
    ORA-279 signalled during: alter database recover automatic standby database..
    When I opened the oracledrs_pr00_4989.trc file, I found:
    *** 2015-03-03 12:38:39.478
    Media Recovery add redo thread 4
    ORA-00308: cannot open archived log '+DATA'
    ORA-17503: ksfdopn:2 Failed to open file +DATA
    ORA-15045: ASM file name '+DATA' is not in reference form
    When I created the standby, I set these parameters in the duplicate command:
    set db_file_name_convert='+ASM_ORADATA/oracle','+DATA/oracledrs'
    set log_file_name_convert='+ASM_ARCHIVE/oracle','+DATA/oracledrs','+ASM_ORADATA/oracle','+DATA/oracledrs'
    set control_files='+DATA'
    set db_create_file_dest='+DATA'
    set db_recovery_file_dest='+DATA'
    Where is the mistake here, please?
    Thanks in advance,

    Yes I have datafiles under +DATA
    ASMCMD> cd +DATA/ORACLEDRS/DATAFILE
    ASMCMD> ls
    ASD.282.873258045
    CATALOG.288.873258217
    DEVTS.283.873258091
    EXAMPLE.281.873258043
    FEED.260.873227069
    FEED.279.873257713
    INDX.272.873251345
    INDX.273.873252239
    INDX.278.873257337
    SYSAUX.262.873227071
    SYSTEM.277.873256531
    SYSTEM_2.280.873257849
    TB_WEBSITE.284.873258135
    TB_WEBSITE.285.873258135
    TB_WEBSITE.286.873258181
    TB_WEBSITE.287.873258183
    UNDOTBS1.275.873253421
    UNDOTBS2.276.873255247
    UNDOTBS3.261.873227069
    UNDOTBS4.271.873245967
    USERS.263.873227071
    USERS.264.873235507
    USERS.265.873235893
    USERS.266.873237079
    USERS.267.873238225
    USERS.268.873243661
    USERS.269.873244307
    USERS.270.873244931
    USERS.274.873252585
    asd01.dbf
    catalog01
    dev01.dbf
    example.dbf
    feed01.dbf
    feed02.dbf
    indx01.dbf
    indx02.dbf
    indx03.dbf
    sysaux01.dbf
    system01.dbf
    system02.dbf
    undotbs01.dbf
    undotbs02.dbf
    undotbs03.dbf
    undotbs04.dbf
    user1.dbf
    users01.dbf
    users02.dbf
    users03.dbf
    users04.dbf
    users05.dbf
    users06.dbf
    users07.dbf
    users08.dbf
    website01.dbf
    website02.dbf
    website03.dbf
    website04.dbf
    ASMCMD>
    Standby :
    [root@oracledrs ~]# id oracle
    uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
    Primary :
    [root@dbn-prod-1 disks]# id oracle
    uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
    1- Yes, I have the needed archived files on my primary.
    2-  select inst_id,thread#,group# from gv$log;
    Primary :
    INST_ID,THREAD#,GROUP#
    1,1,1
    1,1,2
    1,2,3
    1,2,4
    1,3,5
    1,3,6
    1,4,7
    1,4,8
    1,1,9
    1,2,10
    1,3,11
    1,4,12
    3,1,1
    3,1,2
    3,2,3
    3,2,4
    3,3,5
    3,3,6
    3,4,7
    3,4,8
    3,1,9
    3,2,10
    3,3,11
    3,4,12
    2,1,1
    2,1,2
    2,2,3
    2,2,4
    2,3,5
    2,3,6
    2,4,7
    2,4,8
    2,1,9
    2,2,10
    2,3,11
    2,4,12
    4,1,1
    4,1,2
    4,2,3
    4,2,4
    4,3,5
    4,3,6
    4,4,7
    4,4,8
    4,1,9
    4,2,10
    4,3,11
    4,4,12
    Standby :
    INST_ID,THREAD#,GROUP#
    1,1,9
    1,1,2
    1,1,1
    1,2,3
    1,2,4
    1,2,10
    1,3,5
    1,3,6
    1,3,11
    1,4,12
    1,4,7
    1,4,8
    3- Here is a sample from the alert logs since I started the standby (for both standby and primary):
    Standby :
    alter database mount standby database
    NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libasm.so
    NOTE: Loaded library: System
    SUCCESS: diskgroup DATA was mounted
    ERROR: failed to establish dependency between database oracledrs and diskgroup resource ora.DATA.dg
    ARCH: STARTING ARCH PROCESSES
    Tue Mar 03 18:38:16 2015
    ARC0 started with pid=128, OS id=4461
    ARC0: Archival started
    ARCH: STARTING ARCH PROCESSES COMPLETE
    ARC0: STARTING ARCH PROCESSES
    Tue Mar 03 18:38:17 2015
    Successful mount of redo thread 1, with mount id 1746490068
    Physical Standby Database mounted.
    Lost write protection disabled
    Tue Mar 03 18:38:17 2015
    ARC1 started with pid=129, OS id=4464
    Tue Mar 03 18:38:17 2015
    ARC2 started with pid=130, OS id=4466
    Tue Mar 03 18:38:17 2015
    ARC3 started with pid=131, OS id=4468
    Tue Mar 03 18:38:17 2015
    ARC4 started with pid=132, OS id=4470
    Tue Mar 03 18:38:17 2015
    ARC5 started with pid=133, OS id=4472
    Tue Mar 03 18:38:17 2015
    ARC6 started with pid=134, OS id=4474
    Tue Mar 03 18:38:17 2015
    ARC7 started with pid=135, OS id=4476
    Completed: alter database mount standby database
    Tue Mar 03 18:38:17 2015
    ARC8 started with pid=136, OS id=4478
    Tue Mar 03 18:38:17 2015
    ARC9 started with pid=137, OS id=4480
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC8: Becoming the 'no FAL' ARCH
    ARC2: Becoming the heartbeat ARCH
    ARC2: Becoming the active heartbeat ARCH
    Tue Mar 03 18:38:18 2015
    Starting Data Guard Broker (DMON)
    ARC9: Archival started
    ARC0: STARTING ARCH PROCESSES COMPLETE
    Tue Mar 03 18:38:23 2015
    INSV started with pid=141, OS id=4494
    Tue Mar 03 18:39:11 2015
    alter database recover managed standby database disconnect from session
    Attempt to start background Managed Standby Recovery process (oracledrs)
    Tue Mar 03 18:39:11 2015
    MRP0 started with pid=142, OS id=4498
    MRP0: Background Managed Standby Recovery process started (oracledrs)
    started logmerger process
    Tue Mar 03 18:39:16 2015
    Managed Standby Recovery not using Real Time Apply
    Parallel Media Recovery started with 24 slaves
    Waiting for all non-current ORLs to be archived...
    All non-current ORLs have been archived.
    Media Recovery Waiting for thread 1 sequence 25738
    Completed: alter database recover managed standby database disconnect from session
    Tue Mar 03 18:41:17 2015
    WARN: ARCH: Terminating pid 4476 hung on an I/O operation
    Killing 1 processes with pids 4476 (Process by index) in order to remove hung processes. Requested by OS process 4224
    ARCH: Detected ARCH process failure
    Tue Mar 03 18:45:17 2015
    ARC2: STARTING ARCH PROCESSES
    Tue Mar 03 18:45:17 2015
    ARC7 started with pid=127, OS id=4586
    Tue Mar 03 18:45:18 2015
    Fatal NI connect error 12170.
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 03-MAR-2015 18:45:18
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12535
    TNS-12535: TNS:operation timed out
        ns secondary err code: 12560
        nt main err code: 505
    TNS-00505: Operation timed out
        nt secondary err code: 0
        nt OS err code: 0
      Client address: <unknown>
    ARC7: Archival started
    ARC2: STARTING ARCH PROCESSES COMPLETE
    Tue Mar 03 18:47:14 2015
    alter database recover managed standby database cancel
    Tue Mar 03 18:48:18 2015
    Fatal NI connect error 12170.
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 03-MAR-2015 18:48:18
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12535
    TNS-12535: TNS:operation timed out
        ns secondary err code: 12560
        nt main err code: 505
    TNS-00505: Operation timed out
        nt secondary err code: 0
        nt OS err code: 0
      Client address: <unknown>
    Tue Mar 03 18:51:18 2015
    Fatal NI connect error 12170.
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.4.0 - Production
            Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
      Time: 03-MAR-2015 18:51:18
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12535
    TNS-12535: TNS:operation timed out
        ns secondary err code: 12560
        nt main err code: 505
    TNS-00505: Operation timed out
        nt secondary err code: 0
        nt OS err code: 0
      Client address: <unknown>
    Error 12170 received logging on to the standby
    FAL[client, USER]: Error 12170 connecting to oracle for fetching gap sequence
    MRP0: Background Media Recovery cancelled with status 16037
    Errors in file /u01/app/oracle/diag/rdbms/oracledrs/oracledrs/trace/oracledrs_pr00_4500.trc:
    ORA-16037: user requested cancel of managed recovery operation
    Recovery interrupted!
    Tue Mar 03 18:51:18 2015
    MRP0: Background Media Recovery process shutdown (oracledrs)
    Tue Mar 03 18:51:19 2015
    Managed Standby Recovery Canceled (oracledrs)
    Completed: alter database recover managed standby database cancel
    Tue Mar 03 18:51:56 2015
    alter database recover automatic standby database
    Media Recovery Start
    started logmerger process
    Tue Mar 03 18:51:56 2015
    Managed Standby Recovery not using Real Time Apply
    Parallel Media Recovery started with 24 slaves
    Media Recovery Log +DATA
    Errors with log +DATA
    Errors in file /u01/app/oracle/diag/rdbms/oracledrs/oracledrs/trace/oracledrs_pr00_4617.trc:
    ORA-00308: cannot open archived log '+DATA'
    ORA-17503: ksfdopn:2 Failed to open file +DATA
    ORA-15045: ASM file name '+DATA' is not in reference form
    ORA-279 signalled during: alter database recover automatic standby database...
    Tue Mar 03 18:53:06 2015
    db_recovery_file_dest_size of 512000 MB is 0.13% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Primary :
    Tue Mar 03 17:13:43 2015
    Thread 1 advanced to log sequence 26005 (LGWR switch)
      Current log# 1 seq# 26005 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 17:13:44 2015
    Archived Log entry 87387 added for thread 1 sequence 26004 ID 0x66aa5a0d dest 1:
    Tue Mar 03 18:00:18 2015
    Thread 1 advanced to log sequence 26006 (LGWR switch)
      Current log# 2 seq# 26006 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 18:00:18 2015
    Archived Log entry 87392 added for thread 1 sequence 26005 ID 0x66aa5a0d dest 1:
    Tue Mar 03 18:55:33 2015
    Thread 1 advanced to log sequence 26007 (LGWR switch)
      Current log# 9 seq# 26007 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26007 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 18:55:33 2015
    Archived Log entry 87395 added for thread 1 sequence 26006 ID 0x66aa5a0d dest 1:
    Tue Mar 03 19:14:22 2015
    Dumping diagnostic data in directory=[cdmp_20150303191422], requested by (instance=4, osid=10234), summary=[incident=1692472].
    Dumping diagnostic data in directory=[cdmp_20150303191425], requested by (instance=4, osid=10234), summary=[incident=1692473].
    Tue Mar 03 20:00:06 2015
    Thread 1 advanced to log sequence 26008 (LGWR switch)
      Current log# 1 seq# 26008 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 20:00:07 2015
    Archived Log entry 87401 added for thread 1 sequence 26007 ID 0x66aa5a0d dest 1:
    Tue Mar 03 21:00:02 2015
    Thread 1 advanced to log sequence 26009 (LGWR switch)
      Current log# 2 seq# 26009 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 21:00:03 2015
    Archived Log entry 87403 added for thread 1 sequence 26008 ID 0x66aa5a0d dest 1:
    Thread 1 advanced to log sequence 26010 (LGWR switch)
      Current log# 9 seq# 26010 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26010 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 21:00:06 2015
    Archived Log entry 87404 added for thread 1 sequence 26009 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:00:00 2015
    Setting Resource Manager plan SCHEDULER[0x32DA]:DEFAULT_MAINTENANCE_PLAN via scheduler window
    Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
    Tue Mar 03 22:00:00 2015
    Starting background process VKRM
    Tue Mar 03 22:00:00 2015
    VKRM started with pid=184, OS id=4838
    Tue Mar 03 22:00:07 2015
    Begin automatic SQL Tuning Advisor run for special tuning task  "SYS_AUTO_SQL_TUNING_TASK"
    Tue Mar 03 22:00:25 2015
    Thread 1 advanced to log sequence 26011 (LGWR switch)
      Current log# 1 seq# 26011 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:00:26 2015
    Archived Log entry 87408 added for thread 1 sequence 26010 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:00:58 2015
    Thread 1 advanced to log sequence 26012 (LGWR switch)
      Current log# 2 seq# 26012 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 22:01:00 2015
    Archived Log entry 87412 added for thread 1 sequence 26011 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:02:37 2015
    Thread 1 cannot allocate new log, sequence 26013
    Checkpoint not complete
      Current log# 2 seq# 26012 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Thread 1 advanced to log sequence 26013 (LGWR switch)
      Current log# 9 seq# 26013 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26013 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 22:02:41 2015
    Archived Log entry 87415 added for thread 1 sequence 26012 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:03:26 2015
    Thread 1 cannot allocate new log, sequence 26014
    Checkpoint not complete
      Current log# 9 seq# 26013 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26013 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Thread 1 advanced to log sequence 26014 (LGWR switch)
      Current log# 1 seq# 26014 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:03:29 2015
    Archived Log entry 87416 added for thread 1 sequence 26013 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:05:50 2015
    Thread 1 cannot allocate new log, sequence 26015
    Checkpoint not complete
      Current log# 1 seq# 26014 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:05:52 2015
    End automatic SQL Tuning Advisor run for special tuning task  "SYS_AUTO_SQL_TUNING_TASK"
    Thread 1 advanced to log sequence 26015 (LGWR switch)
      Current log# 2 seq# 26015 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 22:05:54 2015
    Archived Log entry 87418 added for thread 1 sequence 26014 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:07:29 2015
    Thread 1 cannot allocate new log, sequence 26016
    Checkpoint not complete
      Current log# 2 seq# 26015 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Thread 1 advanced to log sequence 26016 (LGWR switch)
      Current log# 9 seq# 26016 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26016 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 22:07:33 2015
    Archived Log entry 87421 added for thread 1 sequence 26015 ID 0x66aa5a0d dest 1:
    Thread 1 cannot allocate new log, sequence 26017
    Checkpoint not complete
      Current log# 9 seq# 26016 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26016 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Thread 1 advanced to log sequence 26017 (LGWR switch)
      Current log# 1 seq# 26017 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:07:39 2015
    Archived Log entry 87422 added for thread 1 sequence 26016 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:16:36 2015
    Thread 1 advanced to log sequence 26018 (LGWR switch)
      Current log# 2 seq# 26018 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_2.270.849356883
    Tue Mar 03 22:16:37 2015
    Archived Log entry 87424 added for thread 1 sequence 26017 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:30:06 2015
    Thread 1 advanced to log sequence 26019 (LGWR switch)
      Current log# 9 seq# 26019 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_9.295.853755983
      Current log# 9 seq# 26019 mem# 1: +ASM_ARCHIVE/oracle/onlinelog/group_9.10902.853755985
    Tue Mar 03 22:30:07 2015
    Archived Log entry 87427 added for thread 1 sequence 26018 ID 0x66aa5a0d dest 1:
    Tue Mar 03 22:30:18 2015
    Thread 1 advanced to log sequence 26020 (LGWR switch)
      Current log# 1 seq# 26020 mem# 0: +ASM_ORADATA/oracle/onlinelog/group_1.271.849356883
    Tue Mar 03 22:30:19 2015
    Archived Log entry 87428 added for thread 1 sequence 26019 ID 0x66aa5a0d dest 1:
    Tue Mar 03 23:07:27 2015
    Dumping diagnostic data in directory=[cdmp_20150303230727], requested by (instance=4, osid=25140), summary=[incident=1692496].
    Dumping diagnostic data in directory=[cdmp_20150303230730], requested by (instance=4, osid=25140), summary=[incident=1692497].
    Thanks in advance, sir.
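    A hedged diagnostic sketch for the standby side, using only standard 11.2 views and parameters; it is meant to narrow down where the bare '+DATA' archived-log name comes from, not offered as a confirmed fix:
    SQL> select name, thread#, sequence#, applied
           from v$archived_log
          where name = '+DATA';                   -- rows whose NAME is just the diskgroup are what ORA-15045 rejects
    SQL> show parameter db_recovery_file_dest     -- with an ASM fast recovery area, manual 'recover automatic' may be unable to predict the OMF file names
    SQL> show parameter log_archive_dest
    SQL> show parameter log_file_name_convert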

  • Error 1019002 - Unable To Find Or Open .esm file.

    We use nightly MaxL scripts to re-set and re-load our cubes each night. (We are on Essbase 7.1.3.) The same process has been working for years; this week we started getting errors on some of the cube builds. Some cubes build fine, but one or two throw an error in the middle of the build when trying to access the .esm file.
    1019002 - Unable To Find Or Open [E:\Hyperion\essbase\APP\CHRGBCK\CB_db\CB_db.esm].
    1019041 - Unable to write information to file [E:\Hyperion\essbase\APP\CHRGBCK\CB_db\CB_db.esm], adWriteObject returns [1019002]
    When you go into the database folder for the cubes that fail, sure enough, there is no .esm file. However, when you manually kick off the MaxL, it runs fine and somehow creates the .esm file.

    Hi.
    As per the DBAG (Database Administrator's Guide):
    The .esm file (Essbase kernel file) is one of the critical files that manages pointers to data blocks and contains control information used for database recovery.
    If there is a problem with any one of the following essential database files, the entire database becomes corrupted and Essbase Server cannot start the database:
    ● essn.pag
    ● essn.ind
    ● dbname.esm
    ● dbname.tct
    ● dbname.ind
    To restore the database, delete these files, restart the database, and reload from data files or from export files backed up prior to the corruption.
    Hope this helps.
    - Natesh
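    A minimal MaxL sketch of the restart step described above, using the application/database names from the error message; the reload is left as a comment because the export file name depends on your own backups:
    /* stop and restart the database so Essbase recreates its kernel files */
    alter application CHRGBCK unload database CB_db;
    alter application CHRGBCK load database CB_db;
    /* then reload from an export taken before the corruption, for example:                          */
    /* import database CHRGBCK.CB_db data from server text data_file 'CB_db_export' on error abort; */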

  • ORA-00308: cannot open archived log, ORA-27041

    hi All,
    Here I am building a DR server. I copied SAP data files 1, 2, 3, 4 from PRD to the DR server and created the control file. When I try to run the following command to apply the archives I copied from PRD, I get the error below.
    SQL>Startup
    Database mounted.
    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
    SQL> recover database using backup controlfile;
    ORA-00279: change 1209554452 generated at 02/09/2012 17:02:57 needed for thread 1
    ORA-00289: suggestion : F:\ORACLE\PRD\ORAARCH\PRDARCH1_59501_657865393.DBF
    ORA-00280: change 1209554452 for thread 1 is in sequence #59501
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    auto
    ORA-00308: cannot open archived log
    'F:\ORACLE\PRD\ORAARCH\PRDARCH1_59501_657865393.DBF'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-00308: cannot open archived log
    'F:\ORACLE\PRD\ORAARCH\PRDARCH1_59501_657865393.DBF'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    Here it requires PRDARCH1_59501_657865393.DBF, while the PRD server has PRDARCHARC59501_0657865393.001.
    I changed the LOG_ARCHIVE_DEST parameter in initSID.ora on the DR server only, not on PRD.
    Please tell me how I can apply PRD's archives (.001 extension) on the DR server.
    Regards,

    Hi all,
    After the recommendation, the parameter values on the DR server are now:
    *.log_archive_dest_1='LOCATION=F:\oracle\PRD\oraarch\PRDarch'
    *.log_archive_format='%t_%s_%r.001'
    When I try to apply:
    SQL> recover database using backup controlfile;
    ORA-00279: change 1211443555 generated at 02/10/2012 23:36:05 needed for thread 1
    ORA-00289: suggestion : F:\ORACLE\PRD\ORAARCH\PRDARCH\1_59557_657865393.001
    ORA-00280: change 1211443555 for thread 1 is in sequence #59557
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    auto
    ORA-00308: cannot open archived log
    'F:\ORACLE\PRD\ORAARCH\PRDARCH\1_59557_657865393.001'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    As you can see, the required archive is:        1_59557_657865393.001
    while the copied archive exists with this name: PRDARCHARC59557_0657865393.001
    I tried changing the format parameter to produce the required name, but without success. For example:
    %t_%s_%r.001          ->  1_59557_657865393.001
    ARC%s%r%t.001         ->  ARC595576578653931.001
    ARC%s_%r%t.001        ->  ARC59557_6578653931.001
    ARC%s_%r.%t           ->  ARC59557_657865393.1
    PRDARCHARC%s_%r.%t    ->  PRDARCHARC59557_657865393.1
    Required format: PRDARCHARC59540_0657865393.001
    How can I resolve this name mismatch? Please suggest the required format.
    Regards,
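    One way to sidestep the name mismatch entirely (a sketch, not a guaranteed fix): at the "Specify log" prompt you can type the real name of the copied file instead of AUTO, so log_archive_format does not have to reproduce the PRD naming at all. The directory below is a placeholder for wherever the copied .001 files actually sit:
    SQL> recover database using backup controlfile;
    ORA-00289: suggestion : F:\ORACLE\PRD\ORAARCH\PRDARCH\1_59557_657865393.001
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    F:\oracle\PRD\oraarch\PRDARCHARC59557_0657865393.001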

  • Unable to clear vendor open item.

    Hi experts,
    We have posted a vendor document under document type KZ directly; there is no open item invoice to clear the document against.
    The posted entry is below. Now we are unable to clear the open item document through F-44; we did not find an open item for the document posted under document type KZ.
    Vendor account entry
    25 vendor account 10000
    38 vendor account  8000
    50 bank account     2000
    Only 8000 of the document has been cleared, not the 2000. Please advise me how to clear the KZ document.
    Regards,
    Anji Reddy.

    Hi Anji Reddy,
    There is a process for the clearing function.
    First of all, an open item should exist in the vendor account for the invoice.
    For that invoice there should be a payment, either a full payment or a partial payment.
    Then you can run the clearing function to clear the open items.
    Through F-51 you can do both payment and clearing, which is "post with clearing".
    If you have made the payment through any other transaction code, you have to go to F-44 and do manual clearing.
    Please explain your scenario more clearly so that your doubt can be resolved.
    Regards
    Mahesh

  • BPEL deployment fails for all processes that have a revision other than 1.0.

    Using: Release 10.1.3.3.1
    Hello All,
    BPEL deployment fails for all processes that have a revision other than 1.0.
    We have been attempting to deploy several BPEL projects via an ANT script to a target environment and are encountering deployment failures for every project that isn't revision 1.0. We get the following error whenever we try to deploy a process with a revision other than 1.0:
    D:\TJ_AutoDeploy\BPEL_AutoDeploy_BETA\build.xml:65: BPEL archive doesnt exist in directory "{0}"
         at com.collaxa.cube.ant.taskdefs.DeployRemote.getJarFile(DeployRemote.java:254)
         at com.collaxa.cube.ant.taskdefs.DeployRemote.deployProcess(DeployRemote.java:409)
         at com.collaxa.cube.ant.taskdefs.DeployRemote.execute(DeployRemote.java:211)
         at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
         at org.apache.tools.ant.Task.perform(Task.java:364)
         at org.apache.tools.ant.Target.execute(Target.java:341)
         at org.apache.tools.ant.Target.performTasks(Target.java:369)
         at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1216)
         at org.apache.tools.ant.Project.executeTarget(Project.java:1185)
         at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:40)
         at org.apache.tools.ant.Project.executeTargets(Project.java:1068)
         at org.apache.tools.ant.Main.runBuild(Main.java:668)
         at org.apache.tools.ant.Main.startAnt(Main.java:187)
         at org.apache.tools.ant.launch.Launcher.run(Launcher.java:246)
         at org.apache.tools.ant.launch.Launcher.main(Launcher.java:67)
    The structure of our automated deployment script is as follows:
    First, a batch script calls (Jdeveloper_BPEL_Prompt.bat) in order to set all necessary environment variables i.e. ORACLE_HOME, BPEL_HOME, ANT_HOME, etc for ant.
    Next, the script lists every .jar file within the directory to an .ini file called BPEL_List.ini. Furthermore, BPEL_DIR, ADMIN_USER and ADMIN_PSWD variables are set and initialized respectively to:
    - "." – points to the directory the script is running from, because all the BPEL processes are located there
    - "oc4jadmin"
    - "*********" (whatever the password for our environment is)
    We've developed a method to have the script prompt the user to select the target environment to deploy to. Once the user selects the appropriate environment, the script goes through the BPEL_List.ini file, and a loop tells it that for every BPEL process listed:
    DO ant
    -Dprocess.name=%%b
    -Drev= !Rev!
    -Dpath=%BPEL_DIR%
    -Ddomain=default
    -Dadmin.user=%ADMIN_USER%
    -Dadmin.password=%ADMIN_PWD%
    -Dhttp.hostname=%HOST%
    -Dhttp.port=%PORT%
    -Dverbose=true
    (What's happening is that the variables in the batch file are being passed on to the ANT script, where %%b is the process name, !Rev! is the revision number, and so on.)
    The loop goes through each line in BPEL_List.ini and tokenizes the BPEL process name into 3 parts (%%a, %%b, and %%c), but we only extract 2 parts: %%b (process name) and %%c, which becomes !Rev! (revision number).
    Example:
    Sample BPEL process:
    bpel_ThisIsProcess1_1.0.jar
    bpel_ThisIsProcess2_SOAv2.19.0.001B.jar
    After tokenizing:
    %%a     %%b     %%c
    bpel     ThisIsProcess1     1.0.jar
    bpel     ThisIsProcess2     SOAv2.19.0.001B.jar
    We use !Rev! and not %%c because %%c returns the revision number plus the ".jar" file extension, as illustrated above. To circumvent this, we parse %%c so that the last 4 characters are stripped. This is done like this:
    set RevN=%%c
    set RevN=!RevN:~0,-4!
    Hence, the usage of !Rev!.
    Below is the ANT build.xml that goes with our script:
    <!--<?xml version="1.0"?>-->
    <!--BUILD.XML-->
    <project name="bpel.deploy" default="deployProcess" basedir=".">
         <!--
         This ant build file was generated by JDev to deploy the BPEL process.
         DONOT EDIT THIS JDEV GENERATED FILE. Any customization should be done
         in default target in user created pre-build.xml or post-build.xml
         -->
         <property name="process.dir" value="${basedir}" />
              <!-- Set BPEL process name -->
              <!--
              <xmlproperty file="${process.dir}/bpel/bpel.xml"/>
              <property name="process.name" value="${BPELSuitcase.BPELProcess(id)}"/>
              <property name="rev" value="${BPELSuitcase(rev)}"/>
              -->
         <property environment="env"/>
         <!-- Set bpel.home from developer prompt's environment variable BPEL_HOME -->
              <condition property="bpel.home" value="${env.BPEL_HOME}">
                   <available file="${env.BPEL_HOME}/utilities/ant-orabpel.xml" />
              </condition>
         <!-- show that both bpel and oracle.home are located (TESTING purposes ONLY) -->
         <!-- <echo>HERE:${env.BPEL_HOME} ${env.ORACLE_HOME}</echo> -->
         <!-- END TESTING -->
         <!--If bpel.home is not yet using env.BPEL_HOME, set it for JDev -->
         <property name="oracle.home" value="${env.ORACLE_HOME}" />
         <property name="bpel.home" value="${oracle.home}/bpel" />
         <!--First override from build.properties in process.dir, if available-->
         <property file="${process.dir}/build.properties"/>
         <!--import custom ant tasks for the BPEL PM-->
         <import file="${bpel.home}/utilities/ant-orabpel.xml" />
         <!--Use deployment related default properties-->
         <property file="${bpel.home}/utilities/ant-orabpel.properties" />
         <!-- *************************************************************************************** -->
         <target name="deployProcess">
              <tstamp>
                   <format property="timestamp" pattern="MM-dd-yyyy HH:mm:ss" />
              </tstamp>
              <!-- WRITE TO LOG FILE #tjas -->
              <record name="build_verbose.log" loglevel="verbose" append="true" />
              <record name="build_debug.log" loglevel="debug" append="true" />
              <echo></echo>
              <echo>####################################################################</echo>
              <echo>BPEL_AutoDeploy initiated @ ${timestamp}</echo>
              <echo>--------------------------------------------------------------------</echo>
              <echo>Deploying ${process.name} on ${http.hostname} port ${http.port} </echo>
              <echo>--------------------------------------------------------------------</echo>
              <deployProcess
                   user="${admin.user}"
                   password="${admin.password}"
                   domain="${domain}"
                   process="${process.name}"
                   rev="${rev}"
                   dir="${process.dir}/${path}"
                   hostname="${http.hostname}"
                   httpport="${http.port}"
                   verbose="${verbose}" />
              <sleep seconds="30" />
              <!--<echo message="${process.name} deployment logged to ${build_verbose.log}"/>
              <echo message="${process.name} deployment logged to ${build.log}"/> -->
         </target>
         <!-- *************************************************************************************** -->
    </project>
    SUMMARY OF ISSUE AT HAND:
    ~ Every bpel process w/ 1.0 revision deploys with no problems
    ~ At first I would get an invalid character error most likely due to the “!” preceding “Rev”, but then I decided to set rev=”false” in the build.xml file. That didn’t work quite well. In another attempt, I decided to leave the –Drev= attribute within the batch script blank. That still led to 1.0s going through. My next thought was deploying something other than a 1.0, such as 1.2 or 2.0 and that’s when I realized that if it wasn’t a 1.0, it refused to go through.
    QUESTIONS:
    1.     IS THERE A WAY TO HAVE ANT LOOK INTO THE BPEL PROCESS AND PULL THE REVISION ID?
    2.     WHAT ARE WE DOING WRONG? ARE WE MISSING ANYTHING?
    3.     DID WE GO TOO FAR? MEANING, IS THERE A MUCH EASIER WAY WE OVERLOOKED/FORGOT/OR DON’T KNOW ABOUT THAT EXISTS?
    Edited by: 793292 on Jul 28, 2011 12:38 PM
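    One detail worth double-checking in the batch-to-ANT hand-off described above (an observation, not a confirmed diagnosis): if the batch file really passes "-Drev= !Rev!" with a space after the equals sign, ant receives an empty rev, deployProcess then looks for bpel_<name>_.jar, and "BPEL archive doesnt exist" is exactly the symptom that follows. A sketch of the call with no space, reusing the variable names from the post:
    rem inside the for-loop over BPEL_List.ini; %%b and !Rev! come from the tokenizing shown above
    call ant -Dprocess.name=%%b -Drev=!Rev! -Dpath=%BPEL_DIR% -Ddomain=default ^
             -Dadmin.user=%ADMIN_USER% -Dadmin.password=%ADMIN_PWD% ^
             -Dhttp.hostname=%HOST% -Dhttp.port=%PORT% -Dverbose=true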

    The only thing I can think of is that, instead of using a MAC ACL, you could just use the default class:
    Policy Map Test
    class class-default
    police 56000 8000 exceed-action drop
    Class Map match-any class-default (id 0)
    Match any
    You would be saving a MAC-ACL ;-).

  • Unable to cancel business completion

    Hi Gurus,
    The order is closed, status is CLSD, and I am unable to cancel this status.
    When I go to Functions --> Cancel Business Completion, the system throws the error
    "Technically complete not allowed."
    I know it's not normally possible; is there any way to cancel this, either through a user exit or some transaction code?
    regards
    Krish

    Hi Pete
    Error is
    "Technically complete" is not allowed (ORD 600036803)
    Message no. BS007
    Diagnosis
    The current status of object 'ORD 600036803' prohibits business transaction 'Technically complete'.
    Procedure
    To process business transaction 'Technically complete', you first have to change the status of object 'ORD 600036803' to allow the transaction 'Technically complete'.
    This gives you an overview of the system and user statuses that affect the transaction. A transaction can only be executed if there is at least one status that allows it and there is no status that forbids it.
    Transaction analysis
    regards
    Krish

  • Date range for archive process for infotype

    Hello All,
    My user wants to remove employee information in infotypes for terminated employees, so we decided to go for the archiving process, but the Basis team is asking for a date range.
    Can anyone suggest how to give the date range? Even for providing the to-date I have been unable to get any pointers.
    Regards
    Ananthi.M

    Hello Kenneth,
    As you are aware, we cannot delete infotypes such as Basic Pay and Garnishment Document; likewise, there are around 15 infotypes which I am unable to delete due to time constraints, etc. For this process as well, an SM35 batch session is the better way.
    The user wants the data removed at table level; the information is not required and no one should see it.
