Error while taking dump using datapump

I am getting the following error:
Export: Release 10.2.0.1.0 - Production on Friday, 15 September, 2006 10:31:41
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "XX"."SYS_EXPORT_SCHEMA_02": XX/********@XXX directory=dpdump dumpfile=XXX150906.dmp logfile=XXX150906.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-31642: the following SQL statement fails:
BEGIN "DMSYS"."DBMS_DM_MODEL_EXP".SCHEMA_CALLOUT(:1,0,0,'10.02.00.01.00'); END;
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_METADATA", line 907
ORA-06550: line 1, column 7:
PLS-00201: identifier 'DMSYS.DBMS_DM_MODEL_EXP' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6235
----- PL/SQL Call Stack -----
object line object
handle number name
2A68E610 14916 package body SYS.KUPW$WORKER
2A68E610 6300 package body SYS.KUPW$WORKER
2A68E610 9120 package body SYS.KUPW$WORKER
2A68E610 1880 package body SYS.KUPW$WORKER
2A68E610 6861 package body SYS.KUPW$WORKER
2A68E610 1262 package body SYS.KUPW$WORKER
255541A8 2 anonymous block
Job "XX"."SYS_EXPORT_SCHEMA_02" stopped due to fatal error at 10:33:12
The action required is to contact customer support. On Metalink I found a link stating this is a bug in 10g Release 1 that was supposed to be fixed in 10g Release 1 version 4.
Some of the default schemas were purposely dropped from the database. The only default schemas available now are:
DBSNMP, DIP, OUTLN, PUBLIC, SCOTT, SYS, SYSMAN, SYSTEM, TSMSYS.
DIP, OUTLN, and TSMSYS were created again.
Could this be the cause of the problem?
Thanks in advance.

Hi,
Below is the DDL taken from a different database. Will this be enough? One more thing, please: what should the password be? Should it be DMSYS, since this will not be used by me but by the system?
CREATE USER "DMSYS" PROFILE "DEFAULT" IDENTIFIED BY "*******" PASSWORD EXPIRE DEFAULT TABLESPACE "SYSAUX" TEMPORARY TABLESPACE "TEMP" QUOTA 204800 K ON "SYSAUX" ACCOUNT LOCK;
GRANT ALTER SESSION TO "DMSYS";
GRANT ALTER SYSTEM TO "DMSYS";
GRANT CREATE JOB TO "DMSYS";
GRANT CREATE LIBRARY TO "DMSYS";
GRANT CREATE PROCEDURE TO "DMSYS";
GRANT CREATE PUBLIC SYNONYM TO "DMSYS";
GRANT CREATE SEQUENCE TO "DMSYS";
GRANT CREATE SESSION TO "DMSYS";
GRANT CREATE SYNONYM TO "DMSYS";
GRANT CREATE TABLE TO "DMSYS";
GRANT CREATE TRIGGER TO "DMSYS";
GRANT CREATE TYPE TO "DMSYS";
GRANT CREATE VIEW TO "DMSYS";
GRANT DROP PUBLIC SYNONYM TO "DMSYS";
GRANT QUERY REWRITE TO "DMSYS";
GRANT SELECT ON "SYS"."DBA_JOBS_RUNNING" TO "DMSYS";
GRANT SELECT ON "SYS"."DBA_REGISTRY" TO "DMSYS";
GRANT SELECT ON "SYS"."DBA_SYS_PRIVS" TO "DMSYS";
GRANT SELECT ON "SYS"."DBA_TAB_PRIVS" TO "DMSYS";
GRANT SELECT ON "SYS"."DBA_TEMP_FILES" TO "DMSYS";
GRANT EXECUTE ON "SYS"."DBMS_LOCK" TO "DMSYS";
GRANT EXECUTE ON "SYS"."DBMS_REGISTRY" TO "DMSYS";
GRANT EXECUTE ON "SYS"."DBMS_SYSTEM" TO "DMSYS";
GRANT EXECUTE ON "SYS"."DBMS_SYS_ERROR" TO "DMSYS";
GRANT DELETE ON "SYS"."EXPDEPACT$" TO "DMSYS";
GRANT INSERT ON "SYS"."EXPDEPACT$" TO "DMSYS";
GRANT SELECT ON "SYS"."EXPDEPACT$" TO "DMSYS";
GRANT UPDATE ON "SYS"."EXPDEPACT$" TO "DMSYS";
GRANT SELECT ON "SYS"."V_$PARAMETER" TO "DMSYS";
GRANT SELECT ON "SYS"."V_$SESSION" TO "DMSYS";
The other database has the DMSYS account and its status is EXPIRED & LOCKED, but I'm still able to take the dump using Data Pump. Why is that?
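Before recreating the account, it may be worth checking whether the Data Mining component is still registered, because Data Pump drives schema callouts such as DMSYS.DBMS_DM_MODEL_EXP from dictionary metadata. A minimal diagnostic sketch, assuming SYS access (the ODM component id and the EXPPKGACT$ column names are my assumptions; verify them on your release):

-- is the Data Mining component still registered although DMSYS is gone?
SELECT comp_id, comp_name, status, schema
FROM   dba_registry
WHERE  comp_id = 'ODM';

-- export callouts are recorded in SYS.EXPPKGACT$; a row still pointing at
-- DMSYS.DBMS_DM_MODEL_EXP would explain the PLS-00201 during expdp
SELECT package, schema
FROM   sys.exppkgact$
WHERE  schema = 'DMSYS';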

Similar Messages

  • Error while taking backup using HP-UX tape drive

    I got the following error while taking a backup.
    ERROR:   System Error, /opt/ignite/bin/make_medialif failed creating boot LIF 
    ERROR:   Failed to generate LIF on tape.
    ERROR:   System Error, /opt/ignite/bin/make_medialif failed creating boot LIF
    ERROR:   Failed to generate LIF on tape.
    make_medialif: ERROR: Cannot tar and gzip script files; possibly insufficient disk space in /var/tmp/make_medialif8916.
    =======  11/10/10 12:17:49 IST  make_tape_recovery completed unsuccessfully
    ERROR:     Ignite tape writing phase had errors
               Exit status: 1
    How can this problem be resolved? Kindly drop in your comments and solutions. 

    Hello Kulvinderbuttar,
    I understand that you are not able to make the recovery media.
    Are you using recovery media creation?
    Here is a link to walk you through the steps to create the recovery media.
    The flash drive that you are using needs to be empty and may need to be reformatted before you can make the recovery media.
    Let me know how everything goes.

  • Error while importing schemas using datapump

    Hi,
    I am trying to import a schema from QC to development. After the import I got the following errors, attached below:
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/WITH_GRANT_OPTION/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    ORA-39065: unexpected master process exception in RECEIVE
    ORA-39078: unable to dequeue message for agent MCP from queue "KUPC$C_2_20090421161917"
    Job "SYS"."uat.210409" stopped due to fatal error at 20:15:13
    ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 2 with process name "DW02" prematurely terminated
    ORA-31671: Worker process DW02 had an unhandled exception.
    ORA-39078: unable to dequeue message for agent KUPC$A_2_20090421161934 from queue "KUPC$C_2_20090421161917"
    ORA-06512: at "SYS.KUPW$WORKER", line 1397
    ORA-06512: at line 2
    ORA-39029: worker 3 with process name "DW03" prematurely terminated
    ORA-31671: Worker process DW03 had an unhandled exception.
    ORA-39078: unable to dequeue message for agent KUPC$A_2_20090421162030 from queue "KUPC$C_2_20090421161917"
    ORA-06512: at "SYS.KUPW$WORKER", line 1397
    ORA-06512: at line 2
    ORA-39029: worker 4 with process name "DW04" prematurely terminated
    ORA-31671: Worker process DW04 had an unhandled exception.
    ORA-39078: unable to dequeue message for agent KUPC$A_2_20090421162031 from queue "KUPC$C_2_20090421161917"
    ORA-06512: at "SYS.KUPW$WORKER", line 1397
    ORA-06512: at line 2
    Did my import complete successfully or not? Please help...

    When a Data Pump job runs, it creates a table called the master table. It has the same name as the job name. This is used to keep track of where all of the information in the dumpfile is located. It is also used when restarting a job. For some reason, this table got dropped. I'm not sure why, but in most cases, Data Pump jobs are restartable. I don't know why the original message was reported, but I was hoping the job would be restartable. You could always just rerun the job. Since the job that failed already created tables and indexes, if you restart the job, all of the objects that are dependent on those objects will not be created by default.
    Let's say you have table tab1 with an index ind1, and both the table and index are analyzed. Since tab1 is already created, the Data Pump job will mark all of the objects dependent on tab1 to skip. This includes the index, table_statistics, and index_statistics. To get around this, you could say
    table_exists_action=replace
    but this will replace all tables that are in the dumpfile. Your other options are:
    table_exists_action=
    truncate -- to truncate the data in the table and then just reload the data, but not the dependent objects
    append -- to append the data from the dumpfile to the existing table, but do not import the dependent objects
    skip -- skip the data and dependent objects from the dumpfile.
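    For example, a rerun that keeps the existing tables but reloads their rows could look like this (a minimal sketch; the connect string, schema, directory, and file names below are placeholders):
        impdp system/password@dev schemas=SCOTT directory=DPDUMP dumpfile=qc_schema.dmp logfile=imp_retry.log table_exists_action=truncate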
    Hope this helps.
    Dean

  • Error while Exporting dump using Expdp

    Hi Gurus,
    When I started to export the schema by giving the following command:
    staging/********@XXXXX schemas=abc directory=DUMPDIR dumpfile=tnfhc_291210.dmp logfile=exp_tnfhc_291210.log
    the schema was not exported successfully; it started to give the following error:
    ORA-31694: master table "STAGING"."SYS_EXPORT_SCHEMA_10" failed to load/unload
    ORA-19502: write error on file "/home/oracle/dumpdir/tnfhc_291210.dmp", blockno 263773 (blocksize=4096)
    ORA-27072: File I/O error
    Additional information: 4
    Additional information: 263773
    Additional information: 131072
    Job "STAGING"."SYS_EXPORT_SCH
    Kindly help me take the backup.
    Thanks & Regards
    Sami.

    ORA-19502: write error on file "/home/oracle/dumpdir/tnfhc_291210.dmp", blockno 263773 (blocksize=4096)
    ORA-27072: File I/O error
    Additional information: 4
    This is some sort of operating system error, like the volume being 100% full.
    what clues exist in OS log files?
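    A quick check, assuming a Unix-like host, is whether the filesystem holding the dump directory is full:
        df -h /home/oracle/dumpdir
    Compare the free space against the size estimate that expdp printed at the start of the job, and look in the OS message log for I/O errors.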

  • Error while taking GC heap Dump using Microsoft PerfView

    Hello,
    When I try to take a heap dump for an application process using Microsoft PerfView, an error is observed; the error log is given below. Can you please let me know the root cause of this issue?
    Steps followed
    From the PerfView UI, choose “Take Heap Snapshot,” located on the Memory menu.
    And choose the process to capture
    Click the “Dump GC Heap” button or simply double click on the process name.
    Error Log
    Completed: Dumping GC Heap to C:\Install\TM\PerfView\TestProcess.gcDump   (Elapsed Time: 1.156 sec)
    Error: HeapDump failed with exit code 1
    Directory TestProcess.gcdump does not exist
    Started: Dumping GC Heap to C:\Install\TM\PerfView\TestProcess.1.gcDump
    Collecting a GC Heap SnapShot for process 1704
    [Taking heap snapshot of process '1704' ID 1704 to TestProcess.1.gcdump.  This can take 10s of seconds to minutes.]
    During the dump the process will be frozen.   If the dump is aborted, the process being dumped will need to be killed.
    Starting dump at 8/08/2014 3:55:15 PM
    Starting Heap dump on Process 1704 running architecture AMD64.
    set _NT_SYMBOL_PATH=SRV*C:\Users\UserId\AppData\Local\Temp\3\symbols*http://msdl.microsoft.com/download/symbols
    Exec: "C:\Users\UserId\AppData\Roaming\PerfView\VER.2014-08-08.13.49.17.346\AMD64\HeapDump.exe"  /MaxDumpCountK=250 "1704" "TestProcess.1.gcdump"
    Looking for C:\Users\UserId\AppData\Roaming\PerfView\VER.2014-08-08.13.49.17.346\Microsoft.Diagnostics.FastSerialization.dll
    Dumping process 1704 with id 1704.
    Process Has DotNet: False Has JScript: False Has ClrDll: False
    HeapDump Error: Could not dump either a .NET or JavaScript Heap.  See log file for details
    Completed: Dumping GC Heap to C:\Install\TM\PerfView\TestProcess.1.gcDump   (Elapsed Time: 1.172 sec)
    Error: HeapDump failed with exit code 1
    Directory TestProcess.1.gcdump does not exist


  • Some tables missing while taking dump

    Hi,
    Version 11.2.1
    When I'm taking a dump using exp, some tables are missing, but they are actually there in the schema.
    What could be wrong?

    If a table has 0 rows and was created with the instance parameter 'deferred_segment_creation'=TRUE, it would have been a segment-less table. Export expects a segment to be present.
    See Oracle Support Doc#960216.1
    If you attempt a schema export, the table is silently ignored.
    If you attempt a table export, export raises the error EXP-0011 "<tablename> does not exist".
    You need to either
    1. Insert at least 1 row in the table
    OR
    2. Set DEFERRED_SEGMENT_CREATION to FALSE and create the table
    OR
    3. Use DataPump (expdp)
    OR
    see the workaround using ALTER TABLE <tablename> MOVE that I mention in the discussion at
    Import the table with 0 rows
    (Note : You need to ALTER INDEX <indexname> REBUILD if you MOVE a Table)
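    A minimal sketch of that workaround (the table and index names are placeholders; ALTER TABLE <tablename> ALLOCATE EXTENT should serve the same purpose of forcing a segment):
        ALTER TABLE mytab MOVE;        -- forces a segment to be allocated
        ALTER INDEX mytab_ix REBUILD;  -- the MOVE leaves indexes UNUSABLE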
    Hemant K Chitale
    Edited by: Hemant K Chitale on Feb 1, 2011 1:14 PM

  • ORA-00604 error while taking tkprof of a trace file

    Hi,
    Sorry, I am giving the full error but omitting the exact table names.
    I have an error while taking tkprof of a trace file.
    I gave the following command:
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela explain= /
    error is --
    Error in create table of EXPLAIN PLAN table : unix_session_user.prof$plan_table
    ORA-00604: error occurred at recursive SQL level 1
    ORA-20001: Step-6:DDL
    Event Security. You are not permitted to perform the requested structural
    changes to PROF (TABLE)
    Event triggered : CREATE
    ora_login_user
    (session_user) : unix_session_user(dummy)
    Search : select count(*) from
    tabl(dummy table name) where obj_name like '%\%%' escape '\' and obj_type =
    'TABLE' and obj_type = 'USER' and ( event_CREATE = 'Y' or status =
    'Override')
    ORA-06512: at line 162
    ORA-06510: PL/SQL: unhandled
    user-defined exception
    EXPLAIN PLAN option disabled.
    I searched for the error and on the Oracle forum I found a solution: http://forums.oracle.com/forums/thread.jspa?threadID=844287&tstart=0
    but after giving the table option:
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela table=old_schema.plan_table explain= /
    it again gave the same error.
    In both cases it gives the elapsed time results, library cache misses, etc., but before giving this it throws the ORA-00604 error as stated above.
    Then I corrected the tkprof statement again:
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela table=new_schema.plan_table explain= /
    (the schema name used here is a dummy schema name).
    My question is: did this error come because we did not have sufficient privileges in old_schema, privileges that we do have in new_schema?
    My database version is 9.2.0.4.0.
    Thanks in advance
    Edited by: bp on Feb 3, 2009 11:36 PM
    Edited by: bp on Feb 3, 2009 11:40 PM

    Please post the full error message here; there should be lines with ORA-00604 and then some other ORA errors as well.
    Are there any trace files generated during this error?
    As you can see from the error description, you will probably have to contact Oracle Support in order to solve this case:
    oerr ora 00604
    00604, 00000, "error occurred at recursive SQL level %s"
    // *Cause:  An error occurred while processing a recursive SQL statement
    // (a statement applying to internal dictionary tables).
    // *Action: If the situation described in the next error on the stack
    // can be corrected, do so; otherwise contact Oracle Support.

  • Getting error while taking MAX DB trans log backup.

    Hi,
    I am getting an error while taking a trans log backup of the MaxDB database for archived logs through Data Protector, as below:
    [Critical] From: OB2BAR_SAPDBBAR@ttcmaxdb "MAX" Time: 08/19/10 02:10:41
    Unable to back up archive logs: no autolog medium found in media list
    But I am able to take complete data and incremental backups through Data Protector.
    I have already enabled autolog for the MaxDB database and it is writing the log files directly to the HP-UX file system. Now I want to take a backup of these archived logs through Data Protector, i.e. through a trans log backup, so that after the trans log backup completes the archived logs on the file system are deleted and I don't have to delete them manually.
    Thanks,
    Subba

    Hi Lars,
    Thanks for the reply...
    Now I am able to take an archive log backup, but the problem is I can back up only one archive file, not the multiple archive log files generated by autolog on the filesystem, i.e. /sapdb/MAX/saparch.
    I have enabled autolog and it is putting the auto log files in the Unix directory /sapdb/MAX/saparch.
    I am then using Data Protector 6.11 with a trans log backup to back up the archived files in /sapdb/MAX/saparch. When I start the trans backup session through Data Protector it uses the archive stage command "archive_stage BACKDP-Archive LOGBackup NOVERIFY REMOVE". If /sapdb/MAX/saparch has only one archive file it will back up and remove the file successfully, but if /sapdb/MAX/saparch has multiple archive files it gives an error as below:
      Preparing backup.
                Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
                Setting environment variable 'BI_REQUEST' to value 'OLD'.
                Setting environment variable 'BI_BACKUP' to value 'ARCHIVE'.
            Constructed Backint for MaxDB call '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c'.
                Created temporary file '/var/opt/omni/tmp/MAX.bsi_out' as output for Backint for MaxDB.
                Created temporary file '/var/opt/omni/tmp/MAX.bsi_err' as error output for Backint for MaxDB.
                Writing '/sapdb/data/wrk/MAX/dbm.ebf' to the input file.
                Writing '/sapdb/data/wrk/MAX/dbm.knl' to the input file.
            Prepare passed successfully.
            Starting Backint for MaxDB.
            Starting Backint for MaxDB process '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c >>/var/opt/omni/tmp/MAX.bsi_out 2>>/var/opt/omni/tmp/MAX.bsi_err'.
                Process was started successfully.
            Backint for MaxDB has been started successfully.
            Waiting for the end of Backint for MaxDB.
                2010-09-06 03:15:21 The backup tool is running.
                2010-09-06 03:15:24 The backup tool process has finished work with return code 0.
            Ended the waiting.
            Checking output of Backint for MaxDB.
            Have found all BID's as expected.
        Have saved the Backup History files successfully.
        Cleaning up.
            Removing data transfer pipes.
                Removing data transfer pipe /var/opt/omni/tmp/MAX.BACKDP-Archive.1 ... Done.
            Removed data transfer pipes successfully.
            Copying output of Backint for MaxDB to this file.
    Begin of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
                #SAVED 1009067:1 /sapdb/data/wrk/MAX/dbm.ebf
                #SAVED 1009067:1 /sapdb/data/wrk/MAX/dbm.knl
    End of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
            Removed Backint for MaxDB's temporary output file '/var/opt/omni/tmp/MAX.bsi_out'.
            Copying error output of Backint for MaxDB to this file.
    Begin of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
    End of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
            Removed Backint for MaxDB's temporary error output file '/var/opt/omni/tmp/MAX.bsi_err'.
            Removed the Backint for MaxDB input file '/var/opt/omni/tmp/MAX.bsi_in'.
        Have finished clean up successfully.
    The backup of stage file '/export/sapdb/arch/MAX_LOG.040' was successful.
    2010-09-06 03:15:24
    Backing up stage file '/export/sapdb/arch/MAX_LOG.041'.
        Creating pipes for data transfer.
            Creating pipe '/var/opt/omni/tmp/MAX.BACKDP-Archive.1' ... Done.
        All data transfer pipes have been created.
        Preparing backup tool.
            Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
            Setting environment variable 'BI_REQUEST' to value 'OLD'.
            Setting environment variable 'BI_BACKUP' to value 'ARCHIVE'.
            Constructed Backint for MaxDB call '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c'.
            Created temporary file '/var/opt/omni/tmp/MAX.bsi_out' as output for Backint for MaxDB.
            Created temporary file '/var/opt/omni/tmp/MAX.bsi_err' as error output for Backint for MaxDB.
            Writing '/var/opt/omni/tmp/MAX.BACKDP-Archive.1 #PIPE' to the input file.
        Prepare passed successfully.
        Constructed pipe2file call 'pipe2file -d file2pipe -f /export/sapdb/arch/MAX_LOG.041 -p /var/opt/omni/tmp/MAX.BACKDP-Archive.1 -nowait'.
        Starting pipe2file for stage file '/export/sapdb/arch/MAX_LOG.041'.
            Starting pipe2file process 'pipe2file -d file2pipe -f /export/sapdb/arch/MAX_LOG.041 -p /var/opt/omni/tmp/MAX.BACKDP-Archive.1 -nowait >>/var/tmp/temp1283767880-0 2>>/var/tmp/temp1283767880-1'.
            Process was started successfully.
        Pipe2file has been started successfully.
        Starting Backint for MaxDB.
            Starting Backint for MaxDB process '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c >>/var/opt/omni/tmp/MAX.bsi_out 2>>/var/opt/omni/tmp/MAX.bsi_err'.
            Process was started successfully.
        Backint for MaxDB has been started successfully.
        Waiting for end of the backup operation.
            2010-09-06 03:15:25 The backup tool process has finished work with return code 2.
            2010-09-06 03:15:25 The backup tool is not running.
            2010-09-06 03:15:25 Pipe2file is running.
            2010-09-06 03:15:25 Pipe2file is running.
            2010-09-06 03:15:30 Pipe2file is running.
            2010-09-06 03:15:40 Pipe2file is running.
            2010-09-06 03:15:55 Pipe2file is running.
            2010-09-06 03:16:15 Pipe2file is running.
            Killing not reacting pipe2file process.
            Pipe2file killed successfully.
            2010-09-06 03:16:26 The pipe2file process has finished work with return code -1.
        The backup operation has ended.
        Filling reply buffer.
            Have encountered error -24920:
                The backup tool failed with 2 as sum of exit codes and pipe2file was killed.
            Constructed the following reply:
                ERR
                -24920,ERR_BACKUPOP: backup operation was unsuccessful
                The backup tool failed with 2 as sum of exit codes and pipe2file was killed.
        Reply buffer filled.
        Cleaning up.
            Removing data transfer pipes.
                Removing data transfer pipe /var/opt/omni/tmp/MAX.BACKDP-Archive.1 ... Done.
            Removed data transfer pipes successfully.
            Copying output of Backint for MaxDB to this file.
    Begin of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
    End of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
            Removed Backint for MaxDB's temporary output file '/var/opt/omni/tmp/MAX.bsi_out'.
            Copying error output of Backint for MaxDB to this file.
    Begin of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
    End of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
            Removed Backint for MaxDB's temporary error output file '/var/opt/omni/tmp/MAX.bsi_err'.
            Removed the Backint for MaxDB input file '/var/opt/omni/tmp/MAX.bsi_in'.
            Copying pipe2file output to this file.
    Begin of pipe2file output (/var/tmp/temp1283767880-0)----
    End of pipe2file output (/var/tmp/temp1283767880-0)----
            Removed pipe2file output '/var/tmp/temp1283767880-0'.
            Copying pipe2file error output to this file.
    Begin of pipe2file error output (/var/tmp/temp1283767880-1)----
    End of pipe2file error output (/var/tmp/temp1283767880-1)----
            Removed pipe2file error output '/var/tmp/temp1283767880-1'.
        Have finished clean up successfully.
    The backup of stage file '/export/sapdb/arch/MAX_LOG.041' was unsuccessful.
    2010-09-06 03:16:26
    Cleaning up.
        Have encountered error -24919:
            Can not remove file '/var/tmp/temp1283767880-0'.
            (System error 2; No such file or directory)
        Could not remove temporary output file of pipe2file ('/var/tmp/temp1283767880-0' ).
        Have encountered error -24919:
            Can not remove file '/var/tmp/temp1283767880-1'.
            (System error 2; No such file or directory)
        Could not remove temporary output file of pipe2file ('/var/tmp/temp1283767880-1' ).
    Have finished clean up successfully.
    Thanks,
    Subba

  • Error while taking archive log backup

    Dear all,
    We are getting the below mentioned error while taking the archive log backup
    ============================================================================
    BR0208I Volume with name RRPA02 required in device /dev/rmt0.1
    BR0210I Please mount BRARCHIVE volume, if you have not already done so
    BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.41
    BR0256I Enter 'c[ont]' to continue, 's[top]' to cancel BRARCHIVE:
    c
    BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
    BR0257I Your reply: 'c'
    BR0259I Program execution will be continued...
    BR0280I BRARCHIVE time stamp: 2010-05-27 16.43.46
    BR0226I Rewinding tape volume in device /dev/rmt0 ...
    BR0351I Restoring /oracle/RRP/sapreorg/.tape.hdr0
    BR0355I from /dev/rmt0.1 ...
    BR0278W Command output of 'LANG=C cd /oracle/RRP/sapreorg && LANG=C cpio -iuvB .tape.hdr0 < /dev/rmt0.1':
    Can't read input
    ===========================================================================
    We are able to take offline and online backups, but we are facing the above mentioned problem while taking archive log backups.
    We are on ECC 6 / Oracle / AIX.
    The kernel is the latest.
    The drive is working fine and there is no problem with the tapes, as we have tried using different tapes.
    Can this be a permissions issue?
    I ran saproot.sh but somehow it is setting the owner as sidadm and the group as sapsys on some of the br* files.
    I tried changing the permissions to oraSID:dba but the error is still the same.
    Any suggestions?

    This means you have not initialized the media but are trying to take backups.
    First check how many media you have entered in your tape count parameter for archive log backups (just go to initSID.sap and check).
    Then increase or reduce them according to your archive backup plan >> initialize all the tapes according to their names (the same as you have initialized in initSID.sap) >> stick a physical label on each medium according to its name >> schedule archive backups; see the sketch below.
    It will not ask you for initialization, as you have already initialized in the second step.
    Suggestion: use 7 media per week (one tape per day).
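    For example, the relevant initSID.sap entries and an initialization call might look like this (the volume names and retention are illustrative only; check the exact brarchive options on your release):
        # initSID.sap excerpt
        volume_archive = (RRPA01, RRPA02, RRPA03, RRPA04, RRPA05, RRPA06, RRPA07)
        expir_period = 7
        # initialize one archive volume
        brarchive -i force -v RRPA02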
    Regards,
    Nick Loy

  • Error while creating stock using tcode: MB1C

    The Error is: Check table T004F:entry G006 does not exist

    Hi,
    Please check the FSV (field status variant) for your company code in OBY6.
    Then go to transaction code OB14 --> enter the FSV --> check whether field status group G006 (Material account) is maintained there.
    If not, please maintain it.
    Thanks & Regards
    Anshu

  • Error while migrating users using CSSImportExportUtility

    I'm trying to export all user and group information from Hyperion Shared Services 9.2.1 by using CSSExport.bat.
    When there was only the native directory in HSS, I could export this information successfully.
    But when I enabled NTLM external user authentication, the following error occurred:
    Exception in thread "main" java.lang.UnsatisfiedLinkError: getOSVersion
    at com.hyperion.css.spi.impl.ntlm.NTLMProvider.getOSVersion(Native Method)
    at com.hyperion.css.spi.impl.ntlm.NTLMProvider.<clinit>(Unknown Source)
    at com.hyperion.css.spi.impl.ntlm.NTLMConnectionClient.getUsers(Unknown Source)
    at com.hyperion.css.CSSAPIExtnImpl.getUsers(Unknown Source)
    at com.hyperion.css.CSSAPIImpl.getUsers(Unknown Source)
    at com.hyperion.css.CSSAPIImpl.initialize(Unknown Source)
    at com.hyperion.css.exchange.NativeProviderManager.<init>(Unknown Source)
    at com.hyperion.css.exchange.ImportExportManager.cssExport(Unknown Source)
    at com.hyperion.css.exchange.CommandUtility.run(Unknown Source)
    at com.hyperion.css.exchange.CommandUtility.main(Unknown Source)
    I searched reference documents on the web, found this article: (http://download.oracle.com/docs/cd/E12825_01/epm.111/readme/mdm_111110_readme.html)
    Troubleshooting Tip: If HSS is configured for an NTLM provider, DRM services may not start due to error: "Exception Emdm_Exception with message 'Could not Initialize CSS. Error: 'getOSVersion'."
    You may receive the following error after clicking the "Enable CSS" button in DRM Console: “LoadLibrary("C:\Hyperion\Master Data Management\mdm_ntier_css_validator.dll") failed - The specified module could not be found.”
    To resolve both of these conditions, update the Windows System Path on the Data Relationship Management server with the applicable JRE and CSS pathing below.
    NOTE: Reboot the Data Relationship Management server machine after making any changes to the Windows Path.
    NOTE: Ensure that only one JRE version and one CSS version are referenced in the Windows Path.
    • For HSS 9.3.1:
    %HYPERION_HOME%\common\JRE\Sun\1.5.0\bin;%HYPERION_HOME%\common\JRE\Sun\1.5.0\bin\client;%HYPERION_HOME%\common\CSS\9.3.1\bin;
    • For HSS 9.3.0:
    %HYPERION_HOME%\common\JRE\Sun\1.5.0\bin;%HYPERION_HOME%\common\JRE\Sun\1.5.0\bin\client;%HYPERION_HOME%\common\CSS\9.3.0\bin;
    • For HSS 9.2.0.3:
    %HYPERION_HOME%\common\JDK\Sun\1.4.2\jre\bin;%HYPERION_HOME%\common\JDK\Sun\1.4.2\jre\bin\client;%HYPERION_HOME%\common\CSS\9.2.0.3\bin;
    • For HSS 9.2.0:
    %HYPERION_HOME%\common\JDK\Sun\1.4.2\jre\bin;%HYPERION_HOME%\common\JDK\Sun\1.4.2\jre\bin\client;%HYPERION_HOME%\common\CSS\9.2.0\bin;
    I found there is no directory "%HYPERION_HOME%\common\CSS\9.2.0\bin;" but there is "%HYPERION_HOME%\common\CSS\9.2.1\bin;".
    I configured PATH by setting it to the above, tried CSSExport again, and it still failed.
    Then I disabled NTLM in HSS and tried CSSExport again. It was successful.
    So I am convinced that the problem is caused by NTLM, the PATH environment variable, or some associated files.
    Does anybody know the solution?
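    For reference, the PATH entry I tried, following the documented 9.2.0 pattern with the 9.2.1 CSS directory substituted, was along these lines:
        %HYPERION_HOME%\common\JDK\Sun\1.4.2\jre\bin;%HYPERION_HOME%\common\JDK\Sun\1.4.2\jre\bin\client;%HYPERION_HOME%\common\CSS\9.2.1\bin;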

    I recommend you upgrade at least to 10.1.0.5. 10.1.0.2 comes with the very first version of csalter.plb, which does not have the current implementation. From and to which character sets are you trying to migrate?
    -- Sergiusz

  • Error while Loading Budgets Using AMG

    Dear All,
    I am trying to load budgets into Oracle Projects using the AMG APIs.
    I have developed PL/SQL which uses the below procedures of the pa_budget_pub package:
    init_budget;
    load_budget_line;
    execute_draft_budget;
    baseline_budget;
    Procedure execute_create_draft_budget fails with the following error messages:
    error message: Please enter a valid product code for this project.
    error message: You entered an invalid API parameter. Please enter a valid parameter and try again. Parameter Name: Resource List Parameter Value: 1446
    error message: Project: AC044. Please specify a valid resource list.
    error message: Project: AC044. Please specify a valid resource list.
    Could somebody explain to me how to resolve these error messages and load the budget lines?
    Thanks in Advance.
    Afsal Basha.

    What I'm saying is: verify the column name. Don't post if that is not possible.
    Example follows, with one table intentionally "hidden".
    SQL> create table "tEsT" ("MaxNumber" float, "MaxnumbeR" number);
    SQL> select table_name,column_name from user_tab_columns where table_name like 't%';
    TABLE_NAME                     COLUMN_NAME
    tEsT                           MaxNumber
    tEsT                           MaxnumbeR
    teST                           iD
    teST                           MaxNumber
    Hth,
    Fredrik

  • Error while inserting data using EXECUTE IMMEDIATE into a dynamic table in Oracle

    I get an error while inserting data using EXECUTE IMMEDIATE into a dynamic table created in Oracle 11g.
    First the dynamic nested table (op_sample) was created using EXECUTE IMMEDIATE...
    The object type is:
    CREATE OR REPLACE TYPE ASI.sub_mark AS OBJECT (
    mark1 number,
    mark2 number
    );
    t_sub_mark is a collection type of sub_mark:
    CREATE OR REPLACE TYPE ASI.t_sub_mark is table of sub_mark;
    create table sam1(id number,name varchar2(30));
    nested table is created below:
    begin
    EXECUTE IMMEDIATE ' create table '||op_sample||'
    (id number,name varchar2(30),subject_obj t_sub_mark) nested table subject_obj store as nest_tab return as value';
    end;
    Now data from the sam1 table and the object (subject_obj) are inserted into the dynamic table:
    declare
    subject_obj t_sub_mark;
    begin
    subject_obj:= t_sub_mark();
    EXECUTE IMMEDIATE 'insert into op_sample (select id,name,subject_obj from sam1) ';
    end;
    and got the below error:
    ORA-00904: "SUBJECT_OBJ": invalid identifier
    ORA-06512: at line 7
    Then when we tried to insert the data into the dynam_table with the subject_marks object as null, we received the following error:
    execute immediate 'insert into '||dynam_table ||'
    (SELECT

    887684 wrote:
    ORA-00904: "SUBJECT_OBJ": invalid identifier
    ORA-06512: at line 7
    The problem is that your variable subject_obj is not in scope inside the dynamic SQL you are building. The SQL engine does not know your PL/SQL variable, so it tries to find a column named SUBJECT_OBJ in your SAM1 table.
    If you need to use dynamic SQL for this, then you must bind the variable. Something like this:
    EXECUTE IMMEDIATE 'insert into op_sample (select id,name,:bind_subject_obj from sam1) ' USING subject_obj;
    Alternatively you might figure out how to use static SQL rather than dynamic SQL (if possible for your project). In static SQL the PL/SQL engine binds the variables for you automatically.
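    A self-contained sketch of that bind approach (the sample marks and the explicit column list are mine, added for clarity):
        declare
          subject_obj t_sub_mark := t_sub_mark(sub_mark(80, 90));
        begin
          -- :b is bound to the PL/SQL collection; the SQL text never names the variable
          execute immediate
            'insert into op_sample (id, name, subject_obj)
             select id, name, :b from sam1'
            using subject_obj;
        end;
        /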

  • Getting an error while creating a subsite using a custom template in SharePoint 2013

    Hi,
    I am getting the following error while creating a subsite using a custom template in SharePoint 2013, even though the publishing features are enabled.
    Please advise.
    Thanks in advance.

    You need to enable the PerformancePoint Services site collection feature (PPSMonDatasourceCtype) on the target site collection. Go to Site Actions > Site Settings > Site Collection Features, enable it, and try again; a PowerShell sketch follows after the link.
    Similar case: http://imughal.wordpress.com/2012/09/20/dependency-feature-ppsmondatasourcectype-id-05891451-f0c4-4d4e-81b1-0dabd840bad4-for-feature-bicenterdataconnections-id-3d8210e9-1e89-4f12-98ef-643995339ed4-is-not-activated-at-this-scop/
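    If you prefer PowerShell, activating the feature by the GUID shown in the link above might look like this (the site URL is a placeholder):
        Enable-SPFeature -Identity "05891451-f0c4-4d4e-81b1-0dabd840bad4" -Url http://yoursite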
    Please remember to mark your question as answered & vote helpful if this solves your problem. Thanks -WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • INVALID_QUEUE_NAME :  Error while scheduling message using qRFC

    Hello SDNers,
    We are currently performing our PI 7.1 upgrade, and one of our scenarios uses a SOAP sender of type EOIO. We executed this scenario in our PI 7.1 XID environment and it worked fine without any errors, but in our PI 7.1 QA environment it gives the following errors:
      <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    - <!--  Message Split According to Receiver List
      -->
    - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>XIServer</SAP:Category>
      <SAP:Code area="INTERNAL">SCHEDULE_ERROR</SAP:Code>
      <SAP:P1>XBQOC___*</SAP:P1>
      <SAP:P2>INVALID_QUEUE_NAME</SAP:P2>
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:Stack>Error while scheduling message using qRFC (queue name = XBQOC___*, exception = INVALID_QUEUE_NAME)</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>
    Can you please explain what could be the issue we are facing?
    Thanks.
    Kiran
    Edited by: Kiran Sakhardande on Oct 22, 2008 4:08 PM

    Hi Kiran,
    Have you gone through the following link?
    INVALID_QUEUE_NAME
    Regards
    Sridhar Goli
