Data pump message in alert file

Hi Experts,
expdp system/testl@test full=Y directory=EXP_DIR2 dumpfile=testdatabase_02262009.dmp logfile=testexp_02262009.log
I use the above syntax to do a full export; the job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at 13:49:00.
However, I saw the messages below in the alert log:
Thu Feb 26 11:48:12 2009
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=30, OS id=4184
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_01', 'SYSTEM', 'KUPC$C_1_20090226114812', 'KUPC$S_1_20090226114812', 0);
Thu Feb 26 11:56:12 2009
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=31, OS id=3596
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_01', 'SYSTEM', 'KUPC$C_1_20090226115612', 'KUPC$S_1_20090226115612', 0);
Thu Feb 26 13:47:38 2009
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=19, OS id=3712
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_01', 'SYSTEM', 'KUPC$C_1_20090226134739', 'KUPC$S_1_20090226134739', 0);
kupprdp: worker process DW01 started with worker id=1, pid=22, OS id=3984
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_FULL_01', 'SYSTEM');
Is there some issue with my full export?
Thanks for explaining,
JIM

user589812 wrote:
Yes, that paper explains the system monitoring info in the alert file, but I did not find any mention of the message "The value (30) of MAXTRANS parameter ignored".
It seems that the system skips something....
JIm

SKU's Metalink doc has the answer:
DataPump Export/Import Generate Messages "The Value (30) Of Maxtrans Parameter Ignored" in Alert Log
Doc ID: 455021.1
It's actually a harmless bug: MAXTRANS has been obsolete since 10gR1, but the Data Pump master table is still created with it, so the value is ignored and the warning is written to the alert log.
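For illustration, a minimal sketch (hypothetical table name) showing that the clause is still accepted syntax even though it no longer has any effect since 10g:
CREATE TABLE maxtrans_demo (id NUMBER)
INITRANS 2 MAXTRANS 30;  -- MAXTRANS is parsed for compatibility but ignored
Because the internal DDL for the Data Pump master table still includes MAXTRANS 30, every Data Pump job writes this warning; it can safely be ignored.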

Similar Messages

  • Weird : Database Hangs without any message in Alert File

    Hi
    We have a production database, 10.2.0; in the last 3 days the database has had to be restarted because it hangs without any information in the alert file.
    No trace file is generated at that time.
    Only SYS user Can LOGIN to the database.
    I had no other option but to restart
    Database is in NO ARCHIVE MODE
    There is Enough Space in Temporary Tablespace and all other tablespace.
    NO patches were applied to the database
    NO parameters have been modified
    When I check the CPU usage, it is normal.
    There is Enough Disk Space.
    This is production and I appreciate any help.
    Thank You

    user6421665 wrote:
    Only SYS user Can LOGIN to the database.
    What is the OS ? Windows --> Did you check virus scan or some other process running during that time ?
    Unix / Linux --> Is sysadmin running a defrag on the disks or some other OS Utility ?
    Do you login as SYS from the server ? Can you login as any other user from the server ?
    Hang == Slow ?
    >
    This is production and I appreciate any help.
    A production database in NOARCHIVELOG mode... is interesting by itself, but that is not the question here.
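    If it hangs again, one way to gather evidence before restarting is a hang analysis from a SYSDBA session on the server (a sketch; the levels shown are common choices, and the trace files land in user_dump_dest / background_dump_dest):
    sqlplus / as sysdba
    SQL> oradebug setmypid
    SQL> oradebug unlimit
    SQL> oradebug hanganalyze 3
    SQL> oradebug systemstate 266
    Support can then read the traces to see what all the sessions are waiting on.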

  • Dynamic Destination based on data in message using the File adapter

    I am unsure of where to start searching for a clue as to how to implement this. Any comments would be much appreciated.
    Scenario
    IDOC (DESADV) -> XI -> File adapter -> File System (destination variable, based on info in the DESADV, i.e. the site)
    Essentially I wish to use XI as a router and transformer for certain message types depending on the data within the message itself.
    Is anyone aware of any documentation around this kind of scenario ?
    Additional Notes
    There is the possibility of having up to 1000 different destinations for the same message, but the message will ONLY be sent to the site addressed within.
    thanks in advance ..

    Hi Richard,
    The scenario requirement is not yet very clear.
    But if you want to route the IDoc to different receiver systems depending on the payload value, you may configure it in ID with different business services and then use conditional Receiver Determination with XPath.
    That is one way; if instead you want to use the same receiver service and only route to different target folders on the file system, then you can surely use Variable Substitution for the Target Directory in the NFS File Adapter. You can build the target path from the payload value in the variable substitution table under the Advanced tab of the File Adapter. Remember to set the "Create Target Directory" indicator under the Target tab. See the sketch below.
    Hope one of these might be a solution for you. Let me know if you need more detailed information.
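    A hypothetical variable substitution entry (the element names and payload path are assumptions; adjust them to where the site id actually sits in your DESADV structure):
    Variable Name: site
    Reference: payload:DESADV01,1,IDOC,1,E1EDK07,1,VSTEL,1
    Target Directory: /interface/desadv/%site%
    At runtime the adapter resolves %site% from each message's payload, so one channel can write to many site-specific folders.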
    Regards,
    Suddha

  • Message in alert file

    Please tell me what's going on:
    Thu Sep 14 18:45:58 2006
    Incremental checkpoint up to RBA [0x23.d19.0], current log tail at RBA [0x23.e3e.0]
    Thanks

    Please check it and tell me the problem. (The confusing lines were highlighted in bold in the original post.)
    Fri Sep 15 10:48:57 2006
    Setting recovery target incarnation to 2
    Fri Sep 15 10:48:57 2006
    Successful mount of redo thread 1, with mount id 1128989844
    Fri Sep 15 10:48:57 2006
    Database mounted in Exclusive Mode
    Completed: ALTER DATABASE MOUNT
    Fri Sep 15 10:48:57 2006
    ALTER DATABASE OPEN
    Fri Sep 15 10:48:58 2006
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=14, OS id=2232
    Fri Sep 15 10:48:58 2006
    ARC0: Archival started
    Fri Sep 15 10:48:58 2006
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    Thread 1 opened at log sequence 35
    Current log# 3 seq# 35 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03A.LOG
    Current log# 3 seq# 35 mem# 1: D:\REDO03B.LOG
    Successful open of redo thread 1
    Fri Sep 15 10:48:59 2006
    SMON: enabling cache recovery
    Fri Sep 15 10:48:59 2006
    ARC0: STARTING ARCH PROCESSES
    ARC2: Archival started
    Fri Sep 15 10:48:59 2006
    ARC0: STARTING ARCH PROCESSES COMPLETE
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1 started with pid=13, OS id=2120
    Fri Sep 15 10:48:59 2006
    ARC1: Becoming the heartbeat ARCH
    Fri Sep 15 10:49:00 2006
    Successfully onlined Undo Tablespace 1.
    Fri Sep 15 10:49:00 2006
    SMON: enabling tx recovery
    ARC2 started with pid=16, OS id=2284
    Fri Sep 15 10:49:00 2006
    Database Characterset is WE8MSWIN1252
    replication_dependency_tracking turned off (no async multimaster replication found)
    Fri Sep 15 10:49:01 2006
    Incremental checkpoint up to RBA [0x23.206e.0], current log tail at RBA [0x23.209a.0]
    Fri Sep 15 10:49:02 2006
    Starting background process QMNC
    QMNC started with pid=17, OS id=2192
    Fri Sep 15 10:49:06 2006
    Completed: ALTER DATABASE OPEN

  • Data pump and SGA, system memory in window 2003

    Hi Experts,
    I have a question for oracle 10g data pump. Based on oracle document,
    all data pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.
    That means Data Pump needs to use the Oracle server SGA or other system memory. Is that true?
    Or must Data Pump be supported by Oracle server memory?
    We use Oracle 10G R4 on 32-bit Windows 2003 and have a memory issue, so we are careful about this point.
    At present we use exp to do the export job. Our DB size is 280 GB; the SGA is 2 GB on Windows.
    For testing, I could see the Data Pump messages in the alert file. I did not see an export job break the DB or the data replication job.
    Does any experts have this point experience?
    Thanks,
    JIM

    Hi Jim,
    user589812 wrote:
    I have a question for oracle 10g data pump. Based on oracle document, all data pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.
    Yes, they are, but you can always point the directory object at a shared location on another server.
    That means Data Pump needs to use the Oracle server SGA or other system memory. Is that true? Or must Data Pump be supported by Oracle server memory?
    Regardless, the SGA is used for a conventional export. You can reduce the overhead to some extent with a direct-path export (direct=y for exp), but the server's resources will still be in use.
    We use Oracle 10G R4 on 32-bit Windows 2003 and have a memory issue.
    If you have Windows Enterprise Edition, why don't you enable PAE to use more memory, provided the server has more than 3 GB of RAM? See the sketch below.
    At present we use exp; our DB size is 280 GB and the SGA is 2 GB on Windows.
    With respect to the size of the database, your SGA is too small.
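    A minimal sketch of enabling PAE (this assumes Windows Server 2003 Enterprise and a standard boot.ini; the ARC path is illustrative):
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /PAE
    Note that for a 32-bit Oracle instance to use the extra memory for its buffer cache you would also need AWE (USE_INDIRECT_DATA_BUFFERS=TRUE); PAE alone only lets the OS address more than 4 GB.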
    Hope it helps.
    Regards
    Z.K.

  • Error in alert file

    Please tell me why I am receiving this message in the alert file: "Oracle Data Guard is not available in this edition of oracle"
    Thanks

    Hi
    Data Guard is available from Oracle 9i onwards, but only in Enterprise Edition, which is what the message is telling you. Check the version and edition of your database with the following command:
    SQL> SELECT * FROM V$VERSION;
    Thnx & Regards
    Sunil

  • How-to list the contents of a Data Pump Export file?

    How can I list the contents of a 10gR2 Data Pump Export file? I'm looking at the Syntax Diagram for Data Pump Import and can't see a list-only option.
    Regards,
    Al Malin

    use the parameter SQLFILE in the impdp which writes all the sql ddl's to the specified file.
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm
    SQLFILE
    Default: none
    Purpose
    Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
    Syntax and Description
    SQLFILE=[directory_object:]file_name
    The file_name specifies where the import job will write the DDL that would be executed during the job. The SQL is not actually executed, and the target system remains unchanged. The file is written to the directory object specified in the DIRECTORY parameter, unless another directory_object is explicitly specified here. Any existing file that has a name matching the one specified with this parameter is overwritten.
    Note that passwords are not included in the SQL file. For example, if a CONNECT statement is part of the DDL that was executed, it will be replaced by a comment with only the schema name shown. In the following example, the dashes indicate that a comment follows, and the hr schema name is shown, but not the password.
    -- CONNECT hr
    Therefore, before you can execute the SQL file, you must edit it by removing the dashes indicating a comment and adding the password for the hr schema (in this case, the password is also hr), as follows:
    CONNECT hr/hr
    For Streams and other Oracle database options, anonymous PL/SQL blocks may appear within the SQLFILE output. They should not be executed directly.
    Example
    The following is an example of using the SQLFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
    impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql
    A SQL file named expfull.sql is written to dpump_dir2.
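    For the original question (listing a dump file's contents without importing anything), a sketch along the same lines; the directory object, dump file, and output file names here are placeholders:
    impdp system/password DIRECTORY=dpump_dir1 DUMPFILE=mydump.dmp SQLFILE=contents.sql
    Browsing contents.sql then shows every object the dump file would create, and the target database is left unchanged.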

  • Data Pump with parallel data ending up in 1st file!

    Oracle 10g 10.2.0.3 EE
    Ran the following command on a 16 core HPUX PA-RISC machine:
    expdp normaluser/password@RDSPOC FULL=y directory=DMPDIR parallel=12 dumpfile=exp_RDSPOC_2nd_%U.dmp logfile=exp_RDSPOC_2nd.log
    Database size, approx 900Gig of data
    All things looked good at first. All cores close to 100% utilized, Disk also at 100% utilized
    1h23m later I had 11 files all about the same size +- 40Gig
    The first dump file continued to grow. After another 60 hours the first file is approx 500 Gig and growing (status says 85% complete)
    One core is running max, and disk utilization is about 15%
    Note, I am not sys but a normal user with full export privilege (If that could make a difference)
    How do I get it to keep the machine running all cores and disks as hard as possible?
    Thanks

    Hi,
    Metadata is never unloaded in parallel, but is sometimes loaded in parallel.
    See Parallel Capabilities of Oracle Data Pump (Doc ID 365459.1)
    Also check the status of expdp; if there is, for example, one very big table, a single worker can still be unloading its data long after the others have finished, which matches what you describe.
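    A sketch of how to check (this assumes the default job name for a full export under your schema; adjust ATTACH to the job name shown in your log file):
    expdp normaluser/password@RDSPOC attach=SYS_EXPORT_FULL_01
    Export> status
    STATUS lists each worker, the object it is working on, and its progress, so you can confirm whether one worker is stuck on a single large table.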
    HTH,
    Peter

  • How to consolidate data files using data pump when migrating 10g to 11g?

    We have one 10.2.0.4 database to be migrated to a new box running 11.2.0.1. The 10g database has too many data files scattered across too many file systems. I'd like to consolidate the data files into one or two large chunks in one file system. Both OSs are RHEL 5. How should I do that using Data Pump Export/Import? I know there is a "remap" option, but it is only one-to-one mapping. How can I map multiple old data files into one new data file?

    hi
    Data Pump is terribly slow; make sure you have as much memory as possible allocated to Oracle, but the bottleneck can be I/O throughput.
    Use PARALLEL option, set also these ones:
    * DISK_ASYNCH_IO=TRUE
    * DB_BLOCK_CHECKING=FALSE
    * DB_BLOCK_CHECKSUM=FALSE
    set high enough to allow for maximum parallelism:
    * PROCESSES
    * SESSIONS
    * PARALLEL_MAX_SERVERS
    more:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_perf.htm
    that's it, patience welcome ;-)
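    A minimal sketch of those settings (the numeric values are illustrative assumptions; size them for your own machine, and note that PROCESSES and DISK_ASYNCH_IO are static parameters, so an instance restart is needed):
    ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE=SPFILE;
    ALTER SYSTEM SET db_block_checking = FALSE SCOPE=SPFILE;
    ALTER SYSTEM SET db_block_checksum = FALSE SCOPE=SPFILE;
    ALTER SYSTEM SET processes = 500 SCOPE=SPFILE;
    ALTER SYSTEM SET sessions = 555 SCOPE=SPFILE;
    ALTER SYSTEM SET parallel_max_servers = 16 SCOPE=SPFILE;
    Remember to set the checking/checksum parameters back to their original values after the import.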
    P.S.
    For maximum throughput, do not set PARALLEL to much more than twice the number of CPUs (two workers for each CPU).
    P.S.2
    breaking news ;-)
    I am playing now with storage performance, and I turned the disk cache option (also called write-back cache) ON (it applies at least to RAID 0 and RAID 5 volumes), and it gave me a 1.5x to 2x speed-up!
    Some say there's a risk of losing more data when an outage happens, but there's always such a risk; you can simply lose less. Anyway, if you can afford it (and with an import it's OK, as the system is not in production at that moment), I recommend trying it. It takes 15 minutes, but you can gain 2.5 hours out of 10 hours of normal importing.

  • I have a scenario, ECC-PI-Message broker: ECC sends an IDoc to PI, PI executes the mapping and sends the data to the message broker (almost one-to-one mapping), IDOC(AAE)-PI-JMS. Now my requirement is: from PI, after mapping, we need to save a file in a SAP folder

    I have a scenario, ECC-PI-Message broker: ECC sends an IDoc to PI, PI executes the mapping and sends the data to the message broker through a JMS channel (almost one-to-one mapping), IDOC(AAE)-PI-JMS. Now my requirement is: from PI, after mapping, we need to create a file with the same data that is sent to the message broker and put the file in a SAP folder, without touching the mapping. Is it possible? Please advise with the steps. We are using an ICO for this scenario. A quick response is appreciated.

    Hi Pratik,
         http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42?quicklink=index&overridelayout=true
    This link might help.
    regards
    Anupam

  • File name substitution with Data pump

    Hi,
    I'm experimenting with Oracle data pump export, 10.2 on Windows 2003 Server.
    On my current export scripts, I am able to create the dump file name dynamically.
    This name includes the database name, date, and time such as the
    following : exp_testdb_01192005_1105.dmp.
    When I try to do the same thing with Data Pump, it doesn't work. Has anyone
    had success with this? Thanks.
    ed lewis

    Hi Ed
    This is an example for your issue:
    [oracle@dbservertest backups]$ expdp gsmtest/gsm directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
    Export: Release 10.2.0.1.0 - Production on Thursday, 19 January, 2006 12:23:55
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Starting "GSMTEST"."SYS_EXPORT_TABLE_01": gsmtest/******** directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 64 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "GSMTEST"."BAN_BANCO" 7.718 KB 9 rows
    Master table "GSMTEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for GSMTEST.SYS_EXPORT_TABLE_01 is:
    /megadata/clona/exp_testdb_01192005_1105.dmp
    Job "GSMTEST"."SYS_EXPORT_TABLE_01" successfully completed at 12:24:18
    This works OK, but the file name is fixed at the time you type the command; to build it dynamically, generate the name in the calling script, as sketched below.
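    A minimal sketch for Windows batch (the %date%/%time% substring offsets depend on the regional settings, so treat them as assumptions and check echo %date% first; also note %time% can have a leading space before 10:00):
    set FNAME=exp_testdb_%date:~4,2%%date:~7,2%%date:~10,4%_%time:~0,2%%time:~3,2%
    expdp gsmtest/gsm directory=dpdir dumpfile=%FNAME%.dmp logfile=%FNAME%.log tables=ban_banco
    This reproduces the exp_testdb_01192005_1105.dmp naming convention from your original exp scripts.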
    Regards,
    Wilson

  • Message in standby alert file

    I am receiving this error message in the standby alert file. Does anyone have an idea about this message?
    ksvcreate: Process(m000) creation failed
    ksvcreate: Process(m000) creation failed
    ksvcreate: Process(m000) creation failed
    Thanks

    According to metalink Note:352388.1:
    >>
    Cause:
    Additional M000 slave processes cannot start
    Solution:
    Increase value of the repository database 'processes' parameter. Ensure
    it is set to at least 150.
    >>
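    A sketch of applying that (the value 150 comes from the note; PROCESSES is a static parameter, so a restart is required):
    SQL> SHOW PARAMETER processes
    SQL> ALTER SYSTEM SET processes = 150 SCOPE=SPFILE;
    SQL> -- restart the instance for the new value to take effect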

  • Compressing data pump export files

    I am running 10gR2 on Sun and AIX. Is it possible to compress the output from Data Pump like you could by piping exp? I have a DW that is about 5 TB, and we want to be able to use Data Pump to dump the data, but we want the output files compressed.
    Thanks

    Just for your information, check out the BCV backup solution.
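    For reference, the named-pipe trick the question refers to only works with the original exp, because expdp writes its dump files server-side and cannot write to a pipe; in 10gR2 the closest built-in option is COMPRESSION=METADATA_ONLY, so compressing the data still means compressing the files afterwards. The classic exp sketch (paths are placeholders):
    mknod /tmp/exp_pipe p
    gzip < /tmp/exp_pipe > /backup/full_exp.dmp.gz &
    exp system/password full=y file=/tmp/exp_pipe
    rm /tmp/exp_pipe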

  • I can't see any sent messages; the alert message is "Unable to open the summary file for sent". Please advise - Karen

    Hi .. I am not able to see any messages in my sent folder. The following alert message is showing on my screen
    Alert
    Unable to open the summary file for sent. Perhaps there was an error on disk, or the full path is too long
    Please advise
    Many thanks
    Karen

    try to rebuild the index file.
    Right click on Sent folder and select 'Properties'
    click on 'Repair folder' button
    click on OK
    close and reopen Thunderbird.
    It may also be possible that there are a lot of sent messages in the file.
    If rebuilding the index gets you access:
    # Delete any emails you do not want.
    # Archive all other emails or at least anything older than a month.
    # right click on folder and select 'Compact'
    see how to set up Archiving options:
    * https://support.mozilla.org/en-US/kb/archived-messages
    Is this a Pop or IMAP mail account folder?

  • How are RMAN backup messages printed in the alert file?

    I am surprised that RMAN messages are getting printed in the alert file.
    I am using 10.2.0.4 on AIX with Veritas NetBackup; from RMAN:
    ORA-19624: operation failed, retry possible
    ORA-19506: failed to create sequential file, name="c-1650503797-20091118-0c", parms=""
    ORA-27028: skgfqcre: sbtbackup returned error
    ORA-19511: Error received from media manager layer, error text:

    Raman wrote:
    Well, I agree. We have logs that capture the media backups, so I wanted to cut only these media-backup failures out of the alert file; what is happening here is that our monitoring reports them as alerts even though we already know about them from the media backup logs.
    >
    In a nutshell, I am just exploring options to turn off the media-backup failures: any RMAN parameters, or anything to do with Veritas parameters?
    Anyway, thanks to all for your valuable opinions.
    Thanks,
    Raman.

    So why not modify your monitoring of the alert log to exclude that particular error? Personally, I'd allow both systems to continue to report the error from their perspective.
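    One hedged way to do that exclusion in a monitoring script (the alert log name and the error list are assumptions; adjust them to your environment):
    grep "ORA-" alert_PROD.log | grep -v -E "ORA-19624|ORA-19506|ORA-27028|ORA-19511"
    These errors are raised by the media management layer rather than by an RMAN setting you can switch off, so filtering at the monitoring end is the practical option.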
