Advantages of Data Pump export and import over original export and import

Hi,
Please let me know the advantages of Data Pump export (expdp) and import (impdp) over original export (exp) and import (imp).

Hello,
There are many advantages to using Data Pump.
For instance, with the INCLUDE / EXCLUDE parameters you can filter exactly which objects and/or object types you intend to export or import, which is not easy with the original export/import (except for tables, indexes, constraints, ...).
You can import directly over a NETWORK_LINK without using a dump file.
You have many interesting features such as COMPRESSION, FLASHBACK_SCN / FLASHBACK_TIME, ...
You can use the PL/SQL API (DBMS_DATAPUMP) to perform your export/import rather than using the command-line interface.
Moreover, Data Pump is much more optimized than the original export/import and uses direct path or external tables, and the REMAP_% parameters let you rename datafiles, schemas, tablespaces, ...
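As a rough illustration (the user, directory, dump file, and database link names below are placeholders, not from the original question), a filtered export and a remapped network import might look like this:
expdp scott/tiger directory=dp_dir dumpfile=scott_%U.dmp logfile=scott_exp.log schemas=scott exclude=statistics compression=metadata_only
impdp system/manager network_link=remote_db schemas=scott remap_schema=scott:scott_copy remap_tablespace=users:users2 directory=dp_dir logfile=scott_netimp.log
The second command pulls the schema straight across the database link, so no dump file is written at all.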
There is much more that could be said about Data Pump. You'll find an overview of this very good tool at the following links:
http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
http://www.oracle-base.com/articles/11g/DataPumpEnhancements_11gR1.php
Hope this helps.
Best regards,
Jean-Valentin

Similar Messages

  • Which background processes are involved in Data Pump export/import?

    Hi guys,
    Could anyone please tell me which background processes are involved in Data Pump export and import activity? Any information please.
    /mR

    Data Pump export and import is done by foreground server processes (a master and workers), not background processes.
    http://www.acs.ilstu.edu/docs/Oracle/server.101/b10825/dp_overview.htm#sthref22
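    If you want to see those server processes for a running job, a query along these lines (standard dictionary views, not taken from the original post) lists the master (DMnn) and worker (DWnn) sessions:
    SELECT s.sid, s.serial#, s.program, d.owner_name, d.job_name
    FROM v$session s
    JOIN dba_datapump_sessions d ON s.saddr = d.saddr;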

  • SQL*Net and Datapump export disconnects with ORA-03113 error

    Hi,
    I have some trouble with an Oracle 11.2.0.1.0 server installation on a Windows 2008 R2 server.
    If I connect with SQL*Plus on the database server to an instance, it randomly disconnects me, never later than after 30-60 seconds, even though I am actively running simple select queries.
    I can connect again without any error and soon it disconnects me again.
    If I run an expdp Data Pump export, it starts but displays error UDE-03113 and then ORA-03113. The Data Pump export continues in the background and finishes successfully though.
    When I look at the alert log for the database instance I see this error every time it happens:
    Fatal NI connect error 12547, connecting to:
    (LOCAL=NO)
    VERSION INFORMATION:
         TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
         Oracle Bequeath NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production
         Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production
    Time: 22-JUL-2012 21:10:40
    Tracing not turned on.
    Tns error struct:
    ns main err code: 12547
    TNS-12547: TNS:lost contact
    ns secondary err code: 12560
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0
    opiodr aborting process unknown ospid (19800) as a result of ORA-609
    Mon Jul 23 02:00:00 2012
    I don't believe it is a network problem, as each command I am running is on the server console. There are no other errors in the OS event log, so I need some help on where to start...
    Thanks in advance!
    Best Regards
    Martin Gabrielsson

    Thanks, no Windows firewall is turned on. I turned on SQL*Net tracing and this is the last part of the trc file after doing a simple select query on a table with 3 rows. After the select query has run I am disconnected... Any ideas? Help is really appreciated!
    +2012-07-30 18:15:28.658650 : nsbasic_brc:packet dump+
    +2012-07-30 18:15:28.658660 : nsbasic_brc:01 74 00 00 06 00 00 00 |.t......|+
    +2012-07-30 18:15:28.658668 : nsbasic_brc:00 00 06 01 1A 00 26 00 |......&.|+
    +2012-07-30 18:15:28.658676 : nsbasic_brc:00 00 00 00 0F 00 00 00 |........|+
    +2012-07-30 18:15:28.658684 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658693 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658701 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658708 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658717 : nsbasic_brc:00 00 00 00 07 96 2C 00 |......,.|+
    +2012-07-30 18:15:28.658725 : nsbasic_brc:25 06 4D 52 20 20 20 20 |%.MR....|+
    +2012-07-30 18:15:28.658734 : nsbasic_brc:03 C2 1A 5F 04 C3 02 10 |..._....|+
    +2012-07-30 18:15:28.658742 : nsbasic_brc:43 06 43 4C 4F 53 45 44 |C.CLOSED|+
    +2012-07-30 18:15:28.658750 : nsbasic_brc:FF FF FF FF FF FF FF FF |........|+
    +2012-07-30 18:15:28.658758 : nsbasic_brc:FF FF 06 4C 50 49 4D 20 |...LPIM.|+
    +2012-07-30 18:15:28.658767 : nsbasic_brc:20 FF FF FF FF FF FF FF |........|+
    +2012-07-30 18:15:28.658775 : nsbasic_brc:FF FF FF FF FF 0A 4D 41 |......MA|+
    +2012-07-30 18:15:28.658783 : nsbasic_brc:52 47 41 42 20 20 20 20 |RGAB....|+
    +2012-07-30 18:15:28.658791 : nsbasic_brc:FF 3C 77 68 65 65 6C 20 |.<wheel.|+
    +2012-07-30 18:15:28.658799 : nsbasic_brc:73 68 6F 70 20 20 20 20 |shop....|+
    +2012-07-30 18:15:28.658807 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|+
    +2012-07-30 18:15:28.658815 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|+
    +2012-07-30 18:15:28.658824 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|+
    +2012-07-30 18:15:28.658832 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|+
    +2012-07-30 18:15:28.658840 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|+
    +2012-07-30 18:15:28.658848 : nsbasic_brc:20 20 20 20 20 20 02 C1 |........|+
    +2012-07-30 18:15:28.658856 : nsbasic_brc:05 07 78 66 05 03 10 09 |..xf....|+
    +2012-07-30 18:15:28.658864 : nsbasic_brc:38 02 C1 02 FF 03 C2 08 |8.......|+
    +2012-07-30 18:15:28.658872 : nsbasic_brc:42 FF 01 58 04 01 00 00 |B..X....|+
    +2012-07-30 18:15:28.658880 : nsbasic_brc:00 14 00 01 02 00 00 00 |........|+
    +2012-07-30 18:15:28.658888 : nsbasic_brc:7B 05 00 00 00 00 02 00 |{.......|+
    +2012-07-30 18:15:28.658897 : nsbasic_brc:00 00 03 00 20 00 00 00 |........|+
    +2012-07-30 18:15:28.658905 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658912 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658921 : nsbasic_brc:00 00 00 00 00 16 00 00 |........|+
    +2012-07-30 18:15:28.658929 : nsbasic_brc:01 00 00 00 36 01 00 00 |....6...|+
    +2012-07-30 18:15:28.658937 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658945 : nsbasic_brc:00 00 00 00 E0 53 D1 22 |.....S."|+
    +2012-07-30 18:15:28.658953 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658961 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658969 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658977 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658987 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.658996 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.659004 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|+
    +2012-07-30 18:15:28.659012 : nsbasic_brc:00 00 00 00 17 4F 52 41 |.....ORA|+
    +2012-07-30 18:15:28.659021 : nsbasic_brc:2D 30 31 34 30 33 3A 20 |-01403:.|+
    +2012-07-30 18:15:28.659029 : nsbasic_brc:64 61 74 61 20 73 61 6B |data.sak|+
    +2012-07-30 18:15:28.659038 : nsbasic_brc:6E 61 73 0A |nas. |+
    +2012-07-30 18:15:28.659046 : nsbasic_brc:exit: oln=0, dln=362, tot=372, rc=0+
    +2012-07-30 18:15:28.659054 : nioqrc:exit+
    +2012-07-30 18:16:09.583993 : nioqsn:entry+
    +2012-07-30 18:16:09.584062 : nioqsn:exit+
    +2012-07-30 18:16:09.584086 : nioqrc:entry+
    +2012-07-30 18:16:09.584097 : nsbasic_bsd:entry+
    +2012-07-30 18:16:09.584107 : nsbasic_bsd:tot=0, plen=318.+
    +2012-07-30 18:16:09.584116 : nttfpwr:entry+
    +2012-07-30 18:16:09.584146 : ntt2err:entry+
    +2012-07-30 18:16:09.584162 : ntt2err:soc 564 error - operation=6, ntresnt[0]=530, ntresnt[1]=54, ntresnt[2]=0+
    +2012-07-30 18:16:09.584171 : ntt2err:exit+
    +2012-07-30 18:16:09.584178 : nttfpwr:exit+
    +2012-07-30 18:16:09.584192 : nserror:entry+
    +2012-07-30 18:16:09.584203 : nserror:nsres: id=0, op=67, ns=12571, ns2=12560; nt[0]=530, nt[1]=54, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0+
    +2012-07-30 18:16:09.584213 : nsbasic_bsd:exit (-1)+
    +2012-07-30 18:16:09.584224 : nioqrc:send failed: bl = 1, nicbl = 1+
    +2012-07-30 18:16:09.584234 : nioqper: error from nioqrc+
    +2012-07-30 18:16:09.584242 : nioqper: ns main err code: 12571+
    +2012-07-30 18:16:09.584250 : nioqper: ns (2) err code: 12560+
    +2012-07-30 18:16:09.584258 : nioqper: nt main err code: 530+
    +2012-07-30 18:16:09.584266 : nioqper: nt (2) err code: 54+
    +2012-07-30 18:16:09.584275 : nioqper: nt OS err code: 0+
    +2012-07-30 18:16:09.584285 : nioqer:entry+
    +2012-07-30 18:16:09.584293 : nioqer: incoming err = 12150+
    +2012-07-30 18:16:09.584301 : niomapnserror:entry+
    +2012-07-30 18:16:09.584311 : niqme:entry+
    +2012-07-30 18:16:09.584321 : niqme:reporting NS-12571 error as ORA-12571+
    +2012-07-30 18:16:09.584330 : niqme:exit+
    +2012-07-30 18:16:09.584337 : niomapnserror:exit+
    +2012-07-30 18:16:09.584344 : nioqce:entry+
    +2012-07-30 18:16:09.584352 : nioqce:exit+
    +2012-07-30 18:16:09.584359 : nioqer: returning err = 12571+
    +2012-07-30 18:16:09.584366 : nioqer:exit+
    +2012-07-30 18:16:09.584374 : nioqrc: returning error: 12571+
    +2012-07-30 18:16:09.584381 : nioqrc:exit+
    +2012-07-30 18:16:09.584396 : nioqrs:entry+
    +2012-07-30 18:16:09.584412 : nioqrs: state = interrupted (1)+
    +2012-07-30 18:16:09.584425 : nscontrol:entry+
    +2012-07-30 18:16:09.584435 : nscontrol:cmd=45, lcl=0x0+
    +2012-07-30 18:16:09.584442 : nscontrol:normal exit+
    +2012-07-30 18:16:09.584450 : nscontrol:entry+
    +2012-07-30 18:16:09.584457 : nscontrol:cmd=1, lcl=0x0+
    +2012-07-30 18:16:09.584464 : nscontrol:normal exit+
    +2012-07-30 18:16:09.584476 : nioqsm:entry+
    +2012-07-30 18:16:09.584485 : nioqsm: Sending break packet (1)...+
    +2012-07-30 18:16:09.584493 : nscontrol:entry+
    +2012-07-30 18:16:09.584500 : nscontrol:cmd=45, lcl=0x0+
    +2012-07-30 18:16:09.584508 : nscontrol:normal exit+
    +2012-07-30 18:16:09.584516 : nsdo:entry+
    +2012-07-30 18:16:09.584525 : nsdo:cid=0, opcode=67, *bl=1, *what=17, uflgs=0x100, cflgs=0x3+
    +2012-07-30 18:16:09.584534 : nsdo:rank=64, nsctxrnk=0+
    +2012-07-30 18:16:09.584543 : nsdo:nsctx: state=8, flg=0x400d, mvd=0+
    +2012-07-30 18:16:09.584552 : nsdo:gtn=32, gtc=32, ptn=10, ptc=8191+
    +2012-07-30 18:16:09.584560 : nsdofls:entry+
    +2012-07-30 18:16:09.584569 : nsdofls:DATA flags: 0x0+
    +2012-07-30 18:16:09.584577 : nsdofls:normal exit+
    +2012-07-30 18:16:09.584587 : nsdo:sending NSPTMK packet+
    +2012-07-30 18:16:09.584596 : nspsend:entry+
    +2012-07-30 18:16:09.584605 : nspsend:plen=11, type=12+
    +2012-07-30 18:16:09.584614 : nttwr:entry+
    +2012-07-30 18:16:09.584628 : ntt2err:entry+
    +2012-07-30 18:16:09.584638 : ntt2err:soc 564 error - operation=6, ntresnt[0]=530, ntresnt[1]=54, ntresnt[2]=0+
    +2012-07-30 18:16:09.584646 : ntt2err:exit+
    +2012-07-30 18:16:09.584657 : nttwr:exit+
    +2012-07-30 18:16:09.584668 : nspsend:0 bytes to transport+
    +2012-07-30 18:16:09.584677 : nspsend:transport write error+
    +2012-07-30 18:16:09.584684 : nspsend:error exit+
    +2012-07-30 18:16:09.584692 : nsdo:error sending NSPTMK packet+
    +2012-07-30 18:16:09.584700 : nserror:entry+
    +2012-07-30 18:16:09.584709 : nserror:nsres: id=0, op=67, ns=12571, ns2=12560; nt[0]=530, nt[1]=54, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0+
    +2012-07-30 18:16:09.584719 : nsdo:nsctxrnk=0+
    +2012-07-30 18:16:09.584726 : nsdo:error exit+
    +2012-07-30 18:16:09.584737 : nioqsm:send-break: failed to send break...+
    +2012-07-30 18:16:09.584746 : nioqper: error from send-marker+
    +2012-07-30 18:16:09.584753 : nioqper: ns main err code: 12571+
    +2012-07-30 18:16:09.584761 : nioqper: ns (2) err code: 12560+
    +2012-07-30 18:16:09.584769 : nioqper: nt main err code: 530+
    +2012-07-30 18:16:09.584776 : nioqper: nt (2) err code: 54+
    +2012-07-30 18:16:09.584784 : nioqper: nt OS err code: 0+
    +2012-07-30 18:16:09.584792 : nioqsm:exit+
    +2012-07-30 18:16:09.584799 : nioqer:entry+
    +2012-07-30 18:16:09.584808 : nioqer: incoming err = 12152+
    +2012-07-30 18:16:09.584817 : niomapnserror:entry+
    +2012-07-30 18:16:09.584824 : niqme:entry+
    +2012-07-30 18:16:09.584833 : niqme:reporting NS-12571 error as ORA-12571+
    +2012-07-30 18:16:09.584840 : niqme:exit+
    +2012-07-30 18:16:09.584847 : niomapnserror:exit+
    +2012-07-30 18:16:09.584854 : nioqce:entry+
    +2012-07-30 18:16:09.584861 : nioqce:exit+
    +2012-07-30 18:16:09.584868 : nioqer: returning err = 12571+
    +2012-07-30 18:16:09.584876 : nioqer:exit+
    +2012-07-30 18:16:09.584884 : nioqrs:nioqrs: Couldn't send break. returning 12571+
    +2012-07-30 18:16:09.584894 : nioqrs:exit+
    +2012-07-30 18:16:09.584912 : nioqds:entry+
    +2012-07-30 18:16:09.584921 : nioqds: disconnecting...+
    +2012-07-30 18:16:09.584933 : nsclose:entry+
    +2012-07-30 18:16:09.584945 : nsvntx_dei:entry+
    +2012-07-30 18:16:09.584953 : nsvntx_dei:exit+
    +2012-07-30 18:16:09.584964 : nstimarmed:entry+
    +2012-07-30 18:16:09.584973 : nstimarmed:no timer allocated+
    +2012-07-30 18:16:09.584980 : nstimarmed:normal exit+
    +2012-07-30 18:16:09.584994 : nttctl:entry+
    +2012-07-30 18:16:09.585009 : nttctl:entry+
    +2012-07-30 18:16:09.585021 : nsfull_cls:entry+
    +2012-07-30 18:16:09.585031 : nsfull_cls:cid=0, opcode=65, *bl=0, *what=0, uflgs=0x0, cflgs=0x0+
    +2012-07-30 18:16:09.585040 : nsfull_cls:nsctx: state=8, flg=0x4009, mvd=0+
    +2012-07-30 18:16:09.585048 : nsdo:entry+
    +2012-07-30 18:16:09.585056 : nsdo:cid=0, opcode=67, *bl=0, *what=1, uflgs=0x0, cflgs=0x1+
    +2012-07-30 18:16:09.585065 : nsdo:nsctx: state=8, flg=0x4009, mvd=0+
    +2012-07-30 18:16:09.585074 : nsdo:gtn=32, gtc=32, ptn=10, ptc=8191+
    +2012-07-30 18:16:09.585082 : nsdo:normal exit+
    +2012-07-30 18:16:09.585089 : nsdofls:entry+
    +2012-07-30 18:16:09.585097 : nsdofls:DATA flags: 0x40+
    +2012-07-30 18:16:09.585105 : nsdofls:sending NSPTDA packet+
    +2012-07-30 18:16:09.585113 : nspsend:entry+
    +2012-07-30 18:16:09.585120 : nspsend:plen=10, type=6+
    +2012-07-30 18:16:09.585128 : nttwr:entry+
    +2012-07-30 18:16:09.585140 : ntt2err:entry+
    +2012-07-30 18:16:09.585150 : ntt2err:soc 564 error - operation=6, ntresnt[0]=530, ntresnt[1]=54, ntresnt[2]=0+
    +2012-07-30 18:16:09.585158 : ntt2err:exit+
    +2012-07-30 18:16:09.585165 : nttwr:exit+
    +2012-07-30 18:16:09.585173 : nspsend:0 bytes to transport+
    +2012-07-30 18:16:09.585181 : nspsend:transport write error+
    +2012-07-30 18:16:09.585188 : nspsend:error exit+
    +2012-07-30 18:16:09.585196 : nserror:entry+
    +2012-07-30 18:16:09.585205 : nserror:nsres: id=0, op=67, ns=12571, ns2=12560; nt[0]=530, nt[1]=54, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0+
    +2012-07-30 18:16:09.585215 : nsdofls:exit (-1)+
    +2012-07-30 18:16:09.585223 : nsbfr:entry+
    +2012-07-30 18:16:09.585230 : nsbaddfl:entry+
    +2012-07-30 18:16:09.585239 : nsbaddfl:normal exit+
    +2012-07-30 18:16:09.585247 : nsbfr:normal exit+
    +2012-07-30 18:16:09.585254 : nsbfr:entry+
    +2012-07-30 18:16:09.585261 : nsbaddfl:entry+
    +2012-07-30 18:16:09.585268 : nsbaddfl:normal exit+
    +2012-07-30 18:16:09.585276 : nsbfr:normal exit+
    +2012-07-30 18:16:09.585283 : nsfull_cls:normal exit+
    +2012-07-30 18:16:09.585291 : nsiocancel:entry+
    +2012-07-30 18:16:09.585303 : nsiofrrg:entry+
    +2012-07-30 18:16:09.585313 : nsiofrrg:cur = 5e5f3f8+
    +2012-07-30 18:16:09.585321 : nsbfr:entry+
    +2012-07-30 18:16:09.585328 : nsbaddfl:entry+
    +2012-07-30 18:16:09.585335 : nsbaddfl:normal exit+
    +2012-07-30 18:16:09.585342 : nsbfr:normal exit+
    +2012-07-30 18:16:09.585350 : nsiofrrg:exit+
    +2012-07-30 18:16:09.585358 : nsiocancel:exit+
    +2012-07-30 18:16:09.585365 : nsclose:closing transport+
    +2012-07-30 18:16:09.585375 : nttdisc:entry+
    +2012-07-30 18:16:09.585438 : nttdisc:Closed socket 564+
    +2012-07-30 18:16:09.585453 : nttdisc:exit+
    +2012-07-30 18:16:09.585463 : nsclose:global context check-out (from slot 0) complete+
    +2012-07-30 18:16:09.585471 : nsnadisc:entry+
    +2012-07-30 18:16:09.585484 : nadisc:entry+
    +2012-07-30 18:16:09.585496 : nacomtm:entry+
    +2012-07-30 18:16:09.585506 : nacompd:entry+
    +2012-07-30 18:16:09.585513 : nacompd:exit+
    +2012-07-30 18:16:09.585521 : nacompd:entry+
    +2012-07-30 18:16:09.585527 : nacompd:exit+
    +2012-07-30 18:16:09.585535 : nacomtm:exit+
    +2012-07-30 18:16:09.585545 : nas_dis:entry+
    +2012-07-30 18:16:09.585553 : nas_dis:exit+
    +2012-07-30 18:16:09.585562 : nau_dis:entry+
    +2012-07-30 18:16:09.585577 : nau_dis:exit+
    +2012-07-30 18:16:09.585587 : naeetrm:entry+
    +2012-07-30 18:16:09.585596 : naeetrm:exit+
    +2012-07-30 18:16:09.585604 : naectrm:entry+
    +2012-07-30 18:16:09.585613 : naectrm:exit+
    +2012-07-30 18:16:09.585623 : nagbltrm:entry+
    +2012-07-30 18:16:09.585632 : nau_gtm:entry+
    +2012-07-30 18:16:09.585640 : nau_gtm:exit+
    +2012-07-30 18:16:09.585648 : nagbltrm:exit+
    +2012-07-30 18:16:09.585657 : nadisc:exit+
    +2012-07-30 18:16:09.585665 : nsnadisc:normal exit+
    +2012-07-30 18:16:09.585675 : nsvntx_dei:entry+
    +2012-07-30 18:16:09.585682 : nsvntx_dei:exit+
    +2012-07-30 18:16:09.585694 : nsopenfree_nsntx:nlhthdel from mplx_ht_nsgbu, ctx=5e5e0e0 nsntx=5e5e6c0+
    +2012-07-30 18:16:09.585703 : nsiocancel:entry+
    +2012-07-30 18:16:09.585710 : nsiofrrg:entry+
    +2012-07-30 18:16:09.585718 : nsiofrrg:exit+
    +2012-07-30 18:16:09.585725 : nsiocancel:exit+
    +2012-07-30 18:16:09.585732 : nsmfr:entry+
    +2012-07-30 18:16:09.585741 : nsmfr:2944 bytes at 0x5e5e6c0+
    +2012-07-30 18:16:09.585748 : nsmfr:normal exit+
    +2012-07-30 18:16:09.585755 : nsmfr:entry+
    +2012-07-30 18:16:09.585763 : nsmfr:240 bytes at 0x5f2a610+
    +2012-07-30 18:16:09.585771 : nsmfr:normal exit+
    +2012-07-30 18:16:09.585778 : nsmfr:entry+
    +2012-07-30 18:16:09.585785 : nsmfr:280 bytes at 0x63c010+
    +2012-07-30 18:16:09.585792 : nsmfr:normal exit+
    +2012-07-30 18:16:09.585803 : nladtrm:entry+
    +2012-07-30 18:16:09.585820 : nladtrm:exit+
    +2012-07-30 18:16:09.585828 : nsmfr:entry+
    +2012-07-30 18:16:09.585836 : nsmfr:1496 bytes at 0x5e5e0e0+
    +2012-07-30 18:16:09.585844 : nsmfr:normal exit+
    +2012-07-30 18:16:09.585851 : nsclose:normal exit+
    +2012-07-30 18:16:09.585859 : nioqds:exit+
    +2012-07-30 18:16:09.585868 : nsbfree:entry+
    +2012-07-30 18:16:09.585876 : nsbgetfl:entry+
    +2012-07-30 18:16:09.585884 : nsbgetfl:normal exit+
    +2012-07-30 18:16:09.585894 : nsbaddfl:entry+
    +2012-07-30 18:16:09.585901 : nsbaddfl:normal exit+
    +2012-07-30 18:16:09.585909 : nsbfree:normal exit+
    +2012-07-30 18:16:09.585916 : nsbfree:entry+
    +2012-07-30 18:16:09.585923 : nsbgetfl:entry+
    +2012-07-30 18:16:09.585930 : nsbgetfl:normal exit+
    +2012-07-30 18:16:09.585938 : nsbaddfl:entry+
    +2012-07-30 18:16:09.585945 : nsbaddfl:normal exit+
    +2012-07-30 18:16:09.585952 : nsbfree:normal exit+
    +2012-07-30 18:16:09.585961 : nigtrm:Count in the NI global area is now 2+
    +2012-07-30 18:16:09.585974 : nsbfrfl:entry+
    +2012-07-30 18:16:09.585982 : nsbrfr:entry+
    +2012-07-30 18:16:09.585991 : nsbrfr:nsbfs at 0x5f26470, data at 0x5f26520.+
    +2012-07-30 18:16:09.585999 : nsbrfr:normal exit+
    +2012-07-30 18:16:09.586006 : nsbrfr:entry+
    +2012-07-30 18:16:09.586015 : nsbrfr:nsbfs at 0x5f28540, data at 0x5f285f0.+
    +2012-07-30 18:16:09.586025 : nsbrfr:normal exit+
    +2012-07-30 18:16:09.586033 : nsbrfr:entry+
    +2012-07-30 18:16:09.586040 : nsbrfr:nsbfs at 0x5e5f480, data at 0x5e5f530.+
    +2012-07-30 18:16:09.586048 : nsbrfr:normal exit+
    +2012-07-30 18:16:09.586055 : nsbrfr:entry+
    +2012-07-30 18:16:09.586063 : nsbrfr:nsbfs at 0x5f2a610, data at 0x5f2a9f0.+
    +2012-07-30 18:16:09.586073 : nsbrfr:normal exit+
    +2012-07-30 18:16:09.586082 : nsbrfr:entry+
    +2012-07-30 18:16:09.586090 : nsbrfr:nsbfs at 0x62c7d0, data at 0x5f2ca10.+
    +2012-07-30 18:16:09.586098 : nsbrfr:normal exit+
    +2012-07-30 18:16:09.586106 : nsbfrfl:normal exit+
    +2012-07-30 18:16:09.586153 : nigtrm:Count in the NL global area is now 3+
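    For reference, server-side SQL*Net tracing of the kind shown above is usually enabled with sqlnet.ora entries along these lines (the directory path is a placeholder):
    TRACE_LEVEL_SERVER = 16
    TRACE_DIRECTORY_SERVER = D:\oracle\network\trace
    TRACE_TIMESTAMP_SERVER = ON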

  • Datapump Export stops at "Estimate in progress...."

    Hi,
    I am facing an issue while doing a schema-level Data Pump export in Oracle 10g. The export for a particular schema stops at "Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA", and moreover it only spawns one worker (DW01) irrespective of the PARALLEL parameter value. For other schemas the export works fine, and even a table-level export of the problematic schema works.
    I am clueless, because the alert log does not show anything. Can anyone please advise...
    Here is what my parfile looks like:
    userid=id/password
    directory=impdir
    parallel=2
    schemas=prod11sep12
    dumpfile=expC2P_20120925_%U.dmp
    logfile=expC2P_20120925.log
    job_name=expC2P_20120925
    tail -f expC2P_20120925.log
    bash-3.00$ expdp parfile=expC2P.par ESTIMATE=STATISTICS
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 26 September, 2012 16:44:30
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."EXPC2P_20120925": parfile=expC2P.par ESTIMATE=STATISTICS
    Estimate in progress using STATISTICS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Alert log:
    kupprdp: master process DM00 started with pid=38, OS id=15156
    to execute - SYS.KUPM$MCP.MAIN('EXPC2P_20120925', 'SYSTEM', 'KUPC$C_1_20120926164430', 'KUPC$S_1_20120926164430', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=46, OS id=15201
    to execute - SYS.KUPW$WORKER.MAIN('EXPC2P_20120925', 'SYSTEM');
    Thanks in Advance...

    Please enable trace as per this MOS doc to see if additional debug information can be gathered:
    Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump [ID 286496.1]
    HTH
    Srini
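    A minimal sketch of what that usually looks like with the parfile above (the trace mask shown is the full master/worker tracing value commonly quoted from that note; confirm it against the note itself):
    expdp parfile=expC2P.par ESTIMATE=STATISTICS TRACE=480300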

  • Getting Datapump Export Dump file to the local machine

    I apologize to everyone as this is a duplicate post.
    Re: Getting Datapump Export Dump file to the local machine
    My initial thread (started yesterday) was in 'Database General' and didn't get much response today. Where do I post questions on the EXPORT/IMPORT utilities?
    Anyway, here is my problem:
    I want to take an export dump of the itemrep schema in the orcl database (on a remote machine). I have an Oracle server (10g Release 2) running on my local Windows machine. I have created a user john with the necessary EXPORT/IMPORT privileges in my local DB. Then I created a directory object, i.e. a folder named datapump on my local hard drive, and granted READ, WRITE privileges on it to john.
    So john, who is a user in my local machine's Oracle DB, is going to run the expdp utility.
    expdp john/jhendrix@my_local_db_alias SCHEMAS=itemrep directory=datapump logfile=itemrepexp.log
    The above command will fail because it will look for the itemrep schema inside my local DB, not the remote DB where itemrep is actually located. And you can't qualify the schema name with its DB in the SCHEMAS parameter (like SCHEMAS=itemrep@orcl).
    Can anyone provide me a solution for this?

    You can initiate the Data Pump export utility from your client machine to export a schema in a remote database. But upon execution, Oracle looks for the directory in the remote database and not on your local machine.
    You're invoking expdp from a client (local DB) to export data from a remote DB.
    So with this method, you can create the dump files only on the remote server and not on the local machine.
    You can perform a direct import instead of an export by using the NETWORK_LINK option.
    Create a DB link from your local DB to the remote DB and verify the connection.
    Then initiate impdp from your local machine's DB using the parameter network_link=<db_link of the Remote DB> to import the schema.
    The advantage of this option is that it eliminates dump file creation on the server side.
    There are no dump files during the import process; the data is imported directly into the target schema.
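    A rough sketch of those two steps, reusing the names from the post (the link name, remote password, and TNS alias are placeholders):
    -- run in the local database as john
    CREATE DATABASE LINK orcl_remote CONNECT TO itemrep IDENTIFIED BY item_password USING 'orcl';
    impdp john/jhendrix@my_local_db_alias SCHEMAS=itemrep NETWORK_LINK=orcl_remote DIRECTORY=datapump LOGFILE=itemrep_netimp.log
    Note that no DUMPFILE is given; the DIRECTORY object is only needed for the log file.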

  • Datapump exp and imp using API method

    Good Day All,
    I want to know the best way of handling errors in Data Pump export and import using the API. I need to implement this in my current project, as there are a lot of limitations and the only way to verify the process worked is to write the code with error handling using exceptions. I have seen some examples on the web, but if there are practical examples or good links with examples that are sure to work, I would like to know and explore them. I have never used the API method, so I am not sure of it.
    Thanks a lot for your time.
    Maggie.

    I wrote the procedure with error handling, but it does not output any status information while kicking off the expdp process. I have put dbms_output.put_line calls in as per the Oracle docs example, but it doesn't display any messages; it just kicks off and creates the dump files. As a happy path it's OK, but I need to track if something goes wrong. I even ran SET SERVEROUTPUT ON in SQL*Plus. It doesn't even display that the job started. Please help me find where I made a mistake in displaying the status. Do I need to modify or add anything? Help!!
    CREATE OR REPLACE PROCEDURE SCHEMAS_EXPORT_TEST AS
    -- Using exception handling during a simple schema export.
    -- This procedure shows a simple schema export using the Data Pump API.
    -- It extends to show how to use exception handling to catch the SUCCESS_WITH_INFO case,
    -- and how to use the GET_STATUS procedure to retrieve additional information about errors.
    -- If you want to get status up to the current point, but a handle has not yet been obtained,
    -- you can use NULL for DBMS_DATAPUMP.GET_STATUS. http://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_api.htm
      h1           NUMBER;        -- Data Pump job handle
      l_handle     NUMBER;        -- (not used)
      ind          NUMBER;        -- Loop index
      spos         NUMBER;        -- String starting position
      slen         NUMBER;        -- String length for output
      percent_done NUMBER;        -- Percentage of job complete
      job_state    VARCHAR2(30);  -- To keep track of job state
      sts          ku$_Status;    -- The status object returned by get_status
      le           ku$_LogEntry;  -- For WIP and error messages
      js           ku$_JobStatus; -- The job status from get_status
      jd           ku$_JobDesc;   -- The job description from get_status
    BEGIN
      h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA');
      dbms_datapump.add_file (handle => h1, filename => 'SCHEMA_BKP_%U.DMP', directory => 'BKP_SCHEMA_EXPIMP', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
      dbms_datapump.add_file (handle => h1, directory => 'BKP_SCHEMA_EXPIMP', filename => 'SCHEMA_BKP_EX.log', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
      -- A metadata filter is used to specify the schema that will be exported.
      dbms_datapump.metadata_filter (handle => h1, name => 'SCHEMA_LIST', value => q'|'XXXXXXXXXX'|');
      dbms_datapump.set_parallel (handle => h1, degree => 4);
      -- Start the job. An exception will be returned if something is not set up
      -- properly. One possible exception that will be handled differently is the
      -- success_with_info exception. success_with_info means the job started
      -- successfully, but more information is available through get_status about
      -- conditions around the start_job that the user might want to be aware of.
      begin
        dbms_datapump.start_job (handle => h1);
        dbms_output.put_line('Data Pump job started successfully');
      exception
        when others then
          if sqlcode = dbms_datapump.success_with_info_num
          then
            dbms_output.put_line('Data Pump job started with info available:');
            dbms_datapump.get_status(h1, dbms_datapump.ku$_status_job_error, 0, job_state, sts);
            if (bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
            then
              le := sts.error;
              if le is not null
              then
                ind := le.FIRST;
                while ind is not null loop
                  dbms_output.put_line(le(ind).LogText);
                  ind := le.NEXT(ind);
                end loop;
              end if;
            end if;
          else
            raise;
          end if;
      end;
      -- The export job should now be running. In the following loop, we will monitor the job until it completes.
      -- In the meantime, progress information is displayed.
      percent_done := 0;
      job_state := 'UNDEFINED';
      while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
        dbms_datapump.get_status(h1,
                                 dbms_datapump.ku$_status_job_error +
                                 dbms_datapump.ku$_status_job_status +
                                 dbms_datapump.ku$_status_wip, -1, job_state, sts);
        js := sts.job_status;
        -- If the percentage done changed, display the new value.
        if js.percent_done != percent_done
        then
          dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done));
          percent_done := js.percent_done;
        end if;
        -- Display any work-in-progress (WIP) or error messages that were received for
        -- the job.
        if (bitand(sts.mask, dbms_datapump.ku$_status_wip) != 0)
        then
          le := sts.wip;
        else
          if (bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
          then
            le := sts.error;
          else
            le := null;
          end if;
        end if;
        if le is not null
        then
          ind := le.FIRST;
          while ind is not null loop
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          end loop;
        end if;
      end loop;
      -- Indicate that the job finished and detach from it.
      dbms_output.put_line('Job has completed');
      dbms_output.put_line('Final job state = ' || job_state);
      dbms_datapump.detach (handle => h1);
    -- Any exceptions that propagated to this point will be captured. The
    -- details will be retrieved from get_status and displayed.
    Exception
      when others then
        dbms_output.put_line('Exception in Data Pump job');
        dbms_datapump.get_status(h1, dbms_datapump.ku$_status_job_error, 0, job_state, sts);
        if (bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
        then
          le := sts.error;
          if le is not null
          then
            ind := le.FIRST;
            while ind is not null loop
              spos := 1;
              slen := length(le(ind).LogText);
              if slen > 255
              then
                slen := 255;
              end if;
              while slen > 0 loop
                dbms_output.put_line(substr(le(ind).LogText, spos, slen));
                spos := spos + 255;
                slen := length(le(ind).LogText) + 1 - spos;
              end loop;
              ind := le.NEXT(ind);
            end loop;
          end if;
        end if;
    END SCHEMAS_EXPORT_TEST;
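    For what it's worth, DBMS_OUTPUT written by a procedure like this is buffered on the server and only displayed by SQL*Plus after the call returns, so a typical invocation would be:
    SET SERVEROUTPUT ON SIZE UNLIMITED
    EXEC SCHEMAS_EXPORT_TEST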

  • Parallel Sessions on Datapump Export  (10.2.0.4)

    Hi,
    We are using Oracle 10.2.0.4 on Solaris and I'm exporting a table using Datapump export.
    The export includes a query which selects from three tables based on relevant conditions. The parfile specifies 'parallel=4' and the dumpfile setting uses %U so that it creates an appropriate number of files.
    When I run the export using my own (DBA) account (i.e. expdp mr_dba parfile=exp_xyz.par) the export completes in 15 minutes and creates four dumpfiles. When I run the export as the schema owner using the exact same parfile (i.e. expdp schema_own parfile=exp_xyz.par) the export takes over two hours and only creates two dumpfiles.
    Could anyone suggest things that I could look at to find out why there is such a difference in the elapsed time? The exports have been run a number of times as both users with the box having similar loads and the results are fairly consistent i.e. 15 mins for my user and two hours for the schema owner.
    The schema owner does have a different profile and a different Resource Consumer Group but both my profile and the schema owners profile have 'sessions_per_user' set to Unlimited. In Resource Manager the Parallel_Degree_Limit_P1 value is set to 16 for my consumer group and is not set at all for the schema owners consumer group.
    I have observed that when exporting under the schema owner the DBA_DATAPUMP_SESSIONS showed a DBMS_DATAPUMP session, a MASTER session and two WORKER sessions. When I run it under my user id it shows these four sessions but also shows three EXTERNAL TABLE sessions. This suggests that it is using a different approach but I'm not sure what would cause this.
    Any advice would be very welcome. I haven't posted any specific information about the parameter file or the tables as I'm not sure what info people might require - so if you need specific details of anything please let me know.
    Many thanks.

    Sorry for the delay in responding - it took a couple of days for our security people to give me the go-ahead to make the changes (red tape is ridiculous here!)
    The tweak to the consumer groups in Resource Manager didn't seem to make much difference and it continued to use the same plan (but it was worth trying it). I then granted the EXP_FULL_DATABASE role and it did indeed result in much better performance (and it created the four dumpfiles instead of two).
    I'm still not sure why it makes such a difference - the export is only exporting a table from the users schema but it does query a table in someone else's schema to identify appropriate candidates. You would assume that providing it can access all the necessary information it would run at the optimum speed but obviously the EXP_FULL_DATABASE role makes a considerable difference.
    Thanks again for both replies, much appreciated. Well done Dean for identifying the solution - great call.
    Edited by: user2480656 on 21-Aug-2012 08:35
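    In other words, the change that made the difference was simply this grant (schema_own being the schema-owner account referred to above):
    GRANT EXP_FULL_DATABASE TO schema_own;
    With that role in place, the schema owner's run produced the four dump files and performed much better, as described above.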

  • Are there any advantages of using Secure Empty Trash over regular Empty Trash?

    Are there any advantages of using Secure Empty Trash over regular Empty Trash?

    If you are going to keep sensitive files on your file system, you might be better off using whole-disk encryption so that every file is encrypted and any deleted file contains encrypted contents. As long as no one can access your files using your encryption keys, all your data is secure.
    Also, secure delete is not really going to do much on a solid state drive, and again whole-disk encryption would be a better choice.
    For a moderate amount of data, secure erase does not take too much time. But if you have a ton of files and/or a few really large files to erase, secure erase can take a long time to complete, as it does multiple passes to write and overwrite the file's storage with patterns of data that make it extremely difficult to recover the original data. That takes time.
    Finally, if you have been updating a document, previous versions may have been released to free storage as new versions were written, so when you decide to erase the file you may only be securely erasing the most recent copy.

  • Advantages of using PI 7.1 over PI 7.0

    Hi All,
    I need to explain to my client the advantages of using PI 7.1 over PI 7.0, so that we can use it for our development.
    Please don't just send me links from SDN; suggest the advantages.
    Useful answers will be rewarded.
    Regards
    Ranjit

    Hi,
    PI 7.1 will be in ramp-up then - not general availability.
    When it becomes available you will get an upgrade guide, as always.
    Currently only PI 7.0 is available for real implementation projects. You can get PI 7.1 from SDN, but that is not for real implementation; it is limited to individual usage.
    There is a huge difference between PI 7.0 and PI 7.1.
    1. PI 7.0 is part of NetWeaver 2004s, while PI 7.1 is a key entity in eSOA.
    2. PI 7.1 has various mapping enhancements as well as reusability of UDFs across mappings etc., which is not available in PI 7.0.
    /people/william.li/blog/2008/01/02/sap-pi-71-mapping-enhancements-series-share-user-defined-functions
    3. PI 7.1 comes with the concept of folders for a more flexible and organized way of development.
    /people/william.li/blog/2007/08/07/using-folders-in-pi-71 -- Folders in PI 7.1
    4. The Advanced Adapter Engine is used in PI 7.1, which overcomes various communication-related limitations of previous versions.
    Refer
    Upgrade to SAP NetWeaver Process Integration 7.1
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/8085e299-718c-2a10-de94-928f62b763ce
    Features of PI 7.1
    /people/udo.paltzer/blog/2007/04/26/new-sap-netweaver-process-integration-release-planned-for-2007
    High Volume support in PI 7.1
    /people/holger.faulhaber/blog/2007/12/12/high-volume-support-in-pi-71
    Usability Features in SAP NetWeaver PI 7.1 Development and Configuration Times
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a0e7734f-e969-2a10-24b6-df58a710941c
    SAP NetWeaver Process Integration 7.1 - Overview of New Capabilities
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/706005a3-3bd6-2910-91ae-a2016239bdcf
    SAP Network Blog: Share User-Defined Functions in Message Mappings of PI 7.1
    /people/william.li/blog/2008/01/02/sap-pi-71-mapping-enhancements-series-share-user-defined-functions
    Preview on New Features of the Integration Directory in SAP NetWeaver Process Integration 7.1
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10c0de4b-7876-2a10-e286-8412668643a8
    SAP Network Blog: Mapping Enhancements in SAP NetWeaver Process Integration (PI) 7.1
    /people/jin.shin/blog/2008/01/11/sap-pi-71-mapping-enhancements-series-mapping-enhancements-demo
    New Business Process Engine Features in SAP NetWeaver Process Integration
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0068bc1-6f8c-2a10-52bb-c6ee3562feb2
    Thanks
    Swarup

  • Attach datapump export job

    Hi Guys,
    I am using Oracle 10g Release 2 on Solaris.
    I have a database that is 1.5 TB and I am doing a Data Pump export of this database, for which the Data Pump estimate is 500 GB.
    Now, after exporting about 300 GB, the server crashed.
    Will I be able to attach to the Data Pump export job and continue from the 300 GB point after database startup?
    NB I am using the parameter flashback_time for data consistency.
    Please Help !!!!!!!!!!!!!!
    Thanks.

    Thanks for the reply...
    I tried to attach the job after the database startup and here is what I get:
    expdp \"/ as sysdba\" attach=SYS_EXPORT_FULL_01Export: Release 10.2.0.2.0 - 64bit Production on Saturday, 30 July, 2011 17:50:31
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORA-39002: invalid operation
    ORA-39068: invalid master table data in row with PROCESS_ORDER=-59
    ORA-39150: bad flashback time
    ORA-00907: missing right parenthesis
    I guess I just have to restart the job, as I cannot attach to that job...
    Thanks...
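    For reference, when the master table survives intact, reattaching and resuming normally looks something like this (same job name as above):
    expdp \"/ as sysdba\" ATTACH=SYS_EXPORT_FULL_01
    Export> START_JOB
    Export> CONTINUE_CLIENT
    Here the ORA-39068 suggests the master table itself was damaged by the crash, which is why the attach fails and restarting the export is the practical option.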

  • Enterprise Manager Job for Scripting DataPump Export for Oracle Database Running On MS Windows Server 2008

    Greetings,
    I would like an example of an Enterprise Manager job that uses an OS script for MS Windows that would effectively run a Data Pump export of my Oracle 11g database (11.2.0.3) running on a Windows 2008 server. My OEM OMS is running on a Linux server with an Oracle 12c repository. I'd like to be able to set environment variables for date and time, and my export file name (which includes the SID, export date and time, job name, and other information pertinent to the Data Pump export). Thus far, I have been unsuccessful with using the % delimiter around my variables. Also, I have put "cmd/c" as the "Interpreter" but I am not getting anywhere in a hurry :-(
    Thanks  Million!!!
    Mike

    1. Try to reach the server by IP (bypassing name resolution).
    2. Disabling IPv6 is not a good idea.
    3. What is the server operating system and what is the workstation operating system?
    4. Is this a new or a persistent problem?
    5. If the server and workstation have different SMB versions, set the higher one down to the lower one (see the Petri web site for the procedure).
    6. Uninstall the AV with its removal tool and test without AV.
    7. Use network monitor to diagnose network traffic.
    M.
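    As a very rough sketch of the kind of OS script the original question asks about (a .bat file executed with cmd /c; the connection string, schema, and directory object are placeholders, the WMIC-based timestamp trick is an assumption, and within an OEM OS Command step the % characters may need to be doubled):
    @echo off
    rem Build a locale-independent timestamp (YYYYMMDD_HHMMSS) from the WMIC LocalDateTime value
    for /f %%i in ('wmic os get LocalDateTime ^| find "."') do set LDT=%%i
    set STAMP=%LDT:~0,8%_%LDT:~8,6%
    rem Run the Data Pump export with the timestamp embedded in the file and job names
    expdp system/password@ORCL schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=ORCL_SCOTT_%STAMP%.dmp logfile=ORCL_SCOTT_%STAMP%.log job_name=EXP_SCOTT_%STAMP%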

  • Advantages of using a Data Template over an RDF

    Hi All,
    Can anyone help me understand the advantages of using a data template over an RDF while developing an XML Publisher report?
    Regards,
    Shruti

    Hi Sruthi,
    You can merge any number of SQL queries in a single data template.
    Suppose you have a column, say Partner Name, on which you have created a prompt. Now let's say that, based on the partner name prompt, the fields in the report need to change (i.e. different SQL queries); in this case we can use a data template.
    Thanks,
    Chintu

  • What are the advantages of using CACHE and NOCACHE Hint

    What are the advantages of using the CACHE and NOCACHE hint, and what is the difference between them? I saw that one Oracle SQL script has CACHE and an Oracle RAC script includes NOCACHE. Why is that?

    924250 wrote:
    A SOA product includes DB scripts; when we install the product we want to execute the DB script to create a sequence on both Oracle RAC and a single-instance Oracle DB.
    Oracle DB
    CREATE SEQUENCE REG_LOG_SEQUENCE START WITH 1 INCREMENT BY 1 NOCACHE
    Oracle RAC DB
    CREATE SEQUENCE REG_LOG_SEQUENCE START WITH 1 INCREMENT BY 1 CACHE 20 ORDER
    As sb mentioned, this has nothing to do with Oracle hints;
    you'll want to search the documentation about sequences.
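    If it helps, you can always check how an existing sequence was created with a quick dictionary query (standard view, not from the original post):
    SELECT sequence_name, cache_size, order_flag FROM user_sequences WHERE sequence_name = 'REG_LOG_SEQUENCE';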

  • Advantages of segment reporting and document splitting

    Hi Experts,
    Please provide the advantages of segment reporting and document splitting, and also please share any configuration document relating to this.
    Thanks
    Chandana

    Hi!
    A segment is a business sub-unit used for internal reporting purposes. It makes it possible to report financial performance for a particular segment. Profit centers need a segment in their master data. If you want a segment other than the one from the profit center, you have to implement a BAdI with the help of an ABAPer to derive the segment.
    Document splitting means splitting up the line items for selected dimensions (e.g. profit centers, segments, etc.). You can split a single line item into multiple line items for the desired reporting purpose.
    You have to follow certain steps in the IMG to configure this.

  • Advantage of having OCR and Voting disk on ASM

    What are the advantages of putting the OCR and voting disk on ASM from 11g onwards?

    Well, other than the sharing aspect, you don't have to go RAIDing an additional shared disk either. If you have properly configured ASM, redundancy should be built in as well, either software or hardware.
    Not sure what other advantages you may need. There's the I/O benefit with ASM, but that's not really an advantage per se for the OCR and voting disk. I may be contradicted by others, but I've never seen a performance hit of any kind attributed to the OCR and voting disk being on non-ASM disk.
