Datapump Export Wipro Question

Hi,
we have a schema that is 50 GB in size, but only 5 GB of space is available for the dump file.
Now the question is:
how can we export this 50 GB schema using a 5 GB dump file?
Is this possible?
Please give me the solution, with the commands.

Create a compressed export on the fly. Depending on the type of data, you can probably export up to 10 gigabytes into a single file this way. This example uses gzip, which offers the best compression I know of, but you can also substitute zip, compress, or whatever you prefer.
# create a named pipe
mknod exp.pipe p
# read the pipe - output to zip file in the background
gzip < exp.pipe > scott.exp.gz &
# feed the pipe
exp userid=scott/tiger file=exp.pipe ...
Refer to this link:
http://www.orafaq.com/wiki/Import_Export_FAQ#Can_one_export_to_multiple_files.3F.2F_Can_one_beat_the_Unix_2_Gig_limit.3F
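Note that this named-pipe trick works with the original exp utility; Data Pump's expdp cannot write to a pipe. A rough Data Pump equivalent is sketched below, assuming a directory object DP_DIR exists; FILESIZE splits the dump into smaller pieces, and COMPRESSION=ALL needs 11g with the Advanced Compression option (10g offers only COMPRESSION=METADATA_ONLY):
# split the dump into 1GB pieces and compress the data as it is written
expdp scott/tiger SCHEMAS=scott DIRECTORY=DP_DIR DUMPFILE=scott_%U.dmp FILESIZE=1G COMPRESSION=ALL LOGFILE=scott_exp.log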

Similar Messages

  • Getting Datapump Export Dump file to the local machine

    I apologize to everyone as this is a duplicate post.
    Re: Getting Datapump Export Dump file to the local machine
    My initial thread (started yesterday) was in 'Database General' and didn't get much response today. Where do I post questions on EXPORT/IMPORT utilities?
    Anyway, here is my problem:
    I want to take an export dump of the itemrep schema in the orcl database (on a remote machine). I have an Oracle server (10g Rel 2) running on my local Windows machine. I have created a user john with the necessary EXPORT/IMPORT privileges in my local db. Then I created a directory object, i.e. a folder named datapump on my local hard drive, and granted READ and WRITE privileges to john.
    So john, who is a user in my local machine's Oracle db, is going to run the expdp utility.
    expdp john/jhendrix@my_local_db_alias SCHEMAS=itemrep directory=datapump logfile=itemrepexp.log
    The above command will fail because it will look for the itemrep schema inside my local db, not the remote db where itemrep is actually located. And you can't qualify the schema name with its db in the SCHEMAS parameter (like SCHEMAS=itemrep@orcl).
    Can anyone provide me a solution for this?

    I think you can initiate the Data Pump export utility from your client machine to export a schema in a remote database. But upon execution, Oracle looks for the directory object in the remote database, not on your local machine.
    You're invoking expdp from a client (the local DB) to export data from a remote DB.
    So with this method you can create the dump files only on the remote server, not on the local machine.
    Instead, you can perform a direct import (no export step) using the NETWORK_LINK option:
    Create a DB link from your local DB to the remote DB and verify the connection.
    Then initiate impdp from your local machine's DB with the parameter network_link=<db_link of the Remote DB> to import the schema.
    The advantage of this option is that it eliminates dump file creation on the server side.
    There are no dump files during the import process; the data is imported directly into the target schema.
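    A minimal sketch of the steps just described, reusing the names from the thread (the link name, password, and TNS alias are assumptions):
    -- on the local DB: create and test a link to the remote database
    CREATE DATABASE LINK remote_orcl CONNECT TO itemrep IDENTIFIED BY secret USING 'orcl';
    SELECT * FROM dual@remote_orcl;
    # then pull the schema straight across the link; note there is no DUMPFILE parameter
    impdp john/jhendrix@my_local_db_alias SCHEMAS=itemrep NETWORK_LINK=remote_orcl DIRECTORY=datapump LOGFILE=itemrep_netimp.log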

  • Parallel Sessions on Datapump Export (10.2.0.4)

    Hi,
    We are using Oracle 10.2.0.4 on Solaris and I'm exporting a table using Datapump export.
    The export includes a query which selects from three tables based on relevant conditions. The parfile specifies 'parallel=4' and the dumpfile setting uses %U so that it creates an appropriate number of files.
    When I run the export using my own (DBA) account (i.e. expdp mr_dba parfile=exp_xyz.par) the export completes in 15 minutes and creates four dumpfiles. When I run the export as the schema owner using the exact same parfile (i.e. expdp schema_own parfile=exp_xyz.par) the export takes over two hours and only creates two dumpfiles.
    Could anyone suggest things that I could look at to find out why there is such a difference in the elapsed time? The exports have been run a number of times as both users with the box having similar loads and the results are fairly consistent i.e. 15 mins for my user and two hours for the schema owner.
    The schema owner does have a different profile and a different Resource Consumer Group but both my profile and the schema owners profile have 'sessions_per_user' set to Unlimited. In Resource Manager the Parallel_Degree_Limit_P1 value is set to 16 for my consumer group and is not set at all for the schema owners consumer group.
    I have observed that when exporting under the schema owner the DBA_DATAPUMP_SESSIONS showed a DBMS_DATAPUMP session, a MASTER session and two WORKER sessions. When I run it under my user id it shows these four sessions but also shows three EXTERNAL TABLE sessions. This suggests that it is using a different approach but I'm not sure what would cause this.
    Any advice would be very welcome. I haven't posted any specific information about the parameter file or the tables as I'm not sure what info people might require - so if you need specific details of anything please let me know.
    Many thanks.

    Sorry for the delay in responding - it took a couple of days for our security people to give me the go-ahead to make the changes (red tape is ridiculous here!)
    The tweak to the consumer groups in Resource Manager didn't seem to make much difference and it continued to use the same plan (but it was worth trying it). I then granted the EXP_FULL_DATABASE role and it did indeed result in much better performance (and it created the four dumpfiles instead of two).
    I'm still not sure why it makes such a difference - the export is only exporting a table from the users schema but it does query a table in someone else's schema to identify appropriate candidates. You would assume that providing it can access all the necessary information it would run at the optimum speed but obviously the EXP_FULL_DATABASE role makes a considerable difference.
    Thanks again for both replies, much appreciated. Well done Dean for identifying the solution - great call.
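    For anyone hitting the same symptom, the fix boils down to a single grant, shown here with a sketch of the kind of parfile described (object names are assumptions):
    -- as a DBA: the role that restored full parallelism for the schema owner
    GRANT EXP_FULL_DATABASE TO schema_own;
    # exp_xyz.par (sketch): parallel workers need a multi-file dumpfile template
    tables=schema_own.big_table
    query="WHERE id IN (SELECT id FROM other_user.driver_table)"
    parallel=4
    dumpfile=exp_xyz_%U.dmp
    directory=dp_dir
    logfile=exp_xyz.log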

  • Which background processes are involved in datapump export/import?

    Hi guys,
    Could anyone please tell me which background processes are involved in Data Pump export and import activity? Any information would be appreciated.
    /mR

    Data Pump export and import are done by foreground server processes (a master and workers), not background processes.
    http://www.acs.ilstu.edu/docs/Oracle/server.101/b10825/dp_overview.htm#sthref22
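    If you want to watch those server processes while a job runs, the data dictionary exposes them; a quick sketch:
    -- list the master and worker sessions of active Data Pump jobs
    SELECT s.sid, s.serial#, d.job_name, s.program
    FROM dba_datapump_sessions d JOIN v$session s ON s.saddr = d.saddr;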

  • Datapump Export stops at "Estimate in progress...."

    Hi,
    I am facing an issue while doing a schema-level Data Pump export in Oracle 10g. The export for one particular schema stops at "Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA", and moreover it only spawns one worker (DW01) irrespective of the PARALLEL parameter value. For other schemas the export works fine, and even a table-level export of the problematic schema works.
    I am clueless, because the alert log does not show anything. Can anyone please advise...
    Here is what my parfile looks like:
    userid=id/password
    directory=impdir
    parallel=2
    schemas=prod11sep12
    dumpfile=expC2P_20120925_%U.dmp
    logfile=expC2P_20120925.log
    job_name=expC2P_20120925
    tail -f expC2P_20120925.log
    bash-3.00$ expdp parfile=expC2P.par ESTIMATE=STATISTICS
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 26 September, 2012 16:44:30
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."EXPC2P_20120925": parfile=expC2P.par ESTIMATE=STATISTICS
    Estimate in progress using STATISTICS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Alert log:
    kupprdp: master process DM00 started with pid=38, OS id=15156
    to execute - SYS.KUPM$MCP.MAIN('EXPC2P_20120925', 'SYSTEM', 'KUPC$C_1_20120926164430', 'KUPC$S_1_20120926164430', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=46, OS id=15201
    to execute - SYS.KUPW$WORKER.MAIN('EXPC2P_20120925', 'SYSTEM');
    Thanks in Advance...

    Please enable trace as per this MOS Doc to see if additional debug information can be gathered:
    Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump [ID 286496.1]
    HTH
    Srini
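    The note describes an undocumented TRACE parameter; the value commonly quoted for full master and worker tracing is shown below, but verify it against the MOS note before using it on a production system:
    # add TRACE to the same invocation; 480300 traces the master (DM) and worker (DW) processes
    expdp parfile=expC2P.par ESTIMATE=STATISTICS TRACE=480300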

  • Attach datapump export job

    Hi Guys,
    I am using Oracle 10g Release 2 on Solaris.
    I have a database that is 1.5 TB, and I am doing a Data Pump export of it for which the Data Pump estimate is 500 GB.
    Now, after 300 GB had been exported, the server crashed.
    Will I be able to attach to the Data Pump export job and continue from the 300 GB point after database startup?
    NB: I am using the parameter flashback_time for data consistency.
    Please help!
    Thanks.

    Thanks for the reply...
    I tried to attach the job after the database startup and here is what I get:
    expdp \"/ as sysdba\" attach=SYS_EXPORT_FULL_01Export: Release 10.2.0.2.0 - 64bit Production on Saturday, 30 July, 2011 17:50:31
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORA-39002: invalid operation
    ORA-39068: invalid master table data in row with PROCESS_ORDER=-59
    ORA-39150: bad flashback time
    ORA-00907: missing right parenthesis
    I guess I just have to restart the job, as I cannot attach to that one...
    Thanks...
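    When an interrupted job can no longer be attached (here the master table itself is corrupt), the usual cleanup before re-running is to locate the job and drop its master table; a sketch, assuming the job ran as SYS as in the output above:
    -- confirm the stranded job is still registered
    SELECT owner_name, job_name, state FROM dba_datapump_jobs;
    -- the master table carries the same name as the job; dropping it removes the job
    DROP TABLE sys.sys_export_full_01;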

  • Exporting Each question to LMS

    Hello,
    Is there a way that we can export each question within a Captivate presentation to an LMS system? I do know that we can export the results, pass/fail etc. Our issue is that we can't view the errors in the questions or review the problems with our learners. Any help would be great.
    Thank you
    Sergio

    You can set Captivate to report Interactions and Score instead of just Score.
    But whether you can dig down in the LMS reporting to find out how many users chose a particular answer depends entirely on the capabilities of your LMS. Some allow it, most don't. You might need to get a special report created for your LMS that queries the database and digs out such info.

  • Advantage of datapump export and import over original export and import

    Hi,
    let me know the advantages of Data Pump export (expdp) and import (impdp) over the original export (exp) and import (imp).

    Hello,
    let me know the advantages of Data Pump export (expdp) and import (impdp) over the original export (exp) and import (imp). There are many advantages to using Data Pump.
    For instance, with the INCLUDE / EXCLUDE parameters you can filter exactly which object and / or object type you intend to export or import, which is not easy with the original Export / Import (except for tables, indexes, constraints, ...).
    You can import directly over a NETWORK_LINK without using a dump file.
    You have many interesting features such as COMPRESSION, FLASHBACK_SCN / FLASHBACK_TIME, ...
    You can use the PL/SQL API to perform your export / import rather than the command line interface.
    Moreover, Data Pump is much more optimized than the original Export/Import and uses direct path or external tables, ... and what about the REMAP_% parameters, which let you remap datafiles, schemas, tablespaces, ...
    There is much more to say about Data Pump. You'll find an overview of this very good tool at the following links:
    http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
    http://www.oracle-base.com/articles/11g/DataPumpEnhancements_11gR1.php
    Hope this helps.
    Best regards,
    Jean-Valentin
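    A couple of quick illustrations of the filtering and remapping features mentioned above (all object and file names are assumptions; the INCLUDE clause is kept in a parfile to avoid shell quoting problems):
    # exp_scott.par
    schemas=scott
    directory=dp_dir
    dumpfile=scott.dmp
    include=TABLE:"IN ('EMP','DEPT')"
    # export the two tables, then import them into another schema and tablespace
    expdp scott/tiger parfile=exp_scott.par
    impdp system/manager DIRECTORY=dp_dir DUMPFILE=scott.dmp REMAP_SCHEMA=scott:scott_copy REMAP_TABLESPACE=users:users2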

  • Enterprise Manager Job for Scripting DataPump Export for Oracle Database Running On MS Windows Server 2008

    Greetings,
    I would like an example of an Enterprise Manager job that uses an OS script for MS Windows to run a Data Pump export of my Oracle 11g database (11.2.0.3) running on a Windows 2008 server. My OEM OMS is running on a Linux server with an Oracle 12c repository. I'd like to be able to set environment variables for the date and time, my export file name (including the SID, export date and time, job name, and other information pertinent to the Data Pump export). Thus far, I have been unsuccessful with using the % delimiter around my variables. Also, I have put "cmd /c" as the "Interpreter", but I am not getting anywhere in a hurry :-(
    Thanks  Million!!!
    Mike
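    A minimal sketch of such a batch step (the SID, credentials, and directory object are assumptions; PowerShell supplies the timestamp because %date%/%time% parsing is locale-dependent):
    REM build a timestamp and use it in the dump and log file names
    set ORACLE_SID=ORCL
    for /f %%i in ('powershell -command "Get-Date -Format yyyyMMdd_HHmmss"') do set TS=%%i
    expdp system/secret DIRECTORY=dp_dir DUMPFILE=%ORACLE_SID%_%TS%.dmp LOGFILE=%ORACLE_SID%_%TS%.log FULL=Y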

    1. Try to reach the server by IP (bypass name resolution)
    2. Disabling IPv6 is not a good idea
    3. What is the server operating system and what is the workstation operating system?
    4. Is this a new or persistent problem?
    5. If the server and workstation have different SMB versions, set the higher one down to the lower one (see the Petri web site for the procedure)
    6. Uninstall the AV with its removal tool and test without AV
    7. Use a network monitor to diagnose the network traffic.
    M.

  • DATAPUMP EXPORT FAILS (for certain table names)

    I have a table named G$_EXP$ defined in user foo.
    When I try to export the table using expdp (the Data Pump export utility):
    expdp foo/foo TABLES=G$_EXP$ DIRECTORY=USERDIR EXCLUDE=TRIGGER
    I get the following error message:
    _EXP: Undefined variable.
    So, can expdp not handle this kind of table name?
    Will appreciate any comments/responses.

    It's not that expdp can't handle the table name; it's that your OS shell interprets $xxx as a variable.
    Try quoting the table name, or use a parameter file (PARFILE).
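    Concretely, since "_EXP: Undefined variable" is the C shell complaining about $ expansion, either escape the dollar signs or keep them away from the shell entirely:
    # escape the dollar signs so the shell passes them through literally
    expdp foo/foo TABLES=G\$_EXP\$ DIRECTORY=USERDIR EXCLUDE=TRIGGER
    # or put the table name in a parfile, which the shell never interprets
    # exp_g.par contains the single line:  TABLES=G$_EXP$
    expdp foo/foo parfile=exp_g.par DIRECTORY=USERDIR EXCLUDE=TRIGGER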

  • SQL*Net and Datapump export disconnects with ORA-03113 error

    Hi,
    I have some trouble with an Oracle 11.2.0.1.0 server installation on a Windows 2008 R2 server.
    If I am connected with SQL*Plus on the database server to an instance, it randomly disconnects me, usually within 30-60 seconds, even though I am actively running simple select queries.
    I can connect again without any error, and soon it disconnects me again.
    If I run an expdp Data Pump export, it starts but displays error UDE-03113 and then ORA-03113. The Data Pump export continues in the background and finishes successfully, though.
    When I have a look at the alert log for the database instance, I see this error every time it happens:
    Fatal NI connect error 12547, connecting to:
    (LOCAL=NO)
    VERSION INFORMATION:
         TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
         Oracle Bequeath NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production
         Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 11.2.0.1.0 - Production
    Time: 22-JUL-2012 21:10:40
    Tracing not turned on.
    Tns error struct:
    ns main err code: 12547
    TNS-12547: TNS:lost contact
    ns secondary err code: 12560
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0
    opiodr aborting process unknown ospid (19800) as a result of ORA-609
    Mon Jul 23 02:00:00 2012
    I don't believe it is a network problem, as every command I am running is on the server console. There are no other errors in the OS event log, so I need some help on where to start...
    Thanks in advance!
    Best Regards
    Martin Gabrielsson

    Thanks, no Windows firewall is turned on. I turned on SQL*Net tracing, and this is the last part of the trc file after running a simple select query on a table with 3 rows. After the select query has run, I am disconnected... Any ideas? Help is really appreciated!
    2012-07-30 18:15:28.658650 : nsbasic_brc:packet dump
    2012-07-30 18:15:28.658660 : nsbasic_brc:01 74 00 00 06 00 00 00 |.t......|
    2012-07-30 18:15:28.658668 : nsbasic_brc:00 00 06 01 1A 00 26 00 |......&.|
    2012-07-30 18:15:28.658676 : nsbasic_brc:00 00 00 00 0F 00 00 00 |........|
    2012-07-30 18:15:28.658684 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658693 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658701 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658708 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658717 : nsbasic_brc:00 00 00 00 07 96 2C 00 |......,.|
    2012-07-30 18:15:28.658725 : nsbasic_brc:25 06 4D 52 20 20 20 20 |%.MR....|
    2012-07-30 18:15:28.658734 : nsbasic_brc:03 C2 1A 5F 04 C3 02 10 |..._....|
    2012-07-30 18:15:28.658742 : nsbasic_brc:43 06 43 4C 4F 53 45 44 |C.CLOSED|
    2012-07-30 18:15:28.658750 : nsbasic_brc:FF FF FF FF FF FF FF FF |........|
    2012-07-30 18:15:28.658758 : nsbasic_brc:FF FF 06 4C 50 49 4D 20 |...LPIM.|
    2012-07-30 18:15:28.658767 : nsbasic_brc:20 FF FF FF FF FF FF FF |........|
    2012-07-30 18:15:28.658775 : nsbasic_brc:FF FF FF FF FF 0A 4D 41 |......MA|
    2012-07-30 18:15:28.658783 : nsbasic_brc:52 47 41 42 20 20 20 20 |RGAB....|
    2012-07-30 18:15:28.658791 : nsbasic_brc:FF 3C 77 68 65 65 6C 20 |.<wheel.|
    2012-07-30 18:15:28.658799 : nsbasic_brc:73 68 6F 70 20 20 20 20 |shop....|
    2012-07-30 18:15:28.658807 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|
    2012-07-30 18:15:28.658815 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|
    2012-07-30 18:15:28.658824 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|
    2012-07-30 18:15:28.658832 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|
    2012-07-30 18:15:28.658840 : nsbasic_brc:20 20 20 20 20 20 20 20 |........|
    2012-07-30 18:15:28.658848 : nsbasic_brc:20 20 20 20 20 20 02 C1 |........|
    2012-07-30 18:15:28.658856 : nsbasic_brc:05 07 78 66 05 03 10 09 |..xf....|
    2012-07-30 18:15:28.658864 : nsbasic_brc:38 02 C1 02 FF 03 C2 08 |8.......|
    2012-07-30 18:15:28.658872 : nsbasic_brc:42 FF 01 58 04 01 00 00 |B..X....|
    2012-07-30 18:15:28.658880 : nsbasic_brc:00 14 00 01 02 00 00 00 |........|
    2012-07-30 18:15:28.658888 : nsbasic_brc:7B 05 00 00 00 00 02 00 |{.......|
    2012-07-30 18:15:28.658897 : nsbasic_brc:00 00 03 00 20 00 00 00 |........|
    2012-07-30 18:15:28.658905 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658912 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658921 : nsbasic_brc:00 00 00 00 00 16 00 00 |........|
    2012-07-30 18:15:28.658929 : nsbasic_brc:01 00 00 00 36 01 00 00 |....6...|
    2012-07-30 18:15:28.658937 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658945 : nsbasic_brc:00 00 00 00 E0 53 D1 22 |.....S."|
    2012-07-30 18:15:28.658953 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658961 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658969 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658977 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658987 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.658996 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.659004 : nsbasic_brc:00 00 00 00 00 00 00 00 |........|
    2012-07-30 18:15:28.659012 : nsbasic_brc:00 00 00 00 17 4F 52 41 |.....ORA|
    2012-07-30 18:15:28.659021 : nsbasic_brc:2D 30 31 34 30 33 3A 20 |-01403:.|
    2012-07-30 18:15:28.659029 : nsbasic_brc:64 61 74 61 20 73 61 6B |data.sak|
    2012-07-30 18:15:28.659038 : nsbasic_brc:6E 61 73 0A |nas. |
    2012-07-30 18:15:28.659046 : nsbasic_brc:exit: oln=0, dln=362, tot=372, rc=0
    2012-07-30 18:15:28.659054 : nioqrc:exit
    2012-07-30 18:16:09.583993 : nioqsn:entry
    2012-07-30 18:16:09.584062 : nioqsn:exit
    2012-07-30 18:16:09.584086 : nioqrc:entry
    2012-07-30 18:16:09.584097 : nsbasic_bsd:entry
    2012-07-30 18:16:09.584107 : nsbasic_bsd:tot=0, plen=318.
    2012-07-30 18:16:09.584116 : nttfpwr:entry
    2012-07-30 18:16:09.584146 : ntt2err:entry
    2012-07-30 18:16:09.584162 : ntt2err:soc 564 error - operation=6, ntresnt[0]=530, ntresnt[1]=54, ntresnt[2]=0
    2012-07-30 18:16:09.584171 : ntt2err:exit
    2012-07-30 18:16:09.584178 : nttfpwr:exit
    2012-07-30 18:16:09.584192 : nserror:entry
    2012-07-30 18:16:09.584203 : nserror:nsres: id=0, op=67, ns=12571, ns2=12560; nt[0]=530, nt[1]=54, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
    2012-07-30 18:16:09.584213 : nsbasic_bsd:exit (-1)
    2012-07-30 18:16:09.584224 : nioqrc:send failed: bl = 1, nicbl = 1
    2012-07-30 18:16:09.584234 : nioqper: error from nioqrc
    2012-07-30 18:16:09.584242 : nioqper: ns main err code: 12571
    2012-07-30 18:16:09.584250 : nioqper: ns (2) err code: 12560
    2012-07-30 18:16:09.584258 : nioqper: nt main err code: 530
    2012-07-30 18:16:09.584266 : nioqper: nt (2) err code: 54
    2012-07-30 18:16:09.584275 : nioqper: nt OS err code: 0
    2012-07-30 18:16:09.584285 : nioqer:entry
    2012-07-30 18:16:09.584293 : nioqer: incoming err = 12150
    2012-07-30 18:16:09.584301 : niomapnserror:entry
    2012-07-30 18:16:09.584311 : niqme:entry
    2012-07-30 18:16:09.584321 : niqme:reporting NS-12571 error as ORA-12571
    2012-07-30 18:16:09.584330 : niqme:exit
    2012-07-30 18:16:09.584337 : niomapnserror:exit
    2012-07-30 18:16:09.584344 : nioqce:entry
    2012-07-30 18:16:09.584352 : nioqce:exit
    2012-07-30 18:16:09.584359 : nioqer: returning err = 12571
    2012-07-30 18:16:09.584366 : nioqer:exit
    2012-07-30 18:16:09.584374 : nioqrc: returning error: 12571
    2012-07-30 18:16:09.584381 : nioqrc:exit
    2012-07-30 18:16:09.584396 : nioqrs:entry
    2012-07-30 18:16:09.584412 : nioqrs: state = interrupted (1)
    2012-07-30 18:16:09.584425 : nscontrol:entry
    2012-07-30 18:16:09.584435 : nscontrol:cmd=45, lcl=0x0
    2012-07-30 18:16:09.584442 : nscontrol:normal exit
    2012-07-30 18:16:09.584450 : nscontrol:entry
    2012-07-30 18:16:09.584457 : nscontrol:cmd=1, lcl=0x0
    2012-07-30 18:16:09.584464 : nscontrol:normal exit
    2012-07-30 18:16:09.584476 : nioqsm:entry
    2012-07-30 18:16:09.584485 : nioqsm: Sending break packet (1)...
    2012-07-30 18:16:09.584493 : nscontrol:entry
    2012-07-30 18:16:09.584500 : nscontrol:cmd=45, lcl=0x0
    2012-07-30 18:16:09.584508 : nscontrol:normal exit
    2012-07-30 18:16:09.584516 : nsdo:entry
    2012-07-30 18:16:09.584525 : nsdo:cid=0, opcode=67, *bl=1, *what=17, uflgs=0x100, cflgs=0x3
    2012-07-30 18:16:09.584534 : nsdo:rank=64, nsctxrnk=0
    2012-07-30 18:16:09.584543 : nsdo:nsctx: state=8, flg=0x400d, mvd=0
    2012-07-30 18:16:09.584552 : nsdo:gtn=32, gtc=32, ptn=10, ptc=8191
    2012-07-30 18:16:09.584560 : nsdofls:entry
    2012-07-30 18:16:09.584569 : nsdofls:DATA flags: 0x0
    2012-07-30 18:16:09.584577 : nsdofls:normal exit
    2012-07-30 18:16:09.584587 : nsdo:sending NSPTMK packet
    2012-07-30 18:16:09.584596 : nspsend:entry
    2012-07-30 18:16:09.584605 : nspsend:plen=11, type=12
    2012-07-30 18:16:09.584614 : nttwr:entry
    2012-07-30 18:16:09.584628 : ntt2err:entry
    2012-07-30 18:16:09.584638 : ntt2err:soc 564 error - operation=6, ntresnt[0]=530, ntresnt[1]=54, ntresnt[2]=0
    2012-07-30 18:16:09.584646 : ntt2err:exit
    2012-07-30 18:16:09.584657 : nttwr:exit
    2012-07-30 18:16:09.584668 : nspsend:0 bytes to transport
    2012-07-30 18:16:09.584677 : nspsend:transport write error
    2012-07-30 18:16:09.584684 : nspsend:error exit
    2012-07-30 18:16:09.584692 : nsdo:error sending NSPTMK packet
    2012-07-30 18:16:09.584700 : nserror:entry
    2012-07-30 18:16:09.584709 : nserror:nsres: id=0, op=67, ns=12571, ns2=12560; nt[0]=530, nt[1]=54, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
    2012-07-30 18:16:09.584719 : nsdo:nsctxrnk=0
    2012-07-30 18:16:09.584726 : nsdo:error exit
    2012-07-30 18:16:09.584737 : nioqsm:send-break: failed to send break...
    2012-07-30 18:16:09.584746 : nioqper: error from send-marker
    2012-07-30 18:16:09.584753 : nioqper: ns main err code: 12571
    2012-07-30 18:16:09.584761 : nioqper: ns (2) err code: 12560
    2012-07-30 18:16:09.584769 : nioqper: nt main err code: 530
    2012-07-30 18:16:09.584776 : nioqper: nt (2) err code: 54
    2012-07-30 18:16:09.584784 : nioqper: nt OS err code: 0
    2012-07-30 18:16:09.584792 : nioqsm:exit
    2012-07-30 18:16:09.584799 : nioqer:entry
    2012-07-30 18:16:09.584808 : nioqer: incoming err = 12152
    2012-07-30 18:16:09.584817 : niomapnserror:entry
    2012-07-30 18:16:09.584824 : niqme:entry
    2012-07-30 18:16:09.584833 : niqme:reporting NS-12571 error as ORA-12571
    2012-07-30 18:16:09.584840 : niqme:exit
    2012-07-30 18:16:09.584847 : niomapnserror:exit
    2012-07-30 18:16:09.584854 : nioqce:entry
    2012-07-30 18:16:09.584861 : nioqce:exit
    2012-07-30 18:16:09.584868 : nioqer: returning err = 12571
    2012-07-30 18:16:09.584876 : nioqer:exit
    2012-07-30 18:16:09.584884 : nioqrs:nioqrs: Couldn't send break. returning 12571
    2012-07-30 18:16:09.584894 : nioqrs:exit
    2012-07-30 18:16:09.584912 : nioqds:entry
    2012-07-30 18:16:09.584921 : nioqds: disconnecting...
    2012-07-30 18:16:09.584933 : nsclose:entry
    2012-07-30 18:16:09.584945 : nsvntx_dei:entry
    2012-07-30 18:16:09.584953 : nsvntx_dei:exit
    2012-07-30 18:16:09.584964 : nstimarmed:entry
    2012-07-30 18:16:09.584973 : nstimarmed:no timer allocated
    2012-07-30 18:16:09.584980 : nstimarmed:normal exit
    2012-07-30 18:16:09.584994 : nttctl:entry
    2012-07-30 18:16:09.585009 : nttctl:entry
    2012-07-30 18:16:09.585021 : nsfull_cls:entry
    2012-07-30 18:16:09.585031 : nsfull_cls:cid=0, opcode=65, *bl=0, *what=0, uflgs=0x0, cflgs=0x0
    2012-07-30 18:16:09.585040 : nsfull_cls:nsctx: state=8, flg=0x4009, mvd=0
    2012-07-30 18:16:09.585048 : nsdo:entry
    2012-07-30 18:16:09.585056 : nsdo:cid=0, opcode=67, *bl=0, *what=1, uflgs=0x0, cflgs=0x1
    2012-07-30 18:16:09.585065 : nsdo:nsctx: state=8, flg=0x4009, mvd=0
    2012-07-30 18:16:09.585074 : nsdo:gtn=32, gtc=32, ptn=10, ptc=8191
    2012-07-30 18:16:09.585082 : nsdo:normal exit
    2012-07-30 18:16:09.585089 : nsdofls:entry
    2012-07-30 18:16:09.585097 : nsdofls:DATA flags: 0x40
    2012-07-30 18:16:09.585105 : nsdofls:sending NSPTDA packet
    2012-07-30 18:16:09.585113 : nspsend:entry
    2012-07-30 18:16:09.585120 : nspsend:plen=10, type=6
    2012-07-30 18:16:09.585128 : nttwr:entry
    2012-07-30 18:16:09.585140 : ntt2err:entry
    2012-07-30 18:16:09.585150 : ntt2err:soc 564 error - operation=6, ntresnt[0]=530, ntresnt[1]=54, ntresnt[2]=0
    2012-07-30 18:16:09.585158 : ntt2err:exit
    2012-07-30 18:16:09.585165 : nttwr:exit
    2012-07-30 18:16:09.585173 : nspsend:0 bytes to transport
    2012-07-30 18:16:09.585181 : nspsend:transport write error
    2012-07-30 18:16:09.585188 : nspsend:error exit
    2012-07-30 18:16:09.585196 : nserror:entry
    2012-07-30 18:16:09.585205 : nserror:nsres: id=0, op=67, ns=12571, ns2=12560; nt[0]=530, nt[1]=54, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
    2012-07-30 18:16:09.585215 : nsdofls:exit (-1)
    2012-07-30 18:16:09.585223 : nsbfr:entry
    2012-07-30 18:16:09.585230 : nsbaddfl:entry
    2012-07-30 18:16:09.585239 : nsbaddfl:normal exit
    2012-07-30 18:16:09.585247 : nsbfr:normal exit
    2012-07-30 18:16:09.585254 : nsbfr:entry
    2012-07-30 18:16:09.585261 : nsbaddfl:entry
    2012-07-30 18:16:09.585268 : nsbaddfl:normal exit
    2012-07-30 18:16:09.585276 : nsbfr:normal exit
    2012-07-30 18:16:09.585283 : nsfull_cls:normal exit
    2012-07-30 18:16:09.585291 : nsiocancel:entry
    2012-07-30 18:16:09.585303 : nsiofrrg:entry
    2012-07-30 18:16:09.585313 : nsiofrrg:cur = 5e5f3f8
    2012-07-30 18:16:09.585321 : nsbfr:entry
    2012-07-30 18:16:09.585328 : nsbaddfl:entry
    2012-07-30 18:16:09.585335 : nsbaddfl:normal exit
    2012-07-30 18:16:09.585342 : nsbfr:normal exit
    2012-07-30 18:16:09.585350 : nsiofrrg:exit
    2012-07-30 18:16:09.585358 : nsiocancel:exit
    2012-07-30 18:16:09.585365 : nsclose:closing transport
    2012-07-30 18:16:09.585375 : nttdisc:entry
    2012-07-30 18:16:09.585438 : nttdisc:Closed socket 564
    2012-07-30 18:16:09.585453 : nttdisc:exit
    2012-07-30 18:16:09.585463 : nsclose:global context check-out (from slot 0) complete
    2012-07-30 18:16:09.585471 : nsnadisc:entry
    2012-07-30 18:16:09.585484 : nadisc:entry
    2012-07-30 18:16:09.585496 : nacomtm:entry
    2012-07-30 18:16:09.585506 : nacompd:entry
    2012-07-30 18:16:09.585513 : nacompd:exit
    2012-07-30 18:16:09.585521 : nacompd:entry
    2012-07-30 18:16:09.585527 : nacompd:exit
    2012-07-30 18:16:09.585535 : nacomtm:exit
    2012-07-30 18:16:09.585545 : nas_dis:entry
    2012-07-30 18:16:09.585553 : nas_dis:exit
    2012-07-30 18:16:09.585562 : nau_dis:entry
    2012-07-30 18:16:09.585577 : nau_dis:exit
    2012-07-30 18:16:09.585587 : naeetrm:entry
    2012-07-30 18:16:09.585596 : naeetrm:exit
    2012-07-30 18:16:09.585604 : naectrm:entry
    2012-07-30 18:16:09.585613 : naectrm:exit
    2012-07-30 18:16:09.585623 : nagbltrm:entry
    2012-07-30 18:16:09.585632 : nau_gtm:entry
    2012-07-30 18:16:09.585640 : nau_gtm:exit
    2012-07-30 18:16:09.585648 : nagbltrm:exit
    2012-07-30 18:16:09.585657 : nadisc:exit
    2012-07-30 18:16:09.585665 : nsnadisc:normal exit
    2012-07-30 18:16:09.585675 : nsvntx_dei:entry
    2012-07-30 18:16:09.585682 : nsvntx_dei:exit
    2012-07-30 18:16:09.585694 : nsopenfree_nsntx:nlhthdel from mplx_ht_nsgbu, ctx=5e5e0e0 nsntx=5e5e6c0
    2012-07-30 18:16:09.585703 : nsiocancel:entry
    2012-07-30 18:16:09.585710 : nsiofrrg:entry
    2012-07-30 18:16:09.585718 : nsiofrrg:exit
    2012-07-30 18:16:09.585725 : nsiocancel:exit
    2012-07-30 18:16:09.585732 : nsmfr:entry
    2012-07-30 18:16:09.585741 : nsmfr:2944 bytes at 0x5e5e6c0
    2012-07-30 18:16:09.585748 : nsmfr:normal exit
    2012-07-30 18:16:09.585755 : nsmfr:entry
    2012-07-30 18:16:09.585763 : nsmfr:240 bytes at 0x5f2a610
    2012-07-30 18:16:09.585771 : nsmfr:normal exit
    2012-07-30 18:16:09.585778 : nsmfr:entry
    2012-07-30 18:16:09.585785 : nsmfr:280 bytes at 0x63c010
    2012-07-30 18:16:09.585792 : nsmfr:normal exit
    2012-07-30 18:16:09.585803 : nladtrm:entry
    2012-07-30 18:16:09.585820 : nladtrm:exit
    2012-07-30 18:16:09.585828 : nsmfr:entry
    2012-07-30 18:16:09.585836 : nsmfr:1496 bytes at 0x5e5e0e0
    2012-07-30 18:16:09.585844 : nsmfr:normal exit
    2012-07-30 18:16:09.585851 : nsclose:normal exit
    2012-07-30 18:16:09.585859 : nioqds:exit
    2012-07-30 18:16:09.585868 : nsbfree:entry
    2012-07-30 18:16:09.585876 : nsbgetfl:entry
    2012-07-30 18:16:09.585884 : nsbgetfl:normal exit
    2012-07-30 18:16:09.585894 : nsbaddfl:entry
    2012-07-30 18:16:09.585901 : nsbaddfl:normal exit
    2012-07-30 18:16:09.585909 : nsbfree:normal exit
    2012-07-30 18:16:09.585916 : nsbfree:entry
    2012-07-30 18:16:09.585923 : nsbgetfl:entry
    2012-07-30 18:16:09.585930 : nsbgetfl:normal exit
    2012-07-30 18:16:09.585938 : nsbaddfl:entry
    2012-07-30 18:16:09.585945 : nsbaddfl:normal exit
    2012-07-30 18:16:09.585952 : nsbfree:normal exit
    2012-07-30 18:16:09.585961 : nigtrm:Count in the NI global area is now 2
    2012-07-30 18:16:09.585974 : nsbfrfl:entry
    2012-07-30 18:16:09.585982 : nsbrfr:entry
    2012-07-30 18:16:09.585991 : nsbrfr:nsbfs at 0x5f26470, data at 0x5f26520.
    2012-07-30 18:16:09.585999 : nsbrfr:normal exit
    2012-07-30 18:16:09.586006 : nsbrfr:entry
    2012-07-30 18:16:09.586015 : nsbrfr:nsbfs at 0x5f28540, data at 0x5f285f0.
    2012-07-30 18:16:09.586025 : nsbrfr:normal exit
    2012-07-30 18:16:09.586033 : nsbrfr:entry
    2012-07-30 18:16:09.586040 : nsbrfr:nsbfs at 0x5e5f480, data at 0x5e5f530.
    2012-07-30 18:16:09.586048 : nsbrfr:normal exit
    2012-07-30 18:16:09.586055 : nsbrfr:entry
    2012-07-30 18:16:09.586063 : nsbrfr:nsbfs at 0x5f2a610, data at 0x5f2a9f0.
    2012-07-30 18:16:09.586073 : nsbrfr:normal exit
    2012-07-30 18:16:09.586082 : nsbrfr:entry
    2012-07-30 18:16:09.586090 : nsbrfr:nsbfs at 0x62c7d0, data at 0x5f2ca10.
    2012-07-30 18:16:09.586098 : nsbrfr:normal exit
    2012-07-30 18:16:09.586106 : nsbfrfl:normal exit
    2012-07-30 18:16:09.586153 : nigtrm:Count in the NL global area is now 3
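    For reference, client-side traces like the one above are typically produced with sqlnet.ora settings of this kind (a sketch; the trace directory is an assumption):
    # sqlnet.ora on the machine where the tools are run
    TRACE_LEVEL_CLIENT = 16
    TRACE_DIRECTORY_CLIENT = C:\oracle\network\trace
    TRACE_TIMESTAMP_CLIENT = ON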

  • Exporting Images Questions

    Hi,
    I basically understand all of LR except for the only part that matters - getting my images out of LR for various purposes such as standard printing, enlargements, work website, personal website (in the future), Facebook, etc. I'll save most of my web-related questions for another thread.
    I don't have a printer, so I'm mostly interested in saving images as JPEGs to take to a print shop (and possibly send online to some place like Shutterfly) and get them printed in good/high quality and untarnished (i.e. not cropped or otherwise manipulated by the print shop). I also live overseas in an undeveloped and unsophisticated country, so I have the language/concept challenge of explaining what I want to print shops. I'm having my own trouble with the concepts. I also think in inches, but have to convert/work in cm.
    1) I'll start with the dumbest question. In the Export dialogue, is "Image Sizing" the physical size that I am trying to make the print?
    2) Is the aspect ratio affected by the Image Sizing settings? For example, if I have a 4:3 image but want it printed 4x6 inches or 5x7 or 8x10 etc., what happens in the Export dialogue? I think the image would get cropped, but I'm not sure.
    3) What do I do for enlargements? For example, I want to make a 4x6 inch print and then a 20x30 inch print of the same image? What settings do I need to change? I assume with an enlarged image I need to be careful of exceeding the original file size. Next question...
    4) Is Image Sizing > "Don't Enlarge" the way to prevent the file size from exceeding the original file size? Or is "Limit File Size to xxx" in File Settings the way to do this? I see no way in LR of knowing what my original file size is, or what it is after some cropping - the file size changes after cropping, right?
    5) According to many (but not all), "Quality" in File Settings does not need to = 100. If I understand correctly, it's actually a 12-step scale that dramatically increases the file size in the last few steps of the scale with practically no discernible improvements. I wonder how the "Quality" setting might affect the print quality at various physical print sizes?  Should "Quality" be set differently for prints vs. web? I don't really know what the file size of the image is after exporting until I open the properties.
    6) A friend told me to just set print Resolution to 300 and web Resolution to 100 because it'll be easier to make calculations about pixels, image size and file sizes.
    My friend also showed me Photoshop (which I don't have), and it seemed easier to get information about physical size, pixels, image size etc. I also looked at the LR Print Module, but I don't see anything in the module that clarifies my confusion about the above questions. Maybe some answers above will also help me understand how to export/save/upload images for my web-related purposes.
    Ultimately, I think I'm trying to get a set of user presets for the various needs I might want and I apologize if I'm just not understanding the concepts or confusing them.
    Thanks
    Andrew

    Thank you Jim. It's getting me toward the right direction. I've been experimenting with all different types of export settings to try and learn what is happening in LR.
    I basically understand the concept of the # of pixels to print size.
    However, for example:
    1) I have an image from my OMD-EM1 that I shot in 4:3
    2) I crop it to 2x3/4x6 in the Develop module > Cropping Tool. (But, LR never shows the crop size or file size)
    3) I export it and set 6 inches by 4 inches in the Image Sizing dialog at 300 ppi
    (Let's leave Quality at 100 for ease)
    I fully understand that the image comes out at 1200x1800. The file size is 2.84 MB
    I can tell the print shop to print a 4x6 inch print. It won't be cropped and it should have good quality.
    4) Now I want a print of the same image at 20 inches x 30 inches (same aspect ratio and crop)
    5) I export it and set 30 inches by 20 inches in Image Sizing at 300 ppi (9000 by 6000 pixels)
    (Quality is the same at 100)
    The file size is now 37 MB. This is going to be a problem, right? Pixelation??
    If I go back to the original RAW file, the file size is 18 MB. Now, I have a file twice the size of the original.
    I was told that you don't ever want to exceed your original file size.
    If I drop the Quality to 90 and keep ppi at 300, then the file size is 16 MB.
    Or if I reduce the ppi to 150 (keeping Quality at 100), then the file size is about 10 MB.
    Is there going to be any difference in the final print?
    Btw, I don't see any way in LR to find out what my original file size is; my cropped file size or what my exported file size will be until after I export it and open the properties tab or scroll over it.
    I think Photoshop has a built-in print size/pixels/image size calculator.
    I have no idea what I would do if I manually cropped something to a non-typical aspect ratio and then wanted to print it. But, that's probably another topic/thread.
    Also, remember I live in Cambodia. I don't have the luxury of sending online to a quality print service or take to a real print shop. I have no idea what they use for printers here or how they set them up, but one problem is that they like to make everyone look "white"
    Thanks again and apologies again for not quite understanding.
    Andrew

  • Sharpening export workflow question

    I have a sharpening workflow question. Say I have pictures from a portrait session I just finished. I have to send 10 pictures the client ordered to a print lab, and I will also make some small Facebook-sized pictures and upload them to my business Facebook page. The level of sharpening needed for large prints (I upload to the print lab as RGB JPEGs) and the sharpening needed for the very small sRGB Facebook-sized pictures is different. In Lightroom I have the option to set the sharpening on export, and I have a bunch of presets that alter the export size, color space, sharpening, etc. (WHCC print lab, Facebook, Client CD, etc). I don't see how to do that in Aperture. I see they have the option if you have a printer, but not on normal export.
    For those of you that have to export batches of pictures in multiple different sizes (with different levels of sharpening), what is your workflow? I could use some photoshop droplets/actions after Aperture export but I was hoping there was a way to avoid the extra step. Am I overlooking an export feature? The BorderFX plug-in looks like the only other option.
    Thank you in advance for time and help!
    Scott

    Frank Scallo Jr wrote:
    The thing is guys - Once a file is sized down it WILL lose sharpening - what we are doing is sharpening the full size RAW file or rather what the full size output would be like. Once we export a version sized down it will lose some of the 'bite'. LR has sharpening options on 'output' which is not only smart but a necessity. Adobe realizes that output for screen needs another sharpen. Apple either doesn't know or didn't bother. It makes ANY output for screen less than best.
    Bear in mind that there seem to be two separate issues going on here - sharpening adjustments not being applied on export, and resizing.
    As far as resizing is concerned, Aperture appears to use something roughly equivalent to Photoshop's Bicubic Sharper setting. Because of this I've never had much problem with Aperture's exports when used for the web, but obviously everyone's taste for sharpening differs which is why an option for output sharpening would be good.
    Sharpening adjustments not being applied on export is a separate issue and should be reported via the feedback form ASAP by everyone who is experiencing the bug.
    Now printing is another animal - I wouldn't print directly from RAW in aperture either if I'm printing small. Again, LR beats Aperture here as well since they include output sharpening for print.
    Aperture has had output sharpening for printing since 2.0 came out (unless it was in 1.5). In A3 you need to turn on 'More Options' and scroll down; I can't remember where it is in A2. I don't know how effective it is as I print via a lab, but it's there and it's been there for a long time...
    Ian

  • How to ZIP Oracle Datapump export backup file

    Hello All,
    My customer is asking me to give him the production data dump at the following path: \\138.90.17.56\OMNISAFE.
    I really don't understand his requirement, and he also wants me to zip the export backup file. How do I do that? Do you know any Unix command to zip backup files?
    thanks and regards
    cherry

    1013498 wrote:
    Well, thanks for your reply... my Oracle version is 11.2.0.3, and if we have the compression option, can you please elaborate on how to use it...
    It's in the documentation. See Data Pump Export.
    Let us say my expdp file is abc.dmp... should I give the command gzip abc.dmp, or something different?
    Let me google that for you.
    One more question: what does the customer mean by "production data dump to the following path \\138.90.17.56\OMNISAFE"? Please explain.
    How do we know what the customer means? Why don't you ask him?
    That said, it looks like a UNC path to an IP address and a shared folder at that address. Again, if the customer wants you to send them a file, you need to work with said customer on the mechanics of accessing their system.
    All that said ....
    Learning how to look things up in the documentation is time well spent investing in your career.  To that end, you should drop everything else you are doing and do the following:
    Go to tahiti.oracle.com.
    Locate the link for your Oracle product and version, and click on it.
    You are now at the entire documentation set for your selected Oracle product and version.
    BOOKMARK THAT LOCATION
    Spend a few minutes just getting familiar with what is available here. Take special note of the "books" and "search" tabs. Under the "books" tab (for 10.x) or the "Master Book List" link (for 11.x) you will find the complete documentation library.
    Spend a few minutes just getting familiar with what kind  of documentation is available there by simply browsing the titles under the "Books" tab.
    Open the Reference Manual and spend a few minutes looking through the table of contents to get familiar with what kind of information is available there.
    Do the same with the SQL Reference Manual.
    Do the same with the Utilities manual.
    You don't have to read the above in depth.  They are reference manuals.  Just get familiar with what is there to be referenced. Ninety percent of the questions asked on this forum can be answered in less than 5 minutes by simply searching one of the above manuals.
    Then set yourself a plan to dig deeper.
    - Read a chapter a day from the Concepts Manual.
    - Take a look in your alert log.  One of the first things listed at startup is the initialization parms with non-default values. Read up on each one of them (listed in your alert log) in the Reference Manual.
    - Take a look at your listener.ora, tnsnames.ora, and sqlnet.ora files. Go to the Network Administrators manual and read up on everything you see in those files.
    - When you have finished reading the Concepts Manual, do it again.
    Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.
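    To make the two options concrete (schema, directory object, and file names are assumptions; COMPRESSION=ALL requires the Advanced Compression option on 11g):
    # option 1: compress inside the dump file as it is written
    expdp system/secret SCHEMAS=prod DIRECTORY=dp_dir DUMPFILE=prod_comp.dmp COMPRESSION=ALL LOGFILE=prod_comp.log
    # option 2: export normally, then zip the finished file with a standard Unix tool
    gzip abc.dmp
    # produces abc.dmp.gz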

  • Importing/Exporting + syncing Questions

    Hi there,
    just a question with regards to importing/exporting and syncing.
    First of all, is it possible to sync Yahoo Calendar with iCal, or is there a cool widget/tool that does so in real time?
    Second, I currently have a Mac at work running 10.2.8 and would like to somehow sync to my Mac at home running 10.4.8 without exporting/importing via media devices. The Mac at work is always online through a cable connection to the outside world... My Mac at home is connected to a switch and then through an ADSL modem to the outside world... What is the best way to connect the two through the Sharing setup in System Preferences?
    thanks

    I'm interested in the same sort of thing. I have a desktop at home and a laptop on the road. Usually, with other files and apps, I just copy the files to my laptop before I leave, but with iCal I do not know where the 'file' is...?
