DUMP files on UNIX

I have my Developer Suite 9i application running under UNIX. In the path where the executable files are, some dump files are getting created with the format:
f90webm_dump_<pid>, where pid is the process identifier.
Can I remove these files? They are occupying space on the UNIX server. Please help me resolve this doubt, as I need it resolved urgently.


Similar Messages

  • Newbie: How To find Core Dump files on Unix?!  URGENT!

    Hi, I would like to know how to find core dump files on Unix.
    I know they should be found in /usr/sap/<SYSTEM-ID>/<INSTANCE>/work,
    but there are no "core" files, and there is nothing unusual in tmp either, yet the disk space is totally full.
    So how do I find the big files which I could delete to make the system run again?
    Can someone provide me with some info?
    br

    1. Which user should I use to search and destroy? root or the SID user?
    Always use the user with the least permissions needed for the job; don't use root if your sidadm can do it. If you want to find or delete SAP files, use sidadm.
    2. I found no core files and the hard disk is still 100% full. What other files might cause this problem?
    In your first post you wrote that the /usr/sap/SID/INST/work directory is full; most likely some trace files got too large. Check for files like dev_*: dev_w0, for example, is the trace file of work process 0, dev_disp is the trace of the dispatcher, and so on. You either have to increase the size of the filesystem or find the cause of the growing file; it can be due to an increased trace level. (A quick way to hunt down the big files is sketched below.)
    3. What on the database side could cause this problem? Can I search for something here?
    This does not look like a database issue so far.
    4. I was unable to use the given scripts (noob!). What else can I do?
    Which one? Please post what you typed and the error you got.
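    A minimal sketch of the hunt (run as <sid>adm; <SID> and <INST> are placeholders):
    cd /usr/sap/<SID>/<INST>/work
    du -ak . | sort -rn | head -20              # 20 largest files/dirs, sizes in KB
    ls -l dev_*                                 # work process and dispatcher traces
    find /usr/sap -type f -size +204800 -print  # files over ~100 MB (size is in 512-byte blocks)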
    Best regards, Michael

  • Is there any way we can check the validity of a dump file on a Unix system

    Hi,
    I have taken an export of two tables which are huge in volume. One of the tables seems to have been exported successfully, yet the other one failed due to space scarcity. Both tables share a single dump file.
    I want to know:
    1. Can we use the same dump file to import the table that was successful, or can it not be used because the second table got an error while exporting?
    2. Is there any way to check the validity of a dump file on a Unix platform?
    Your reply is highly appreciated.

    784786 wrote:
    1. Can we use the same dump file to import the table that was successful, or can it not be used because the second table got an error while exporting?
    No. It is better to take another export of the same table.
    2. Is there any way to check the validity of a dump file on a Unix platform?
    Please check the export log. If it says that the export terminated successfully, then the dump may be fine.
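    With classic exp/imp, a hedged way to test the dump end to end: SHOW=Y makes imp read the whole file and print its contents without importing anything, so a clean run means the file is at least readable and well-formed (file names are placeholders).
    imp user/password FILE=tables.dmp SHOW=Y FULL=Y LOG=check_dump.log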

  • Reading XML file from UNIX

    I am reading an XML file from Unix using:
    FORM read_file USING p_name.
      DO.
        READ DATASET p_name INTO WXML_LINE LENGTH LENG.
        IF SY-SUBRC <> 0.
          EXIT. " leave the loop at end of file
        ENDIF.
      ENDDO.
    ENDFORM.
    Then I am using the subroutine below, where I get a short dump at
      case X_NODE->get_type( ).
    FORM get_data tables   Y_CAPXML   structure GV_CAPXML
                  using value(x_node) type ref to if_ixml_node.
      data: INDENT      type i.
      data: PTEXT       type ref to if_ixml_text.
      data: STRING      type string.
      data: TEMP_STRING(100).
      case X_NODE->get_type( ).
        when if_ixml_node=>co_node_element.
          STRING = X_NODE->get_name( ).
          GV_NODETEXT = STRING.
        when if_ixml_node=>co_node_text.
          PTEXT ?= X_NODE->query_interface( IXML_IID_TEXT ).
          if PTEXT->ws_only( ) is initial.
            STRING = X_NODE->get_value( ).
            case GV_NODETEXT.
              when 'NIIN'.
                move STRING to GV_CAPXML-NIIN.
              when 'FED_x0020_STOCK_x0020_CLASS'.
                move STRING to GV_CAPXML-fed_stock_class.
              when 'DODIC'.
                move STRING to GV_CAPXML-dodic.
            endcase.
          endif.
      endcase.
    ENDFORM.
    The text of the short dump is: STACK_STATE_NO_ROLL_MEMORY.
    Can someone please explain what it means?
    Thanks.

    Maybe this blog can help:
    /people/r.eijpe/blog/2005/11/21/xml-dom-processing-in-abap-part-ii--convert-an-xml-file-into-an-abap-table-using-sap-dom-approach
    Sri

  • Output Chinese characters to a CSV file in UNIX

    Hi,
    I encounter an ABAP dump in ECC6 whenever I output Chinese characters to a CSV file in UNIX. The error shows as:
    "At the conversion of a text from codepage '4102' to codepage '1100':
    - a character was found that cannot be displayed in one of the two codepages;
    - or it was detected that this conversion is not supported"
    The program opens the file with the statement OPEN DATASET xxxxx FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE. The reason for non-Unicode output is that users would like to open the CSV file directly in Excel; they do not wish to open it as a text file in Excel. Can the experts please share with me how to overcome the problem?
    Thanks
    Kang Ring

    Maybe you could give the following code a try and check:
    OPEN DATASET xxxxx FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE CODEPAGE '4103'.
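    If it helps to see which characters are failing, a rough OS-side check (an assumption-laden sketch: it supposes the server file is UTF-8, and ISO8859-1 is the character set behind SAP codepage 1100): any character iconv cannot map is one the ABAP conversion will also reject.
    iconv -f UTF-8 -t ISO8859-1 output.csv > /dev/null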
    Vikranth

  • Size of export dump file

    I read an article saying an export dump file cannot exceed the 2 GB limit; prior to 8.1.3 there was no large-file support in the Oracle Import, Export, or SQL*Loader utilities.
    What about the current versions, 9i or 10g? Can the exp dump file be larger than 2 GB?

    Under AIX Unix, there is an fsize parameter for every user in the /etc/security/limits file. If fsize = -1, the user can create files of any size. But to make this change, you need root access in the Unix environment.
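    A quick way to check the effective limit for the current user (a sketch; on AIX the value is reported in 512-byte blocks, and "unlimited" corresponds to fsize = -1):
    ulimit -f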

  • Dbms_datapump, dump file permissions

    Oracle 11.2.0.2
    Using the dbms_datapump APIs I can successfully create the dump file. However, that file does not have read or write permissions for the UNIX user, so the user can neither zip and ftp the file as required, nor chmod the permissions. Please advise. Thanks.

    Use ACLs. For example:
    hpux > # whoami
    hpux > whoami
    oradba
    hpux > # Create directory /tmp/acl_test for datapump files
    hpux > mkdir /tmp/acl_test
    hpux > # set directory /tmp/acl_test access to rwx for owner (Unix user oradba) and no access to group and other
    hpux > chmod 700 /tmp/acl_test
    hpux > # set ACL access to directory /tmp/acl_test file itself to rwx for user oracle
    hpux > setacl  -m u:oracle:rwx /tmp/acl_test
    hpux > # set ACL access to any file created in directory /tmp/acl_test to rwx for user oracle
    hpux > setacl  -m d:u:oracle:rwx /tmp/acl_test
    hpux > # set ACL access to any file created in directory /tmp/acl_test to rwx for user oradba
    hpux > setacl  -m d:u:oradba:rwx /tmp/acl_test
    hpux > # show directory /tmp/acl_test ACLs
    hpux > getacl /tmp/acl_test
    # file: /tmp/acl_test
    # owner: oradba
    # group: appdba
    user::rwx
    user:oracle:rwx
    group::---
    class:rwx
    other:---
    default:user:oracle:rwx
    default:user:oradba:rwx
    hpux > # create Oracle directory object
    hpux > sqlplus / << EOF
    create directory acl_test as '/tmp/acl_test';
    exit
    EOF
    SQL*Plus: Release 11.1.0.7.0 - Production on Mon Aug 22 15:27:56 2011
    Copyright (c) 1982, 2008, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL>
    Directory created.
    SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
    - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    hpux > # datapump export
    hpux > expdp / JOB_NAME=acl_test TABLES=T_INDEX_USAGE PARALLEL=1 COMPRESSION=ALL REUSE_DUMPFILES=Y DIRECTORY=ACL_TEST dumpfile=acl_test_%U.dmp logfile=acl_test.log
    Export: Release 11.1.0.7.0 - 64bit Production on Monday, 22 August, 2011 15:28:07
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
    Production
    With the Partitioning, OLAP and Data Mining options
    Starting "OPS$ORADBA"."ACL_TEST":  /******** JOB_NAME=acl_test TABLES=T_INDEX_USAGE PARALLEL=1 COMPRESSION=ALL REUSE_DUMPFILES=Y DIRECTORY=ACL_TEST dumpfile=acl_test_%U.dmp logfile=acl_test.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "OPS$ORADBA"."T_INDEX_USAGE"                    0 KB       0 rows
    Master table "OPS$ORADBA"."ACL_TEST" successfully loaded/unloaded
    Dump file set for OPS$ORADBA.ACL_TEST is:
      /tmp/acl_test/acl_test_01.dmp
    Job "OPS$ORADBA"."ACL_TEST" successfully completed at 15:28:40
    hpux > # directory /tmp/acl_test listing
    hpux > ls -l /tmp/acl_test
    total 64
    -rw-r-----+  1 oracle     dba           1036 Aug 22 15:28 acl_test.log
    -rw-r-----+  1 oracle     dba          20480 Aug 22 15:28 acl_test_01.dmp
    hpux > # copy datapump files (to prove we can read them)
    hpux > cp /tmp/acl_test/acl_test_01.dmp /tmp/acl_test/acl_test_01_copy.dmp
    hpux > cp /tmp/acl_test/acl_test.log /tmp/acl_test/acl_test_copy.log
    hpux > # delete files
    hpux > rm /tmp/acl_test/*
    /tmp/acl_test/acl_test.log: 640+ mode ? (y/n) y
    /tmp/acl_test/acl_test_01.dmp: 640+ mode ? (y/n) y
    hpux > # delete directory
    hpux > rmdir /tmp/acl_test
    hpux > rmdir /tmp/acl_test
    But based on "Oracle does have rights for the directory" and "UNIX user does have rights for the directory too", all you need is an ACL for the non-oracle UNIX user:
    setacl -m d:u:unix_user:rwx directory_path
    SY.

  • How can a non-DBA user manipulate the dump file outside of Oracle?

    I have a business request to allow a non-DBA database user to dump his tables and move his dump file on the Unix box from one file system to another. This user has a non-oracle Unix account. When using traditional exp, this is not a problem. But with expdp, all dump files are owned by oracle. Does anybody know how to change the ownership without a DBA involved?
    Unix: Sun Solaris
    DB: 10g
    Storage: SAN disk

    Betty wrote:
    Following option 1, the problem is now that the command in the shell script, like chmod 744, doesn't allow this non-DBA user to change the permission, since he doesn't own the file. You can test yourself:
    changepermit.ksh 755
    chmod 744 dump.dmp
    So have a script owned by oracle:dba change the owner!
    $ echo "" > bla
    $ ll bla
    -rw-rw-rw-   1 jeg    users            1 Nov 10 16:53 bla
    $ chmod 640 bla
    $ ll bla
    -rw-r-----   1 jeg    users            1 Nov 10 16:53 bla
    $ chown smk bla
    $ ll bla
    -rw-r-----   1 smk    users            1 Nov 10 16:53 bla
    $ echo "" > bla
    /usr/bin/ksh: bla: cannot create
    Note you'll have to move it unless you let oracle write to it.
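    A hedged side note (not from this thread): setuid scripts are ignored on most Unixes, so the usual way to let a non-DBA user run one fixed command as oracle is a sudoers rule. Every name and path below is a placeholder.
    # /etc/sudoers entry, added with visudo
    betty ALL=(oracle) NOPASSWD: /usr/local/bin/fix_dump_perms.sh
    # /usr/local/bin/fix_dump_perms.sh, owned by oracle, mode 755
    #!/bin/ksh
    chmod 644 /u01/dumps/*.dmp
    # the non-DBA user then runs:
    sudo -u oracle /usr/local/bin/fix_dump_perms.sh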

  • How to write a file into UNIX!

    Hi Group!
    I am trying to write an error file into a UNIX directory using 'OPEN DATASET ERRFILE FOR OUTPUT IN TEXT MODE ENCODING DEFAULT'. But unfortunately I am getting a dump: the statement returns subrc 8 and dumps with CX_SY_FILE_OPEN_MODE.
    The path looks like /sap/DE1/batch/data/SCM_YM/partfiles/HGMEH011.20071001150344
    Is there any special command or way to write the file into UNIX?
    A quick response would be of great help.
    Suresh

    The detailed error is:
    The file '&FILENAME&' was not opened, or was opened in the wrong mode.
    Can you please try to create a new file that does not exist yet? Then we can check whether you have authorization or not.
    OPEN DATASET on a file that is already open - in the same internal mode - triggers this exception.
    That's why you should first create a file with some rough name (maybe your name) and then check.
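    A quick OS-side authorization check (a sketch, run as the <sid>adm user the work process runs under; the path is the one from the post):
    ls -ld /sap/DE1/batch/data/SCM_YM/partfiles
    touch /sap/DE1/batch/data/SCM_YM/partfiles/authcheck.tmp && rm /sap/DE1/batch/data/SCM_YM/partfiles/authcheck.tmp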
    bye sasi

  • Interpreting RAC error dump file

    Can someone help me interpret this RAC dump file:
    $ more cogcsdev3_ora_27680.trc.old
    /n01/app/oracle/admin/COGCSDEV/udump/cogcsdev3_ora_27680.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP and Data Mining options
    ORACLE_HOME = /n01/app/oracle/product/10.2.0
    System name: SunOS
    Node name: grid-t3
    Release: 5.10
    Version: Generic_118833-24
    Machine: sun4u
    Instance name: COGCSDEV3
    Redo thread mounted by this instance: 3
    Oracle process number: 34
    Unix process pid: 27680, image: oracleCOGCSDEV3@grid-t3
    *** 2007-09-20 12:07:21.480
    *** SERVICE NAME:(SYS$USERS) 2007-09-20 12:07:21.480
    *** SESSION ID:(77.22755) 2007-09-20 12:07:21.480
    DUMP LOCAL BLOCKER/HOLDER: block level 5 res [0x19001d][0x37ff],[TX]
    ----------resource 0x3986d4a30----------------------
    resname : [0x19001d][0x37ff],[TX]
    Local node : 2
    dir_node : 2
    master_node : 2
    hv idx : 123
    hv last r.inc : 2
    current inc : 2
    hv status : 0
    hv master : 2
    open options : dd
    grant_bits : KJUSERNL KJUSEREX
    grant mode : KJUSERNL KJUSERCR KJUSERCW KJUSERPR KJUSERPW KJUSEREX
    count : 9 0 0 0 0 1
    val_state : KJUSERVS_NOVALUE
    valblk : 0x00000000000000000000000000000000 .
    access_node : 2
    vbreq_state : 0
    state : x0
    resp : 3986d4a30
    On Scan_q? : N
    Total accesses: 1759
    Imm. accesses: 1594
    Granted_locks : 1
    Cvting_locks : 9
    value_block: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    GRANTED_Q :
    lp 410dbe058 gl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 410c6d008 possible pid 9871 xid 3001-0012-0000006A bast 0 rseq 7 mseq 0 history 0x14951495
    open opt KJUSERDEADLOCK
    CONVERT_Q:
    lp 410dbe1a8 gl KJUSERNL rl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 410c86dc8 possible pid 27680 xid 3002-0022-0000013E bast 0 rseq 7 mseq 0 history 0x1495149a
    convert opt KJUSERGETVALUE
    lp 3d1196d88 gl KJUSERNL rl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 3d1063f38 possible pid 8134 xid 3003-003F-00000006 bast 0 rseq 7 mseq 0 history 0x1495149a
    convert opt KJUSERGETVALUE
    lp 3d11e7c38 gl KJUSERNL rl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 3d10615d8 possible pid 9126 xid 3004-0041-00000002 bast 0 rseq 7 mseq 0 history 0x1495149a
    convert opt KJUSERGETVALUE
    lp 410dbe448 gl KJUSERNL rl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 410c67d48 possible pid 10101 xid 3003-003A-00000008 bast 0 rseq 7 mseq 0 history 0x1495149a
    convert opt KJUSERGETVALUE
    lp 3d114ec98 gl KJUSERNL rl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 3d1060128 possible pid 11154 xid 3004-0043-00000002 bast 0 rseq 7 mseq 0 history 0x1495149a
    convert opt KJUSERGETVALUE
    lp 3d11e7998 gl KJUSERNL rl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 3d105d7c8 possible pid 11553 xid 3004-0047-00000002 bast 0 rseq 7 mseq 0 history 0x49a5149a
    convert opt KJUSERGETVALUE
    lp 3d114ef38 gl KJUSERNL rl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 3d105c318 possible pid 12500 xid 3004-0049-00000002 bast 0 rseq 7 mseq 0 history 0x1495149a
    convert opt KJUSERGETVALUE
    lp 410dbeaf0 gl KJUSERNL rl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 410c62a88 possible pid 15499 xid 3004-0046-00000003 bast 0 rseq 7 mseq 0 history 0x1495149a
    convert opt KJUSERGETVALUE
    lp 410d254c0 gl KJUSERNL rl KJUSEREX rp 3986d4a30 [0x19001d][0x37ff],[TX]
    master 2 gl owner 410c615d8 possible pid 17049 xid 3004-0048-00000002 bast 0 rseq 7 mseq 0 history 0x1495149a
    convert opt KJUSERGETVALUE
    ----------enqueue 0x410dbe058------------------------
    lock version : 159405
    Owner node : 2
    grant_level : KJUSEREX
    req_level : KJUSEREX
    bast_level : KJUSERNL
    notify_func : 0
    resp : 3986d4a30
    procp : 39ca22e98
    pid : 27680
    proc version : 86
    oprocp : 0
    opid : 0
    group lock owner : 410c6d008
    possible pid : 9871
    xid : 3001-0012-0000006A
    dd_time : 0.0 secs
    dd_count : 0
    timeout : 0.0 secs
    On_timer_q? : N
    On_dd_q? : N
    lock_state : GRANTED
    Open Options : KJUSERDEADLOCK
    Convert options : KJUSERNOQUEUE
    History : 0x14951495
    Msg_Seq : 0x0
    res_seq : 7
    valblk : 0x00000000000000000000000000000000 .
    Potential blocker (pid=9871) on resource TX-0019001D-000037FF
    DUMP LOCAL BLOCKER: initiate state dump for TIMEOUT
    possible owner[18.9871]
    Submitting asynchronized dump request [28]

    the "Potential blocker (pid=9871) on resource TX-0019001D-000037FF" part in the logfile caught my attention and I was not sure it's a block or enqueue contention.
    The database instance is the only running instance in a 3 node RAC and I was wondering how it still may be affected by enque process?
    Any suggestion will be welcome!
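    One way to chase this from SQL*Plus (a sketch; the TX resource name decodes into the ID1/ID2 columns of gv$lock, here 0x19001D and 0x37FF, and the backslash only keeps the shell from expanding $lock inside the heredoc):
    sqlplus -s / as sysdba << EOF
    select inst_id, sid, type, lmode, request
      from gv\$lock
     where type = 'TX'
       and id1 = to_number('19001D', 'XXXXXX')
       and id2 = to_number('37FF', 'XXXX');
    EOF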

  • Export a large dump file to a small drive

    Hello Everybody,
    I want to take an export of a 100 GB database (the dump file will come to approx. 50 to 60 GB) to a drive having 40 GB of space. Is it possible?
    Thanks in Advance
    Regards
    Hamid

    No version, no platform... Why? Too difficult? Strain on your fingers?
    The answer is platform and version dependent!
    Anyway, on 9i and before: Winblows: set the compression attribute of the directory or drive you plan to export to, and make sure this attribute is inherited.
    Unix: export to a pipe and compress the input of this pipe (a sketch follows below).
    Scripts are floating around this forum and everywhere on the Internet (also on Metalink); everyone should be able to find them with little effort.
    On 10g: expdp can generate compressed exports.
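    The pipe trick itself, as a minimal sketch (exp writes into a named pipe while gzip drains it; credentials and paths are placeholders):
    mknod /tmp/exp_pipe p                         # or: mkfifo /tmp/exp_pipe
    gzip < /tmp/exp_pipe > /backup/full.dmp.gz &
    exp system/manager FULL=Y FILE=/tmp/exp_pipe
    rm /tmp/exp_pipe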
    Sybrand Bakker
    Senior Oracle DBA

  • Redirect core dump file

    This seems like a fairly simple problem, but I cannot seem to find it in any searches...
    How can I set the core (or heap) dump directory to "/usr/scratch" instead of the current directory? I'm running 1.4.2 for now.
    Thanks for any help!

    Hi, I'm not familiar with heap dumps, so I apologize if this isn't helpful...
    http://blogs.sun.com/roller/page/alanb?entry=heap_dumps_are_back_with
    If you don't want big dump files in the application working directory then the HeapDumpPath option can be used to specify an alternative location - for example -XX:HeapDumpPath=/disk2/dumps will cause the heap dump to be generated in the /disk2/dumps directory.
    https://hat.dev.java.net/doc/README.html
    For example, to run the program Main and generate a binary heap profile in the file dump.hprof, use the following command:
    java -Xrunhprof:file=dump.hprof,format=b Main
    also:
    http://www.hp.com/products1/unix/java/infolibrary/prog_guide/hotspot.html
    -XX:+HeapDumpOnOutOfMemory
    The HeapDumpOnOutOfMemory command line option causes the JVM to dump a snapshot of the Java heap when an Out Of Memory error condition has been reached. The heap dump is in hprof binary format and is written to the file java_pid<pid>.hprof in the current working directory. The option -XX:HeapDumpPath=<file> can be used to specify the dump filename or the directory where the dump file is created. Running an application with -XX:+HeapDumpOnOutOfMemoryError does not impact performance. Please note the following known issue: the HeapDumpOnOutOfMemory option does not work with the low-pause collector (-XX:+UseConcMarkSweepGC). This option is available starting with the 1.4.2.11 and 1.5.0.04 releases.
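    Putting the two flags together for the original question (a sketch; "Main" stands in for your main class, and per the quote above this needs a 1.4.2.11+ JVM):
    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/scratch Main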

  • Compressed dump file while export on Windows!

    Hi All,
    Could someone suggest how to take a compressed dump file while exporting in a Windows environment with Oracle 9.2.0.6? Please specify the exact syntax for how to proceed.
    Thanks
    Bala.

    I don't think the exp tool can compress the export file it is creating (the COMPRESS parameter has nothing to do with the export file, but with the way the database objects are going to be created in the target database).
    If you run export under Unix, there is a possibility to use Unix pipes to compress the export file during the export using Unix commands (compress or gzip, for example). I don't know how to do something similar under Windows, and I have some doubts about this possibility.

  • Error while importing a dump file in Oracle 10g R1

    Hi all,
    While trying to import a schema using Data Pump, I am facing the following issue:
    UDI-00018 - Import utility version can not be more recent than the Data Pump server.
    Following is the version information of the source and target DBs and the utilities:
    Source DB server : 10.1.0.2.0
    Export utility : 10.1.0.2.0
    Import utility : 10.1.0.2.0
    Target DB server : 10.1.0.2.0
    Export utility : 10.2.0.1.0
    Import utility : 10.2.0.1.0
    I can figure out the cause of the problem, but I don't know how to resolve it.
    Any help will be appreciated.
    Thanks in advance.
    Gitika Khurana

    How did you get the DMP file created, and how are you trying to import it? Could you post the commands you're using, please?
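    If the cause is what the versions above suggest (a 10.2 impdp client run against a 10.1 server), a hedged fix is simply to run the impdp binary that matches the target server (the ORACLE_HOME path is a placeholder):
    export ORACLE_HOME=/u01/app/oracle/product/10.1.0
    export PATH=$ORACLE_HOME/bin:$PATH
    impdp system/*** DIRECTORY=dp_dir DUMPFILE=schema.dmp LOGFILE=imp.log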

  • How to import an external table which exists in an export dump file

    My export dump file has one external table. When I started importing it into my development instance, I got the error "ORA-00911: invalid character".
    The original definition of the external table is as given below:
    CREATE TABLE EXT_TABLE_EV02_PRICEMARTDATA (
    EGORDERNUMBER VARCHAR2(255 BYTE),
    EGINVOICENUMBER VARCHAR2(255 BYTE),
    EGLINEITEMNUMBER VARCHAR2(255 BYTE),
    EGUID VARCHAR2(255 BYTE),
    EGBRAND VARCHAR2(255 BYTE),
    EGPRODUCTLINE VARCHAR2(255 BYTE),
    EGPRODUCTGROUP VARCHAR2(255 BYTE),
    EGPRODUCTSUBGROUP VARCHAR2(255 BYTE),
    EGMARKETCLASS VARCHAR2(255 BYTE),
    EGSKU VARCHAR2(255 BYTE),
    EGDISCOUNTGROUP VARCHAR2(255 BYTE),
    EGREGION VARCHAR2(255 BYTE),
    EGAREA VARCHAR2(255 BYTE),
    EGSALESREP VARCHAR2(255 BYTE),
    EGDISTRIBUTORCODE VARCHAR2(255 BYTE),
    EGDISTRIBUTOR VARCHAR2(255 BYTE),
    EGECMTIER VARCHAR2(255 BYTE),
    EGECM VARCHAR2(255 BYTE),
    EGSOLATIER VARCHAR2(255 BYTE),
    EGSOLA VARCHAR2(255 BYTE),
    EGTRANSACTIONTYPE VARCHAR2(255 BYTE),
    EGQUOTENUMBER VARCHAR2(255 BYTE),
    EGACCOUNTTYPE VARCHAR2(255 BYTE),
    EGFINANCIALENTITY VARCHAR2(255 BYTE),
    C25 VARCHAR2(255 BYTE),
    EGFINANCIALENTITYCODE VARCHAR2(255 BYTE),
    C27 VARCHAR2(255 BYTE),
    EGBUYINGGROUP VARCHAR2(255 BYTE),
    QTY NUMBER,
    EGTRXDATE DATE,
    EGLISTPRICE NUMBER,
    EGUOM NUMBER,
    EGUNITLISTPRICE NUMBER,
    EGMULTIPLIER NUMBER,
    EGUNITDISCOUNT NUMBER,
    EGCUSTOMERNETPRICE NUMBER,
    EGFREIGHTOUTBOUNDCHARGES NUMBER,
    EGMINIMUMORDERCHARGES NUMBER,
    EGRESTOCKINGCHARGES NUMBER,
    EGINVOICEPRICE NUMBER,
    EGCOMMISSIONS NUMBER,
    EGCASHDISCOUNTS NUMBER,
    EGBUYINGGROUPREBATES NUMBER,
    EGINCENTIVEREBATES NUMBER,
    EGRETURNS NUMBER,
    EGOTHERCREDITS NUMBER,
    EGCOOP NUMBER,
    EGPOCKETPRICE NUMBER,
    EGFREIGHTCOSTS NUMBER,
    EGJOURNALBILLINGCOSTS NUMBER,
    EGMINIMUMORDERCOSTS NUMBER,
    EGORDERENTRYCOSTS NUMBER,
    EGRESTOCKINGCOSTSWAREHOUSE NUMBER,
    EGRETURNSCOSTADMIN NUMBER,
    EGMATERIALCOSTS NUMBER,
    EGLABORCOSTS NUMBER,
    EGOVERHEADCOSTS NUMBER,
    EGPRICEADMINISTRATIONCOSTS NUMBER,
    EGSHORTPAYMENTCOSTS NUMBER,
    EGTERMCOSTS NUMBER,
    EGPOCKETMARGIN NUMBER,
    EGPOCKETMARGINGP NUMBER,
    EGWEIGHTEDAVEMULTIPLIER NUMBER
    )
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
    DEFAULT DIRECTORY EV02_PRICEMARTDATA_CSV_CON
    ACCESS PARAMETERS
    LOCATION (EV02_PRICEMARTDATA_CSV_CON:'VPA.csv')
    )
    REJECT LIMIT UNLIMITED
    NOPARALLEL
    NOMONITORING;
    While importing, when I looked at the log file, it is failing to create the external table, with the error "ORA-00911: invalid character".
    Can someone suggest how to import external tables?
    Addressing this issue will be highly appreciated.
    Naveen

    Hi Srinath,
    When I looked at the create table syntax of the external table in the import dump log file, it showed a few lines as below. I could not understand these special characters, and the create table definition is failing with a special character, viz. ORA-00911: invalid character:
    ACCESS PARAMETERS
    LOCATION (EV02_PRICEMARTDATA_CSV_CON:'VPA.csv').
    I even checked the create table DDL from TOAD. It is the same as I mentioned earlier.
    Naveen
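    If this is a classic exp dump, a hedged way forward: have imp print the DDL without executing it, then fix the offending characters by hand and run the CREATE TABLE yourself in SQL*Plus (file names are placeholders):
    imp user/password FILE=expdat.dmp SHOW=Y FULL=Y LOG=ddl_review.log
    # edit the CREATE TABLE ... ORGANIZATION EXTERNAL statement from ddl_review.log, then run it manually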
