R4 EA: Error loading a table with an XMLTYPE field

When I try to view the data in a table with an XMLTYPE field, I get the following error in R4 EA Version 4.0.0.12, Build MAIN-12-27. This works in 2.2 and 3.0.
oracle.sqldeveloper.migration.application
     Error: Resource not found: ${SCRATCH_COMMAND_ICON}.
Double-clicking on the error opens the EXTENSION.XML file and shows these lines:
<trigger-hooks xmlns="http://xmlns.oracle.com/ide/extension">
  <!-- Add registry here if required -->
  <triggers xmlns:c="http://xmlns.oracle.com/ide/customization">
    <actions xmlns="http://xmlns.oracle.com/jdeveloper/1013/extension">
      <action id="MigrationProject.ApplicationScan">
        <properties>
          <property name="Name">${APPSCAN_TITLE}</property>
          <property name="MnemonicKey">${APPSCAN_TITLE2}</property>
          <property name="SmallIcon">res:${SCRATCH_COMMAND_ICON}</property>
        </properties>
      </action>
    </actions>
The table has the following definition (column, data type, nullable):
ID                NUMBER(38,0)    No
WS_DATA           XMLTYPE         Yes
WS_SNAPSHOT_ID    NUMBER(38,0)    No
Any help would be greatly appreciated. We have to upgrade to 4.0 because our security team will no longer allow Java 6 on any server or workstation.
Thanks,
Steve

Hi Steve
Still no response?
I am having the same issue when querying a table with XMLTYPE.
How is your XMLTYPE stored in the DB, as a CLOB or as a BINARY XML?
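One quick way to check, for reference (a sketch assuming Oracle 11g or later, where the data dictionary exposes the XMLType storage type):

-- Shows BINARY, CLOB or OBJECT-RELATIONAL for each XMLType column
SELECT table_name, column_name, storage_type
FROM   user_xml_tab_cols
WHERE  column_name = 'WS_DATA';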
Regards,
Shaun

Similar Messages

  • Load fact table with null dimension keys

    Dear All,
    We have OWB 10g R2 and a ROLAP star schema. In our source system some rows don't have all attributes populated with values (null values), and these empty attributes are dimension (business) keys in the star schema. Is it possible to load the fact table with such rows (some dimension keys are null) in the OWB mappings? We use the cube operator in the mappings.
    Thanks And Regards
    Miran

    The dimension should have a row indicating UNKNOWN; this will have a business key outside of the normal range, e.g. -999999.
    In the mapping the missing business keys can then be NVL'd to -999999, as in the sketch below.
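    For illustration only (the staging table and column names here are made up), the expression is roughly:
    -- Map missing business keys to the UNKNOWN member before the dimension lookup
    SELECT NVL(s.customer_bk, -999999) AS customer_bk,
           NVL(s.product_bk,  -999999) AS product_bk,
           s.sales_amount
    FROM   sales_stg s;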
    Cheers
    Si

  • MaxDB: Table with many LONG fields does not allow an INSERT: ...?

    Hi,
    I have a table with many LONG fields (28). So far, everything works fine.
    However, if I add another LONG field (29 LONG fields), I cannot insert a dataset anymore.
    Does there exist a MaxDB parameter or anything else I can change to make inserts possible again?
    Thanks in advance
    Michael
    appendix:
    - Create and Insert command and error message
    - MaxDB version and its parameters
    Create and Insert command and error message
    CREATE TABLE "DBA"."AZ_Z_TEST02" (
         "ZTB_ID"               Integer    NOT NULL,
         "ZTB_NAMEOFREPORT"           Char (400) ASCII DEFAULT '',
         "ZTB_LONG_COMMENT"                LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_00"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_01"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_02"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_03"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_04"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_05"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_06"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_07"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_08"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_09"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_10"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_11"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_12"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_13"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_14"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_15"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_16"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_17"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_18"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_19"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_20"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_21"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_22"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_23"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_24"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_25"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_26"         LONG ASCII DEFAULT '',
         PRIMARY KEY ("ZTB_ID"))
    The insert command
    INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
    works fine. If I add the LONG field
    "ZTB_LONG_TEXTBLOCK_27"         LONG ASCII DEFAULT '',
    the following error occurs:
        Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
        General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
        INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
    MaxDB version and its parameters
    All db params given by
    dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
    are
    KERNELVERSION                         KERNEL    7.5.0    BUILD 026-123-094-430
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    RESTART_SHUTDOWN                      MANUAL
    SERVERDBFOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      INTERNAL
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         10
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   LOG_001
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   64000
    DATA_VOLUME_NAME_0001                 DAT_0001
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0001                 64000
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    BACKUP_BLOCK_CNT                      8
    LOG_MIRRORED                          NO
    MAXVOLUMES                            22
    MULTIO_BLOCK_CNT                    4
    DELAYLOGWRITER                      0
    LOG_IO_QUEUE                          50
    RESTARTTIME                         600
    MAXCPU                                1
    MAXUSERTASKS                          50
    TRANSRGNS                           8
    TABRGNS                             8
    OMSREGIONS                          0
    OMSRGNS                             25
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        1
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    ROWRGNS                             8
    MINSERVER_DESC                      16
    MAXSERVERTASKS                        20
    _MAXTRANS                             288
    MAXLOCKS                              2880
    LOCKSUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       900
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    USEASYNC_IO                         YES
    IOPROCSPER_DEV                      1
    IOPROCSFOR_PRIO                     1
    USEIOPROCS_ONLY                     NO
    IOPROCSSWITCH                       2
    LRU_FOR_SCAN                          NO
    PAGESIZE                            8192
    PACKETSIZE                          36864
    MINREPLYSIZE                        4096
    MBLOCKDATA_SIZE                     32768
    MBLOCKQUAL_SIZE                     16384
    MBLOCKSTACK_SIZE                    16384
    MBLOCKSTRAT_SIZE                    8192
    WORKSTACKSIZE                       16384
    WORKDATASIZE                        8192
    CATCACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      1632
    INIT_ALLOCATORSIZE                    229376
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    TASKCLUSTER01                       tw;al;ut;2000sv,100bup;10ev,10gc;
    TASKCLUSTER02                       ti,100dw;30000us;
    TASKCLUSTER03                       compress
    MPRGN_QUEUE                         YES
    MPRGN_DIRTY_READ                    NO
    MPRGN_BUSY_WAIT                     NO
    MPDISP_LOOPS                        1
    MPDISP_PRIO                         NO
    XP_MP_RGN_LOOP                        0
    MP_RGN_LOOP                           0
    MPRGN_PRIO                          NO
    MAXRGN_REQUEST                        300
    PRIOBASE_U2U                        100
    PRIOBASE_IOC                        80
    PRIOBASE_RAV                        80
    PRIOBASE_REX                        40
    PRIOBASE_COM                        10
    PRIOFACTOR                          80
    DELAYCOMMIT                         NO
    SVP1_CONV_FLUSH                     NO
    MAXGARBAGECOLL                      0
    MAXTASKSTACK                        1024
    MAX_SERVERTASK_STACK                  100
    MAX_SPECIALTASK_STACK                 100
    DWIO_AREA_SIZE                      50
    DWIO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    FBMLOW_IO_RATE                      10
    CACHE_SIZE                            10000
    DWLRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    DATACACHE_RGNS                      8
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              11
    SEQUENCE_CACHE                        1
    IDXFILELIST_SIZE                    2048
    SERVERDESC_CACHE                    73
    SERVERCMD_CACHE                     21
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    READAHEADBLOBS                      25
    RUNDIRECTORY                          E:\_mp\u_v_dbs\EVERW_C5
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        1
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        0
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       648
    EXTERNAL_DUMP_REQUEST                 NO
    AKDUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    UTILITYPROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    BACKUPHISTFILE                      dbm.knl
    BACKUPMED_DEF                       dbm.mdf
    MAXMESSAGE_FILES                    0
    EVENTALIVE_CYCLE                    0
    _SHAREDDYNDATA                        10280
    _SHAREDDYNPOOL                        3607
    USE_MEM_ENHANCE                       NO
    MEM_ENHANCE_LIMIT                     0
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-13 13:47:17
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
    DIAGSEM                             1
    SHOW_MAX_STACK_USE                    NO
    LOG_SEGMENT_SIZE                      21333
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    0
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_JOIN_OUTER                   YES
    JOIN_OPERATOR_IMPLEMENTATION          IMPROVED
    JOIN_TABLEBUFFER                      128
    OPTIMIZE_FETCH_REVERSE                YES
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             NO
    SHAREDSQL_EXPECTEDSTATEMENTCOUNT      1500
    SHAREDSQL_COMMANDCACHESIZE            32768
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    HASHED_RESULTSET                      NO
    HASHED_RESULTSET_CACHESIZE            262144
    AUTO_RECREATE_BAD_INDEXES             NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FORBID_LOAD_BALANCING                 NO

    Lars Breddemann wrote:
    > Hi Michael,
    >
    > this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
    > Really.
    >
    > Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
    > Anyhow, when I use
    >
    > insert into "AZ_Z_TEST02"  values (87,'','','','','','','','','','','','','','','',''
    >                                           ,'','','','','','','','','','','','','','','','')
    >
    > it works fine.
    It solves my problem. Thanks a lot. -- I can hardly believe that this is all that is needed to work around the bug. That may be the reason why I had not given it a try.
    >
    > Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what values the new tuple will have), you may want to change your code to this.
    >
    > Now to the other errors:
    > - 28 Long values per row?
    > What the heck is wrong with the data design here?
    > Honestly, you can save data up to 2 GB in a BLOB/CLOB.
    > Currently, your data design allows 56 GB per row.
    > Moreover, 26 of those columns seem to belong together originally - why do you split them up at all?
    >
    > - The "ZTB_NAMEOFREPORT" looks like something the users see -
    > still there is no unique constraint preventing you from getting 10000 reports with the same name...
    You are right, this table looks a bit strange. The story behind it is: each Crystal Report in the application has a few text blocks which are the same for all of the persons the letter (for example) is created for. In principle, the text blocks could be added directly to the Crystal Report. However, as is often the case, these text blocks may change once in a while. Thus, I put the texts of the text blocks into this "strange" DB table (one row for each report, one field for each text block; the name of the report is given by "ztb_nameofreport"), and the application offers a menu through which these text blocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
    (These texts would blow up the SQL select command of the Crystal Report considerably if they were integrated into that select command. So it is handled another way: the texts are read before the Crystal Report is loaded, then the texts are passed to the Crystal Report via its parameters, and finally the report is loaded.)
    >
    > - MaxDB 7.5 Build 26?? Where have you been the last few years?
    > Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
    > With 7.6. I was not able to reproduce your issue at all.
    The customer still has Win98 clients. The MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
    All Win98 clients may be replaced by WinXP clients in the near future. Then an upgrade may be reasonable.
    >
    > - Are you really putting your data into the DBA schema? Don't do that, ever.
    > DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
    > Create a user/schema for your application data and put your tables into that.
    >
    > KR Lars
    In the first MaxDB version I used, schemas were not available, and I haven't changed it since. Is there an easy way to "move an existing table into a new schema"?
    Michael

  • How to fill internal table with selection screen field.

    Hi all,
    I am new to SAP. Please tell me how to fill an internal table with a selection screen field.

    Hi,
    Please see the example below:-
    I have used both select-options and parameter on the selection-screen.
    Understand the same.
    * type declaration
    TYPES: BEGIN OF t_matnr,
            matnr TYPE matnr,
           END OF t_matnr,
           BEGIN OF t_vbeln,
             vbeln TYPE vbeln,
           END OF t_vbeln.
    * internal table declaration
    DATA : it_mara  TYPE STANDARD TABLE OF t_matnr,
           it_vbeln TYPE STANDARD TABLE OF t_vbeln.
    * workarea declaration
    DATA : wa_mara  TYPE t_matnr,
           wa_vbeln TYPE t_vbeln.
    * selection-screen field
    SELECTION-SCREEN: BEGIN OF BLOCK b1.
    PARAMETERS : p_matnr TYPE matnr.
    SELECT-OPTIONS : s_vbeln FOR wa_vbeln-vbeln.
    SELECTION-SCREEN: END OF BLOCK b1.
    START-OF-SELECTION.
    * I am adding parameter value to my internal table
      wa_mara-matnr = p_matnr.
      APPEND wa_mara TO it_mara.
    * I am adding select-options value to an internal table
      LOOP AT s_vbeln.
        wa_vbeln-vbeln =  s_vbeln-low.
        APPEND  wa_vbeln TO  it_vbeln.
      ENDLOOP.
    Regards,
    Ankur Parab

  • ORA-00904 error while export table with CLOB

    All,
    I'm trying to export a specific Oracle 9i R2 schema using the Oracle 8.0.4 client, but this error appears. The error is related to tables that have CLOB columns, because schemas whose tables have no CLOB columns can be exported with no error. I've already run the catexp.sql script, but it hasn't solved this problem.
    Can anyone help me?
    Thanks,
    Davi

    You can try performing the import of the dump to see whether it works with the 8i client or the 8.0.4 client.
    If not, you may not be able to use this method to move data into an 8.0.4 database, which is no longer supported by current tools.
    You may then want to try other techniques, like dumping the tables into flat files and then using SQL*Loader to load them into 8.0.4.
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:88212348059#14506966201668

  • Error While mapping table with 100 Columns

    Hello
    Actually, I have a requirement in which I have to map data into a target table from more than 50 source tables with complex join conditions.
    So I created 5 separate maps to load the data, and now I am afraid that when I deploy the first map, the columns it does not map
    will be filled with nulls; some of those columns have a unique constraint on them, so it is giving me an error.
    Please help me out; I need to submit my assignment by Monday.
    Thanks
    Sriks

    Bharadwaj Hari wrote:
    Hi,
    I agree with you... I am not sure of the environment the user has, so I put forth all 3 options that crossed my mind at the time... that's why I said he has to choose what best suits him/her.
    Also, if the database is huge and we create physical temp tables (option 2 and your idea), it's like having redundant data in the database, which is also a problem... So it's up to the user to actually evaluate the situation and come up with what best suits him/her.
    Regards
    Bharath

    Hi,
    I understand your opinion. But I am not sure that the user has enough experience to choose the best option on his own. And about the redundant data: because of this I wrote that he should truncate the tables after the last mapping, which loads all data into the real target table.
    Regards,
    Detlef

  • Best Practice loading Dimension Table with Surrogate Keys for Levels

    Hi Experts,
    How would you load an Oracle dimension table with a hierarchy of at least 5 levels, with surrogate keys in each level and a unique dimension key for the dimension table?
    With OWB, using surrogate keys in every level of a hierarchy is an integrated feature. You don't have to care about
    the parent-child relation; the load process of the mapping generates the right keys and takes care of the relation between parent and child inside the dimension key.
    I tried to use one interface per level and created a surrogate key with a native Oracle sequence.
    After that I put all the interfaces into one big interface with a union data set per level and added lookups for the right parent-child relation.
    I think building the interface like that is a bit too complicated.
    I would be more than happy for any suggestions. Thank you in advance!
    negib
    Edited by: nmarhoul on Jun 14, 2012 2:26 AM

    Hi,
    I do like the level keys feature of OWB; it makes aggregate tables very easy to implement if you're sticking with a star schema.
    Sadly there is nothing off the shelf in the built-in knowledge modules with ODI. It doesn't support creating dimension objects in the database by default, but there is nothing stopping you from coding up your own knowledge module (perhaps use flex fields on the datastore to tag column attributes as needed).
    Your approach is what I would have done; possibly use a view (if you don't mind having it external to ODI) to make the interface simpler, as sketched below.
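    A rough sketch of that view idea; every level table and column name below is invented, so adapt it to your own model:
    -- The view pre-resolves the parent surrogate keys so the ODI interface
    -- only has to read a single source
    CREATE OR REPLACE VIEW dim_geo_src_v AS
    SELECT cty.city_sk,
           cty.city_code,
           cty.city_name,
           reg.region_sk,
           cry.country_sk
    FROM   stg_city    cty
    JOIN   stg_region  reg ON reg.region_code  = cty.region_code
    JOIN   stg_country cry ON cry.country_code = reg.country_code;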

  • Error loading class problem with applet (Newbie)

    Hi,
    I am new to Java applets. I am trying to display a simple sample. I put this into the html file: <APPLET codebase="classes" code="NewApplet.class" width=350 height=200></APPLET>
    And one problem always appears:
    Error loading class: NewApplet
    java.lang.NoClassDefFoundError
    java.lang.ClassNotFoundException: NewApplet
         at com/ms/vm/loader/URLClassLoader.loadClass
         at com/ms/vm/loader/URLClassLoader.loadClass
         at com/ms/applet/AppletPanel.securedClassLoad
         at com/ms/applet/AppletPanel.processSentEvent
         at com/ms/applet/AppletPanel.processSentEvent
         at com/ms/applet/AppletPanel.run
         at java/lang/Thread.run
    Can anybody help please? Thanks in advance.

    Now I am able to load the applet on Mac OS X 10.4.11; that was due to a network issue.
    Now my problem is with the applet functionality. My applet displays a map image and contains navigation arrow keys at its top left. These keys are used to navigate through the map.
    The problem is that while the user is using the navigation buttons, the image gets squashed and stretched.
    This happens only on Mac OS X 10.4.11; it works fine on PC and Mac OS X 10.5.5.
    Can anyone please help with this? Thanks in advance.

  • BPEL Compilation Error: Load of wsdl "with Message part element undefined..

    Hi Friends,
    I am getting following error while compiling my BPEL process:
    Error: Load of wsdl "FTPWrite.wsdl with Message part element undefined in wsdl [file:/D:/MyData/_MyProjects/052_Amazon_MetadataInterface/001_SVN/002_Intl/trunc/MetadataInterfaceIntl_2013Apr15_WorkingCode/MetadataInterface_Intl/MetadataInterface_Intl.wsdl] part name = reply     type = {http://com.fox.metadata/MetadataInterfaceIntl/MetadataInterface_Intl/types}processResponse" failed
    However, the reply message is already defined in MetadataInterface_Intl.wsdl, as shown below:
    Code for MetadataInterface_Intl.wsdl::::
    "<?xml version= '1.0' encoding= 'UTF-8' ?>
    <wsdl:definitions
    name="MetadataInterface_Intl"
    targetNamespace="http://xmlns.oracle.com/MetadataInterfaceIntl/MetadataInterface_Intl/MetadataInterface_Intl"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:inp1="http://com.fox.metadata/MetadataInterfaceIntl/MetadataInterface_Intl/types"
    xmlns:tns="http://xmlns.oracle.com/MetadataInterfaceIntl/MetadataInterface_Intl/MetadataInterface_Intl"
    >
    <wsdl:types>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:import namespace="http://com.fox.metadata/MetadataInterfaceIntl/MetadataInterface_Intl/types" schemaLocation="xsd/Metadata_Interface.xsd"/>
    </xsd:schema>
    </wsdl:types>
    <wsdl:message name="requestMessage">
    <wsdl:part name="request" element="inp1:process"/>
    </wsdl:message>
    <wsdl:message name="replyMessage">
    <wsdl:part name="reply" element="inp1:processResponse"/>
    </wsdl:message>
    <wsdl:portType name="execute_ptt">
    <wsdl:operation name="execute">
    <wsdl:input message="tns:requestMessage"/>
    <wsdl:output message="tns:replyMessage"/>
    </wsdl:operation>
    </wsdl:portType>
    </wsdl:definitions>
    Surprisingly, this same code was compiling fine last week, and now I have no clue why I am getting this error. Can someone please shed some light on this issue?
    Thanks,
    Sachin.

    Hello
    I have had the same problem in Oracle BPM and solved it using the following steps:
    1- In your application navigator window, expand the project that contains the business rule.
    2- In the SOA Content, double click on your wsdl file.
    3- When the file opens, select the schema view from the bottom of the page.
    4- In the schema view, expand all the schema nodes and check if you see any values in red. If you see one, that value has probably caused the error and you should correct it using the property inspector window.
    In my case, the schema location value was set to a wrong path, so I changed it and the error was resolved.
    Also, some errors that appear as warnings in the rule editor will show up as compile errors later, such as input types not being used, so those must be resolved before compiling.
    Hope that was helpful
    good luck

  • Error in creating table with sequence

    Good day,
    I tried creating a table in Object Browser > Create Table, selecting "Populated from a new sequence" when creating my primary key. The following error was encountered after I clicked the "Create" button at the end.
    Creating table "temp2" failed.
    Failed Creating Table ORA-24344: success with compilation error ORA-00942: table or view does not exist ORA-06510: PL/SQL: unhandled user-defined exception
    Thank you for any assistance.

    Hi,
    "Action: Check each of the following:
    the spelling of the table or view name.
    that a view is not specified where a table is required.
    that an existing table or view name exists.
    Contact the database administrator if the table needs to be created or if user or application privileges are required to access the table.
    Also, if attempting to access a table or view in another schema, make certain the correct schema is referenced and that access to the object is granted. "
    There are reserved names. Change the names of the table and sequence.
    Konstantin
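    If the wizard keeps failing, one workaround is to create the pieces by hand. A minimal sketch, with example names only (table, sequence and trigger):
    CREATE TABLE temp2 (
      id    NUMBER NOT NULL PRIMARY KEY,
      descr VARCHAR2(100)
    );
    CREATE SEQUENCE temp2_seq;
    -- Populate the primary key from the sequence when no value is supplied
    CREATE OR REPLACE TRIGGER temp2_bi
      BEFORE INSERT ON temp2
      FOR EACH ROW
    BEGIN
      IF :NEW.id IS NULL THEN
        SELECT temp2_seq.NEXTVAL INTO :NEW.id FROM dual;
      END IF;
    END;
    /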

  • G555 (0873) error loading operating system with fresh install of XP

    Hello,
    I am in need of assistance. I had XP Pro installed and functioning on my Lenovo G555 0873, but due to a bad virus I received by email I needed to reload XP because it corrupted everything. I just use it for some games, so I wasn't worried about data. But now I get an "error loading operating system" message after a fresh install of XP Pro. The computer formats the drive, copies the files, and restarts like it is supposed to, and then the operating system is lost. Can you help me?
    Xtiansdomain

    Hey Xtiansdomain,
    I do hope that you created recovery media via One Key Recovery. If you did, use that to recover your system.
    However, if no recovery media was made, get in contact with the support crew via http://bit.ly/LNVsuppNum
    WW Social Media

  • ORA-01461 Error when mapping table with multiple varchar2(4000) fields

    (Note: I think this was an earlier problem, supposedly fixed in 11.0, but we are experiencing it in 11.7.)
    If I map an Oracle 9i table with multiple VARCHAR2(4000) columns, targeting another Oracle 9i database, I get the ORA-01461 error (can bind a LONG value only for insert into a LONG column).
    I have tried changing the target columns to varchar2(1000), as suggested as a workaround in earlier versions, all to no avail.
    I can have just one varchar2(4000) map correctly and execute flawlessly - the problem occurs when I add a second one.
    I have tried making the target column a LONG, but that does not solve the problem.
    Then, I made the target database SQL Server, and it had no problem at all, so the issue seems to be Oracle-related.

    Hi Jon,
    Thanks for the feedback. I'm unable to reproduce the problem you describe at the moment - if I try to migrate a TEXT(5), OMWB creates a VARCHAR(5) and the data migrates correctly!! However, I note from your description that even though the problematic source column datatype is TEXT(5), you mention that there are actually 20 lines of text in this field (and not 5 variable-length characters as the definition might suggest).
    Having read through some of the MySQL reference guide, I note that, in certain circumstances, MySQL actually changes the column datatype specified either at table creation time or when interfacing with other databases (ref 14.2.5.1 Silent Column Specification Changes and 12.7 Using Column Types from Other Database Engines in the MySQL reference guide). Since your TEXT(5) actually contains 20 lines of text, MySQL (database or JDBC driver... or both) may be trying to automatically map the specified datatype of the column to a datatype more appropriate to storing 20 lines of text... that is, to a LONG value in this case. Then, when Oracle is presented with this LONG value to store in a VARCHAR(5) field, it throws the ORA-01461 error. I need to investigate this further, but this may be the case - it's the first time I've seen this problem encountered.
    To work around this, you could change the datatype of the column to a LONG from within the Oracle Model before migrating. Any application code that accesses this column and expects a TEXT(5) value may need to be adjusted to cope with a LONG value. Is this a viable workaround for you?
    I will investigate further and notify you of any details I uncover. We will need to track this issue for possible inclusion in future development plans.
    I hope this helps,
    Regards,
    Tom.

  • Loading multiple tables with SQL Loader

    Hi,
    I want to load multiple tables from a single data file using SQL Loader.
    Here's the basic idea of what I want. Let's say I have two tables, table T1
    and table T2:
    SQL> desc T1;
    COL1 VARCHAR2(20)
    COL2 VARCHAR2(20)
    SQL> desc T2;
    COL1 VARCHAR2(20)
    COL2 VARCHAR2(20)
    COL3 VARCHAR2(20)
    My data file, test.dat, looks like this:
    AAA|KBA
    BBR|BBCC|CCC
    NNN|BBBN|NNA
    I want to load the first record into T1, and the second and third records into T2. How do I set up my control file to do that?
    Thanks!

    Tough Job
    LOAD DATA
    truncate
    INTO table t1
    when col3 = 'dummy'
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS
    (col1,col2,col3 filler char nullif col3='dummy')
    INTO table t2
    when col3 != 'dummy'
    FIELDS TERMINATED BY '|'
    (col1,col2,col3 nullif col3='dummy')
    This will load table t2 but not t1.
    The t1 FILLER col3 is not accepting NULLIF; it is difficult to compare columns containing null using the WHEN condition. If I find something, I will let you know.
    Can you separate the records into 2 files? A UNIX command could separate the 2-column and 3-column record types for you, and then you could run 2 control files against them.
    Thanks,
    http://www.askyogesh.com
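    Another option, if the database server can read the file directly, is an external table plus a conditional multitable insert. This is only a sketch with assumed names: a directory object DATA_DIR pointing at the folder holding test.dat must already exist.
    -- External table over the pipe-delimited file; short records get NULL in col3
    CREATE TABLE test_ext (
      col1 VARCHAR2(20),
      col2 VARCHAR2(20),
      col3 VARCHAR2(20)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('test.dat')
    );
    -- Route two-field records to T1 and three-field records to T2
    INSERT ALL
      WHEN col3 IS NULL     THEN INTO t1 (col1, col2)       VALUES (col1, col2)
      WHEN col3 IS NOT NULL THEN INTO t2 (col1, col2, col3) VALUES (col1, col2, col3)
    SELECT col1, col2, col3 FROM test_ext;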

  • Multiple entries in a Z table with same key fields

    Hi
    I have a Z table with 3 key fields defined earlier. It consists of around 1 lakh (100,000) records. Later on, two of the non-key fields were made key fields.
    This table is populated with records at the time of saving a Z transaction.
    But sometimes the system updates the same records, sometimes twice, sometimes three times, etc. I found that all fields (both key fields and non-key fields) of the records are the same. That is, records are being updated in the database table n number of times, maybe because of some faulty logic in the program.
    If I try to enter the same record using SM30, it shows me an error message stating that the record already exists.
    What can be the reason?

    Hi,
    It seems there is some kind of data inconsistency. Try to get all the records and then delete the duplicate entries through a program. Once you are done, from then on there won't be any duplicate entries. Also, before updating the table, use filters to avoid the duplication.
    Regards,
    Nagaraj
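    As a first step, a plain SQL duplicate check can confirm what is actually stored. The table and key column names below are placeholders for the real Z table and its key fields:
    -- Any row returned means true duplicates exist at database level
    SELECT key1, key2, key3, COUNT(*) AS cnt
    FROM   ztable
    GROUP  BY key1, key2, key3
    HAVING COUNT(*) > 1;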

  • Table with a dropdown field with values.....

    Friends, I opened this new thread with a better explanation of what I'm looking for. Sorry for this.
    I have a table:
                        col1            col2               col3
    row1              r1c1           r1c2               r1c3
    row2              r2c1           r2c2               r2c3
    row3              r3c1           r3c2               r3c3.
    I need col2 to be a dropdown. In WDDOINIT, default data is passed to this table. Now when the WDA loads, I want col2 to be a dropdown, but with only one value, whatever I have in the backend. Initially I want to disable col2; the user should not change it.
    But when I hit the ADDNEWROW button, it should create row4, with some values in the dropdown (I am writing a query to fetch the col2 values). The user should be able to select any value here.
    The values r1c2, r2c2 and r3c2 should not change on ADDNEWROW; they should remain the same.
    I hope this time I have explained it correctly, and I'm expecting your replies, friends.
    Kindly respond back to me. Thanks to all in advance,
    Niraja

    Hi Niraja,
    Please refer to the code below for filling a dropdown in a table. This is hard-coded, i.e. I am filling the values directly without using a database table. What you have to do is create an internal table with two columns, one key and one value, select the data from the database into the internal table, and then just replace the hard-coded values with the value and key columns of the internal table.
    Please note that SAP doesn't recommend using SELECT queries in WD ABAP; use an FM instead.
    DATA LR_NODE_INFO TYPE REF TO if_wd_context_node_info.
    data ls_value type wdy_key_value.
    data lt_value_set type wdy_key_value_table.
    ls_value-key = 'a'.
    ls_value-value = 'APPLE'.
    append ls_value to lt_value_set.
    ls_value-key = 'b'.
    ls_value-value = 'BANANA'.
    append ls_value to lt_value_set.
    ls_value-key = 'c'.
    ls_value-value = 'GRAPES'.
    append ls_value to lt_value_set.
    ls_value-key = 'd'.
    ls_value-value = 'MANGO'.
    append ls_value to lt_value_set.
    lr_node_info = wd_context->get_node_info( ).
    lr_node_info = lr_node_info->get_child_node( 'CN_UITABLE' ).
    lr_node_info->set_attribute_value_set( name = 'CA_COLUMN2' value_set = lt_value_set ).
    As suggested by Stefan, you can make the field read-only by using the enabled property.
    Regards, Pranav
    Edited by: Pranav Nagpal on Nov 21, 2008 5:55 AM
