Data load process - error capturing (best practice to follow)

I'm pulling data from an Oracle DB and loading it into MS SQL Server 2008.
For data type checks during the load, what are my options for ensuring the data being processed won't fail the load? I'd like to validate each row against the target column types first: if the format is valid, load the row into the destination table; otherwise mark it with an error flag and push it into an errors table. All of this at the row level.
One way I can think of is to load into a staging table, then get the source and destination table/column data types, compare them, and proceed.
Is this the right approach? Or should I just try loading the data directly and troubleshoot if it fails (which could be difficult, as I wouldn't know what caused the error)?
Suggestions please.

Thanks, Vikas. But the idea of maintaining the data types and lengths by hand freaks me out from a maintainability point of view.
Can I do this dynamically, i.e. compare the destination table's column data types and lengths against the data I have received? If you could point me to a reference article, it would be a great help!
Cheers!
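
One way to make the check dynamic on the SQL Server side is the staging-table pattern sketched below. This is only an illustration under assumptions: a hypothetical all-VARCHAR staging table stg_Customer (with ErrorFlag/ErrorReason columns), an errors table err_Customer, and a target dbo.Customer. The per-column checks use ISDATE/ISNUMERIC, which are available on SQL Server 2008 (TRY_CONVERT needs 2012+); in a fully dynamic version they would be generated from INFORMATION_SCHEMA.COLUMNS with dynamic SQL.

    -- 1. Destination column metadata (types and lengths), read dynamically
    SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
    FROM   INFORMATION_SCHEMA.COLUMNS
    WHERE  TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Customer';

    -- 2. Flag staged rows that would fail conversion to the target types
    UPDATE stg_Customer
    SET    ErrorFlag = 1, ErrorReason = 'BirthDate is not a valid date'
    WHERE  BirthDate IS NOT NULL AND ISDATE(BirthDate) = 0;

    UPDATE stg_Customer
    SET    ErrorFlag = 1,
           ErrorReason = COALESCE(ErrorReason + '; ', '') + 'CreditLimit is not numeric'
    WHERE  CreditLimit IS NOT NULL AND ISNUMERIC(CreditLimit) = 0;

    -- 3. Route rows: failed rows to the errors table, clean rows to the destination
    INSERT INTO err_Customer (CustomerId, BirthDate, CreditLimit, ErrorReason)
    SELECT CustomerId, BirthDate, CreditLimit, ErrorReason
    FROM   stg_Customer
    WHERE  ErrorFlag = 1;

    INSERT INTO dbo.Customer (CustomerId, BirthDate, CreditLimit)
    SELECT CustomerId, CAST(BirthDate AS DATE), CAST(CreditLimit AS DECIMAL(12,2))
    FROM   stg_Customer
    WHERE  ISNULL(ErrorFlag, 0) = 0;

The same pattern extends to length checks by comparing LEN(column) against CHARACTER_MAXIMUM_LENGTH from step 1, so the validation follows the destination schema instead of being hard-coded.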

Similar Messages

  • Mass processing - Error when processing Java programs / VMC out of memory

    When running a mass update background process that updates the status of a service order in CRM the job fails due to error 'Error when processing Java programs'. I checked the VMC (SM52) and noticed that there is an error about the VMC running out of memory.
    The background program can either be a PPF Action or a Z-ABAP program that performs the update. Both programs are performing a CRM_ORDER_INITIALIZE but it seems that the VMC is not releasing the memory fast enough.
    Is there anyway we can force the VMC to release the memory after processing of each individual order?
    Thanks!

    I had a similar issue and it was resolved by using CRM_ORDER_INITIALIZE. Initialization should happen after every order is processed, and make sure only one header GUID is passed to the FM. It may not be a good option, but also try putting a WAIT of 5 seconds after CRM_ORDER_INITIALIZE.

  • ORA-26744: STREAMS capture process "STRING" does not support "STRING"

    Hi All,
    I have configured Oracle Streams at schema level using the note "How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]".
    All the changes were being reflected perfectly and it was running smoothly, but today I suddenly got the error below and the capture aborted:
    ORA-26744: STREAMS capture process "STREAM_CAPTURE" does not support "AMSATMS_PAWS"."B_SEARCH_PREFERENCE" because of the following reason:
    ORA-26783: Column data type not supported
    A couple of suggestions on the forum are to add a negative rule set. Please suggest how I add a negative rule set, and if the table is added to a negative rule set, how will changes to it be reflected in the target database?
    Please help me...
    Thanks

    I have no idea why it treats your XMLType stored as CLOB like binary XMLType. From the doc, we read:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/ap_restrictions.htm#BABGIFEA
    Unsupported Data Types for Capture Processes
    A capture process does not capture the results of DML changes to columns of the following data types:
        * SecureFile CLOB, NCLOB, and BLOB
        * BFILE
        * ROWID
        * User-defined types (including object types, REFs, varrays, and nested tables)
        * XMLType stored object relationally or as binary XML                   <----
        * The following Oracle-supplied types: Any types, URI types, spatial types, and media types
    A capture process raises an error if it tries to create a row LCR for a DML change to a column of an unsupported data type. When a capture process raises an error, it writes the LCR that caused the error into its trace file, raises an ORA-26744 error, and becomes disabled.
    For your case, see NOTE 556742.1 - Extended Datatype Support (EDS) for Streams, and to exclude the table: NOTE 239623.1 - How To Exclude A Table From Schema Capture And Replication When Using Schema Level Streams Replication.
    It sounds like a specific patch is needed. You did not state which version of Oracle you are running.
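    For the negative rule set part, a minimal PL/SQL sketch could look like the following; the table and capture names come from your post, but the queue name is an assumption you would replace with your own. With inclusion_rule => FALSE the rules go into the capture process's negative rule set, so DML/DDL on that table is no longer captured; to keep that table in sync on the target you would then need something like the Extended Datatype Support approach from NOTE 556742.1.
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name     => 'AMSATMS_PAWS.B_SEARCH_PREFERENCE',
        streams_type   => 'capture',
        streams_name   => 'STREAM_CAPTURE',
        queue_name     => 'STRMADMIN.STREAMS_QUEUE',  -- assumption: your capture queue
        include_dml    => TRUE,
        include_ddl    => TRUE,
        inclusion_rule => FALSE);  -- FALSE = add the rules to the NEGATIVE rule set
    END;
    /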

  • Request for howto - error processing best practise

    Hi JDev Team. Something I would like to see in a future HOWTO is error handling in a BC4J/JSP application. What is best practice? How do we make sure that when a database error occurs we can trap it and provide a friendly error message, or failing that, at least ensure the standard error is usable by a maintenance programmer? For example, the following error occurs if a referential constraint restricts the delete:
    javax.servlet.jsp.JspException: JBO-26041: Failed to post data to database during "Delete": SQL Statement " DELETE FROM TECHTRANSFER.TTSITES Sites WHERE SITEID=:1".
    In fact, the same error message is displayed for almost any database error - the programmer can't fix the problem when he has no idea what it is! (The same applies to update and insert.)
    I wasn't going to request this until I had read all of the help available on error processing, but the way this project is going I won't get the time. If you think it is adequately covered in the help, then fine, just let me know where.
    Thanks,
    Simon

    You can enclose your BC4J/JSP code in a try/catch block. That way, if a failure occurs, you can trap it, display a friendly error, and do whatever you want with the exception.
    What I have been doing for development purposes is to email myself a modified errorpage.jsp. Here is what gets emailed to me (*'s mask potentially sensitive data) and displayed to the screen (I'm eventually going to replace all the displayed garbage with something friendly):
    An error occured in application PDC User Administration
    User Session Properties:
    Sesion ID: *********
    App ID: *********
    User Name: *********
    User ID: *********
    Priv Role: *********
    Password: *********
    Org No: *********
    First Name: skunitzer
    Last Name: ANALYST
    App Title : PDC User Administration
    Current Url: insertNewUser.jsp
    Specific error is javax.servlet.jsp.JspException: JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
    Parameters:
    LastName
    Kunitzer
    EmailAddress
    [email protected]
    FirstName
    SteveLiveTest
    OrgNo
    PhoneWorkNo
    I have no phone #
    ExpireDate
    2001-04-26
    ExpireDateString
    jRQiIsFGANIbrGlihGTl[epofZmSNgEkGqbHN@iErHNPRi
    UserID
    UserPrivs
    Exception:
    javax.servlet.jsp.JspException: JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
    Message:
    JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
    Localized Message:
    JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
    Stack Trace:
    javax.servlet.jsp.JspException: JBO-25013: Too many objects match the primary key oracle.jbo.Key[1423 ].
    at java.lang.Throwable.fillInStackTrace(Native Method)
    at java.lang.Throwable.fillInStackTrace(Compiled Code)
    at java.lang.Throwable.<init>(Compiled Code)
    at java.lang.Exception.<init>(Compiled Code)
    ...Stack Trace goes on but I won't bother with it anymore...
    While not always as specific as I would like, I have not had too much trouble hunting down the errors.

  • Internal Error when creating Capture Process

    Hi,
    I get the following when trying to create my capture process:
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'capture_queue_table',
        queue_name  => 'capture_queue');
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'apply_queue_table',
        queue_name  => 'apply_queue');
    END;
    /
    BEGIN
    ERROR at line 1:
    ORA-00600: internal error code, arguments: [kcbgtcr_4], [32492], [0], [1], [],
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 408
    ORA-06512: at line 2
    Any ideas?
    Cheers,
    Warren

    Make sure that you have upgraded to the 9.2.0.2 patchset and, as part of the migration to 9202, that you have run the catpatch.sql script.
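    For reference, the dictionary upgrade after applying the 9.2.0.2 patchset binaries is normally run roughly as below; this is only a rough sketch, so follow the patchset README for the exact steps on your platform, and finish with utlrp.sql to recompile invalid objects.
    -- As SYSDBA, after installing the 9.2.0.2 patchset software
    SHUTDOWN IMMEDIATE
    STARTUP MIGRATE
    SPOOL catpatch.log
    @?/rdbms/admin/catpatch.sql
    SPOOL OFF
    SHUTDOWN IMMEDIATE
    STARTUP
    @?/rdbms/admin/utlrp.sql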

  • Capture Process Error

    Hi,
    We are working on Oracle 9i bi-directional Streams replication. After setup and a sufficient amount of testing on our side, we are facing a fatal error in the capture process in one of the databases. Both database servers have similar setup parameters and similar hardware, and almost everything is the same, but we are facing this error in only one of them.
    The error is :
    Dump file e:\oracle\admin\repf\udump\repf_cp01_1620.trc
    Thu Apr 03 15:42:53 2003
    ORACLE V9.2.0.2.1 - Production vsnsta=0
    vsnsql=12 vsnxtr=3
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Oracle9i Enterprise Edition Release 9.2.0.2.1 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.2.0 - Production
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Instance name: repf
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Windows thread id: 1620, image: ORACLE.EXE (CP01)
    *** 2003-04-03 15:42:53.000
    *** SESSION ID:(21.548) 2003-04-03 15:42:53.000
    TLCR process death detected. Shutting down TLCR
    error 1280 in STREAMS process
    ORA-01280: Fatal LogMiner Error.
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01280: Fatal LogMiner Error.
    Dump file e:\oracle\admin\repf\udump\repf_cp01_1904.trc
    Tue Apr 01 18:44:27 2003
    ORACLE V9.2.0.2.1 - Production vsnsta=0
    vsnsql=12 vsnxtr=3
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Oracle9i Enterprise Edition Release 9.2.0.2.1 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.2.0 - Production
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Instance name: repf
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Windows thread id: 1904, image: ORACLE.EXE (CP01)
    *** 2003-04-01 18:44:27.000
    *** SESSION ID:(18.7) 2003-04-01 18:44:27.000
    error 604 in STREAMS process
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01423: error encountered while checking for extra rows in exact fetch
    ORA-01089: immediate shutdown in progress - no operations are permitted
    ORA-06512: at "SYS.LOGMNR_DICT_CACHE", line 1600
    ORA-06512: at "SYS.LOGMNR_GTLO3", line 33
    ORA-06512: at line 1
    OPIRIP: Uncaught error 1089. Error stack:
    ORA-01089: immediate shutdown in progress - no operations are permitted
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01423: error encountered while checking for extra rows in exact fetch
    ORA-01089: immediate shutdown in progress - no operations are permitted
    ORA-06512: at "SYS.LOGMNR_DICT_CACHE", line 1600
    ORA-06512: at "SYS.LOGMNR_GTLO3", line 33
    ORA-06512: at line 1
    Thanx,
    Kamlesh Chaudhary

    When configuring a Streams environment you don't have to specify the LogMiner tablespace, so I did not specify it manually when I was setting up my capture process, and I did not change it later.
    Prior to the ORA-01280 fatal LogMiner error I get the following errors:
    ORA-00353: log corruption near block string change string time string
    ORA-00354: corrupt redo log block header
    I've checked the hard drive and it is fine.
    Any suggestions?

  • CAPTURE process error - missing Archive log

    Hi -
    I am getting a "cannot open archived log 'xxxx.arc'" message when I try to start a newly created capture process. The archive files have been moved by the DBAs.
    Is there a way to set the capture process to start from a new archived log?
    I tried:
    exec DBMS_CAPTURE_ADM.ALTER_CAPTURE(capture_name => 'STRMADMIN_SCH_CAPTURE', start_scn => 9668840362577);
    I got the new SCN from DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER.
    But I still get the same error.
    Any ideas ?
    Thanks,
    Sadeepa

    If you are on 9i, I know that trying to reset the scn that way won't work. You have to drop and recreate the capture process. You can leave all the rules and rulesets in place, but I think you have to prepare all of the tables again.
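    A hedged sketch of that drop-and-recreate sequence is below; the capture name comes from the post, while the queue name, rule set name, and schema are placeholders you would substitute from your own configuration.
    BEGIN
      DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'STRMADMIN_SCH_CAPTURE');
      DBMS_CAPTURE_ADM.DROP_CAPTURE(capture_name => 'STRMADMIN_SCH_CAPTURE');

      -- Recreate the capture; reusing the existing rule set keeps the rules in place.
      -- A fresh LogMiner dictionary build is performed at creation time, so the
      -- missing archived logs are no longer needed.
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name    => 'STRMADMIN.STREAMS_QUEUE',   -- placeholder: your capture queue
        capture_name  => 'STRMADMIN_SCH_CAPTURE',
        rule_set_name => 'STRMADMIN.RULESET$_NN');    -- placeholder: the existing rule set

      -- Re-prepare the replicated objects for instantiation, e.g. at schema level
      DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'SCOTT');  -- placeholder schema
    END;
    /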

  • Capture process aborts with logminer error

    Hi,
    We have set up streams on Oracle10g instance on a machine with 4 CPUs. The capture and propagate processes have been configured on this instance ( this instance has the initialization parameter "parallel_max_servers" set to 5)
    The capture process frequently aborted with the following error in the alert log.
    Thu Nov 10 13:23:07 2005
    Errors in file /oracle/apps/oracle/product/101/admin/seqprod/bdump/seqprod_p001_24657.trc:
    ORA-01341: LogMiner out-of-memory
    Thu Nov 10 13:23:08 2005
    TLCR process death detected. Shutting down TLCR
    Thu Nov 10 13:23:10 2005
    Streams CAPTURE C001 with pid=36, OS id=24213 stopped
    We executed the query: select * from V$RESOURCE_LIMIT where resource_name in ('processes', 'sessions', 'enqueue_resources', 'parallel_max_servers');
    The max_utilization and initial_allocation columns for the resource "parallel_max_servers" both show the value 5.
    So we increased the parameter from 5 to 8 and restarted the instance (some of the initialization parameters are:)
    processes = 1500
    sga_max_size = 7650410496
    __shared_pool_size = 855638016
    __large_pool_size = 16777216
    __java_pool_size = 16777216
    streams_pool_size = 218103808
    job_queue_processes = 12
    parallel_max_servers = 8
    pga_aggregate_target = 1695547392
    The capture process aborted again within 1.5 hours with the same error.
    We executed the same query against V$RESOURCE_LIMIT again; the max_utilization and initial_allocation columns for "parallel_max_servers" now both show the value 8.
    My questions are:
    1. Could parallel_max_servers be the cause of the capture abort?
    2. Can the parameter "parallel_max_servers" be increased further without affecting the CPU load?
    3. Could there be other parameters affecting the capture process?
    Thanks.

    Did you try configuring the streams_pool_size parameter of your database? I've set this value to 200 MB, the shared pool size is 1 GB, and the problem has not been encountered since. If streams_pool_size is not set, the capture process uses 10 percent of your shared pool for the buffered queues, eventually leaving too little memory for the LogMiner session. If streams_pool_size is set, capture takes that memory from the SGA directly without affecting the shared pool.
    ~Raj
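    For reference, a minimal sketch of what Raj describes (the 256M figure is just an example, and SCOPE = BOTH assumes the instance uses an SPFILE):
    -- Check how much memory the Streams pool is currently using
    SELECT pool, name, bytes
    FROM   v$sgastat
    WHERE  pool = 'streams pool';

    -- Give the buffered queues their own pool instead of taking 10% of the shared pool
    ALTER SYSTEM SET streams_pool_size = 256M SCOPE = BOTH;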

  • Error running Archived-Log Downstream Capture Process

    I have created an archived-log downstream capture process with reference to the following link:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_ccap.htm#i1011654
    After starting the capture process I get the following error in the trace:
    ============================================================================
    Trace file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_13572.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /home/oracle/app/oracle/product/11.2.0/dbhome_1
    System name: Linux
    Node name: localhost.localdomain
    Release: 2.6.18-194.el5
    Version: #1 SMP Fri Apr 2 14:58:14 EDT 2010
    Machine: x86_64
    Instance name: orcl
    Redo thread mounted by this instance: 1
    Oracle process number: 37
    Unix process pid: 13572, image: [email protected] (CP01)
    *** 2011-08-20 14:21:38.899
    *** SESSION ID:(146.2274) 2011-08-20 14:21:38.899
    *** CLIENT ID:() 2011-08-20 14:21:38.899
    *** SERVICE NAME:(SYS$USERS) 2011-08-20 14:21:38.899
    *** MODULE NAME:(STREAMS) 2011-08-20 14:21:38.899
    *** ACTION NAME:(STREAMS Capture) 2011-08-20 14:21:38.899
    knlcCopyPartialCapCtx(), setting default poll freq to 0
    knlcUpdateMetaData(), before copy IgnoreUnsuperrTable:
    source:
    Ignore Unsupported Error Table: 0 entries
    target:
    Ignore Unsupported Error Table: 0 entries
    knlcUpdateMetaData(), after copy IgnoreUnsuperrTable:
    source:
    Ignore Unsupported Error Table: 0 entries
    target:
    Ignore Unsupported Error Table: 0 entries
    knlcfrectx_Init: rs=STRMADMIN.RULESET$_66, nrs=., cuid=0, cuid_prv=0, flags=0x0
    knlcObtainRuleSetNullLock: rule set name "STRMADMIN"."RULESET$_66"
    knlcObtainRuleSetNullLock: rule set name
    knlcmaInitCapPrc+
    knlcmaGetSubsInfo+
    knlqgetsubinfo
    subscriber name EMP_DEQ
    subscriber dblinke name
    subscriber name APPLY_EMP
    subscriber dblinke name
    knlcmaTerm+
    knlcmaTermSrvs+
    knlcmaTermSrvs-
    knlcmaTerm-
    knlcCCAInit()+, err = 26802
    knlcnShouldAbort: examining error stack
    ORA-26802: Queue "STRMADMIN"."STREAMS_QUEUE" has messages.
    knlcnShouldAbort: examing error 26802
    knlcnShouldAbort: returning FALSE
    knlcCCAInit: no combined capture and apply optimization err = 26802
    knlzglr_GetLogonRoles: usr = 91,
    knlqqicbk - AQ access privilege checks:
    userid=91, username=STRMADMIN
    agent=STRM05_CAPTURE
    knlqeqi()
    knlcRecInit:
    Combined Capture and Apply Optimization is OFF
    Apply-state checkpoint mode is OFF
    last_enqueued, last_acked
    0x0000.00000000 [0] 0x0000.00000000 [0]
    captured_scn, applied_scn, logminer_start, enqueue_filter
    0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908]
    flags=0
    Starting persistent Logminer Session : 13
    krvxats retval : 0
    CKPT_FREE event=FALSE CCA=FALSE Checkptfreq=1000 AV/CDC flags=0
    krvxssp retval : 0
    krvxsda retval : 0
    krvxcfi retval : 0
    #1: krvxcfi retval : 0
    #2: krvxcfi retval : 0
    About to call krvxpsr : startscn: 0x0000.0004688c
    state before krvxpsr: 0
    dbms_logrep_util.get_checkpoint_scns(): logminer sid = 13 applied_scn = 288908
    dbms_logrep_util.get_checkpoint_scns(): prev_ckpt_scn = 0 curr_ckpt_scn = 0
    *** 2011-08-20 14:21:41.810
    Begin knlcDumpCapCtx:*******************************************
    Error 1304 : ORA-01304: subordinate process error. Check alert and trace logs
    Capture Name: STRM05_CAPTURE : Instantiation#: 65
    *** 2011-08-20 14:21:41.810
    ++++ Begin KNST dump for Sid: 146 Serial#: 2274
    Init Time: 08/20/2011 14:21:38
    ++++Begin KNSTCAP dump for : STRM05_CAPTURE
    Capture#: 1 Logminer_Id: 13 State: DICTIONARY INITIALIZATION [ 08/20/2011 14:21:38]
    Capture_Message_Number: 0x0000.00000000 [0]
    Capture_Message_Create_Time: 01/01/1988 00:00:00
    Enqueue_Message_Number: 0x0000.00000000 [0]
    Enqueue_Message_Create_Time: 01/01/1988 00:00:00
    Total_Messages_Captured: 0
    Total_Messages_Created: 0 [ 01/01/1988 00:00:00]
    Total_Messages_Enqueued: 0 [ 01/01/1988 00:00:00]
    Total_Full_Evaluations: 0
    Elapsed_Capture_Time: 0 Elapsed_Rule_Time: 0
    Elapsed_Enqueue_Time: 0 Elapsed_Lcr_Time: 0
    Elapsed_Redo_Wait_Time: 0 Elapsed_Pause_Time: 0
    Apply_Name :
    Apply_DBLink :
    Apply_Messages_Sent: 0
    ++++End KNSTCAP dump
    ++++ End KNST DUMP
    +++ Begin DBA_CAPTURE dump for: STRM05_CAPTURE
    Capture_Type: DOWNSTREAM
    Version:
    Source_Database: ORCL2.LOCALDOMAIN
    Use_Database_Link: NO
    Logminer_Id: 13 Logfile_Assignment: EXPLICIT
    Status: ENABLED
    First_Scn: 0x0000.0004688c [288908]
    Start_Scn: 0x0000.0004688c [288908]
    Captured_Scn: 0x0000.0004688c [288908]
    Applied_Scn: 0x0000.0004688c [288908]
    Last_Enqueued_Scn: 0x0000.00000000 [0]
    Capture_User: STRMADMIN
    Queue: STRMADMIN.STREAMS_QUEUE
    Rule_Set_Name[+]: "STRMADMIN"."RULESET$_66"
    Checkpoint_Retention_Time: 60
    +++ End DBA_CAPTURE dump
    +++ Begin DBA_CAPTURE_PARAMETERS dump for: STRM05_CAPTURE
    PARALLELISM = 1 Set_by_User: NO
    STARTUP_SECONDS = 0 Set_by_User: NO
    TRACE_LEVEL = 7 Set_by_User: YES
    TIME_LIMIT = -1 Set_by_User: NO
    MESSAGE_LIMIT = -1 Set_by_User: NO
    MAXIMUM_SCN = 0xffff.ffffffff [281474976710655] Set_by_User: NO
    WRITE_ALERT_LOG = TRUE Set_by_User: NO
    DISABLE_ON_LIMIT = FALSE Set_by_User: NO
    DOWNSTREAM_REAL_TIME_MINE = FALSE Set_by_User: NO
    MESSAGE_TRACKING_FREQUENCY = 2000000 Set_by_User: NO
    SKIP_AUTOFILTERED_TABLE_DDL = TRUE Set_by_User: NO
    SPLIT_THRESHOLD = 1800 Set_by_User: NO
    MERGE_THRESHOLD = 60 Set_by_User: NO
    +++ End DBA_CAPTURE_PARAMETERS dump
    +++ Begin DBA_CAPTURE_EXTRA_ATTRIBUTES dump for: STRM05_CAPTURE
    USERNAME Include:YES Row_Attribute: YES DDL_Attribute: YES
    +++ End DBA_CAPTURE_EXTRA_ATTRIBUTES dump
    ++ LogMiner Session Dump Begin::
    SessionId: 13 SessionName: STRM05_CAPTURE
    Start SCN: 0x0000.00000000 [0]
    End SCN: 0x0000.00046c2d [289837]
    Processed SCN: 0x0000.0004689e [288926]
    Prepared SCN: 0x0000.000468d4 [288980]
    Read SCN: 0x0000.000468e2 [288994]
    Spill SCN: 0x0000.00000000 [0]
    Resume SCN: 0x0000.00000000 [0]
    Branch SCN: 0x0000.00000000 [0]
    Branch Time: 01/01/1988 00:00:00
    ResetLog SCN: 0x0000.00000001 [1]
    ResetLog Time: 08/18/2011 16:46:59
    DB ID: 740348291 Global DB Name: ORCL2.LOCALDOMAIN
    krvxvtm: Enabled threads: 1
    Current Thread Id: 1, Thread State 0x01
    Current Log Seqn: 107, Current Thrd Scn: 0x0000.000468e2 [288994]
    Current Session State: 0x20005, Current LM Compat: 0xb200000
    Flags: 0x3f2802d8, Real Time Apply is Off
    +++ Additional Capture Information:
    Capture Flags: 4425
    Logminer Start SCN: 0x0000.0004688c [288908]
    Enqueue Filter SCN: 0x0000.0004688c [288908]
    Low SCN: 0x0000.00000000 [0]
    Capture From Date: 01/01/1988 00:00:00
    Capture To Date: 01/01/1988 00:00:00
    Restart Capture Flag: NO
    Ping Pending: NO
    Buffered Txn Count: 0
    -- Xid Hash entry --
    -- LOB Hash entry --
    -- No TRIM LCR --
    Unsupported Reason: Unknown
    --- LCR Dump not possible ---
    End knlcDumpCapCtx:*********************************************
    *** 2011-08-20 14:21:41.810
    knluSetStatus()+{
    *** 2011-08-20 14:21:44.917
    knlcapUpdate()+{
    Updated streams$_capture_process
    finished knlcapUpdate()+ }
    finished knluSetStatus()+ }
    knluGetObjNum()+
    knlsmRaiseAlert: keltpost retval is 0
    kadso = 0 0
    KSV 1304 error in slave process
    *** 2011-08-20 14:21:44.923
    ORA-01304: subordinate process error. Check alert and trace logs
    knlz_UsrrolDes()
    knstdso: state object 0xb644b568, action 2
    knstdso: releasing so 0xb644b568 for session 146, type 0
    knldso: state object 0xa6d0dea0, action 2 memory 0x0
    kadso = 0 0
    knldso: releasing so 0xa6d0dea0
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01304: subordinate process error. Check alert and trace logs
    Any suggestions???

    Output of above query
    ==============================
    CAPTURE_NAME STATUS ERROR_MESSAGE
    STRM05_CAPTURE ABORTED ORA-01304: subordinate process error. Check alert and trace logs
    Alert log.xml
    =======================
    <msg time='2011-08-25T16:58:01.865+05:30' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='localhost.localdomain' host_addr='127.0.0.1' module='STREAMS'
    pid='30921'>
    <txt>Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_30921.trc:
    ORA-01304: subordinate process error. Check alert and trace logs
    </txt>
    </msg>
    The orcl_cp01_30921.trc has the same thing posted in the first message.
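    The query whose output is shown above is not included in the thread; presumably it is a status check against DBA_CAPTURE along these lines:
    SELECT capture_name, status, error_number, error_message
    FROM   dba_capture
    WHERE  capture_name = 'STRM05_CAPTURE';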

  • Error in Process - "Please specify a valid account assignment" error in Backend

    Hi Gurus,
    I am an ABAPer and I am new to SRM technical work.
    One PO already exists in SRM 4.0 and in ERP, with the cost center and profit center replicated as-is. But when my user changes the price of the PO in SRM, it shows 'Error in Process'.
    I checked the backend error in the portal and in RZ20; the error coming up is 'Please specify a valid account assignment'.
    Is this a technical or a functional issue?
    I checked BBP_PD_PO_TRANSFER_EXEC and BBP_PO_INBOUND and found the error message SE E181. How can I rectify this error?
    When users change the price of other POs it replicates to the backend, so why does this PO give an error? I checked the cost center and profit center and they are all fine.
    Please do the needful.
    Thanks,
    Kiran

    Hi,
    You said the change of price in other POs is working fine and only one PO goes into error.
    Are the cost center and profit center the same as the ones in the error PO?
    If possible, can you also try this:
    - Create a PO directly in ERP using the same cost center and profit center as account assignment data.
    - Check the other account assignment data that gets defaulted (e.g. business area etc.) and verify whether this data is the same in the error PO.
    - Could you create the PO successfully?
    Best regards,
    Ramki

  • InfoPath form, rich text fields, "There was a form postback error" InvalidOperationException, There has been an error while processing the form

    Using InfoPath 2013 browser enabled form.
    I am getting the above error on ALL Infopath Designed Rich Text fields, where the "Cannot be blank" attribute is set.
    To reproduce it, I create a custom list and custom list form with InfoPath 2013. I add 2 Rich Text fields and enable "cannot be blank". To raise the error, I put some data in the RT field, skip to another field (so focus is changed and a postback occurs), then go back to the original field and delete the contents (to trigger the validation).
    I originally thought it was associated with the HTMLCHKR.DLL not being registered (and I have re-registered it just in case), but the exception I get from the ULS logs reads (it is from a list AFTER I have re-registered the DLL):
    There was a form postback error. (User: 0#.w|myDomain\jc, Form Name: Template, IP: , Request: h t t p ://MyWebApp/MySite/Lists/rtAfterHtmlCHkrReg/Item/newifs.aspx?List=2212ff41-77b4-445b-931b-d7e538c9da91&Source=h t t p://MyWebApp/MySite/Lists/rtAfterHtmlCHkrReg/AllItems.aspx&RootFolder=&Web=3db49106-bdca-47bb-b4cd-a549d2d86aa7,
    Form ID: urn:schemas-microsoft-com:office:infopath:list:-AutoGen-2015-01-16T21:51:48:853Z, Type: InvalidOperationException, Exception Message: No content generated as the result of the operation.) 8cc5e09c-3665-903b-575a-faaac506c40a
    I noticed that errors associated with the HTMLCHKR.DLL not being registered would have some sort of COM exception (example: TYPE_E_LIBNOTREGISTERED or REGDB_E_CLASSNOTREG)
    I also should mention that this problem started happening about 3 weeks ago. We have extended the web application to handle HTTPS on the intranet zone (we had a reverse proxy project that did not eventuate) - would that cause something? How can I do further
    checking?

    Hi,
    I have done a test in my SharePoint and encountered the same issue as you.
    I created a custom list and custom list form with InfoPath 2013. I added 2 Rich Text fields and enabled "cannot be blank". I put some content in the RT field, then deleted the contents, and got the error message: "there has been an error while processing the form."
    Here is a similar post that says executing the command regsvr32 "C:\Program Files\Common Files\Microsoft Shared\OFFICE14\htmlchkr.dll" will solve the issue:
    https://social.msdn.microsoft.com/Forums/en-US/eb2e0f6e-c8e4-4e92-ac5e-a09d72759eda/rich-text-field-error-in-webform?forum=sharepointcustomizationprevious
    But I just disabled "cannot be blank", and it solved the issue.
    Best Regards,
    Lisa Chen
    TechNet Community Support

  • Error while processing sales order as factory calendar missing.

    "factory calendar missing or error in factory calendar".iam getting this error while processing sales order even though i have assigned it to shipping point and plant and sales org.

    Hi Malik, how are you? I'm sorry to trouble you.
    Let me introduce myself: I'm Daniel. I saw your fantastic answer in the topic "How to update Factory Calendar in SRM", and I might have the same problem. It's with the factory calendar.
    In transaction VA02, when I choose a delivery date and send the order for processing, the result on the shipping tab is one date calculated for the first item and a different date for the rest. I don't know why this happens.
    I debugged the program and saw that the date for the first item is calculated with the function 'APO_SCHEDULING', while the remaining items use 'SD_SCHEDULING', and that is as far as I got.
    How can I make the calculated dates the same, or what could be the problem? I need the calculated dates to be equal.
    I also noticed it is using two factory calendars, "J4" and "C2". Is this correct?
    Thanks in advance,
    My best regards,

  • Error while processing message payload Element 'CategoryCode'

    Hi Experts,
    I am trying to integrate SAP ECC with TM. I have transferred the sales order from ECC to TM, but it is not triggering an order-based transportation requirement (OTR); instead it gives the error "Error while processing message payload Element 'CategoryCode'", and all the XML messages are stuck in the inbound queue. A screenshot of the issue is appended herewith. Please advise.
    Thanks & Regards,
    Aunkur De

    Hi Aunkur,
    The issue generally comes up when a mandatory field in the XML is either not being sent or has been missed.
    Please check that all mandatory fields are mapped properly.
    Best regards,
    Rohit

  • Error while processing a Dimension

    Hi Experts,
    While processing a dimension I am getting an error, but in BW it is showing 2 dimensions.
    How do I check which is the latest/right one?
    Kindly Help
    Best Regards,
    Bhupendra Arya

    Hi,
    You can find the details of the dimensions and their InfoObject technical names in the table UJA_DIMENSION.
    The field TECH_NAME holds the InfoObject technical name in BW for a particular dimension in BPC.
    Hope this helps.
    Regards,
    Balraj.

  • Error while processing BDOC

    Hi All,
    I am facing an error while processing a BDoc in transaction SMW01. The error messages are:
    1) User M02M104# doesn't exist in the system, and
    2) Validation error occurred: module CRM_BUPA_MAIN_VAL, BDoc type BUPA_MAIN.
    Everything is maintained properly (i.e. account owner ID, clerk ID). What should I do to process this BDoc?
    Regards,
    Amrit

    Hi,
    How did you resolve the error?
    Thanks,
    Best Regards,
    Madhu
