ORA-00603 after a commit through a dblink

Dear Sirs:
I am working with a 9.2.0.4.0 Oracle database, with a dblink (named SIO_TEST) to another 8.1.7.0.0 Oracle database. I want to insert a record into a table of the 8.1.7 DB.
Using SQL*Plus, connected to the 9.2 DB, I can issue a SQL command like:
INSERT INTO PP02_PREST_PS@SIO_TEST
(CP02_PROGRESSIVO,CP02_C_REPARTO,
CP02_PROGR_CERTIF,
CP02_C_TIPO_PRESTA,
CP02_ESITO,
C_USR,
DT_AGG,
CP02_C_SALA_PS,
CP02_C_REPARTO_PRESTA)
VALUES (SEQ_PROGR_PREST_PS.NEXTVAL@SIO_TEST,
'PSGEN',
19990057207,
'P130',
NULL, -- ESITO
'CARISSIM', -- max 8 characters
TO_CHAR(SYSDATE, 'YYYY-MM-DD-HH.MI.SS'),
'PSE',
'PSGEN');
and then COMMIT, and I have no problem at all.
I wrote a simple PL/SQL procedure in the 9.2 DB with the same statement inside. It works fine: I can commit both inside the procedure and immediately after its execution, with no problem at all.
I tried to execute the same SQL using a trigger, in two different fashions:
a) putting the statement in a VARCHAR2 string and then issuing an EXECUTE IMMEDIATE command
b) executing a direct INSERT INTO with all the needed parameters.
The trigger works fine, but the transaction does not commit. The first COMMIT issued after the trigger has run terminates the session abruptly with an ORA-00603 error.
The data are actually committed in the target table, and the new record is generated in the 8.1.7 schema.
I have trace files available, but they don't seem to be of any help.
Any suggestion?
Thank You very much

Dear Sirs:
I'm still eager to know what caused the problem described in this topic, but I would like to let You know the solution I used. Apparently, the problem is about committing: since I couldn't commit from within the trigger, any commit issued after the trigger had run terminated the session with a fatal error.
Fortunately, I can live with being unable to roll back the insert performed by the trigger, so I simply added PRAGMA AUTONOMOUS_TRANSACTION to the trigger declaration, so that I could commit, and added a COMMIT statement within the trigger body.
The ORA-00603 error disappeared.
Thank You for Your attention - and please let me know what may have caused the ORA-00603 problem
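For reference, a minimal sketch of the workaround described above. The remote table, sequence and dblink are the ones from the first post; the local table LOCAL_DOC, its column DOC_ID and the trigger name are hypothetical stand-ins for whatever the real trigger fires on.
CREATE OR REPLACE TRIGGER trg_local_doc_air
  AFTER INSERT ON local_doc
  FOR EACH ROW
DECLARE
  -- Run the remote insert in its own transaction so it can be committed
  -- inside the trigger. Note the trade-off mentioned above: the remote row
  -- stays committed even if the local transaction later rolls back.
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO pp02_prest_ps@sio_test
    (cp02_progressivo, cp02_c_reparto, cp02_progr_certif,
     cp02_c_tipo_presta, cp02_esito, c_usr, dt_agg,
     cp02_c_sala_ps, cp02_c_reparto_presta)
  VALUES
    (seq_progr_prest_ps.nextval@sio_test, 'PSGEN', :new.doc_id,
     'P130', NULL, 'CARISSIM',
     TO_CHAR(SYSDATE, 'YYYY-MM-DD-HH.MI.SS'),
     'PSE', 'PSGEN');
  COMMIT;  -- allowed here only because of the autonomous transaction
END;
/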

Similar Messages

  • How to execute Update Module FM after final commit work for a T-code.

    Hello Folks,
    I have a somewhat complex issue with my current object.
    We have modified MM41/MM42 transactions and added a subscreen to fulfill the requirement.
    We have designed the subscreen and embedded the same to MM41/MM42 through SPRO configuration.
    Now, for the update business logic, I am trying to execute a function module (update module) in the update task, so that it is executed once, after the final COMMIT WORK for MM41/MM42 is done.
    But it's not executing.
    To fulfill my requirement I need to execute the FM only after the final COMMIT WORK for MM41/MM42.
    Please suggest in this regard.
    I can also see a few BAdIs which trigger through MM01/MM02 but not through MM41/MM42:
    BADI_ARTICLE_REF_RT
    BADI_MATERIAL_CHECK
    BADI_MAT_F_SPEC_SEL
    The code I am trying with is given below.
    After the PAI event of the subscreen:
    " 1. PAI module call in the screen flow logic
      MODULE user_command_9001.
    " 2. PAI module: register the save routine to run at COMMIT WORK
      MODULE user_command_9001 INPUT.
        PERFORM sub_save_mara ON COMMIT.
      ENDMODULE.
    " 3. Form routine that calls the update function module
      FORM sub_save_mara.
        CALL FUNCTION 'ZMMUPDATE_MARA_APPEND_STRUCT' "IN UPDATE TASK
          EXPORTING
            materialno = gv_matnr
            appendmara = ty_zzmara.
      ENDFORM.                    "sub_save_mara
    " 4. Update function module
      FUNCTION zmmupdate_mara_append_struct.
    *"  Update Function Module:
    *"  Local Interface:
    *"  IMPORTING
    *"     VALUE(MATERIALNO) TYPE  MATNR
    *"     VALUE(APPENDMARA) TYPE  ZZMARA
        " Data declaration for local use
        DATA: w_mara TYPE mara.
        " Select the latest values for the material
        SELECT SINGLE * FROM mara INTO w_mara WHERE matnr = materialno.
        IF sy-subrc = 0.
          " Move the ZZMARA values to structure MARA
          MOVE-CORRESPONDING appendmara TO w_mara.
          " Update the values in table MARA
          MODIFY mara FROM w_mara.
        ENDIF.
      ENDFUNCTION.
    Kindly suggest. Thanks in advance.
    Regards
    Ansumesh

    Hi,
    The code I gave will work fine provided the final commit actually happens.
    To execute an FM in the update task, a final COMMIT WORK is mandatory; only after that is the update task called.
    In my case the final COMMIT WORK was not happening, because the SAP standard program could not detect whether there was any change in my subscreen, since the standard program and my custom program/subscreens are different.
    As it could not detect the change, the final commit was not happening, and hence neither was the update task.
    To make the change in my subscreen visible to the standard program, I set a flag whenever the subscreen content changes
    and exported it to memory.
    Then I implemented an enhancement spot in the function module MATERIAL_CHANGE_CHECK_RETAIL, where I imported the flag value.
    Based on my custom flag value, I set the standard flag FLG_AENDERUNG_GES, which tells the SAP standard program for MM42 whether any change has happened.
    This solved my purpose.
    Regards
    Ansumesh

  • DB nonresponsive with ORA-12514 after weeks of running fine

    I have seen many prior users complain about getting ORA-12514 after a reboot following installation, and looking at the suggestions, I couldn't find anything that addressed a situation where the whole system works for a long period of time and then stops working.
    I have a working APEX installation that I can access over the web from remote clients.
    I have a working JDBC connection that I can use from several remote clients.
    After several weeks/months of happily running, I can no longer access the APEX website, and I can no longer access the DB through the JDBC connection; I get:
    ORA-12514, TNS:listener does not currently know of service requested in connect
    descriptor
    The Connection descriptor used by the client was:
    //###.###.###.###:1521/XE
    Going to look at Windows services, both OracleServiceXE and OracleXETNSListener are still running. To try and recover, I do the following:
    * Stop the db and start the db. (Which simply stops OracleServiceXE, and then starts both it and the listener)
    Doesn't fix it
    * Thinking that the listener might be the problem, I then stop the db, and also manually stop the listener. I then start both of the services
    Doesn't fix it
    * I reboot the computer
    Fixes it.
    What is different about rebooting the computer that could possibly fix this? Also, someone had suggested in another thread to look at the listener log in db_1/network/admin/listener.log. I don't know what db_1 is, but I have an oraclexe\app\oracle\product\10.2.0\server\NETWORK\ADMIN directory that does not have this log. If anyone has any insight into what this is, or what log I should look at to identify it, I would greatly appreciate it.
    Thanks,
    Dan

    Thanks!
    Ah, the alert log, that is a useful find. For those who don't know, you can find where the alert log is stored by doing the following:
    select value from v$parameter where name = 'background_dump_dest';
    I looked at the alert log, and I think it indicates that the process is running out of memory. Since it can run for a very long time without this, I'm assuming it's some kind of memory leak, especially since I think the less I develop pages with APEX, the longer the system goes between crashes (that is only anecdotal). I looked at the .trc files that it spit out as well, but they seem to contain internal stack traces and dumps. Where do I go from here? Is this something I post to an Oracle bug list? Below is the relevant section of the alert log:
    Tue May 27 18:00:52 2008
    Thread 1 advanced to log sequence 340
    Current log# 1 seq# 340 mem# 0: D:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_39DP20B8_.LOG
    Wed May 28 04:00:10 2008
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=26, OS id=3900
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_01', 'MYUSERNAME', 'KUPC$C_1_20080528040025', 'KUPC$S_1_20080528040025', 0);
    Wed May 28 20:50:58 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_arc0_2060.trc:
    ORA-04030: out of process memory when trying to allocate 82444 bytes (pga heap,control file i/o buffer)
    Wed May 28 20:50:58 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_arc0_2060.trc:
    ORA-04030: out of process memory when trying to allocate 82444 bytes (pga heap,control file i/o buffer)
    Wed May 28 20:50:58 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_arc0_2060.trc:
    ORA-04030: out of process memory when trying to allocate 82444 bytes (pga heap,control file i/o buffer)
    Wed May 28 20:50:58 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_arc0_2060.trc:
    ORA-04030: out of process memory when trying to allocate 82444 bytes (pga heap,control file i/o buffer)
    Wed May 28 20:50:58 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_arc0_2060.trc:
    ORA-04030: out of process memory when trying to allocate 82444 bytes (pga heap,control file i/o buffer)
    Wed May 28 20:50:58 2008
    Master background archival failure: 4030
    Wed May 28 20:51:04 2008
    Process startup failed, error stack:
    Wed May 28 20:51:06 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_pmon_4052.trc:
    ORA-00490: PSP process terminated with error
    Wed May 28 20:51:06 2008
    PMON: terminating instance due to error 490
    Wed May 28 20:51:06 2008
    Error occured while spawning process J000; error = 490
    Wed May 28 20:51:06 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_lgwr_3964.trc:
    ORA-00490: PSP process terminated with error
    Wed May 28 20:51:06 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_dbw0_1748.trc:
    ORA-00490: PSP process terminated with error
    Wed May 28 20:51:06 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_mman_2128.trc:
    ORA-00490: PSP process terminated with error
    Wed May 28 20:51:06 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_q001_2200.trc:
    ORA-00490: PSP process terminated with error
    Wed May 28 20:51:07 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_ckpt_1288.trc:
    ORA-00490: PSP process terminated with error
    Wed May 28 20:51:08 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_q003_3424.trc:
    ORA-00490: PSP process terminated with error
    Wed May 28 20:51:14 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_reco_3524.trc:
    ORA-00490: PSP process terminated with error
    Wed May 28 20:51:14 2008
    Errors in file d:\oraclexe\app\oracle\admin\xe\bdump\xe_smon_1644.trc:
    ORA-00490: PSP process terminated with error
    Wed May 28 20:51:14 2008
    Instance terminated by PMON, pid = 4052
    Dump file d:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
    Thu May 29 10:39:58 2008
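    A hedged follow-up, not from the original replies: given the ORA-04030 errors above, one first check is to see how much memory the XE instance is configured to use, and, for the ORA-12514 symptom itself, to force the instance to re-register its service with the listener instead of rebooting. A minimal SQL sketch using standard dynamic views and commands, run as a DBA user:
    -- How much memory the instance is allowed to use (XE caps these).
    SELECT name, value
      FROM v$parameter
     WHERE name IN ('sga_target', 'pga_aggregate_target');
    -- If the instance is up but the listener answers ORA-12514, ask the
    -- instance to re-register its service with the listener right away.
    ALTER SYSTEM REGISTER;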

  • ORA-00603 by using transactions. Unable to enlist in distributed transaction

    I have a test application built with ODP.NET which does batches of inserts. The program might call a method that inserts 1000 rows ten times. I want all of these to be in one transaction, so that if I need to roll back I can restart the whole procedure. So I started a transaction and enlisted each connection in it.
    This seems to work OK for a while, but maybe after 5-10 calls to my batch-insert method I receive an ORA-00603 exception. In some rare cases I also get "Unable to enlist connection in distributed transaction."
    Can someone help me or shed some light on how to get this to work?
    From alert.log:
    Incident details in: d:\app\exkatr\diag\rdbms\testdb\testdb\incident\incdir_63768\testdb_ora_8848_i63768.trc
    Errors in file d:\app\exkatr\diag\rdbms\testdb\testdb\incident\incdir_63768\testdb_ora_8848_i63768.trc:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-00600: internal error code, arguments: [ktcirs:hds], [0x01AF68078], [0x006F10BF0], [0x021728078], [], [], [], [], [], [], [], []
    ORA-00600: internal error code, arguments: [ktcirs:hds], [0x01AF68078], [0x006F10BF0], [0x021728078], [], [], [], [], [], [], [], []
    ORA-00600: internal error code, arguments: [ktcirs:hds], [0x01AF68078], [0x006F10BF0], [0x021728078], [], [], [], [], [], [], [], []
    I tried running tkprof on the trc file but it didn't do anything. The generated file only looks like this:
    TKPROF: Release 11.1.0.7.0 - Production on On Jul 21 11:40:37 2010
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Trace file: d:\app\exkatr\diag\rdbms\testdb\testdb\incident\incdir_63768\testdb_ora_8848_i63768.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    Trace file: d:\app\exkatr\diag\rdbms\testdb\testdb\incident\incdir_63768\testdb_ora_8848_i63768.trc
    Trace file compatibility: 10.01.00
    Sort options: default
           0  session in tracefile.
           0  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           0  SQL statements in trace file.
           0  unique SQL statements in trace file.
        7741  lines in trace file.
           0  elapsed seconds in trace file.

    Have a look at Bug 8539335 (or 7510712)

  • ORA-02063: preceding line from view from dblink

    Hi,
    I am creating a standard report using a query from a view.
    The view is built on a select query which retrieves data from a view through a dblink. That view exists in another schema.
    Report error:
    ORA-01858: a non-numeric character was found where a numeric was expected
    ORA-02063: preceding line from <<DBLink Name>>
    When I run the query from SQL Workshop it executes perfectly.
    Does anyone have any suggestions?
    Thanks

    Always include the following information when asking a question:
    - Full APEX version
    - Full DB version/edition/host OS
    - Web server architecture (EPG, OHS or APEX listener/host OS)
    - Browser(s) and version(s) used
    - Theme
    - Template(s)
    - Region/item type(s)

  • Ora-24367 when trying to connect by dblink to SqlServer

    Hello,
    I know that this is the OCI forum, but I hope that someone could help me with error
    ORA-24367, whose cause and action are below:
    Cause: This occurs during authentication of a migratable user. the service handle has not been set with non-migratable user handle.
    Action: Service handle must be set with non-migratable user handle when it is used to authenticate a migratable user.
    I would like to connect to Sql Server database from Oracle through shared dblink created with this command :
    CREATE shared public DATABASE LINK apost_shared1
    CONNECT TO cbd_test IDENTIFIED BY cbd_testt
    AUTHENTICATED BY cbd_test IDENTIFIED BY cbd_testt
    USING 'post';
    I use ODBC connectivity to SQL Server combined with Oracle connectivity (tnsnames, listener).
    When I try to select a table in the remote DB I get the ORA-24367 error. Could you explain to me what is happening? I don't know what the cause and action mean. I'm not an OCI programmer. Maybe someone who knows OCI could explain to me how it all works?

    it's difficult to understand your wifi setup
    is it both a shared internet from adhoc connection to a pc
    and a airport express?
    for adhoc connection to a pc to work the pc can't be connected to the wifi router just so you know
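    Returning to the ORA-24367 question above, a hedged diagnostic that is not from the original thread: since the error is about authenticating the migratable user of a shared link, one way to isolate it is to first verify the ODBC/gateway path with a plain, non-shared link using the same credentials and TNS alias from the post (the remote table name below is hypothetical):
    -- Plain (non-shared) link: no migratable-session authentication involved.
    CREATE PUBLIC DATABASE LINK apost_plain
      CONNECT TO cbd_test IDENTIFIED BY cbd_testt
      USING 'post';
    SELECT * FROM some_remote_table@apost_plain;  -- hypothetical table
    If this works, the problem is specific to the SHARED ... AUTHENTICATED BY setup rather than to the SQL Server connectivity itself.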

  • Long time on buffer sort with a insert and select through a dblink

    I am doing a fairly simple "insert into select from" statement through a dblink, but something is going very wrong on the other side of the link. I am getting a huge buffer sort time in the explain plan (line 9) and I'm not sure why. When I try to run sql tuning on it from the other side of the dblink, I get an ora-600 error "ORA-24327: need explicit attach before authenticating a user".
    Here is the original sql:
    INSERT INTO PACE_IR_MOISTURE@PRODDMT00 (SCHEDULE_SEQ, LAB_SAMPLE_ID, HSN, SAMPLE_TYPE, MATRIX, SYSTEM_ID)
    SELECT DISTINCT S.SCHEDULE_SEQ, PI.LAB_SAMPLE_ID, PI.HSN, SAM.SAMPLE_TYPE, SAM.MATRIX, :B1 FROM SCHEDULES S
    JOIN PERMANENT_IDS PI ON PI.HSN = S.SCHEDULE_ID
    JOIN SAMPLES SAM ON PI.HSN = SAM.HSN
    JOIN PROJECT_SAMPLES PS ON PS.HSN = SAM.HSN
    JOIN PROJECTS P ON PS.PROJECT_SEQ = PS.PROJECT_SEQ
    WHERE S.PROC_CODE = 'DRY WEIGHT' AND S.ACTIVE_FLAG = 'C' AND S.COND_CODE = 'CH' AND P.WIP_STATUS IN ('WP','HO')
    AND SAM.WIP_STATUS = 'WP';
    Here is the sql as it appears on proddmt00:
    INSERT INTO "PACE_IR_MOISTURE" ("SCHEDULE_SEQ","LAB_SAMPLE_ID","HSN","SAMPLE_TYPE","MATRIX","SYSTEM_ID")
    SELECT DISTINCT "A6"."SCHEDULE_SEQ","A5"."LAB_SAMPLE_ID","A5"."HSN","A4"."SAMPLE_TYPE","A4"."MATRIX",:B1
    FROM "SCHEDULES"@! "A6","PERMANENT_IDS"@! "A5","SAMPLES"@! "A4","PROJECT_SAMPLES"@! "A3","PROJECTS"@! "A2"
    WHERE "A6"."PROC_CODE"='DRY WEIGHT' AND "A6"."ACTIVE_FLAG"='C' AND "A6"."COND_CODE"='CH' AND ("A2"."WIP_STATUS"='WP' OR "A2"."WIP_STATUS"='HO') AND "A4"."WIP_STATUS"='WP' AND "A3"."PROJECT_SEQ"="A3"."PROJECT_SEQ" AND "A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A4"."HSN" AND "A5"."HSN"="A6"."SCHEDULE_ID";
    Here is the explain plan on proddmt00:
    PLAN_TABLE_OUTPUT
    SQL_ID cvgpfkhdhn835, child number 0
    INSERT INTO "PACE_IR_MOISTURE" ("SCHEDULE_SEQ","LAB_SAMPLE_ID","HSN","SAMPLE_TYPE","MATRIX","SYSTEM_ID")
    SELECT DISTINCT "A6"."SCHEDULE_SEQ","A5"."LAB_SAMPLE_ID","A5"."HSN","A4"."SAMPLE_TYPE","A4"."MATRIX",:B1
    FROM "SCHEDULES"@! "A6","PERMANENT_IDS"@! "A5","SAMPLES"@! "A4","PROJECT_SAMPLES"@! "A3","PROJECTS"@! "A2"
    WHERE "A6"."PROC_CODE"='DRY WEIGHT' AND "A6"."ACTIVE_FLAG"='C' AND "A6"."COND_CODE"='CH' AND
    ("A2"."WIP_STATUS"='WP' OR "A2"."WIP_STATUS"='HO') AND "A4"."WIP_STATUS"='WP' AND
    "A3"."PROJECT_SEQ"="A3"."PROJECT_SEQ" AND "A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A4"."HSN" AND
    "A5"."HSN"="A6"."SCHEDULE_ID"
    Plan hash value: 3310593411
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | INSERT STATEMENT | | | | | 5426M(100)| | | |
    | 1 | HASH UNIQUE | | 1210K| 118M| 262M| 5426M (3)|999:59:59 | | |
    |* 2 | HASH JOIN | | 763G| 54T| 8152K| 4300M (1)|999:59:59 | | |
    | 3 | REMOTE | | 231K| 5429K| | 3389 (2)| 00:00:41 | ! | R->S |
    | 4 | MERGE JOIN CARTESIAN | | 1254G| 61T| | 1361M (74)|999:59:59 | | |
    | 5 | MERGE JOIN CARTESIAN| | 3297K| 128M| | 22869 (5)| 00:04:35 | | |
    | 6 | REMOTE | SCHEDULES | 79 | 3002 | | 75 (0)| 00:00:01 | ! | R->S |
    | 7 | BUFFER SORT | | 41830 | 122K| | 22794 (5)| 00:04:34 | | |
    | 8 | REMOTE | PROJECTS | 41830 | 122K| | 281 (2)| 00:00:04 | ! | R->S |
    | 9 | BUFFER SORT | | 380K| 4828K| | 1361M (74)|999:59:59 | | |
    | 10 | REMOTE | PROJECT_SAMPLES | 380K| 4828K| | 111 (0)| 00:00:02 | ! | R->S |
    Predicate Information (identified by operation id):
    2 - access("A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A6"."SCHEDULE_ID")

    Please use code tags when posting queries and execution plans.
    From the looks of your explain plan, these entries:
    | Id | Operation             | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Inst |IN-OUT|
    |  4 |  MERGE JOIN CARTESIAN |      | 1254G |   61T |       | 1361M (74) |999:59:59 |      |      |
    |  5 |   MERGE JOIN CARTESIAN|      | 3297K |  128M |       | 22869 (5)  | 00:04:35 |      |      |
    are causing extensive CPU processing, probably due to the cartesian joins (which include sorting)... does "61T" mean 61 terabytes? Holy hell.
    From the looks of the explain plan these tables don't look partitioned.... can you confirm?
    Why are you selecting distinct? If this is for ETL or data warehouse related procedure it ain't a good idea to use distinct... well ever... it's horrible for performance.
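    A hedged observation, not made explicitly in the replies above: the join condition JOIN PROJECTS P ON PS.PROJECT_SEQ = PS.PROJECT_SEQ compares a column with itself, so PROJECTS is effectively joined with no predicate at all, which is exactly what produces the MERGE JOIN CARTESIAN steps in the plan. A minimal sketch of the presumably intended statement (only the PROJECTS join clause changes):
    INSERT INTO PACE_IR_MOISTURE@PRODDMT00
      (SCHEDULE_SEQ, LAB_SAMPLE_ID, HSN, SAMPLE_TYPE, MATRIX, SYSTEM_ID)
    SELECT DISTINCT S.SCHEDULE_SEQ, PI.LAB_SAMPLE_ID, PI.HSN, SAM.SAMPLE_TYPE, SAM.MATRIX, :B1
      FROM SCHEDULES S
      JOIN PERMANENT_IDS PI   ON PI.HSN = S.SCHEDULE_ID
      JOIN SAMPLES SAM        ON PI.HSN = SAM.HSN
      JOIN PROJECT_SAMPLES PS ON PS.HSN = SAM.HSN
      JOIN PROJECTS P         ON P.PROJECT_SEQ = PS.PROJECT_SEQ  -- was PS.PROJECT_SEQ = PS.PROJECT_SEQ
     WHERE S.PROC_CODE = 'DRY WEIGHT'
       AND S.ACTIVE_FLAG = 'C'
       AND S.COND_CODE = 'CH'
       AND P.WIP_STATUS IN ('WP', 'HO')
       AND SAM.WIP_STATUS = 'WP';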

  • After call commit sql , data can not flush to disk

    I use Berkeley DB with SQL support. The version is db-5.1.19.
    1. Open a database.
    2. Create a table.
    3. Exec the "begin;" SQL.
    4. Exec the SQL which inserts a record into the table.
    5. Exec the "commit;" SQL.
    6. Copy the database files (SourceDB_912_1.db and SourceDB_912_1.db-journal) to local disk D:, then use the dbsql tool to open the database.
    7. Use a SELECT to check the data; there is no record in the table.
    1
    sqlite3 * m_pDB;
    int nRet = sqlite3_open_v2(strDBName.c_str(), & m_pDB,SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,NULL);
    2
    string strSQL="CREATE TABLE [TBLClientAccount] ( [ClientId] CHAR (36), [AccountId] CHAR (36) );";
    char * errors;
    nRet = sqlite3_exec(m_pDB, strSQL.c_str(), NULL, NULL, &errors);
    3
    nRet = sqlite3_exec(m_pDB, "begin;", NULL, NULL, &errors);
    4
    nRet = sqlite3_exec(m_pDB, "INSERT INTO TBLClientAccount (ClientId,AccountId) VALUES('dd','ddd'); ", NULL, NULL, &errors);
    5
    nRet = sqlite3_exec(m_pDB, "commit;", NULL, NULL, &errors);
    Edited by: 887973 on Sep 27, 2011 11:15 PM

    Hi,
    Here is a simple test case program I used based on your description:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "sqlite3.h"
    int error_handler(sqlite3*);
    int main()
    {
         sqlite3 *m_pDB;
         const char *strDBName = "C:/SRs/OTN Core 2290838 - after call commit sql , data can not flush to disk/SourceDB_912_1.db";
         char *errors;
         /* Open (or create) the database. */
         sqlite3_open_v2(strDBName, &m_pDB, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
         error_handler(m_pDB);
         sqlite3_exec(m_pDB, "CREATE TABLE [TBLClientAccount] ( [ClientId] CHAR (36), [AccountId] CHAR (36) );", NULL, NULL, &errors);
         error_handler(m_pDB);
         sqlite3_exec(m_pDB, "begin;", NULL, NULL, &errors);
         error_handler(m_pDB);
         sqlite3_exec(m_pDB, "INSERT INTO TBLClientAccount (ClientId,AccountId) VALUES('dd','ddd'); ", NULL, NULL, &errors);
         error_handler(m_pDB);
         sqlite3_exec(m_pDB, "commit;", NULL, NULL, &errors);
         error_handler(m_pDB);
         //sqlite3_close(m_pDB);
         //error_handler(m_pDB);
         return 0;
    }
    int error_handler(sqlite3 *db)
    {
         int err_code = sqlite3_errcode(db);
         switch(err_code) {
         case SQLITE_OK:
         case SQLITE_DONE:
         case SQLITE_ROW:
              break;
         default:
              fprintf(stderr, "ERROR: %s. ERRCODE: %d.\n", sqlite3_errmsg(db), err_code);
              exit(err_code);
         }
         return err_code;
    }
    Then I copied the SourceDB_912_1.db database and the SourceDB_912_1.db-journal directory containing the environment files (region files, log files) to D:\, opened the database using the "dbsql" command line tool, and queried the table; the data is there:
    D:\bdbsql-dir>ls -al
    -rw-rw-rw-   1 acostach 0 32768 2011-10-12 12:51 SourceDB_912_1.db
    drw-rw-rw-   2 acostach 0     0 2011-10-12 12:51 SourceDB_912_1.db-journal
    D:\bdbsql-dir>C:\BerkeleyDB\db-5.1.19\build_windows\Win32\Debug\dbsql SourceDB_912_1.db
    Berkeley DB 11g Release 2, library version 11.2.5.1.19: (August 27, 2010)
    Enter ".help" for instructions
    Enter SQL statements terminated with a ";"
    dbsql> .tables
    TBLClientAccount
    dbsql> .schema TBLClientAccount
    CREATE TABLE [TBLClientAccount] ( [ClientId] CHAR (36), [AccountId] CHAR (36) );
    dbsql> select * from TBLClientAccount;
    dd|ddd
    I do not see where the issue is. The data can be successfully retrieved, it is present in the database.
    Could you try putting in the sqlite3_close() call and see if you still get the error?
    Did you remove the __db.* files from the SourceDB_912_1.db-journal directory?
    Did you use PRAGMA synchronous, and if so, what is the value you set?
    If this is still an issue for you, please describe in more detail the exact steps needed to get this reproduced and provide a simple stand-alone test case program that reproduces it.
    Regards,
    Andrei
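    For reference, the synchronous setting Andrei asks about can be inspected and changed from the dbsql prompt. A minimal sketch, assuming the standard SQLite PRAGMA semantics that the SQL API follows (0 = OFF, 1 = NORMAL, 2 = FULL):
    -- Show the current durability setting.
    PRAGMA synchronous;
    -- Ask for a full flush to stable storage on each commit.
    PRAGMA synchronous = FULL;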

  • Raise an Event in Redwood after a Commit to an External Oracle DB?

    Hello Netweaver Community / Redwood specific,
    I am at customer site and have been presented with the following scenario and subsequent question...
    The scenario is that there is a 3rd party product (powerBuilder program / windows executable) that is updating an Oracle database on a server outside of the SAP landscape.  This PB program is writing records to the oracle db that need to be picked up by XI process.  The idea is that after the PB program makes its COMMIT of the records, then the XI process needs to be triggered to start so it can pick up the records.
    So the question is...  How can an event be raised in Redwood after a COMMIT of records has been made to this oracle DB that is on a server outside of the SAP landscape?
    I am familiar with (and we are already doing) the raising of an event in Redwood from an ABAP program with function module that raises external events.
    This is what I know - please do not hesitate to ask for further clarification.
    Thank you,
    Dean Atteberry.

    Carol,
    Sorry for the delay! I am not that frequent on the SAP forums. You can write to me directly if you have something important; we can always update the forum thread for everyone.
    I do not know XI that well, but if I understand correctly, you can trigger events in it based on the arrival of a file. You can create a dummy job in Redwood on Unix (or whatever filesystem you use) that creates a zero-byte file once the PowerBuilder job is ready.
    Let me know if this helps!
    - Bhushan

  • Mic problem after installing win7 through bootcamp

    I am having a problem with the mic on my MacBook after installing Win7 through Boot Camp. If anyone has a solution, please mail me.

    Yes, no luck there either.
    Now I cannot boot at all.
    I never had a problem with the Mac OS X side, only that crappy Microsoft software. Unfortunately,
    there is a lot of software that I need for my work that only exists under Microsoft.

  • Calling Delta Merge in DS after every commit

    Hi Folks,
    I am using delta extraction logic in DS to extract a large table from ECC (50 million rows) to the HANA database. The commits in the DS job have been configured for every 10,000 records. Three questions:
    1) Should I disable the delta merge in the HANA database for this target table prior to the initial load, and then manually perform the delta merge in HANA once the initial load is complete? Is that the right approach, or
    2) should I manually perform the delta merge in the DS job to make sure the table is merged after every commit? If yes, how do I call the delta merge command in DS jobs, and how can I do it per commit?
    3) Can I invoke the delta merge in DS as part of the delta extraction logic after the initial load is completed in DS?
    Any advice will definitely be appreciated.
    Thanks,
    -Hari

    Hi Jim
    if your big table requires a merge, AUTOMERGE will pick it up. The mergedog process checks it every 60 seconds, so that should be alright for your requirement.
    If the table doesn't need to be merged, it won't.
    Manually handling the delta merge is a fine-tuning action that is most often not required or recommendable.
    - Lars
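    For reference, a minimal SQL sketch of the approach in question 1 (switch off the automatic merge before the initial load, merge once manually, then switch it back on). The schema and table names are hypothetical, and as Lars notes above, this kind of manual handling is usually only needed for fine-tuning:
    -- Hypothetical column-store target table "MYSCHEMA"."BIG_TABLE" (HANA SQL).
    ALTER TABLE "MYSCHEMA"."BIG_TABLE" DISABLE AUTOMERGE;
    -- ... run the initial load from DS ...
    MERGE DELTA OF "MYSCHEMA"."BIG_TABLE";
    ALTER TABLE "MYSCHEMA"."BIG_TABLE" ENABLE AUTOMERGE;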

  • Reorder upd., ins. records after pressing commit (Forms 6i) - is it possible?

    Hello, I am looking for a simple way (if it is possible) to reorder the saving of records in a block.
    (FORMS 6i)
    Something like this:
    1) I have a detail block on a DB table (after a database query the data is):
    abc_fk , abc_order, abc_text
    001 01 some text1
    001 02 some text2
    001 03 some text3
    001 04 some text4
    2) Now the user wants to add new text after the 1st line. I reorder the abc_order fields to:
    001 01 some text1
    *001 02 NEW RECORD <-- new user input.*
    001 03 some text2
    001 04 some text3
    001 05 some text4
    After pressing the commit button I get a constraint error (unique key: abc_fk, abc_order).
    The committing process goes from the top down; is it possible to go from the bottom up?
    What happens now is: Forms issues an UPDATE for the new record 001 03 some text2, while a record with the same (fk, order) values already exists in the database, namely 001 03 some text3, so I get a constraint error (both records have the values 001, 03).
    If forms could save records starting with
    update 001 05 some text4,
    update 001 04 some text3,
    update 001 03 some text2,
    insert *001 02 NEW RECORD <-- new user input.*
    the problem would be solved. Is it possible to change the order of saving?
    Thank you for any help. :)

    Hey, your update kind of works :)
    If the user inserts and updates (in between) more than one row, the order changes (not the way it shows on the screen).
    picture this DB table:
    001 01
    <-------- insert here (new record 2)
    001 02
    001 03
    001 04
    <-------- insert here (new record 6)
    001 05
    001 06
    AFTER COMMIT WE GET:
    001 01
    <-------- insert here (new record 2) ok 001 08
    001 03
    001 04
    001 05
    001 06
    <-------- insert here (new record 6) not ok 001 07
    001 08
    So in the PRE-INSERT trigger, how can I/we/you/the program know that it should do :)
    UPDATE TABLE_NAME
         SET abc_order = abc_order + 1
         WHERE abc_fk >= '002'
         AND abc_order >= '05' and not ... AND abc_order >= '06' (which is the current new value of the newly inserted 2nd statement)
    I changed pre-insert to:
         UPDATE abc
         SET abc_sq =:abc.abc_sq + 1
         WHERE abc_id = :abc.abc_id
         AND abc_sq >= :abc.abc_sq;
    So I am still asking if it's possible to change the order of Forms committing from top-down to bottom-up. :)
    I still want to know if there is a chance to change the Forms order of saving so that it saves records starting with:
    update 001 05 some text4,
    update 001 04 some text3,
    update 001 03 some text2,
    insert 001 02 NEW RECORD <-- new user input.
    About the datatype you are right. We all use padding, and it's an old database/table.
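    A hedged alternative that was not raised in the thread: if the unique key on (abc_fk, abc_order) is made deferrable, Oracle only checks it at COMMIT, so the order in which Forms posts the updates and the insert stops mattering. A minimal sketch against the abc table from the posts above (the constraint name is hypothetical):
    -- Recreate the unique key so it is validated at COMMIT time rather than
    -- after each individual UPDATE/INSERT that Forms posts.
    ALTER TABLE abc DROP CONSTRAINT abc_uk;            -- hypothetical existing name
    ALTER TABLE abc ADD CONSTRAINT abc_uk
      UNIQUE (abc_fk, abc_order)
      DEFERRABLE INITIALLY DEFERRED;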

  • Remove spaces after a comma

    Hi,
    I have a string that goes like this "abc, abc def, abcd,abc"
    How do I specify that I want to remove only the space (if there is 1) after the comma and not the one in between "abc def"?
    Thanks....

    Try this:
    String s = "abc,   abc def,    abcd,abc";
    // The following call replaces every comma that is followed by
    // one or more spaces with a single comma.
    s = s.replaceAll(", +", ",");
    Good luck.

  • HT201407 "Your iPhone could not be activated because the activation server is temporarily unavailable." this massage showing after i update through itune

    "Your iPhone could not be activated because the activation server is temporarily unavailable." this massage showing after i update through itune

    See this discussion.
    https://discussions.apple.com/message/21189708

  • Need to know how to disconnect the camera Nikon Ds3 (USB cable) properly after uploading images through bridge-do not want to lose images on compact flash card.

    Need to know how to disconnect the camera Nikon Ds3 (USB cable) properly after uploading images through bridge…do not want to lose images on compact flash card.

    Give #navbar a width that is wide enough to hold all of the buttons within it.
    #navbar {
         width:####px;
    }
    Replace #### with a pixel number large enough to hold the nav buttons.

Maybe you are looking for

  • Redirect 302 doesn't work

    I need to do a redirect 302. I made this script but it doesn't work. I modified a script and included a redirect 302 inside it. Could you tell me what is wrong? .. ... .. Over this script there is other code. .. .. . IF Request.QueryString("id") = "o

  • Downpayment on assets

    Hi gurus, your opinion is required regarding the GL entry that is created whenever a downpayment is made against an asset for which a proper PR and PO were raised from the MM side. Downpayments are done with the M indicator and we use F-48 to pay vendor advances. PO i

  • IWeb Crashes When Trying to Publish

    My iWeb has been consistently crashing anytime I try to publish a site. I was hoping updated versions would correct the problem, but that did not help. I have repaired permissions and deleted the iWeb plist file, yet iWeb still will not publish to M

  • Problem in Updating LIPS from USEREXIT_SAVE_DOCUMENT

    Hi, I am working with VL01N,VL02N. I have created a custom field in table lips and am supposed to update that based on certain logic. I have written my logic in include MV50AFZ1, USEREXIT_SAVE_DOCUMENT. Here, I have updated XLIPS-custom field with my

  • Some dummies need help in master detail jsp page

    We don't know how to use a data control to generate a JSP master-detail page in the case of many-to-many relationships. We didn't find any info about this and our test case never displays the data of the last iterator. We have a collection that contains depa