Log Table History With Trigger

Hello,
I'm trying to log all changes on a table in a second table with a trigger.
This is my code:
CREATE OR REPLACE
TRIGGER TRG_INS_ROTOKOLL
AFTER INSERT ON vertraege
FOR EACH ROW
REFERENCING OLD AS alt NEW AS neu
BEGIN
INSERT INTO vertraege_protokoll SELECT * FROM vertraege vt WHERE vt.id = :NEW.id;
END;
Does anybody know why I get an "invalid trigger" error (ORA-04079)?
Does anybody know a better method to solve this problem?
Thanks in advance -
Timo Paschke

There is nothing wrong with
REFERENCING OLD AS alt NEW AS neu
but FOR EACH ROW must be placed after the REFERENCING clause, at the end of the trigger header:
CREATE OR REPLACE TRIGGER TRG_INS_ROTOKOLL
AFTER INSERT ON vertraege REFERENCING OLD AS alt NEW AS neu
FOR EACH ROW
BEGIN
  NULL;  -- trigger body goes here
END;
/
** Then you will encounter a mutating table error (ORA-04091), because
you are selecting from the same table the trigger is defined on.
Instead you can insert directly into your log table
using your :neu values:
insert into vertraege_protokoll values (
:neu.<colname>, :neu.<colname>, :neu.<colname>, ...);
-S.K
To me,
REFERENCING OLD AS alt NEW AS neu
will cause problems, as he is using :NEW in the trigger body.
After noting that, I found the trigger to be wrong in several respects, so I recommend reading the docs.
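Putting the two replies together, a minimal sketch of a corrected trigger could look like the block below. The column names col1 and col2 are placeholders, since the actual columns of vertraege and vertraege_protokoll are not shown in the thread; the values come straight from :neu, so nothing is selected from the table the trigger is defined on.
CREATE OR REPLACE TRIGGER TRG_INS_ROTOKOLL
AFTER INSERT ON vertraege
REFERENCING OLD AS alt NEW AS neu
FOR EACH ROW
BEGIN
  -- log the freshly inserted values directly; no SELECT on vertraege
  INSERT INTO vertraege_protokoll (id, col1, col2)
  VALUES (:neu.id, :neu.col1, :neu.col2);
END;
/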

Similar Messages

  • Where to see the images other than change log table

    Dear all,
                   How can we view images for extractions (for example, generic ones) in a DSO without using the change log table?
    With Regards,
    Baskaran

    Hi,
    There is no option other than the change log table to view the images.
    In the PSA we only get raw data, i.e. the data exactly as it was posted in the source system; it is nothing more than a copy of the source system data.
    After the PSA, if the DataSource is delta-enabled (able to deliver NEW or CHANGED records, which is where the concept of images comes in), the data has to be loaded into a DSO. From there, after activation of the data, the data can be updated to further data targets only from the change log table, which maintains the images.
    Even if the further targets are also DSOs, they have to be loaded from this DSO, so images can be seen only in the change log table.
    Hope this is clear for you.
    Regards
    Ramsunder

  • Logging table for period open/close

    Dear all,
    does anybody know whether BCS maintains a logging table (history) or similar for the open/close periods activity within the Consolidation Monitor?
    Thanks in advance!
    Jochen
    Edited by: Jochen Röhr on Aug 3, 2011 2:23 PM
    No ideas from our BCS experts?
    This must be a typical question for any BCS application!

    Look at /1SEM/UCS* and UCL* tables, you might find some information.

  • When the cache log table is modified to "nologging", does any problem occur?

    Test environment:
        *. readonly cache group :
    create readonly cache group cg_tb_test1
    autorefresh interval 1 seconds
    from
    TB_TEST1
    (       C1      tt_integer,
             C2      CHAR (10),
             C3      tt_integer,
             C4      CHAR (10),
             C5      CHAR (10),
             C6      CHAR (10),
             C7      CHAR (10),
             C8      CHAR (10),
             C9      tt_integer,
             C10     DATE,
      PRIMARY KEY (C1)
    );
        *. oracle's tables
        SQL> select * from tab;
             TB_TEST1                       TABLE
             TT_06_147954_L                 TABLE
             TT_06_AGENT_STATUS             TABLE
            TT_06_AR_PARAMS                TABLE
            TT_06_CACHE_STATS              TABLE
            TT_06_DATABASES                TABLE
            TT_06_DBSPECIFIC_PARAMS        TABLE
            TT_06_DB_PARAMS                TABLE
            TT_06_DDL_L                    TABLE
            TT_06_DDL_TRACKING             TABLE
            TT_06_LOG_SPACE_STATS          TABLE
            TT_06_SYNC_OBJS                TABLE
            TT_06_USER_COUNT               TABLE
    15 rows selected.
        SQL>
    After the cache group was created, lots of archive logs were generated, so I altered the log table "TT_06_147954_L" to NOLOGGING.
    Could any problem occur because of this?
    Thank you.

    If you ever need to recover the Oracle database, or this table, in any way then you are hosed and things will break. Also, I'm pretty sure this is not supported. Why is the logging a problem?
    Chris

  • Issue with trigger, multi-table insert and error logging

    I find that if I try to perform a multi-table insert with error logging on a table that has a trigger, then some constraint violations result in an exception being raised as well as logged:
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL> create table t1 (id integer primary key);
    Table created.
    SQL> create table t2 (id integer primary key, t1_id integer,
    2           constraint t2_t1_fk foreign key (t1_id) references t1);
    Table created.
    SQL> exec dbms_errlog.create_error_log ('T2');
    PL/SQL procedure successfully completed.
    SQL> insert all
    2 into t2 (id, t1_id)
    3 values (x, y)
    4 log errors into err$_t2 reject limit unlimited
    5 select 1 x, 2 y from dual;
    0 rows created.
    SQL> create or replace trigger t2_trg
    2 before insert or update on t2
    3 for each row
    4 begin
    5 null;
    6 end;
    7 /
    Trigger created.
    SQL> insert all
    2 into t2 (id, t1_id)
    3 values (x, y)
    4 log errors into err$_t2 reject limit unlimited
    5 select 1 x, 2 y from dual;
    insert all
    ERROR at line 1:
    ORA-02291: integrity constraint (EOR.T2_T1_FK) violated - parent key not found
    This doesn't appear to be a documented restriction. Does anyone know if it is a bug?

    Tony Andrews wrote:
    This doesn't appear to be a documented restriction. Does anyone know if it is a bug?
    Check "The Execution Model for Triggers and Integrity Constraint Checking" in the documentation.
    SY.

  • Mutating table exception on trigger with After Insert but not with before

    Hi
    I need to ensure that the table contains only one row for a given combination of column values, but not via a primary key constraint.
    With a primary key the insert would fail and the rest of the operation would be discontinued, and I cannot change the component that inserts the row, so I have to prevent duplicates on the table itself.
    I created a before-insert trigger on the table which checks whether any row already exists with the same column values as the one being inserted. If such rows are found, I delete them and let the insert happen (without raising any error); if they do not exist, the insert simply continues.
    I have read in several places that modifying the same table in the trigger body should raise a mutating table exception, but I do not get the exception when the trigger is fired.
    Only when I change the trigger to an after-insert trigger is the mutating table exception thrown.
    Is this the correct behavior, i.e. that a before-insert trigger does not raise the exception and only an after-insert trigger does? I could not find an example of a before-insert trigger throwing this exception, so I would rather confirm it before finalizing the implementation.
    Thanks
    Sapan

    sapan wrote:
    Hi Tubby
    I cannot use a unique constraint because that would raise an exception upon violation and the third-party component that is inserting into the table would fail.
    That component does some other tasks as well after this insert, and if an exception is raised those tasks would not be performed.
    Also, I cannot change the component to ignore this exception.
    Well then, you're in a bit of a pickle.
    I'm guessing the trigger you have been working on isn't "safe". By that I mean that it doesn't account for multi-user scenarios. You'll need to serialize access to the data elements in question and implement some sort of locking mechanism to ensure that only one session can work with those values.
    After you work out how to do that it sounds as though you would be better served using an INSTEAD OF trigger (you'd need to implement this on a view which is made off of your base table).
    Here's one way you can work on serializing access to your table on a relatively fine grained level (as opposed to locking the entire table).
    Re: possible to lock stored procedure so only one session may run it at a time?
    Cheers,
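    To illustrate the kind of serialization described above, here is a rough, hypothetical sketch using DBMS_LOCK; the lock name, key value and timeout are assumptions for illustration and are not taken from the thread (the session also needs EXECUTE on DBMS_LOCK):
    DECLARE
      l_lock_handle VARCHAR2(128);
      l_status      INTEGER;
    BEGIN
      -- one lock name per business key, so only access to that key is serialized
      DBMS_LOCK.ALLOCATE_UNIQUE('MYAPP_DUPCHECK_' || 'key_value', l_lock_handle);
      l_status := DBMS_LOCK.REQUEST(l_lock_handle,
                                    DBMS_LOCK.X_MODE,
                                    timeout           => 10,
                                    release_on_commit => TRUE);
      IF l_status = 0 THEN
        NULL;  -- safe to check for duplicates and insert here
      ELSE
        RAISE_APPLICATION_ERROR(-20001, 'could not acquire lock, status ' || l_status);
      END IF;
    END;
    /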

  • Materialized View Log table with sequence

    I have a materialized view log in Database A..a very simple one...similar to "create materialized view log on t WITH PRIMARY KEY"...The primary key is a composite key of 2 columns...Description of the mlog table looks like this...
    desc mlog$_t
    emp_ctr NUMBER
    emp_date DATE
    SNAPTIME$$ DATE
    DMLTYPE$$ VARCHAR2(1)
    OLD_NEW$$ VARCHAR2(1)
    CHANGE_VECTOR$$ RAW(255)
    The materialized view is in database B, again a simple one with no aggregation. It gets refreshed on demand manually every week. Roughly 300K records accumulate in the MLOG every week. The manual refresh used to finish in about 20 minutes, but since last week it has been running forever. Upon reviewing the contents of the MLOG I noticed that it contained a mixture of inserts and updates; I'm not sure whether the MLOG contained only inserts in the past, when the MV used to refresh very quickly. Based on the documentation, Oracle recommends creating MLOGs with "sequence" whenever inserts/updates/deletes are expected. I'm planning to drop and recreate the MLOG with sequence in the hope that this fixes the slow refresh. I'd appreciate your thoughts.
    I'm also planning to trace the session to get some wait event statistics and find out the reason for the slow refresh.

    There are a few reasons why refreshes slow down over time.
    1. Is the log table being cleared out after a refresh? If not, you may have another mview (maybe even one that doesn't exist any more) registered against this log. When this happens, the log table never stops growing.
    2. How big is the segment occupied by the table? If it has grown very large at some time in the past (e.g., because refresh was delayed), then when the refresh does a FTS on the log it will scan the whole segment - even if only a few rows are present.
    3. It may help to add an index to the PK columns, and another to SNAPTIME$$. Without them, the refresh will FTS the log. For a big segment, that can take a long time.
    -- Phil
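    As a sketch of what the poster is planning, combined with Phil's third suggestion: the table name t comes from the statement quoted above, but everything else is an assumption and has not been verified against that system.
    DROP MATERIALIZED VIEW LOG ON t;
    CREATE MATERIALIZED VIEW LOG ON t
      WITH PRIMARY KEY, SEQUENCE;
    -- index the log so a refresh does not have to full-scan a large segment
    CREATE INDEX mlog$_t_snap_idx ON mlog$_t (snaptime$$);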

  • Ever since enabling Soft Delete on one of my tables, the logs are flooded with "... does not support the 'deleted' system property"

    I enabled Soft Delete on one of my Azure Mobile Services tables, and ever since then, the logs are flooded with tons of warnings that say something like this:
    The table 'Section' does not support the 'deleted' system property.
    Is there a way to suppress these warnings, or is it advisable to enable soft delete for all of my tables?
    As a follow on, is there a way to export the logs so that it's easier to peruse through them and search?
    Thanks :)

    Hi
    You can set your logging level in the Azure Admin portal to Error only, so that only errors are logged and warnings are ignored.
    Regards
    Aram

  • Insert into two tables with trigger on PK in second table

    Hi everyone, I need help.
    I have two tables (organizations, addresses).
    On the addresses table I have a trigger that populates the PK. When I do the insert I must get this value back and insert it into the organizations table as a reference.
    Without ADF I can do the insert on addresses with a RETURNING clause and then do the insert on organizations with the returned value. How can I do this in ADF business logic using a train task flow?
    Thanks all.
    Edited by: WaterStream on 15.10.2012 15:10

    Thanks for the reply, but I found a solution in these materials:
    http://liuwuhua.blogspot.com/2010/11/master-detail-crud-in-adf-bc.html
    (from this link anyone can download the model and see all the params)
    In my project I then got a JBO-25030 error. The solution for that was found here:
    http://vtkrishn.com/2011/02/09/oracle-jbo-invalidownerexception/
    Works great!
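    For reference, the RETURNING approach mentioned in the question would look roughly like this outside ADF; the column names are made up for the example and the PK of addresses is assumed to be filled by the trigger:
    DECLARE
      l_address_id addresses.id%TYPE;
    BEGIN
      INSERT INTO addresses (street, city)
      VALUES ('Main St 1', 'Berlin')
      RETURNING id INTO l_address_id;   -- the trigger-generated PK comes back here
      INSERT INTO organizations (name, address_id)
      VALUES ('ACME', l_address_id);
    END;
    /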

  • How to update two tables with trigger

    Hi:
    How can I update two tables with a trigger?
    I have two tables:
    (1) ASIA
    MI number;
    (2) ASIA_P
    ID number;
    When I insert a new value into asia.MI, I also want to insert the same value into the asia_p.ID field.
    I have written a trigger as follows, but it doesn't work.
    create or replace trigger MI_TRG
    before insert on asia
    for each row
    declare
    seq number;
    begin
    select MI_SEQ.Nextval into seq from dual;
    :new.MI:=seq;
    insert into ASIA_PRO(MI_ID)
    values
    (seq);
    end MI_TRG;
    How to realize it ?
    thanks
    zzm

    Why do you say it does not work?
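    One thing worth noting when reading the post: the trigger inserts into ASIA_PRO(MI_ID), while the table described above is ASIA_P with column ID. If the tables really are named as described, a sketch along the following lines might behave as intended (the direct sequence assignment assumes 11g or later; on older versions keep the SELECT ... INTO):
    create or replace trigger MI_TRG
    before insert on asia
    for each row
    begin
      :new.MI := MI_SEQ.nextval;                  -- sequence value for ASIA.MI
      insert into asia_p (id) values (:new.MI);   -- same value into ASIA_P.ID
    end MI_TRG;
    /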

  • Logging Table changes and transactions

    I am using oracle 11.2 DB and need to track some changes made to specific tables (need to capture old/new values of columns). I have experimented with creating a MV log on a table which seems to get me most of what I want:
    CREATE materialized view log on students WITH SEQUENCE,rowid (id,last_name,first_name,middle_name) INCLUDING NEW VALUES
    Since some of the processes update more than one table at a time before committing, I need to be able to capture that these table updates are all part of the same transaction (commit). Is there any way of doing that?
    ie.
    update table 1 set x = '1';
    update table 2 set y = '2';
    update table 3 set z = '3';
    commit;

    Triggers:
    http://www.adp-gmbh.ch/ora/sql/trigger/before_after_each_row.html
    http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/triggers.htm
    Remember to close your threads by marking them as answered once your doubt is solved, to keep the forum clean.
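    As a hedged sketch of the trigger-based idea: one way to correlate changes made in the same commit is to store the local transaction id with every audit row. The audit table and column names below are assumptions for illustration; only id and last_name come from the MV log statement quoted in the question.
    CREATE TABLE students_audit (
      txn_id     VARCHAR2(100),
      changed_at TIMESTAMP,
      student_id NUMBER,
      old_last   VARCHAR2(100),
      new_last   VARCHAR2(100)
    );
    CREATE OR REPLACE TRIGGER students_audit_trg
    AFTER UPDATE OF last_name ON students
    FOR EACH ROW
    DECLARE
      l_txn_id VARCHAR2(100);
    BEGIN
      -- rows written in the same transaction share the same txn_id,
      -- so changes across several audited tables can be grouped afterwards
      l_txn_id := DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
      INSERT INTO students_audit
      VALUES (l_txn_id, SYSTIMESTAMP, :old.id, :old.last_name, :new.last_name);
    END;
    /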

  • Delete Entries of Change Log Table

    Can anyone tell me when we need to delete the content of the change log table of a standard DSO? I am a fresher to SAP BI.
    Thank You

    Hi
    Deleting data from the change log of an ODS object is recommended if several requests, which are no longer required for the delta update and are no longer used for an initialization from the change log, have already been loaded into the ODS object. If a delta initialization for the update exists in connected data targets, the requests have to be updated there first before the respective data can be deleted from the change log.
    Only a temporary, limited history is then retained. The change log can possibly become so large that you may want to reduce the data volume and delete data for a specific time period.
    How to Delete it
    Since the change log is also stored as a PSA table, you can use the function for deleting data from the PSA to delete data from the change log.
    In the ODS object administration, use the main menu to choose Environment -> Delete Change Log Data.
    Or
    Go to the PSA tree.
    Use the main menu to choose Settings -> Display Generated Objects, so that you can display the InfoSource for your ODS object. Your InfoSource has the same name as your ODS object, with the prefix '8'.
    Use the context menu to choose Delete Change Log Data.
    Santosh

  • Problem with trigger and entity in JHeadstart, JBO-25019

    Hi to all,
    I am using JDeveloper 10.1.2 and developing an application using ADF Business Components and JHeadstart 10.1.2.27.
    I have a problem with a trigger and an entity in JHeadstart.
    I have 3 entity and 3 views
    DsitTelephoneView based on DsitTelephone entity based on DSIT_TELEPHONE database table.
    TelUoView based on TelUo entity based on TEL_UO database table.
    NewAnnuaireView based on NewAnnuaire entity based on NEW_ANNUAIRE database view.
    I am using JHS to create :
    A JHS table-form based on DsitTelephoneView
    A JHS table based on TelUoView
    A JHS table based on NewAnnuaireView
    LIB_POSTE is a column of DSIT_TELEPHONE, TEL_UO and NEW_ANNUAIRE.
    NEW_ANNUAIRE database view is built from DSIT_TELEPHONE database table.
    Lib_poste is an updatable attribute in the TelUo, DsitTelephone and NewAnnuaire entities.
    Lib_poste is updated in the JHS table based on TelUoView.
    I added a trigger to my database schema "IAN" to update LIB_POSTE in the DSIT_TELEPHONE database table:
    CREATE OR REPLACE TRIGGER "IAN".TEL_UO_UPDATE_LIB_POSTE
    AFTER INSERT OR UPDATE OF lib_poste ON IAN.TEL_UO
    FOR EACH ROW
    BEGIN
    UPDATE DSIT_TELEPHONE T
    SET t.lib_poste = :new.lib_poste
    WHERE t.id_tel = :new.id_tel;
    END;
    When I change the lib_poste with the application:
    - the lib_poste in the DSIT_TELEPHONE database table is correctly updated by the trigger.
    - but in the JHS table-form based on DsitTelephoneView the lib_poste is not updated. If I do a quick search it is updated.
    - in the JHS table based on NewAnnuaireView the lib_poste is not updated. If I do a quick search, I get an error:
    oracle.jbo.RowAlreadyDeletedException: JBO-25019: The row of entity with key oracle.jbo.Key[null 25588] is not found in NewAnnuaire.
    25588 is the primary key of the row in NEW_ANNUAIRE whose lib_poste was updated by the trigger.
    It is as if the link to the row in the entity had been lost.
    Could you help me please ?
    Regards
    Laurent

    The following example should help.
    SQL> create sequence workorders_seq
      2  start with 1
      3  increment by 1
      4  nocycle
      5  nocache;
    Sequence created.
    SQL> create table workorders(workorder_id number,
      2  description varchar2(30),
      3   created_date date default sysdate);
    Table created.
    SQL> CREATE OR REPLACE TRIGGER TIMESTAMP_CREATED
      2  BEFORE INSERT ON workorders
      3  FOR EACH ROW
      4  BEGIN
      5  SELECT workorders_seq.nextval
      6    INTO :new.workorder_id
      7    FROM dual;
      8  END;
      9  /
    Trigger created.
    SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
    Session altered.
    SQL> insert into workorders(description) values('test1');
    1 row created.
    SQL> insert into workorders(description) values('test2');
    1 row created.
    SQL> select * from workorders;
    WORKORDER_ID DESCRIPTION                    CREATED_DATE
               1 test1                          30-NOV-2004 15:30:34
               2 test2                          30-NOV-2004 15:30:42
    2 rows selected.

  • Deleting data from a very large log table (custom table in our namespace)

    Hello,
    I have been tasked with clearing a log table in our landscape so that it only includes the most recent entries. Is it possible to do this given that the table already has 230 000 000 entries and around 600 000 recent entries need to be kept?
    Should I do this via ABAP and if so, how?  Thanks,
    Samir

    Hi,
    so you are going to keep 0.3% of your data?
    Whether you should do it in ABAP or on the database is your decision.
    In my opinion, doing things directly on the database should be reserved
    for exceptional cases only, e.g. one-time actions or actions that have to
    be done very rarely and with varying parameters/options. Regular
    and recurring tasks should be done in ABAP, I think.
    In any case I would not delete the majority of the records, but copy
    the records to keep into an empty table with the same structure, delete the
    old table as a whole (check clients!) and either "copy" the new table back in ABAP
    or rename the new table to the old name after dropping the old table on the database.
    If you have only one client you can copy the data you need into a new
    table and truncate the old table (fast deletion for all clients). If you have
    data to keep for other clients as well, check how much data that is per client
    compared to the total number of lines (if it is only a small fraction, prefer copying
    it too).
    On the database you can use CTAS (CREATE TABLE AS SELECT), DROP TABLE and
    RENAME TABLE. Those commands should be very efficient but work client-independently.
    If you have to consider clients, SELECT, INSERT, DELETE or TRUNCATE (depending on whether you have
    copied all data considering clients) are
    your friends.
    Kind regards,
    Hermann
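    A rough sketch of the database-level CTAS-and-rename variant described above; the table name and the retention predicate are assumptions, and since it works across all clients it only fits the single-client case:
    -- copy only the rows to keep, then swap the tables
    CREATE TABLE zlog_keep AS
      SELECT * FROM zlog WHERE log_date >= SYSDATE - 30;
    DROP TABLE zlog;
    RENAME zlog_keep TO zlog;
    -- indexes, constraints and grants have to be recreated afterwards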

  • R3load export of  table REPOSRC with lob col - error ora-1555 and ora-22924

    Hello,
    I have tried to export data from our production system for a system copy and subsequent upgrade test. During the export the R3load job reported an error for table REPOSRC, which has the LOB column DATA. I have pasted below the conversation in which I asked SAP for help; they said it falls under consulting support. The problem is in 2 rows of the table.
    I would like to know: if I delete these 2 rows and then copy them from our development system to the production system at the Oracle level, will there be any problem with the upgrade or the operation of these programs, and will it have any license implications if I do it?
    Regards
    Ramakrishna Reddy
    __________________________ SAP support conversation _____________________________________________________
    Hello,
    we are performing a data export for a system copy of our production
    system. During the export, the R3load job gave the following error:
    R3LOAD Log----
    Compiled Aug 16 2008 04:47:59
    /sapmnt/DB1/exe/R3load -datacodepage 1100 -
    e /dataexport/syscopy/SAPSSEXC.cmd -l /dataexport/syscopy/SAPSSEXC.log -stop_on_error
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): WE8DEC
    (DB) INFO: Export without hintfile
    (NT) Error: TPRI_PAR: normal NameTab from 20090828184449 younger than
    alternate NameTab from 20030211191957!
    (SYSLOG) INFO: k CQF :
    TPRI_PAR&20030211191957&20090828184449& rscpgdio 47
    (CNV) WARNING: conversion from 8600 to 1100 not possible
    (GSI) INFO: dbname = "DB120050205010209
    (GSI) INFO: vname = "ORACLE "
    (GSI) INFO: hostname
    = "dbttsap "
    (GSI) INFO: sysname = "AIX"
    (GSI) INFO: nodename = "dbttsap"
    (GSI) INFO: release = "2"
    (GSI) INFO: version = "5"
    (GSI) INFO: machine = "00C8793E4C00"
    (GSI) INFO: instno = "0020111547"
    (DBC) Info: No commits during lob export
    DbSl Trace: OCI-call 'OCILobRead' failed: rc = 1555
    DbSl Trace: ORA-1555 occurred when reading from a LOB
    (EXP) ERROR: DbSlLobGetPiece failed
    rc = 99, table "REPOSRC"
    (SQL error 1555)
    error message returned by DbSl:
    ORA-01555: snapshot too old: rollback segment number with name "" too
    small
    ORA-22924: snapshot too old
    (DB) INFO: disconnected from DB
    /sapmnt/DB1/exe/R3load: job finished with 1 error(s)
    /sapmnt/DB1/exe/R3load: END OF LOG: 20100816104734
    END of R3LOAD Log----
    Then, as per note 500340, I changed the PCTVERSION of the LOB column
    DATA of table REPOSRC to 30, but I still get the error.
    I have also added more space to PSAPUNDO and PSAPTEMP, still the same
    error.
    Then I ran the export as
    exp SAPDB1/sap file=REPOSRC.dmp log=REPOSRC.log tables=REPOSRC
    exp log----
    dbttsap:oradb1 5> exp SAPDB1/sap file=REPOSRC.dmp log=REPOSRC.log
    tables=REPOSRC
    Export: Release 9.2.0.8.0 - Production on Mon Aug 16 13:40:27 2010
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit
    Production
    With the Partitioning option
    JServer Release 9.2.0.8.0 - Production
    Export done in WE8DEC character set and UTF8 NCHAR character set
    About to export specified tables via Conventional Path ...
    . . exporting table REPOSRC
    EXP-00056: ORACLE error 1555 encountered
    ORA-01555: snapshot too old: rollback segment number with name "" too
    small
    ORA-22924: snapshot too old
    Export terminated successfully with warnings.
    SQL> select table_name, segment_name, cache, nvl(to_char
    (pctversion),'NULL') pctversion, nvl(to_char(retention),'NULL')
    retention from dba_lobs where
    table_name = 'REPOSRC';
    TABLE_NAME | SEGMENT_NAME |CACHE | PCTVERSION | RETENTION
    REPOSRC SYS_LOB0000014507C00034$$ NO 30 21600
    please help to solve this problem.
    Regards
    Ramakrishna Reddy
    Dear customer,
    Thank you very much for contacting us at SAP global support.
    Regarding your issue would you please attach your ORACLE alert log and
    trace file to this message?
    Thanks and regards.
    Hello,
    Thanks for helping,
    I attached the alert log file. I have gone through it, but I could
    not find the corresponding ORA-01555 entry for table REPOSRC.
    Regards
    Ramakrishna Reddy
    +66 85835-4272
    Dear customer,
    I have found some previous issues with a symptom similar to yours. I think this symptom is described in note 983230.
    As you can see this symptom is mainly caused by ORACLE bug 5212539 and
    it should be fixed at 9.2.0.8 which is just your version. But although
    5212539 is implemented, only the occurrence of new corruptions will be
    avoided, the already existing ones will stay in the system regardless of the patch.
    The reason why metalink 452341.1 was created is bug 5212539, since this
    is the most common software caused lob corruption in recent times.
    Basically any system that was running without a patch for bug 5212539 at some time in the past could be potentially affected by the problem.
    In order to be sure about bug 5212539 can you please verify whether the
    affected lob really is a NOCACHE lob? You can do this as described in
    mentioned note #983230. If yes, then there are basically only two
    options left:
    -> You apply a backup to the system that does not contain these
    corruptions.
    -> In case a good backup is not available, it would be possible to
    rebuild the table including the lob segment with possible data loss . Since this is beyond the scope of support, this would have to be
    done via remote consulting.
    Any further question, please contact us freely.
    Thanks and regards.
    Hello,
    Thanks for the Help and support,
    I have gone through note 983230 and metalink 452341.1,
    and I have run the script and found that there are 2 corrupted rows in
    the table REPOSRC. These rows belong to the standard SAP programs
    MABADRFENTRIES & SAPFGARC.
    To reconfirm, I tried to display them in our development system
    and the production system. The development system shows the source code in
    SE38, but in the production system it goes to the short dump DBIF_REPO_SQL_ERROR.
    So, is it possible to delete these 2 rows and update them ourselves from our
    development system at the Oracle level? Will it have any impact on SAP
    operation or a future upgrade?
    Regards
    Ramakrishna Reddy

    Hello, we have solved the problem.
    To help someone with the same error, what we have done is:
    1.- Wait until all the processes have finished and the export has stopped.
    2.- Start up SAP.
    3.- In SE14, look up the tables and create them in the database.
    4.- Stop SAP.
    5.- Retry the export (if you did all the steps with sapinst still running and the dialogue window on the screen) or start sapinst again with the option "continue with the old options".
    Regards to all.
