Datapump expdp/impdp and sequences

Hi all. I have a 10g XE database on my dev machine with a whole load of test data in it. I want to copy the schema to another machine without the data, which I can do with expdp usr/pwd CONTENT=metadata_only, and I can import it with impdp and everything works tickety-boo. Except that all the sequences are populated where the test data left off. Can someone please tell me how to copy the schema with the sequences reset? I'm guessing either I can export the schema resetting the sequences (somehow) or export EXCLUDING the sequences and create them separately (somehow). Thanks for reading.

I don't think you can reset the sequences directly. You can run the import into a SQL file instead and then use search/replace on it:
$ impdp user/pass dumpfile=test.dmp directory=MY_DIR include=SEQUENCE sqlfile=sequences.sql
You will have several lines like this inside "sequences.sql":
CREATE SEQUENCE "USER"."SEQ_NAME" MINVALUE 1 MAXVALUE 99999999 INCREMENT BY 1 START WITH 1857 CACHE 20 ORDER CYCLE ;
Then just use a regular expression to replace "START WITH NNNN" with "START WITH 1".
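For example, a minimal sed sketch (assuming GNU sed and the sequences.sql file generated above, with no other START WITH text in it):
$ sed -i 's/START WITH [0-9][0-9]* /START WITH 1 /' sequences.sql
Then run sequences.sql on the target after an import that uses EXCLUDE=SEQUENCE.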

Similar Messages

  • Expdp/impdp: update sequences

    I'm trying to use impdp to copy a database from A to B. I want the sequences to match on both A and B. However, since the sequences already exist in B, the import does not update them on B to match A. Thanks.

    Import cannot resolve your problem. You should do it manually, sequence by sequence, on your target DB: drop each sequence and recreate it with the expected start value.
    By the way, who cares about the value of an Oracle sequence? They are nothing but unique values.
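    For example, a minimal recreate sketch (hypothetical schema and sequence names; take the new start value from DBA_SEQUENCES.LAST_NUMBER on the source):
    DROP SEQUENCE scott.seq_name;
    CREATE SEQUENCE scott.seq_name START WITH 1858 INCREMENT BY 1 CACHE 20;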
    Nicolas.

  • System generated Index names different on target database after expdp/impdp

    After performing expdp/impdp to move data from one database (A) to another (B), the system-generated indexes have different names on the target database, which caused a major issue with GoldenGate. Could anyone provide any tricks on how to perform the expdp/impdp so that the system-generated index names come out the same on both source and target?
    Thanks in advance.
    JL

    While I do not agree with Sb's choice of wording, his solution is correct. I suggest you drop and recreate the objects using explicit naming; then you will get the same names on the target database after import for constraints, indexes, and FKs.
    A detailed description of the problem this caused with GoldenGate would be interesting. The full Oracle and GoldenGate versions in use might also be important to the solution, if one exists other than explicit naming.
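    A minimal sketch of explicit naming (hypothetical table and column names); an explicitly named index and constraint keep their names across expdp/impdp:
    CREATE UNIQUE INDEX emp_pk ON emp (empno);
    ALTER TABLE emp ADD CONSTRAINT emp_pk PRIMARY KEY (empno) USING INDEX emp_pk;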
    HTH -- Mark D Powell --
    Edited by: Mark D Powell on May 30, 2012 12:26 PM

  • [ETL] TTS vs expdp/impdp vs ctas (dblink)

    Hi, all.
    The database is oracle 10gR2 on a unix machine.
    Assuming that the DB size is about 1 terabyte (tables: 500 GB, indexes: 500 GB),
    how much faster is TTS (transportable tablespace) than expdp/impdp, and than CTAS (over a dblink)?
    As you know, the speed of ETL depends on the hardware capacity (IO capacity, network bandwidth, number of CPUs).
    I would just like to hear general guidance from your experience.
    Thanks in advance.
    Best Regards.

    869578 wrote:
    how much faster is TTS (transportable tablespace) than expdp/impdp, and than CTAS (over a dblink)?
    http://docs.oracle.com/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01101
    Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.
    If you really want to know "how much faster" you're going to have to benchmark. Lots of variables come into play, so it's best to determine this in your actual environment.
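    For reference, a minimal transportable-tablespace sketch (hypothetical tablespace name and datafile path; the tablespace set must be self-contained and kept read-only while the datafiles are copied):
    SQL> ALTER TABLESPACE users_ts READ ONLY;
    $ expdp system/pwd directory=DATA_PUMP_DIR dumpfile=tts.dmp transport_tablespaces=users_ts
    -- copy the datafiles to the target host, then on the target:
    $ impdp system/pwd directory=DATA_PUMP_DIR dumpfile=tts.dmp transport_datafiles='/u01/oradata/users_ts01.dbf'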
    Cheers,

  • Export and import of a table having a LOB column using expdp/impdp

    We have one table with a LOB column; the LOB segment is now approx 550 GB.
    As per our knowledge, LOB space cannot be reused, so we have already raised an SR on that.
    We have come to the conclusion that we need to take a backup of this table, then truncate the table, and then import it back.
    We need help on the below points:
    1) We are taking the backup with expdp using PARALLEL=4. Will this backup complete successfully? Are there any other parameters we need to set for expdp while taking the backup?
    2) Once the truncate is done, will the import complete successfully? Do we need to increase the SGA, PGA, undo tablespace size, or undo retention for the import to complete successfully? This is a production-critical database.
    Current SGA: 2 GB
    PGA: 398 MB
    Undo retention: 1800
    Undo tablespace: 6 GB
    Please can anyone suggest how to perform this activity without errors, and which parameters to use during expdp/impdp.
    Thanks in advance.

    Hi,
    From my experience, be prepared for a long outage to do this - expdp is pretty quick at getting LOBs out but very slow at getting them back in again - a lot of the speed optimizations that make Data Pump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first - can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in....
    You might want to consider DBMS_REDEFINITION instead?
    Here you precreate a temporary table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over - giving you minimal downtime. I think this should work fine with LOBs at 10g, but you should do some research and confirm it. You'll need a lot of extra tablespace (temporarily) for this approach though.
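    A minimal DBMS_REDEFINITION sketch (hypothetical schema and table names; the interim table must already exist with the same definition, and dependent objects still need copying, e.g. with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS):
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_LOB_TAB');
      DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');
    END;
    /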
    Regards,
    Harry

  • Expdp+Impdp: Does the user have to have DBA privilege?

    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) his own schema?
    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) other (=not his own) schemas ?
    If he is not allowed: Which GRANT is necessary to be able to perform such expdp/impdp operations?
    Peter
    Edited by: user559463 on Feb 28, 2010 7:49 AM

    Hello,
    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) his own schema?Yes, a User can always export its own objects.
    Is a "normal" user (=Without DBA privilege) allowed to export and import (with new expdp/impdp) other (=not his own) schemas ?Yes, if this User has EXP_FULL_DATABASE and IMP_FUL_DATABASE Roles.
    So, you can create a User and GRANT it EXP_FULL_DATABASE and IMP_FULL_DATABASE Roles and, being connected
    to this User, you could export/import any Object from / to any Schemas.
    On databases on which there are a lot of export/import operations, I always create a special user with these roles.
    NB: with Data Pump you should also GRANT READ, WRITE privileges on the DIRECTORY (if you use a dump file) to the user.
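    A minimal sketch of such a setup (dp_admin is a hypothetical user name; DATA_PUMP_DIR is the standard directory object):
    CREATE USER dp_admin IDENTIFIED BY secret;
    GRANT CREATE SESSION, EXP_FULL_DATABASE, IMP_FULL_DATABASE TO dp_admin;
    GRANT READ, WRITE ON DIRECTORY data_pump_dir TO dp_admin;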
    Also, be accurate in your choice of words: as previously posted, DBA is a role, not a privilege, which has another meaning.
    Hope this helps.
    Best regards,
    Jean-Valentin

  • I need help with IMPDP and the parameter 'EXCLUDE'...

    Hi all... I'm new to this forum and I'm looking for some help =)
    I'm working with Data Pump (EXPDP and IMPDP), and I'm importing some tables from HR... I'm able to import those tables without any problem, but I have to exclude the triggers and procedures. I'm using the parameter EXCLUDE=TRIGGER,PROCEDURE and it shows me an error message, because it's unable to find an object called "PROCEDURE"... if I remove EXCLUDE=PROCEDURE then it shows no error message... can someone help me?
    This is the script I'm using:
    IMPDP system/mypassword directory=DATA_PUMP REMAP_SCHEMA=HR:EXAMEN include=table:\"in('DEPARTAMENTS','EMPLOYEES','JOB_HISTORY','JOBS')\" EXCLUDE=TRIGGER,PROCEDURE DUMPFILE=IMP_HR.DMP LOGFILE=IMP_HR_.LOG

    Hi,
    Use:
    EXCLUDE=TRIGGER
    EXCLUDE=PROCEDURE
    Always read the docs first:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1010670
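    For instance, a minimal parameter-file sketch (imp_hr.par is a hypothetical file name; inside a parfile the quotes need no shell escaping):
    REMAP_SCHEMA=HR:EXAMEN
    INCLUDE=TABLE:"IN('DEPARTAMENTS','EMPLOYEES','JOB_HISTORY','JOBS')"
    EXCLUDE=TRIGGER
    EXCLUDE=PROCEDURE
    DUMPFILE=IMP_HR.DMP
    LOGFILE=IMP_HR_.LOG
    Then run: impdp system/mypassword directory=DATA_PUMP parfile=imp_hr.par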

  • RAC 11GR2 + expdp/impdp

    Hi all,
    I have installed RAC 11gR2 on Red Hat Linux.
    I have to migrate data from a 10.1 database to the 11.2 database.
    Could you tell me which versions of expdp and impdp I should use?
    Thanks.

    You use the 10g datapump expdp on the 10g DB (source) and the 11g impdp on the 11g DB (target).
    Kind regards
    Uwe Hesse
    http://uhesse.wordpress.com

  • Log file's format in expdp/impdp

    Hi all,
    I need to set the log file format for the expdp/impdp utility. I have this format for my dump file - filename=<name>%U.dmp - which generates unique names for dump files. How can I generate unique names for log files? It'd be better if the dump file name and the log file name were the same.
    Regards,
    rustam_tj

    Hi Srini, thanks for the advice.
    I read the doc which you suggested. The only thing I found there is:
    "Log files and SQL files overwrite previously existing files."
    So I can't keep previous log files?
    My OS is HP-UX (11.3) and database version is 10.2.0.4
    Regards,
    rustam
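    The %U substitution applies only to dump files, so one option is a minimal shell sketch (assuming a POSIX shell) that stamps each run's log name:
    $ LOG=exp_$(date +%Y%m%d_%H%M%S).log
    $ expdp user/pass directory=MY_DIR dumpfile=exp_%U.dmp logfile=$LOG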

  • Expdp/impdp :: Constraints in Parent child relationship

    Hi ,
    I have one table parent1, and tables child1, child2 and child3 have foreign keys created against parent1.
    Now I want to do some deletion on parent1. But since the number of records on parent1 is very high, we are going with expdp/impdp using the QUERY option.
    I have taken a query-level expdp of parent1. Then I dropped parent1 with the CASCADE CONSTRAINTS option, and all the foreign keys on child1, 2 and 3 which reference parent1 were automatically dropped.
    Now if I run impdp with the query-level dump file, will these foreign key constraints get created automatically on child1, 2 and 3, or do I need to manually re-create them?
    Regards,
    Anu

    Hi,
    The FKs will not be in the dump file - see the example below, where I generate a sqlfile following pretty much the process you would have done. This is because the FK is part of the DDL for the child table, not the parent.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    OPS$ORACLE@EMZA3>create table a (col1 number);
    Table created.
    OPS$ORACLE@EMZA3>alter table a add primary key (col1);
    Table altered.
    OPS$ORACLE@EMZA3>create table b (col1 number);
    Table created.
    OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
    Table altered.
    OPS$ORACLE@EMZA3>
    EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
    Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04":  /******** include=TABLE:"='A'"
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    . . exported "OPS$ORACLE"."A"                                0 KB       0 rows
    Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
    Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
      /oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
    Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
    Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01":  /******** sqlfile=a.sql
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
    -- CONNECT OPS$ORACLE
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: SCHEMA_EXPORT/TABLE/TABLE
    CREATE TABLE "OPS$ORACLE"."A"
       (    "COL1" NUMBER
       ) SEGMENT CREATION IMMEDIATE
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
    NOCOMPRESS LOGGING
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM" ;
    -- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
    -- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    DECLARE I_N VARCHAR2(60);
      I_O VARCHAR2(60);
      NV VARCHAR2(1);
      c DBMS_METADATA.T_VAR_COLL;
      df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
    stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
    BEGIN
      DELETE FROM "SYS"."IMPDP_STATS";
      c(1) := 'COL1';
      DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
      EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
      DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
      DELETE FROM "SYS"."IMPDP_STATS";
    END;
    /
    Regards,
    Harry
    http://dbaharrison.blogspot.com/
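    So the child FKs have to be re-created manually after the import. A minimal sketch (hypothetical constraint and column names):
    ALTER TABLE child1 ADD CONSTRAINT child1_parent1_fk FOREIGN KEY (parent_id) REFERENCES parent1 (parent_id);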

  • Trigger body isn't remapped during IMPDP with REMAP_SCHEMA in Oracle 10g

    Hello,
    we are facing the following problem:
    During an IMPDP we are remapping the schema. The schema includes triggers. The trigger bodies use normal syntax with no reference to the schema/user.
    If we start the import with Data Pump using the REMAP_SCHEMA option, the triggers are remapped to the new schema, but the trigger bodies still point to the old schema.
    For example:
    CREATE OR REPLACE TRIGGER "OLD_SCHEMA".NAME_OF_TRIGGER
    BEFORE INSERT ON "OLD_SCHEMA"."TABLE_NAME" REFERENCING OLD AS OLD NEW AS NEW FOR EACH ROW BEGIN
    END;
    After using the REMAP_SCHEMA option:
    CREATE OR REPLACE TRIGGER "NEW_SCHEMA".NAME_OF_TRIGGER
    BEFORE INSERT ON "OLD_SCHEMA"."TABLE_NAME" REFERENCING OLD AS OLD NEW AS NEW FOR EACH ROW BEGIN
    END;
    So the trigger body in the NEW_SCHEMA references the OLD_SCHEMA and the trigger doesn't work. We wrote the trigger without referencing a schema, so the OLD_SCHEMA resp. NEW_SCHEMA prefix must have been added by Data Pump.
    The question: is there a way to remap the triggers correctly into the new schema, or is this behavior "normal" for Oracle 10g Data Pump?
    We are using Oracle 10g Database 10.2.0.4 with the latest Patches on Windows Server 2003 SP2.
    Thank you for you attention.

    Hello,
    I had the same problem and you can't fix it by changing a parameter of impdp. You really need to change the source of those triggers so they don't use hard-coded schema references.
    The solution is to change the source code of those triggers, or to use the SQLFILE parameter of impdp and recreate the triggers yourself after the impdp has finished.
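    A minimal sketch of that manual recreate (assuming your own dump file, directory, and the schema names from the example above):
    $ impdp system/pwd directory=DATA_PUMP_DIR dumpfile=exp.dmp include=TRIGGER sqlfile=triggers.sql
    $ sed 's/"OLD_SCHEMA"/"NEW_SCHEMA"/g' triggers.sql > triggers_fixed.sql
    Then run triggers_fixed.sql against the target database.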
    For more info check Metalink note: 750783.1
    I hope this helps.
    Regards,
    Michiel.

  • Expdp fails and creates table SYS_EXPORT_SCHEMA_20

    Hi Gurus
    I am using Oracle 10.2.0.3 in AIX env
    My database size is around 1600 GB. Sometimes my expdp fails and leaves tables like SYS_EXPORT_SCHEMA_20, SYS_EXPORT_SCHEMA_05 behind. As I run expdp as the SYSTEM user, I notice that it creates this type of table in the SYSTEM tablespace. Each time this consumes around 5 GB of space. Now my SYSTEM tablespace size is 68 GB.
    Can I drop those tables? If I drop these tables, will it create any problem? This is my production database.
    Regards
    Rabi

    user13134974 wrote:
    Hi Gurus
    I am using Oracle 10.2.0.3 in AIX env
    Regards
    Rabi
    Those tables you were mentioning, SYS_EXPORT_SCHEMA_nn, are the Data Pump master tables used for Data Pump jobs; their purpose is to hold the info about the job details.
    Once the job has finished the table is dropped, but in case of a job failure the table remains, so every new DP job must create a new SYS_EXPORT_SCHEMA_nn table with nn incremented by 1 from the name of the last master table left behind by a failed job.
    Cleaning up those tables can be done with the DBMS_DATAPUMP STOP_JOB procedure; check the docs for the details:
    http://download.oracle.com/docs/cd/B12037_01/server.101/b10825/dp_export.htm
    You can also visit Oracle Support to see examples and instructions for cleaning your DB of those dismissed master tables:
    How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS? [ID 336014.1]
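    A minimal cleanup sketch along the lines of that note (check first that the job really is dead; the table name below is hypothetical):
    SELECT owner_name, job_name, state FROM dba_datapump_jobs;
    -- if the job shows NOT RUNNING and you do not want to restart it, drop its master table:
    DROP TABLE system.sys_export_schema_20;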

  • XE11: expdp/impdp

    Hello,
    i would like to use expdp and impdp.
    After installing XE11 on Linux, I unlocked the HR account:
    ALTER USER hr ACCOUNT UNLOCK IDENTIFIED BY hr;
    and ran expdp:
    expdp hr/hr DUMPFILE=hrdump.dmp DIRECTORY=DATA_PUMP_DIR SCHEMAS=HR
    LOGFILE=hrdump.log
    This quits with:
    ORA-39006: internal error
    ORA-39213: Metadata processing is not available
    The alert_XE.log reported:
    ORA-12012: error on auto execute of job "SYS"."BSLN_MAINTAIN_STATS_JOB"
    ORA-06550: line 1, column 807:
    PLS-00201: identifier 'DBSNMP.BSLN_INTERNAL' must be declared
    I read some entries here and did:
    sqlplus sys/******* as sysdba @?/rdbms/admin/catnsnmp.sql
    sqlplus sys/******* as sysdba @?/rdbms/admin/catsnmp.sql
    sqlplus sys/******* as sysdba @?/rdbms/admin/catdpb.sql
    sqlplus sys/******* as sysdba @?/rdbms/admin/utlrp.sql
    I restarted the database, but the result of expdp was the same:
    ORA-39006: internal error
    ORA-39213: Metadata processing is not available
    What's wrong with that? What can I do?
    Do I need "BSLN_MAINTAIN_STATS_JOB", or can it be set to FALSE?
    I created the database today on 24.07, and the next run for "BSLN_MAINTAIN_STATS_JOB"
    is on 29.07?
    In the Windows version it works correctly, but not in the Linux version.
    Best regards

    Hello gentlemen,
    back to the origin:
    'Is expdp/impdp working on XE11?'
    The answer is simply yes.
    After a few days I found out that:
    - no stylesheet installation is required for this operation
    - a simple installation is enough
    And i did:
    SHELL:
    mkdir /u01/app > /dev/null 2>&1
    mkdir /u01/app/oracle > /dev/null 2>&1
    groupadd dba
    useradd -g dba -d /u01/app/oracle oracle > /dev/null 2>&1
    chown -R oracle:dba /u01/app/oracle
    rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
    /etc/init.d/./oracle-xe configure responseFile=xe.rsp
    ./sqlplus sys/********* as sysdba @/u01/app/oracle/product/11.2.0/xe/rdbms/admin/utlfile.sql
    SQLPLUS:
    ALTER USER hr IDENTIFIED BY hr ACCOUNT UNLOCK;
    GRANT CONNECT, RESOURCE to hr;
    GRANT read, write on DIRECTORY DATA_PUMP_DIR TO hr;
    SHELL:
    expdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_exp.log
    impdp hr/hr dumpfile=hr.dmp directory=DATA_PUMP_DIR schemas=hr logfile=hr_imp.log
    This was carried out on:
    OEL5.8, OEL6.3, openSUSE 11.4
    For explanation:
    We did the stylesheet installation for XE10 to get the expdp/impdp functionality.
    Thanks for your assistance
    Best regards
    Achim
    Edited by: oelk on 16.08.2012 10:20

  • Sequence of tables in from clause and sequence of "where clause" conditions

    Does the sequence of tables in the "From Clause" and the sequence of "where clause" conditions matter in 10g for performance?
    Edited by: user6763079 on Jun 1, 2011 3:33 AM

    user6763079 wrote:
    Does the sequence of tables in the "From Clause" and the sequence of "where clause" conditions matter in 10g for performance?
    In general it does not matter.
    It could matter if the Rule Based Optimizer (RBO) is used. However, the RBO is only used if enforced by a hint or if no table statistics have been collected. Starting from 10g, table statistics are collected automatically by a regular database job, so in general the CBO is used.
    The CBO will consider different access paths. If the number of tables is low enough, then all possible combinations are considered and the order does not make any difference.
    Edited by: Sven W. on Jun 1, 2011 4:00 PM

  • Error while using trigger and sequences

    hi friends
    I am learning Oracle now. I have run into a difficulty while working with triggers and sequences. If anybody knows this problem, please help me.
    I created a table with the specification given below:
    CREATE TABLE TESTSEQ (
    idno NUMERIC(10),
    data1 VARCHAR2(50)
    );
    and I created a sequence named seqcol1:
    CREATE SEQUENCE seqcol1
    MINVALUE 1
    MAXVALUE 100
    START WITH 1
    INCREMENT BY 1
    CACHE 20;
    I also created a trigger:
    CREATE OR REPLACE TRIGGER trigADD2
    BEFORE
    INSERT ON TESTSEQ
    FOR EACH ROW
    BEGIN
    SELECT seqcol.NEXTVAL INTO idno;
    END;
    My plan is to have idno filled automatically from the sequence while inserting a value into the data1 column.
    While doing the insert operation
    INSERT INTO TESTSEQ (data1) VALUES ('ram1');
    an error is shown:
    SCOTT.trigADD2 is invalid and failed re-validation
    If anybody can help me, please do.
    thanks and regards

    ops$oskar@test9i$ create table t (n number, s varchar2(1));
    Table created.
    ops$oskar@test9i$ create sequence s;
    Sequence created.
    ops$oskar@test9i$ insert into t values (s.nextval,'X');
    1 row created.
    ops$oskar@test9i$ insert into t values (s.nextval,'Y');
    1 row created.
    ops$oskar@test9i$ insert into t values (s.nextval,'Z');
    1 row created.
    ops$oskar@test9i$ select * from t;
             N S
             1 X
             2 Y
             3 Z
    What do you need a trigger for?
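    For completeness, a minimal corrected version of the posted trigger (the sequence name is seqcol1, not seqcol; the SELECT needs FROM dual; and the value must go into :NEW.idno):
    CREATE OR REPLACE TRIGGER trigADD2
    BEFORE INSERT ON TESTSEQ
    FOR EACH ROW
    BEGIN
      SELECT seqcol1.NEXTVAL INTO :NEW.idno FROM dual;
    END;
    /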
