Looking for an INSERT statement to insert around 500 rows into a table

Hi Gurus,
Your help is greatly appreciated!
I need a query that inserts into the mer_spec_feature table as described below. Please suggest something other than the UTL_FILE input approach, because the data differs between regions, and I am hoping it works out with an INSERT ... SELECT query.
1) I have to create a script for inserting records into mer_spec_feature for appl_id 'VXRR' with feature ID 786, and I have to exclude TIDs which already have the 786 feature added for this appl_id, as in the select query below.
2) There are 509 such TIDs that do not have the 786 feature with appl_id 'VXRR' in the mer_spec_feature table.
3)
select distinct terminal_id, appl_id from mer_spec_feature
where appl_id = 'VXRR'
minus
select distinct terminal_id, appl_id from mer_spec_feature
where appl_id = 'VXRR'
and terminal_feature_id = 786
desc mer_spec_feature:
terminal_id          -- VARCHAR2(9)
appl_id              -- VARCHAR2(10)
terminal_feature_id  -- NUMBER(8)

Sample rows:
TERMINAL_FEATURE_ID  TERMINAL_APPL_ID  TERMINAL_ID
299                  405T330a          1004665
786                  VXRR              1004665
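A minimal sketch of the requested statement, assuming the column names from the DESCRIBE above: the step-3 MINUS query becomes the row source and 786 is supplied as a literal:

    INSERT INTO mer_spec_feature (terminal_id, appl_id, terminal_feature_id)
    SELECT terminal_id, appl_id, 786
    FROM  ( SELECT DISTINCT terminal_id, appl_id
            FROM   mer_spec_feature
            WHERE  appl_id = 'VXRR'
            MINUS
            SELECT DISTINCT terminal_id, appl_id
            FROM   mer_spec_feature
            WHERE  appl_id = 'VXRR'
            AND    terminal_feature_id = 786 );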

@Frank Kulash !!
Thanks as always for your replies!
I have tried the SQL above, and I am running into a PK constraint violation after inserting 232 rows.
A terminal_id can have 100 feature_ids for one appl_id,
because the table has a PK constraint on (TERMINAL_FEATURE_ID, APPL_ID, TERMINAL_ID):

TERMINAL_FEATURE_ID  APPL_ID   TERMINAL_ID
299                  405T330a  1004665
786                  VXRR      1004665

Can you please suggest any other option?
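One rerun-safe shape, sketched with the same assumed column names: a NOT EXISTS instead of MINUS skips every TID that already has a 786 row at the moment of the insert, so a partially committed earlier run cannot cause ORA-00001 on a retry:

    INSERT INTO mer_spec_feature (terminal_id, appl_id, terminal_feature_id)
    SELECT DISTINCT m.terminal_id, m.appl_id, 786
    FROM   mer_spec_feature m
    WHERE  m.appl_id = 'VXRR'
    AND    NOT EXISTS ( SELECT 1
                        FROM   mer_spec_feature f
                        WHERE  f.terminal_id         = m.terminal_id
                        AND    f.appl_id             = m.appl_id
                        AND    f.terminal_feature_id = 786 );

The DISTINCT guarantees each (terminal_id, appl_id, 786) triple appears only once in the insert set, which is exactly what the PK on (TERMINAL_FEATURE_ID, APPL_ID, TERMINAL_ID) requires.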

Similar Messages

  • How to Improve inserts into Template table for Query Processing

    Hi guys!
I need your advice. How can I improve inserts into a template table (/BI0/0P00000010, for example)? In my scenario I am forced to load data from cube C_X to cubes C_1, C_2 and C_3.
To achieve this I created transformations and a DTP process with delta upload to each of the cubes (C_1, C_2, C_3), and that process takes about 3 hours! (It doesn't matter whether it is 1,000 or 100,000 records.) But when I tried to load data with a full update (with a filter), the process takes about 1 minute.
I traced the process and saw that inserting into the template table (which is created each time the process starts) takes the longest time. How can I improve it?

    Hi Mahendar,
It will require some effort to investigate, so I propose opening a support ticket with Microsoft (through the portal).
Your first question interests me, and I am not sure if you are doing proactive or reactive real-time reporting. You can try HBase for storing and retrieving data; HBase is known for good read and write speed. If you are not comfortable writing HBase syntax, you can add an abstraction layer, i.e. Hive. It might take a little more time when you query HBase from Hive.
If you are doing real-time analytics, then you can choose between Storm and Microsoft Azure Stream Analytics.
Thanks and Regards,
Sudhir Rawat

  • Creating a button for inserting into a table

What I can't do, because I've never needed to, is to create in a page (where I have an Interactive Report) a button that, when clicked, performs a simple SQL statement (for example an insert into a table using some page item values).
    How can I do it?
    Thanks

    Here are some of the steps:
    1) Create the button.
    2) Then right click on the button name and select Create Dynamic Action
    3) Give the DA a name, click next
    4) On the "When" Step, Event should already be "Click" and the button name should be filled in
5) For Condition, you would have a condition for when you want the button displayed.  Say, if your page items have a certain range of values, then display this button.  No condition means the button is always displayed.
    6) For the "True Action" Step, for Action select "Execute PL/SQL Code"
        [There are other ways to do this, but this is straight-forward for me.]
7) Then in the PL/SQL Code block enter, say:
BEGIN
  INSERT INTO EMP (EMPNO, ENAME, HIREDATE)
  VALUES (:P6_EMPNO, 'MARK1970', :P6_HIREDATE);
  COMMIT;
END;
-- You would likely make all the values page item variables
8) Here's the part I'm not 100% sure of.
    For "Page Items to Submit", name the page items used in YOUR QUERY -- in my example these are P6_EMPNO,P6_HIREDATE (note: no use of & or : or . here).
    Start by putting nothing in "Page Items to Return".  If that fails, then try the same page items P6_EMPNO,P6_HIREDATE there as well.
    [ I'm too lazy to go run this down at the moment.]
    9) I think that's it.
    Howard

  • Can select, but cannot insert into oracle tables using php.

I am having trouble inserting data into tables in Oracle using PHP. I can insert into the tables as baninst1, but as nothing else. I can't even insert as the tables' owner. Every time I try, I get an ORA-00942 error: table or view does not exist. However, I can run a select statement just fine. So far, baninst1 is the only user that can actually do inserts. I've triple-checked all the table grants and everything is fine. Also, I can copy/paste the exact insert statement into SQL*Plus and it works with the correct user; it doesn't have to be baninst1. I've tried everything I can think of: putting the table name in quotes, adding the schema name to the table, checking public synonyms, but everything appears to be OK. I would greatly appreciate any suggestions others may have.
I'm using PHP 5.2.3 with an Oracle 9i DB.

Here is the code that doesn't work. It works only as baninst1; no other username will work, and all I do is change my_username/my_password to something different. This is one example function where I try to do an insert, but it doesn't work on any of the tables I try.
    Thanks for looking.
class Oracle {
     private $oracleUser = "my_username";
     private $oraclePassword = "my_password";
     private $oracleDB = "MY_DB";
     private $c = null;

     function Connect() {
          if ( !($this->c = ocilogon( $this->oracleUser, $this->oraclePassword, $this->oracleDB ) ) )
               return false;
          return true;
     }

     function AddNewTest($testName, $course, $section, $term, $maxAttempts, $length, $startDate, $startTime, $endDate, $endTime) {
          $result = "";
          if( $this->c == null )
               $this->Connect();
          $splitIndex = strpos($course, " ");
          $subjectCode = substr($course, 0, $splitIndex);
          $courseNumber = substr($course, $splitIndex + 1);
          $insertStatment = "insert into szbtestinfo(szbtestinfo_test_name,
                                  szbtestinfo_course_numb,
                                  szbtestinfo_section,
                                  szbtestinfo_max_attempts,
                                  szbtestinfo_length_minutes,
                                  szbtestinfo_start,
                                  szbtestinfo_end,
                                  szbtestinfo_id,
                                  szbtestinfo_subj_code,
                                  szbtestinfo_eff_term)
                             values( '" . $testName .
               "', '" . $courseNumber .
               "', '" . $section .
               "', " . $maxAttempts .
               ", " . $length .
               ", to_date('" . $startDate . " " . $startTime . "', 'MM/DD/YYYY HH24:MI')" .
               ", to_date('" . $endDate . " " . $endTime . "', 'MM/DD/YYYY HH24:MI')" .
               ", szsnexttest.nextval" .
               ", '" . $subjectCode .
               "', '" . $term . "')";
          //return $insertStatment;
          $s = OCIParse($this->c, $insertStatment);
          $result = OCIExecute($s);   // returns false on failure; OCI8 does not throw here
          return $result;
     }
}
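Since the same statement works in SQL*Plus, one hedged diagnostic angle (the table name SZBTESTINFO is taken from the code above) is to check, connected as the failing user, how the name resolves and which privileges that account actually received:

    -- run these connected as the user the PHP code logs in as
    SELECT table_name FROM user_tables WHERE table_name = 'SZBTESTINFO';

    -- privileges on the table granted to this user
    SELECT owner, privilege FROM user_tab_privs_recd WHERE table_name = 'SZBTESTINFO';

    -- synonyms that could be resolving the name to a different schema
    SELECT owner, table_owner, table_name FROM all_synonyms WHERE synonym_name = 'SZBTESTINFO';

A select that works while an insert fails on the same name often means the name is resolving to a view or synonym rather than the base table, which the queries above should expose.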

  • Oracle 10g performance degrades while concurrent inserts into a table

    Hello Team,
I am trying to insert into a single table via multiple threads. Some of these threads perform reasonably well, but some take much longer, and as time goes on performance drastically degrades (down by a factor of 500 to 600). In the AWR report I see quite a huge number of buffer gets, and I am not sure how to reduce them. If I run on a single thread, the operation's performance is consistent.
    I tried quite a few options like
    1. Increasing SGA Memory
    2. Moving redo logs to another disk drive.
3. Trying it on an empty table
4. Trying it on a table which has huge data (4 million rows)
5. I have even tried partitioning the table with a HASH algorithm
Note: each thread is pumping an equal amount of data (say 25K rows).
Can anybody suggest a clue as to what the culprit could be here?
    Thanks in Advance
    Satish Kumar Ballepu

user11150696 wrote:
Can you please guide me how I do that; I am not aware of how to generate an explain plan for that query.

Since you have the trace file already (and I don't mean the tkprof output), you could do the following:
Read the trace file to find the statement you're interested in - the line above it will be a "PARSING" line, and will include a reference to the statement hash_value, looking like 'hv=3838377475845'.
Use the hash_value to query v$sql to get the sql_id and child_number;
Use the sql_id and child_number in a call to dbms_xplan.display_cursor:
PARSING IN CURSOR #7 len=68 dep=0 uid=55 oct=3 lid=55 tim=448839952404 hv=3413100263 ad='2f6ede48'
select ... etc.  (the statement I want the plan for)
SQL> select sql_id, child_number from v$sql where hash_value = 3413100263;
SQL_ID        CHILD_NUMBER
053tyaz5qzjr7            0
SQL> select * from table(dbms_xplan.display_cursor('053tyaz5qzjr7',0));
    PLAN_TABLE_OUTPUT
    SQL_ID  053tyaz5qzjr7, child number 0
    select  /*+ use_concat */  small_vc from  t1 where  n1 = 1 or n2 = 2
    Plan hash value: 82564388
    | Id  | Operation                    | Name  | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT             |       |       |       |     4 |
    |   1 |  CONCATENATION               |       |       |       |       |
    |   2 |   TABLE ACCESS BY INDEX ROWID| T1    |    10 |   190 |     2 |
    |*  3 |    INDEX RANGE SCAN          | T1_N2 |    10 |       |     1 |
    |*  4 |   TABLE ACCESS BY INDEX ROWID| T1    |    10 |   190 |     2 |
    |*  5 |    INDEX RANGE SCAN          | T1_N1 |    10 |       |     1 |
    Predicate Information (identified by operation id):
       3 - access("N2"=2)
       4 - filter(LNNVL("N2"=2))
       5 - access("N1"=1)
    Note
   - cpu costing is off (consider enabling it)

Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "For every expert there is an equal and opposite expert."
    Arthur C. Clarke
To post code, statspack/AWR reports, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
So here is my world. Central to our data monitoring system is an Oracle database running Oracle Standard Edition One licensing (please don't laugh... I understand it is comical).
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
Now what we are observing about inserts into this table:
- Inserts are much slower with a "wider" cardinality of the sourceid being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
- Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle require roughly 10 GB of extra RAM per quarter to six months; we're at about 50 GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
Based on our hardware, it's going to be about a million-dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
What I am looking for guidance/help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh, empty system and our current production system. I also want to limit Oracle's 10 GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe 1000s per quarter, out of 2 million).
Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct path insert /*+ append */ suggested by one of the contributors to this thread. A direct path load will not re-use any free space left by the deletes. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index(create time)
Your delete logic (based on arrival time) will smash your indexes, as you are always deleting from the left hand side of the index; it means you will end up with what we call a right hand index - in other words, the scattering of the index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:

    ALTER INDEX indexname COALESCE;

This coalesce should be investigated to be done on a regular basis (maybe after each 80% delete). You seem to have several sourceid for one timestamp. If the answer is yes, you should think about compressing this index:

    CREATE INDEX indexname ON tablename (sourceid, timestamp) COMPRESS;

or

    ALTER INDEX indexname REBUILD COMPRESS;

You will do it only once. Your index will have a smaller size and may be more efficient than it is now. The index compression will add extra CPU work during an insert, but it might help improve the overall insert process.
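The compress decision can be sized in advance: ANALYZE INDEX ... VALIDATE STRUCTURE populates the session's INDEX_STATS view with the prefix length Oracle considers optimal and the space saving it predicts. A hedged sketch (indexname is a placeholder; note that VALIDATE STRUCTURE without the ONLINE keyword blocks DML on the underlying table while it runs, which matters on a table this busy):

    ANALYZE INDEX indexname VALIDATE STRUCTURE;

    -- INDEX_STATS holds a single row for the index validated last in this session
    SELECT name,
           height,
           opt_cmpr_count,    -- optimal number of leading columns to compress
           opt_cmpr_pctsave   -- predicted % space saving at that prefix length
    FROM   index_stats;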
    Best Regards
    Mohamed Houri

  • Link and insert into 2 tables

    Hi Everyone,
I am building an application that contains information about helpdesk calls. I am using 2 tables:
    Table 1 contains tracking info- TRK_CALLS
    ID -primary key
    USER_
    ASSIGNED_TO
    PROBLEM
    SOLUTION
    STATUS
    EDIT
    Table 2 contain date and time info - TRKCALLS_TIME
    ID - primary key
    CALL_ID - same number as ID in table 1
    DATE_
    TIME
I have taken the advice that Denes Kubicek gave another poster, created a workspace at apex.oracle.com, and placed my app in there for others to look at:
    workspace: kjwebb
    username: [email protected]
    password: gtmuc
    application: calltracking2
I have a report called create/edit call tracking where I can either edit or create an entry in TRK_CALLS. Clicking Create takes me to a form; after the info is entered I have a Create button that assigns the PK and inserts the info into TRK_CALLS. I then have to click the Edit Call Time button to enter info on a form that inserts into the TRKCALLS_TIME table. I would like to link these tables somehow so that when I go to the Edit Call Time form, the Call ID is populated with the PK ID from the TRK_CALLS table.
It would be easier to insert this info all on one page, but I worked on that for a long time before giving up because I could not get anything to insert into the tables, so I have taken this route.
    The basic desired outcome is to tie the tables with a PK, ID in table 1 and CALL_ID in table 2. So that they correspond and displayed on the report page.
    Please help in any way you can and make changes to my app.
    I would not be asking for help unless I have reached the ends of my apex knowledge.
    Thanks and please let me know if there are any questions,
    Kirk

I can imagine it is pretty obscure when your knowledge of PL/SQL is not (yet) so big.
The statements I wrote are meant exactly for your situation.
OK, here we go:
First you have created a view in the Object Browser. Suppose it is called trkcalls_view.
Then you go to SQL Workshop > SQL Commands.
You cut the next statement and paste it into the upper white part of the screen, just under the autocommit checkbox. Replace the sequence1 and sequence2 references by the real names of the sequences that are used to populate the IDs of the two tables.
You say Run and the trigger is created.
A trigger on the view is created. Creation of such a trigger is not possible in the Object Browser, so I understand your confusion. This trigger fires when an insert into the view is performed. As you can see in the code, it creates separate insert statements for both tables.
CREATE OR REPLACE TRIGGER bi_trkcalls_view
INSTEAD OF INSERT ON trkcalls_view
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
DECLARE
  v_id  number;
  v_id2 number;
BEGIN
  select sequence1.nextval into v_id from dual;
  select sequence2.nextval into v_id2 from dual;
  INSERT INTO TRK_CALLS
  ( ID
  , USER_ASSIGNED_TO
  , PROBLEM
  , SOLUTION
  , STATUS
  , EDIT
  )
  VALUES
  ( v_id
  , :new.user_assigned_to
  , :new.problem
  , :new.solution
  , :new.status
  , :new.edit
  );
  insert into TRKCALLS_TIME
  ( id
  , call_id
  , date_time
  )
  values
  ( v_id2
  , v_id
  , :new.date_time
  );
END;

Good luck,
    DickDral

  • Problem while inserting into a table which has ManyToOne relation

Problem while inserting into a table (Files) which has a ManyToOne relation with another table (Folders), involving an attribute that is in both the primary key and the foreign key, in JPA 1.0.
Relevant code
    Entities:
    public class Files implements Serializable {
    @EmbeddedId
    protected FilesPK filesPK;
    private String filename;
    @JoinColumns({
    @JoinColumn(name = "folder_id", referencedColumnName = "folder_id"),
    @JoinColumn(name = "uid", referencedColumnName = "uid", insertable = false, updatable = false)})
    @ManyToOne(optional = false)
    private Folders folders;
    public class FilesPK implements Serializable {
    private int fileId;
    private int uid;
    public class Folders implements Serializable {
    @EmbeddedId
    protected FoldersPK foldersPK;
    private String folderName;
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "folders")
    private Collection<Files> filesCollection;
    @JoinColumn(name = "uid", referencedColumnName = "uid", insertable = false, updatable = false)
    @ManyToOne(optional = false)
    private Users users;
    public class FoldersPK implements Serializable {
    private int folderId;
    private int uid;
    public class Users implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer uid;
    private String username;
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "users")
    private Collection<Folders> foldersCollection;
I left out the @Basic & @Column annotations for the sake of brevity.
    EJB method
    public void insertFile(String fileName, int folderID, int uid){
    FilesPK pk = new FilesPK();
    pk.setUid(uid);
    Files file = new Files();
    file.setFilename(fileName);
    file.setFilesPK(pk);
    FoldersPK folderPk = new FoldersPK(folderID, uid);
     // My understanding is that it should automatically handle folderId in the files table,
     // but it does not…
    file.setFolders(em.find(Folders.class, folderPk));
    em.persist(file);
It is giving this error:
Internal Exception: java.sql.SQLException: Field 'folderid' doesn't have a default value
Error Code: 1364
Call: INSERT INTO files (filename, uid, fileid) VALUES (?, ?, ?)
        bind => [hello.txt, 1, 0]
It is not even considering folderId while inserting into the db.
However, it works fine when I add a folderId variable to the Files entity and change insertFile like this:
    public void insertFile(String fileName, int folderID, int uid){
    FilesPK pk = new FilesPK();
    pk.setUid(uid);
    Files file = new Files();
    file.setFilename(fileName);
    file.setFilesPK(pk);
file.setFolderId(folderID); // added line
    FoldersPK folderPk = new FoldersPK(folderID, uid);
    file.setFolders(em.find(Folders.class, folderPk));
    em.persist(file);
My question is: is this behavior expected, or is it a bug?
Is it required to add a "column_name" variable separately even when the entity has a ManyToOne reference to the foreign entity?
I used MySQL 5.1 for the database, then generated the entities using TopLink, JPA 1.0, GlassFish v2.1.
I've also tested this with EclipseLink and got the same error.
    Please provide some pointers.
    Thanks

    Hello,
What version of EclipseLink did you try? This looks like bug https://bugs.eclipse.org/bugs/show_bug.cgi?id=280436, which was fixed in EclipseLink 2.0, so please try a later version.
    You can also try working around the problem by making both fields writable through the reference mapping.
    Best Regards,
    Chris

  • How to insert into 2 tables from the same page (with one button  link)

    Hi,
    I have the following 2 tables....
    Employees
    emp_id number not null
    name varchar2(30) not null
    email varchar2(50)
    hire_date date
    dept_id number
    PK = emp_id
    FK = dept_id
    Notes
    note_id number not null
    added_on date not null
    added_by varchar2(30) not null
    note varchar2(4000)
    emp_id number not null
    PK = note_id
    FK = emp_id
I want to do an insert into both tables via the application and also via the same page (with one button link). I have made a form to add an employee with an Add button - adding an employee is no problem.
Now, on the same page, I have added an HTML text area in another region, where the user can write a note. But how do I get the note to insert into the Notes table when the user clicks the Add button?
In other words, when the user clicks Add, the employee information should be inserted into the Employees table and the note should be inserted into the Notes table.
    How do I go about doing this?
    Thanks.
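For what it's worth, a sketch of how a single after-submit process could do both inserts in one PL/SQL block - the P1_* item names and the two sequence names are assumptions, not objects from this app - using RETURNING to carry the new emp_id into the second insert:

    DECLARE
       v_emp_id  employees.emp_id%TYPE;
    BEGIN
       INSERT INTO employees (emp_id, name, email, hire_date, dept_id)
       VALUES (employees_seq.NEXTVAL, :P1_NAME, :P1_EMAIL, SYSDATE, :P1_DEPT_ID)
       RETURNING emp_id INTO v_emp_id;

       INSERT INTO notes (note_id, added_on, added_by, note, emp_id)
       VALUES (notes_seq.NEXTVAL, SYSDATE, :APP_USER, :P1_NOTE, v_emp_id);
    END;

Because the second insert reuses v_emp_id, the FK from notes.emp_id to employees.emp_id lines up without a second lookup.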

    Hi,
    These are my After Submit Processes...
    After Submit
    30     Process Row of NOTES     Automatic Row Processing (DML)     Unconditional
    30     Process Row of EMPLOYEES     Automatic Row Processing (DML)     Unconditional
40     reset page     Clear Cache for all Items on Pages (PageID,PageID,PageID)     Unconditional
    50     Insert into Tables     PL/SQL anonymous block     Conditional
    My pl/sql code is the same as posted earlier.
    Upon inserting data into the forms and clicking the add button, I get this error...
    ORA-06550: line 1, column 102: PL/SQL: ORA-00904: "NOTES": invalid identifier ORA-06550: line 1, column 7: PL/SQL: SQL Statement ignored
         Error      Unable to process row of table EMPLOYEES.
    Is there something wrong with the pl/sql code or is it something else?

  • Insert into two tables, how to insert multiple slave records

    Hi, I have a problem with insert into two tables wizard.
    The wizard works fine and I can add my records, but I need to enter multiple slave table records.
    My database:
    table: paper
    `id_paper` INTEGER(11) NOT NULL AUTO_INCREMENT,
    `make` VARCHAR(20) COLLATE utf8_general_ci NOT NULL DEFAULT '',
    `model` VARCHAR(20) COLLATE utf8_general_ci NOT NULL DEFAULT '',
    `gsm` INTEGER(11) NOT NULL,
    PRIMARY KEY (`id_paper`)
    table: paper_data
    `id_paper_data` INTEGER(11) NOT NULL AUTO_INCREMENT,
    `id_paper` INTEGER(11) NOT NULL,
    `value` DOUBLE(15,3) NOT NULL,
    `nanometer` INTEGER(11) NOT NULL,
    PRIMARY KEY (`id_paper_data`)
    I need to add multiple fields "value" and "nanometer"
    Current form looks like this:
    Make:
    Model:
    Gsm:
    Value:
    nanometer:
    I need it to look like this:
    Make:
    Model:
    Gsm:
    Value:
    nanometer:
    Value:
    nanometer:
    Value:
    nanometer:
    Value:
    nanometer:
    and so on.
    The field "id_paper" in table paper_data needs to get same id for entire transaction. Also how do I set default values for each field "nanometer" on my form the must be different (370,380,390 etc)?
    Thanks.

you can find an answer here: http://209.85.129.132/search?q=cache:PzQj57dsWmQJ:www.experts-exchange.com/Web_Development/Software/Macromedia_Dreamweaver/Q_23713792.html+Insert+Into+Two+Tables+Wizard&cd=3&hl=lt&ct=clnk&gl=lt
    This is a copy of the post:
    Hi experts,
I'm using ADDT to design a page that needs to insert one record into a master ALBUMS table, along with three records into a GENRES table, all linked by the primary, auto-incremented ALBUMS.ALBUM_ID.
I've tried many different ways of combining the Insert into Multiple Tables wizard and the Insert Record wizard with Link Transactions, all with no luck.  Either only the album info gets inserted, or I get an "ALBUM_ID cannot be null" error from MySQL.  Here is the structure of the tables:
    ALBUMS
    ALBUM_ID, INT(11), Primary, Auto_Increment
    alb_name, varchar
    alb_release, YEAR
    USER_ID, int
    alb_image, varchar
    GENRES
    ALBUM_ID, int, NOT NULL
    GENRE_ID, int, NOT NULL
    ID, int, primary, auto-increment
    Many thanks in advance...
    ==========================================================================================
    //remove this line if you want to edit the code by hand
function Trigger_LinkTransactions(&$tNG) {
  global $ins_genres;
  $linkObj = new tNG_LinkedTrans($tNG, $ins_genres);
  $linkObj->setLink("ALBUM_ID");
  return $linkObj->Execute();
}
function Trigger_LinkTransactions2(&$tNG) {
  global $ins_genres2;
  $linkObj = new tNG_LinkedTrans($tNG, $ins_genres2);
  $linkObj->setLink("ALBUM_ID");
  return $linkObj->Execute();
}
function Trigger_LinkTransactions3(&$tNG) {
  global $ins_genres3;
  $linkObj = new tNG_LinkedTrans($tNG, $ins_genres3);
  $linkObj->setLink("ALBUM_ID");
  return $linkObj->Execute();
}
//end Trigger_LinkTransactions trigger
    //-----------------------Different Section---------------------//
    // Make an insert transaction instance
    //Add Record Genre 1
    $ins_genres = new tNG_insert($conn_MySQL);
    $tNGs->addTransaction($ins_genres);
    $ins_genres->registerTrigger("STARTER", "Trigger_Default_Starter", 1, "VALUE", null);
    $ins_genres->registerTrigger("BEFORE", "Trigger_Default_FormValidation", 10, $detailValidation);
    $ins_genres->setTable("genres");
    $ins_genres->addColumn("GENRE_ID", "NUMERIC_TYPE", "POST", "GENRE_ID");
    $ins_genres->addColumn("ALBUM_ID", "NUMERIC_TYPE", "VALUE", "");
    $ins_genres->setPrimaryKey("ID", "NUMERIC_TYPE");
    // Add Record Genre 2
    $ins_genres2 = new tNG_insert($conn_MySQL);
    $tNGs->addTransaction($ins_genres2);
    $ins_genres2->registerTrigger("STARTER", "Trigger_Default_Starter", 1, "VALUE", null);
    $ins_genres2->setTable("genres");
    $ins_genres2->addColumn("GENRE_ID", "NUMERIC_TYPE", "POST", "GENRE_ID2");
    $ins_genres2->addColumn("ALBUM_ID", "NUMERIC_TYPE", "VALUE", "");
    $ins_genres2->setPrimaryKey("ID", "NUMERIC_TYPE");
    // Add Record Genre 3
    $ins_genres3 = new tNG_insert($conn_MySQL);
    $tNGs->addTransaction($ins_genres3);
    $ins_genres3->registerTrigger("STARTER", "Trigger_Default_Starter", 1, "VALUE", null);
    $ins_genres3->setTable("genres");
    $ins_genres3->addColumn("GENRE_ID", "NUMERIC_TYPE", "POST", "GENRE_ID3");
    $ins_genres3->addColumn("ALBUM_ID", "NUMERIC_TYPE", "VALUE", "");
    $ins_genres3->setPrimaryKey("ID", "NUMERIC_TYPE");
    =========================================================================================
    Hi Aaron,
    Nice job!!
    $ins_albums->registerTrigger("AFTER", "Trigger_LinkTransactions2", 98);
    $ins_albums->registerTrigger("AFTER", "Trigger_LinkTransactions3", 98);
    These lines, right? :-( Sorry I forgot to mention that
    Thanks a lot for the grading!

  • Direct Path Insert into temporary table

    Hi,
I made the following experiment (temp_idtable is a temp table with a single column id; V_PstSigLink is a complex view):
14:24:28 SQL> insert into temp_idtable select id from V_PstSigLink;
17084 Zeilen wurden erstellt. (in English: 17084 rows created)
    Statistiken
    24 recursive calls
    52718 db block gets
    38305 consistent gets
    16 physical reads
    4860544 redo size
    629 bytes sent via SQL*Net to client
    553 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    4 sorts (memory)
    0 sorts (disk)
    17084 rows processed
14:24:41 SQL> commit;
Transaktion mit COMMIT abgeschlossen. (in English: commit complete)
14:25:29 SQL> insert /*+ APPEND */ into temp_idtable select id from V_PstSigLink;
17084 Zeilen wurden erstellt. (in English: 17084 rows created)
    Statistiken
    1778 recursive calls
    775 db block gets
    38847 consistent gets
    40 physical reads
    427408 redo size
    613 bytes sent via SQL*Net to client
    565 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    27 sorts (memory)
    0 sorts (disk)
    17084 rows processed
As you can see, with /*+ APPEND */ much less redo is generated.
How can that be?
1. /*+ APPEND */ by itself doesn't mean that no redo should be generated, does it? For that one has to declare the table with NOLOGGING.
2. The docs state that for temporary tables no redo is generated, just undo (and redo only for the undo).
Where does the difference come from?
If I use a direct path insert (APPEND) for a temp table, where will the table be populated - in the buffer cache or directly in a temporary segment on disk?
    Balazs

    Hi John,
I'm a little confused about your statement "Oracle does not cache data blocks until they are read from a table." I wanted to make a little experiment with the KEEP buffer cache to see whether I can 'pin' the content of a temp table. But before getting to pinning I noticed an interesting behavior. Please see this SQL*Plus log:
    SQL> create global temporary table temp_ins(id NUMBER);
    Table created.
    SQL> insert into temp_ins values (1);
    1 row created.
    SQL> insert into temp_ins values (2);
    1 row created.
    SQL> insert into temp_ins values (3);
    1 row created.
    SQL> set autotrace on explain statistics;
    SQL> select * from temp_ins;
    ID
    1
    2
    3
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 TABLE ACCESS (FULL) OF 'TEMP_INS'
    Statistics
    0 recursive calls
    0 db block gets
    4 consistent gets
    0 physical reads
    0 redo size
    416 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    3 rows processed
    SQL>
I observed exactly the opposite of your statement! Oracle apparently cached the content of the temp table before it read from it!! Could you please explain why? Now I'm going to try it with a normal table...
    Balazs

  • Selecting Records from 125 million record table to insert into smaller table

    Oracle 11g
I have a large table of 125 million records - t3_universe.  This table never gets updated or altered once loaded, but holds data that we receive from a lead company.
I need to select records from this large table that fit certain demographic criteria and insert them into a smaller table - T3_Leads - that will be updated with regard to when the lead is mailed and with other relevant information.
My question is: what is the best (fastest) approach to select records from this 125 million record table to insert into the smaller table?  I have tried a variety of things - views, materialized views, direct insert into the smaller table... I think I am probably missing other approaches.
My current attempt has been to create a view using the query that selects the records, as shown below, and then use a second query that inserts into T3_Leads from this view V_Market.  This is very slow.  Can I just use an INSERT INTO T3_Leads with this query?  It did not seem to work with the WITH clause.  My index on the large table is t3_universe_composite and includes zip_code, address_key, household_key.
    CREATE VIEW V_Market  as
    WITH got_pairs    AS  
     SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */  l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state, l.household_key, l.hh_type as l_hh_type, l.address_key, l.narrowband_income, l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type as filler_data, l.p1_seq_no, l.p2_seq_no
         ,      ROW_NUMBER () OVER ( PARTITION BY  l.address_key 
                                      ORDER BY      l.hh_verification_date  DESC 
                      ) AS r_num   
         FROM   t3_universe  e   
         JOIN   t3_universe  l  ON   
                l.address_key  = e.address_key
                AND l.zip_code = e.zip_code
              AND   l.p1_gender != e.p1_gender
                 AND   l.household_key != e.household_key         
                 AND  l.hh_verification_date  >= e.hh_verification_date 
      SELECT  * 
      FROM  got_pairs
      where l_hh_type !=1 and l_hh_type !=2 and filler_data != 1 and filler_data != 2 and zip_code in (select * from M_mansfield_02048) and p1_exact_age BETWEEN 25 and 70 and narrowband_income >= '8' and r_num = 1
    Then
    INSERT INTO T3_leads(zip, zip4, firstname, lastname, address, city, state, household_key, hh_type, address_key, income, relationship_status, gender, age, person_key, filler_data, p1_seq_no, p2_seq_no)
    select zip_code, zip_plus_4, p1_givenname, surname, address, city, state, household_key, l_hh_type, address_key, narrowband_income, p1_ms, p1_gender, p1_exact_age, p1_personkey, filler_data, p1_seq_no, p2_seq_no
    from V_Market;

I had no trouble creating the view exactly as you posted it.  However, be careful here:
and zip_code in (select * from M_mansfield_02048)
You should name the column explicitly rather than select *.  (Do you really have separate tables for different zip codes?)
About the performance, it's hard to tell because you haven't posted anything we can use, like explain plans or traces, but simply encapsulating your query in a view is not likely to make it any faster.
Depending on the size of the subset of rows you're selecting, the /*+ INDEX_FFS */ hint may be doing you more harm than good.
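On the WITH-clause question: Oracle accepts a WITH clause as part of the subquery of an INSERT, so the intermediate view is not required. A sketch assembled from the query and column lists already posted; following the advice above, the hint is dropped and the zip-code subquery names a column explicitly (zip_code there is an assumed column name):

    INSERT INTO t3_leads (zip, zip4, firstname, lastname, address, city, state,
                          household_key, hh_type, address_key, income,
                          relationship_status, gender, age, person_key,
                          filler_data, p1_seq_no, p2_seq_no)
    WITH got_pairs AS (
       SELECT l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address,
              l.city, l.state, l.household_key, l.hh_type AS l_hh_type,
              l.address_key, l.narrowband_income, l.p1_ms, l.p1_gender,
              l.p1_exact_age, l.p1_personkey, e.hh_type AS filler_data,
              l.p1_seq_no, l.p2_seq_no,
              ROW_NUMBER() OVER (PARTITION BY l.address_key
                                 ORDER BY l.hh_verification_date DESC) AS r_num
       FROM   t3_universe e
       JOIN   t3_universe l
              ON  l.address_key           = e.address_key
              AND l.zip_code              = e.zip_code
              AND l.p1_gender            != e.p1_gender
              AND l.household_key        != e.household_key
              AND l.hh_verification_date >= e.hh_verification_date
    )
    SELECT zip_code, zip_plus_4, p1_givenname, surname, address, city, state,
           household_key, l_hh_type, address_key, narrowband_income, p1_ms,
           p1_gender, p1_exact_age, p1_personkey, filler_data, p1_seq_no, p2_seq_no
    FROM   got_pairs
    WHERE  l_hh_type NOT IN (1, 2)
    AND    filler_data NOT IN (1, 2)
    AND    zip_code IN (SELECT zip_code FROM M_mansfield_02048)
    AND    p1_exact_age BETWEEN 25 AND 70
    AND    narrowband_income >= '8'
    AND    r_num = 1;

Whether a direct-path /*+ APPEND */ on this INSERT would help depends on the logging and space-reuse considerations discussed elsewhere in this thread.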

  • Inserting into a table which is created "on the fly" from a trigger

    Hello all,
I am trying to insert into a table from a trigger in Oracle Forms. The table name, however, is entered by the user in a form item.
    here is what the insert looks like:
    insert into :table_name
    values (:value1, :value2);
The problem is that Forms does not recognize :table_name. If I replace :table_name with an actual database table, it works fine. However, I need to insert into a table whose name comes from the Oracle Forms item.
By the way, the table_name is built on the fly by a procedure before I try to insert into it.
Any suggestion on how I can do that? My code in the trigger is:
begin
  dm_drop_tbl(:table_name, 'table');   -- a call to an external procedure to drop the table
  dm_create_tbl(:table_name, 'att1', 'att2');
  insert into :table_name
  values (:value1, :value2);
end;
this gives me an error:
Encountered the symbol "" when expecting one of ...

    Hi ,
You should use the FORMS_DDL built-in procedure. Read the online documentation of Forms ...
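As a hedged illustration of that approach (the :blk item names are placeholders, and the quoting assumes both values are character data) - FORMS_DDL executes a dynamically built statement but cannot use bind variables, so the item values have to be concatenated in:

    DECLARE
      v_stmt VARCHAR2(2000);
    BEGIN
      v_stmt := 'insert into ' || :blk.table_name ||
                ' values (''' || :blk.value1 || ''', ''' || :blk.value2 || ''')';
      Forms_DDL(v_stmt);
      IF NOT Form_Success THEN
        Message('Insert into ' || :blk.table_name || ' failed');
      END IF;
    END;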
    Simon

  • Diffrence between backend insert and front end insert into a table.

I am developing a conversion program for tax exemptions. For this program only the ZX_EXEMPTIONS table is used to populate the data, and we got confirmation from Oracle about this. To insert the data into this table we take the max of tax_exemption_id, which is the PK for this table, add one to it, and insert into the table. The problem is that after inserting from the back end we are not able to insert from the front end.
It seems the back-end data is holding the tax_exemption_id that was supposed to be reserved by the front-end data. Please explain the different behavior of populating tax_exemption_id from the front end and the back end.

    Hi,
I think the problem is that you are using max-value + 1 for tax_exemption_id. But as ZX_EXEMPTIONS uses the sequence ZX_EXEMPTIONS_S
for primary key generation, you encounter a situation where you are increasing the PK id for this table without increasing the sequence value.
When trying to insert rows from the front end - which probably uses the sequence value - it tries to use a sequence value already used by your back-end
process (which generated it by max-value + 1) and then encounters a primary key violation.
I think you should use the sequence mentioned above to generate your PK ids in the back-end process as well. And before doing so, check the current value
of ZX_EXEMPTIONS_S, as you might need to rebuild the sequence in order to select nextval successfully for both front end and back end.
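A sketch of that check, plus one common way to advance a lagging sequence without dropping it (the gap size is illustrative):

    -- compare where the sequence stands against the keys created by MAX + 1
    SELECT zx_exemptions_s.NEXTVAL FROM dual;
    SELECT MAX(tax_exemption_id) FROM zx_exemptions;

    -- if NEXTVAL is behind MAX, widen the increment once, consume one value,
    -- then restore the normal increment
    ALTER SEQUENCE zx_exemptions_s INCREMENT BY 1000;
    SELECT zx_exemptions_s.NEXTVAL FROM dual;
    ALTER SEQUENCE zx_exemptions_s INCREMENT BY 1;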
    Regards

  • How can I get a count at the same time I am inserting into another table

I have a requirement where I have to find out the count (the number of records inserted into another table) at the same time as I insert into the table.
Like:
I am copying records from table B to A, and while doing this I have to find out how many records I've inserted, since I need this count in subsequent steps.
INSERT INTO A
SELECT * FROM B
How can I store the count in a variable while doing the above statement?
Please advise.

    No, Warren that doesn't work!
    SQL> set serveroutput on
    SQL> declare
      2     vCtr  number := 0;
      3  begin
      4     insert into emp2
      5     select * from emp;
      6
      7     select count(*)
      8       into vCtr
      9       from emp2;
    10
    11     dbms_output.put_line('rows created '||to_char(vCtr));
    12  end;
    13  /
    rows created 15
    PL/SQL procedure successfully completed.
    SQL> declare
      2     vCtr  number := 0;
      3  begin
      4     insert into emp2
      5     select * from emp;
      6
      7     select count(*)
      8       into vCtr
      9       from emp2;
    10
    11     dbms_output.put_line('rows created '||to_char(vCtr));
    12  end;
    13  /
    rows created 30
    PL/SQL procedure successfully completed.
    SQL>
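The sessions above show why counting afterwards misleads: COUNT(*) reports the table's cumulative total (15, then 30), not what the last statement added. The attribute that answers the original question is SQL%ROWCOUNT, read immediately after the INSERT:

    DECLARE
       vCtr  NUMBER := 0;
    BEGIN
       INSERT INTO emp2
       SELECT * FROM emp;

       -- SQL%ROWCOUNT reflects only the statement just executed,
       -- so it stays correct no matter what emp2 already contained
       vCtr := SQL%ROWCOUNT;
       DBMS_OUTPUT.PUT_LINE('rows created ' || TO_CHAR(vCtr));
    END;
    /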
