Insert result of query into a table with unique constraint

Hi,
I have a query result that I would like to store in a table. The target table has a unique constraint. In MySQL you can do
insert IGNORE into myResultTable <...select statement...>
The IGNORE clause means that if inserting a row would violate a unique or primary key constraint, that row is skipped, but the rest of the rows from the query are still inserted. Leaving the IGNORE clause out would cause the insert to fail and an error to be returned.
I would like to do this in Oracle... that is, insert the results of a query that are not already in the target table. What is the best way to do this? One way is to use a procedural language and loop through the first query, checking whether each row is a duplicate before inserting it. I would think this would be slow if there are lots of records. Other options...
insert into myTargetTable
select value from mySourceTable where ... and not exists (select 'x' from myTargetTable where value = mySourceTable.value)
insert into myTargetTable
select mySourceTable.value
from myTargetTable RIGHT JOIN mySourceTable
ON myTargetTable.value = mySourceTable.value
where ...
and myTargetTable.value IS NULL
any other suggestions?
Thanks,
Simon

Try doing a MINUS instead of NOT EXISTS, i.e. Source MINUS Target. For example:
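Something like this, using the table and column names from your example (add your existing WHERE filter to the first branch):
insert into myTargetTable
select value from mySourceTable
minus
select value from myTargetTable;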
Disabling the constraint will not help you since this will allow the duplicate rows to be inserted into the table. I don't think you want this.
--kalpana

Similar Messages

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. Central to our data monitoring system is an Oracle database running Oracle Standard Edition One licensing (please don't laugh... I understand it is comical).
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes, a unique one on (sourceid, timestamp) and a non-unique one on (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl.)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
    Now, what we are observing about inserts into this table:
    - Inserts are much slower for a "wider" cardinality of the "sourceid" values being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand that Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
    - Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10 GB of extra RAM per quarter to 6 months; we're at about 50 GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: partitioning this table based on a good logical grouping of sourceid, and then timestamp, will help reduce the work required by Oracle to verify uniqueness of data, reduce the amount of data that must be cached by Oracle, and allow us to handle our "older than 3 months" removal at a partition level, greatly reducing table and index fragmentation (a sketch of this kind of layout appears at the end of this thread).
    Based on our hardware, it's going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance / help on is: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh, empty system and our current production system. I also want to limit Oracle's 10 GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
    Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hearing the opinions of the experts.

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct path insert /*+ append */ as suggested by one of the contributors to this thread: a direct path load will not re-use any of the free space left behind by the deletes. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index(create time)
    Your delete logic (based on arrival time) will smash your indexes, since you are always deleting the left hand side of the index; it means you will end up with what we call a right-hand index - in other words, the scattering of the index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce as already suggested:
               ALTER INDEX indexname COALESCE;
    This coalesce should be investigated to be done on a regular basis (maybe after each 80% delete). You seem to have several sourceid values for one timestamp. If that is the case you should think about compressing this index:
        create index indexname on tablename (sourceid, timestamp) compress;
    or
        alter index indexname rebuild compress;
    You will do it only once. Your index will have a smaller size and may be more efficient than it currently is. The index compression will add some extra CPU work during an insert, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri
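    For reference, one possible sketch of the composite layout described in the question (hypothetical names and types; requires the Partitioning option, interval partitioning assumes 11g or later, and hash subpartitioning stands in here for whatever logical grouping of sourceid is chosen):
    -- monthly range partitions on the timestamp, hash subpartitions on sourceid
    CREATE TABLE raw_readings (
      source_id    NUMBER         NOT NULL,
      reading_ts   TIMESTAMP      NOT NULL,
      create_time  TIMESTAMP      NOT NULL,
      extra_info   VARCHAR2(100)
    )
    PARTITION BY RANGE (reading_ts)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    SUBPARTITION BY HASH (source_id) SUBPARTITIONS 8
    (
      PARTITION p_initial VALUES LESS THAN (TIMESTAMP '2024-01-01 00:00:00')
    );
    -- local indexes: uniqueness is checked within a partition, and aged-out data
    -- can be handled at the partition level instead of with row-by-row deletes
    CREATE UNIQUE INDEX raw_readings_uk ON raw_readings (source_id, reading_ts) LOCAL;
    CREATE INDEX raw_readings_ct_ix ON raw_readings (create_time) LOCAL;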

  • Insert called before delete in a collection with unique constraint

    Hi all,
    I have a simple @OneToMany private mapping:
    private Collection<Item> items;

    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    public Collection<Item> getItems() {
        return items;
    }

    public void setItems(Collection<Item> items) {
        this.items = items;
    }

    public void customize(ClassDescriptor classDescriptor) throws Exception {
        OneToManyMapping mapping = (OneToManyMapping)
                classDescriptor.getMappingForAttributeName("items");
        mapping.privateOwnedRelationship();
    }
    I have a unique constraint on my Items table that a certain value cannot be duplicated.
    My problem appears when I remove a previously saved item from the collection and add a new item containing the same data, at the same time.
    After I save the parent and do a flush, I receive SQLIntegrityConstraintViolationException because TopLink performs first an insert query instead of deleting the existing item.
    I tested the application and everything went fine with: remove item / save parent / insert item / save parent
    I checked on the Internet and the documentation but didn't find anything similar to my problem. I tried debugging TopLink's internal calls but I'm missing some general ideas about all the inner workings and don't know what to look for. I use TopLink version: Oracle TopLink Essentials - 2.1 (Build b60e-fcs (12/23/2008))
    Does anyone have a hint of what to look for?

    Thank you for the suggestions James
    As I mentioned briefly I have done some debugging but couldn't understand how collections are updated. What I did find out is that setShouldPerformDeletesFirst() doesn't come into play in this case because this is not a consecutive change on entities.
    What I have in my case is a collection inside an entity that the user has tampered with and now TopLink has to do a merge. I cannot call flush() in the middle since the user has not approved that the changes made to the entity should be saved.
    I see that for TopLink it's not easy to figure out the order in which changes were made to a collection. Here is pseudo-code of when the constraint is touched:
    entity.items.remove(a)
    entity.items.add(b)
    merge(entity)
    And here is code that executes without a problem:
    entity.items.remove(a)
    merge(entity)
    entity.items.add(b)
    merge(entity)
    So once again, I think that collection changes are managed differently but I don't find a way to tell TopLink how to handle them. Any ideas?

  • Table with unique Constraint

    Dear All
    I have a project (ADF-BC / JSF, JDeveloper 11.1.2.3.0, latest). I have an EO whose PK constraint in the DB is on 2 fields (userid & Roleid), and I implemented a bundle to handle the error message for the JBO error code; it works fine in the AM test.
    I also have a VO with an LOV on one of the unique-constraint columns (Roleid). I dropped the VO onto a JSF page as an af:table, as below, with an input text with list of values for roleid and autoSubmit = true, and I face unexpected behavior from the LOV attribute when a repeated value is entered:
    When I enter a repeated value, it gives me the error message I created in the bundle, and everything is OK until now.
    But when I tab out of the input text with list of values, it goes back to the old value, maybe because validation fired in the background; still not a problem.
    When I try to do anything else, it still gives me the duplicate key error message.
    When I change the value again to a correct value to avoid the duplication error message, I am surprised that I still get the error message and it shows me the repeated value again!!
    Simply put, it still saves the old repeated value even after I correct it. Please, can anyone help me understand what is happening and how to solve it?
    Attribute in EO :
    <Attribute
    Name="RoleId"
    Precision="10"
    ColumnName="ROLE_ID"
    SQLType="VARCHAR"
    Type="java.lang.String"
    ColumnType="VARCHAR2"
    TableName="USER_ROLES"
    PrimaryKey="true">
    <DesignTime>
    <Attr Name="_DisplaySize" Value="10"/>
    </DesignTime>
    <validation:ExistsValidationBean
    Name="RoleId_Rule_1"
    ResId="CS.model.BC.EO.UserRolesEO.RoleId_Rule_1"
    OperandType="EO"
    AssocName="CS.model.BC.ASS.UsersRolesFk2ASS"/>
    </Attribute>
    Interface af : table
    <af:table value="#{bindings.UserRoles2.collectionModel}" var="row"
    rows="#{bindings.UserRoles2.rangeSize}"
    emptyText="#{bindings.UserRoles2.viewable ? 'No data to display.' : 'Access Denied.'}"
    fetchSize="#{bindings.UserRoles2.rangeSize}"
    rowBandingInterval="0"
    filterModel="#{bindings.UserRoles2Query.queryDescriptor}"
    queryListener="#{bindings.UserRoles2Query.processQuery}"
    filterVisible="true" varStatus="vs"
    selectedRowKeys="#{bindings.UserRoles2.collectionModel.selectedRow}"
    selectionListener="#{bindings.UserRoles2.collectionModel.makeCurrent}"
    rowSelection="single" id="t1" columnSelection="none"
    columnStretching="column:c3">
    <af:column sortProperty="#{bindings.UserRoles2.hints.RoleId.name}"
    filterable="true" sortable="true"
    headerText="#{bindings.UserRoles2.hints.RoleId.label}"
    id="c2">
    <af:inputListOfValues id="roleIdId"
    popupTitle="Search and Select: #{bindings.UserRoles2.hints.RoleId.label}"
    value="#{row.bindings.RoleId.inputValue}"
    model="#{row.bindings.RoleId.listOfValuesModel}"
    required="#{bindings.UserRoles2.hints.RoleId.mandatory}"
    columns="#{bindings.UserRoles2.hints.RoleId.displayWidth}"
    shortDesc="#{bindings.UserRoles2.hints.RoleId.tooltip}"
    autoSubmit="true" editMode="select">
    <f:validator binding="#{row.bindings.RoleId.validator}"/>
    </af:inputListOfValues>
    </af:column>
    <af:column sortProperty="#{bindings.UserRoles2.hints.RoleName.name}"
    sortable="true"
    headerText="#{bindings.UserRoles2.hints.RoleName.label}"
    id="c3">
    <af:outputFormatted value="#{row.bindings.RoleName.inputValue}"
    id="of7" partialTriggers="roleIdId"/>
    </af:column>
    <af:column sortProperty="#{bindings.UserRoles2.hints.Active.name}"
    filterable="true" sortable="true"
    headerText="#{bindings.UserRoles2.hints.Active.label}"
    id="c4">
    <af:outputFormatted value="#{row.bindings.Active.inputValue}"
    id="of8" partialTriggers="roleIdId"/>
    </af:column>
    </af:table>

    I believe there is a little confusion here. The error I am encountering has to do with a unique constraint violation and not a foreign key constraint. If I have the data:
    PK FK sequence
    1 5 1
    2 5 2
    3 5 3
    with a unique constraint on (FK, sequence) and want to change it to:
    PK FK sequence
    1 5 1
    4 5 2 --insert
    2 5 3 --update on sequence
    3 5 4 --update on sequence
    I am currently getting a unique constraint violation because the insert is issued before the updates, and the updates alone cause problems because they are issued out of order (i.e. if I do the shifting operation without the insertion of a new record).
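    As a side note, one database-level way to tolerate this kind of temporary duplication during reordering is a deferrable unique constraint, which is only checked at commit time (hypothetical table and column names):
    ALTER TABLE child_items
      ADD CONSTRAINT child_items_uk UNIQUE (fk_id, seq_no)
      DEFERRABLE INITIALLY DEFERRED;
    -- with the constraint deferred, the intermediate states produced by an
    -- out-of-order insert/update sequence are allowed; the check runs at COMMIT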

  • Insert into a table with unique columns from another table.

    There are two tables,
    STG_DATA
    ORDER_NO    DIR_CUST_IND
    1002        DNA
    1005        GEN
    1005
    1008        NULL
    1001        NULL
    1001        NULL
    1006        NULL
    1000        ZZZ
    1001        ZZZ
    FACT_DATA
    ORDER_NO    DIR_CUST_IND
    1005        NULL
    1006        NULL
    1008        NULL
    I need to insert only unique [ORDER_NO] values from STG_DATA into FACT_DATA with the corresponding [DIR_CUST_IND]. Though STG_DATA has multiple rows with the same ORDER_NO, I need to insert only one of them; it can be any of those records.
    Sarvan

    CREATE TABLE #Level(ORDER_NO INT, DIR_CUST_IND CHAR(3))
    INSERT #Level
    SELECT 1002,'DNA' UNION
    SELECT 1005,'GEN' UNION
    SELECT 1005,NULL UNION
    SELECT 1008,NULL UNION
    SELECT 1001,NULL UNION
    SELECT 1001,NULL UNION
    SELECT 1006,NULL UNION
    SELECT 1000,'ZZZ' UNION
    SELECT 1001,'ZZZ'
    SELECT ORDER_NO,DIR_CUST_IND
    FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY ORDER_NO ORDER BY ORDER_NO) RowNum, *
           FROM #Level ) A
    WHERE RowNum = 1
    I hope this gives you enough of an idea. All you have to do is wrap it in an insert statement (a sketch follows below).
    Next time please post DDL & DML.
    Chaos isn’t a pit. Chaos is a ladder. Many who try to climb it fail and never get to try again. The fall breaks them. And some are given a chance to climb, but they refuse. They cling to the realm, or the gods, or love. Illusions. Only the ladder is real.
    The climb is all there is.
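    A minimal sketch of that insert, using the table and column names from the original post and the same ROW_NUMBER idea as above (add a NOT EXISTS check if ORDER_NOs already present in FACT_DATA must be skipped):
    INSERT INTO FACT_DATA (ORDER_NO, DIR_CUST_IND)
    SELECT ORDER_NO, DIR_CUST_IND
    FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY ORDER_NO ORDER BY ORDER_NO) AS RowNum,
                  ORDER_NO, DIR_CUST_IND
           FROM   STG_DATA ) A
    WHERE RowNum = 1;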

  • Inserting results from select query into another table

    Hi: Is there a way to insert the results returned from a SQL query into another table within the same SQL statement?
    For example
    SELECT a.SRCID,
    SUM(DECODE(a.CLASS_ID,91147,1,0)) BLOCK,
    SUM(DECODE(a.CLASS_ID,91126,1,0)) ALLOW
    FROM EVENT_DATA a
    where
    a.class_id in (91147,91126)
    GROUP BY SRCID
    order by TOTAL DESC;
    The columns returned (SRCID, BLOCK and ALLOW) need to go into another table called AGGREGATE_COUNT. This table has fields SRCID, BLOCK and ALLOW. Or is the only way this can be done from outside of SQL, like using Perl or C?
    Thanks
    Ray

    INSERT INTO aggregate_count
         ( chart_id
         , srcid
         , block
         , allow
         )
    SELECT 2234  chart_id
         , srcid
         , SUM(DECODE(class_id, 91147, 1, 0))
         , SUM(DECODE(class_id, 91126, 1, 0))
      FROM event_data
     WHERE class_id IN (91147, 91126)
     GROUP BY srcid;

  • Insert result of sql into table

    Hi all, how do I insert the result of a SQL query into the table marco_ttmp?
    I tried this, but it doesn't work, it returns "ORA-00928: missing SELECT keyword"
    insert into marco_ttmp (var_id,arcdate,contragentid,lpnd,lusd,lkk)
    with
    rrk as (
    select '01112010' as mydate FROM dual d
    union all
    select '01122010' as mydate FROM dual d
    ),
    rru as (
    select '33' contragentid from dual d
    union all
    select '56' contragentid from dual d
    ),
    b as (
    select '01112010' as arcdate, '33' as contragentid, 'tt' as t from dual
    union all
    select '01122010' as arcdate, '56' as contragentid, 'un' as t from dual
    union all
    select '01122010' as arcdate, '33' as contragentid, 'kp' as t from dual
    union all
    select '01112010' as arcdate, '56' as contragentid, 'ur' as t from dual
    ),
    d as (
    select '01112010' as arcdate, '33' as contragentid, 'dr' as y from dual
    union all
    select '01122010' as arcdate, '56' as contragentid, 'rh' as y from dual
    union all
    select '01122010' as arcdate, '33' as contragentid, 'tr' as y from dual
    union all
    select '01112010' as arcdate, '56' as contragentid, 'wn' as y from dual
    ),
    v as ( select '555' as kkt from dual ),
    kkt as ( select count(*)+5 kkt from v )
    --insert into marco_ttmp (var_id,arcdate,contragentid,lpnd,lusd,lkk)
    select substr(rrk.mydate,4,2) || '.' || rru.contragentid var_id,
    rrk.mydate as arcdate,
    rru.contragentid as contragentid,
    count(b.arcdate) lpnd,count(d.arcdate) lusd,
    kkt.kkt lkk
    from kkt cross join rrk cross join rru
    left outer join b on b.arcdate = rrk.mydate and b.contragentid = rru.contragentid
    left outer join d on d.arcdate = rrk.mydate and d.contragentid = rru.contragentid
    group by rrk.mydate, rru.contragentid, kkt.kkt

    Moved the "insert into" to the beginning of the statement, and everything works.
    insert into marco_ttmp (var_id,arcdate,contragentid,lpnd,lusd,lkk)
    with
    rrk as (
    select '01112010' as mydate FROM dual d
    union all
    select '01122010' as mydate FROM dual d
    ),
    rru as (
    select '33' contragentid from dual d
    union all
    select '56' contragentid from dual d
    ),
    b as (
    select '01112010' as arcdate, '33' as contragentid, 'tt' as t from dual
    union all
    select '01122010' as arcdate, '56' as contragentid, 'un' as t from dual
    union all
    select '01122010' as arcdate, '33' as contragentid, 'kp' as t from dual
    union all
    select '01112010' as arcdate, '56' as contragentid, 'ur' as t from dual
    ),
    d as (
    select '01112010' as arcdate, '33' as contragentid, 'dr' as y from dual
    union all
    select '01122010' as arcdate, '56' as contragentid, 'rh' as y from dual
    union all
    select '01122010' as arcdate, '33' as contragentid, 'tr' as y from dual
    union all
    select '01112010' as arcdate, '56' as contragentid, 'wn' as y from dual
    ),
    v as ( select '555' as kkt from dual ),
    kkt as ( select count(*)+5 kkt from v )
    select substr(rrk.mydate,4,2) || '.' || rru.contragentid var_id,
    rrk.mydate as arcdate,
    rru.contragentid as contragentid,
    count(b.arcdate) lpnd,count(d.arcdate) lusd,
    kkt.kkt lkk
    from kkt cross join rrk cross join rru
    left outer join b on b.arcdate = rrk.mydate and b.contragentid = rru.contragentid
    left outer join d on d.arcdate = rrk.mydate and d.contragentid = rru.contragentid
    group by rrk.mydate, rru.contragentid, kkt.kkt

  • Error inserting a row into a table with identity column using cfgrid on change

    I got an error trying to insert a row into a table with an identity column using cfgrid onChange; see below.
    Also, I would like to use cfstoredproc instead of cfquery, but which arguments do I need to pass and how do I use it? Usually I use a stored procedure like
    update table (xxx,xxx,xxx)
    values (uu,uuu,uu)
         My component
    <!--- Edit a Media Type  --->
        <cffunction name="cfn_MediaType_Update" access="remote">
            <cfargument name="gridaction" type="string" required="yes">
            <cfargument name="gridrow" type="struct" required="yes">
            <cfargument name="gridchanged" type="struct" required="yes">
            <!--- Local variables --->
            <cfset var colname="">
            <cfset var value="">
            <!--- Process gridaction --->
            <cfswitch expression="#ARGUMENTS.gridaction#">
                <!--- Process updates --->
                <cfcase value="U">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                    <cfquery datasource="#application.dsn#">
                    UPDATE SP.MediaType
                    SET #colname# = '#value#'
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <!--- Process deletes --->
                <cfcase value="D">
                    <!--- Perform actual delete --->
                    <cfquery datasource="#application.dsn#">
                    update SP.MediaType
                    set Deleted=1
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <cfcase value="I">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                   <cfquery datasource="#application.dsn#">
                    insert into  SP.MediaType (#colname#)
                    Values ('#value#')              
                    </cfquery>
                </cfcase>
            </cfswitch>
        </cffunction>
    my table
    mediatype:
    mediatypeid primary key,identity
    mediatypename
    my code is
    <cfform method="post" name="GridExampleForm">
            <cfgrid format="html" name="grid_Tables2" pagesize="3"  selectmode="edit" width="800px" 
            delete="yes"
            insert="yes"
                  bind="cfc:sp3.testing.MediaType.cfn_MediaType_All
                                                                ({cfgridpage},{cfgridpagesize},{cfgridsortcolumn},{cfgridsortdirection})"
                  onchange="cfc:sp3.testing.MediaType.cfn_MediaType_Update({cfgridaction},
                                                {cfgridrow},
                                                {cfgridchanged})">
                <cfgridcolumn name="MediaTypeID" header="ID"  display="no"/>
                <cfgridcolumn name="MediaTypeName" header="Media Type" />
            </cfgrid>
    </cfform>
    On insert I get the following error message (ajax logging error message):
    http: Error invoking xxxxxxx/MediaType.cfc : Element '' is undefined in a CFML structure referenced as part of an expression.
    {"gridaction":"I","gridrow":{"MEDIATYPEID":"","MEDIATYPENAME":"uuuuuu","CFGRIDROWINDEX":4} ,"gridchanged":{}}
    Thanks

    Is this with the Travel database or another database?
    If it's another database then make sure your columns
    allow nulls. To check this in the Server Navigator, expand
    your DataSource down to the column.
    Select the column and view the Is Nullable property
    in the Property Sheet
    If still no luck, check out a tutorial, like Performing Inserts, ...
    http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/index.jsp
    John

  • Inserting a associative array into a table

    Hi,
    I am working on a process where I want to store the temporary results in an associative array. At the end of the process I plan to write the contents of the array into a table that has the same structure. (the array is defined as a %rowtype of the table)
    Is it possible to execute ONE sql statement that transfers the array into the table? I am on 10g (10.2.0.4.0)
    Here's what I have now:
    declare
      type t_emp is table of emp%rowtype index by pls_integer;
      v_emp t_emp;
    begin
      -- a process that fills v_emp with lots of data
      for i in v_emp.first .. v_emp.last
      loop
        insert into emp values v_emp(i);
      end loop;  
    end;
    But it would be better if the loop could be replaced by one SQL statement that inserts all of v_emp into emp. Is that possible?
    Thanks!
    PS: I'm not sure whether it's a good idea to use this approach... building a table in memory and inserting it into the database at the end of the process. Maybe it's better to insert every record directly into the table... but in any case I'm curious to see if it is possible in theory :)

    True, SQL cannot access the PL/SQL type.
    So you can not do something like this:
    insert into emp
    select * from table(v_emp)
    That is not possible.
    But FORALL is possible:
    FORALL i in v_emp.first .. v_emp.last
        insert into emp values v_emp(i);
    FORALL makes just one "round trip" to the SQL engine, bulk binding the entire contents of the array.
    It is much more efficient than the normal FOR loop, though it is mostly a PL/SQL optimization - even with FORALL the rows are actually inserted "one at a time"; it just hands the entire array to the SQL engine in one go.
    If the logic has to be done procedurally, FORALL is efficient.
    If the logic can be changed so arrays can be avoided completely and everything done in a single "insert into ... select ..." statement, that is the most efficient.
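    For completeness, a minimal self-contained sketch of the FORALL approach (emp_copy is a hypothetical target table with the same structure as emp, and the BULK COLLECT simply stands in for the original filling process):
    declare
      type t_emp is table of emp%rowtype index by pls_integer;
      v_emp t_emp;
    begin
      -- fill the array (stand-in for the real process)
      select * bulk collect into v_emp from emp;

      -- hand the whole array to the SQL engine in one go
      forall i in indices of v_emp
        insert into emp_copy values v_emp(i);

      commit;
    end;
    /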

  • Insert or copy query into workbook 7.0

    How can I copy a query into a workbook with the new Analyzer 7.0?
    In the old version (Analyzer 3.x), I could choose Tools -> Insert (or Copy) Query.
    How can I do the same with Analyzer 7.0?
    Thanks

    Hi,
    .) Open the analyser and connect to the BI system,
    .) open an empty excel
    .) switch to design mode (first icon on second task menu)
    .) insert a table (second icon on second task menu)
    .) select the query
    and so on...
    BR
    /C

  • Best way to insert millions of records into the table

    Hi,
    From a performance point of view, I am looking for suggestions on the best way to insert millions of records into a table.
    Please also guide me on how to implement it in a simple way that gives better performance.
    Thanks,
    Orahar.

    Orahar wrote:
    Its Distributed data. No. of clients and N no. of Transaction data fetching from the database based on the different conditions and insert into another transaction table which is like batch process.
    Sounds contradictory.
    If the source data is already in the database, it is centralised.
    In that case you ideally do not want the overhead of shipping that data to a client, the client processing it, and the client shipping the results back to the database to be stored (inserted).
    It is much faster and more scalable for the client to instruct the database (via a stored proc or package) what to do, and to let that code (running on the database) process the data.
    For a stored proc, the same principle applies. It is faster for it to instruct the SQL engine what to do (via an INSERT..SELECT statement) than to pull the data from the SQL engine using a cursor fetch loop and then push that data back to the SQL engine using an insert statement.
    An INSERT..SELECT can also be done as a direct path insert. This introduces some limitations, but is faster than a normal insert.
    If the data processing is too complex for an INSERT..SELECT, then pulling the data into PL/SQL, processing it there, and pushing it back into the database is the next best option. This should be done using bulk processing though in order to optimise the data transfer process between the PL/SQL and SQL engines.
    Other performance considerations are the constraints on the insert table, the triggers, the indexes and so on. Make sure that data integrity is guaranteed (e.g. via PKs and FKs), and optimal (e.g. FK columns should be indexed). Using triggers - well, that may not be the best approach (like for example using a trigger to assign a sequence value when it can be done faster in the insert SQL itself). Personally, I avoid using triggers - I'd rather have that code residing in a PL/SQL API for manipulating data in that table.
    The type of table also plays a role. Make sure that the decision about the table structure, hashed, indexed, partitioned, etc, is the optimal one for the data structure that is to reside in that table.
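    A minimal sketch of the INSERT..SELECT / direct path idea described above (all table, column and filter names are placeholders):
    -- direct path insert: writes above the high water mark, locks the table for
    -- other DML, and the new rows are only visible after COMMIT
    insert /*+ append */ into target_txn_table (id, txn_date, amount)
    select id, txn_date, amount
    from   source_txn_table
    where  txn_date >= date '2024-01-01';   -- whatever the batch condition is
    commit;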

  • How could I insert the deleted row into another table within a trigger?

    Hi,
    How could I insert the deleted row into another table within a trigger? The destination table has the same columns as the source table. Since the statements are in the trigger, it is not allowed to query the source table named 'test'. Thanks! The trigger is as follows, uncompleted:
    CREATE TRIGGER delete_trigger
    AFTER DELETE
    ON test
    FOR EACH ROW
    BEGIN
    -- How could I insert the deleted row into another table
    END delete_trigger;

    Hi,
    I'm not sure what's wrong there.
    I read the oracle docs about ANALYZE and ALL_TAB_COLUMNS, and did the following:
    ANALYZE TABLE my_tab VALIDATE STRUCTURE; //went ok.
    SELECT column_name
    FROM all_tab_columns
    WHERE table_name = 'my_tab'; //but no rows selected?
    This topic might not be what this thread should be about. Here I posted a new thread:
    How to get colum names of the newly created table?
    Thanks.
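    Going back to the original question in this thread: a minimal sketch of the AFTER DELETE trigger, assuming a hypothetical destination table test_deleted with the same columns as test (col1..col3 are placeholders for the real column list):
    CREATE OR REPLACE TRIGGER delete_trigger
    AFTER DELETE ON test
    FOR EACH ROW
    BEGIN
      -- :OLD holds the values of the row being deleted
      INSERT INTO test_deleted (col1, col2, col3)
      VALUES (:OLD.col1, :OLD.col2, :OLD.col3);
    END delete_trigger;
    /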

  • How to insert a gif file into a table?

    Hi,
    I need to insert a gif file into a table. Can anyone tell me where I can find the information on how to create this kind of table, how to insert a gif file into a table and how to select?
    Thanks,
    Helen

    Hi Helen,
    You could read about that in the documentation which is available online.
    For a starter: BLOB. And I bet there are many examples to be found on the web.
    Good luck. :)
    Regards,
    Guido
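    A minimal sketch along those lines (hypothetical table, directory and file names; the gif goes into a BLOB column, loaded through a directory object, which needs the appropriate privileges):
    CREATE TABLE images (
      id    NUMBER PRIMARY KEY,
      name  VARCHAR2(100),
      img   BLOB
    );
    CREATE DIRECTORY img_dir AS '/tmp/images';

    DECLARE
      l_blob  BLOB;
      l_bfile BFILE   := BFILENAME('IMG_DIR', 'logo.gif');
      l_dest  INTEGER := 1;
      l_src   INTEGER := 1;
    BEGIN
      -- create the row with an empty BLOB, then fill it from the file
      INSERT INTO images (id, name, img)
      VALUES (1, 'logo.gif', EMPTY_BLOB())
      RETURNING img INTO l_blob;

      DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
      DBMS_LOB.LOADBLOBFROMFILE(l_blob, l_bfile, DBMS_LOB.LOBMAXSIZE, l_dest, l_src);
      DBMS_LOB.CLOSE(l_bfile);
      COMMIT;
    END;
    /
    Selecting it back is then just "select img from images where id = 1"; a GUI tool can display the BLOB, or DBMS_LOB can read it in chunks.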

  • BizTalk 2006 Event Log Warnings - Cannot insert duplicate key row in object 'dta_MessageFieldValues' with unique index 'IX_MessageFieldValues'.

    We have been seeing the following 'warnings' in the event log of our BizTalk machine since upgrading to BTS 2006. They seem to occur randomly 6 or 8 times per day.
    Does anyone know what this means and what needs to be done to clear it up? We have only one BizTalk server, which is running on only one machine.
    I am new to BizTalk, so I am unable to find out how many tracking host instances are running for the BizTalk server. Also, can you please let me know whether we can configure only one instance per server/machine?
    Source: BAM EventBus Service
    Event: 5
    Warning Details: Execute batch error. Exception information: TDDS failed to batch execution of streams. SQLServer: bizprod, Database: BizTalkDTADb.Cannot insert duplicate key row in object 'dta_MessageFieldValues'
    with unique index 'IX_MessageFieldValues'. The statement has been terminated..

    Other than ensuring that there exists a separate and single tracking host instance, you're getting an error about duplicate keys, which implies that you're trying to create a BAM Activity twice with the same data.
    I suggest you have a in-depth examination of the BAM (TPE or API) associated with the orchestration. In TPE ensure that the first binding you select is the "Instance Id" or "Message Id" before going ahead to map the ports or others.
    Regards.

  • Insert into two tables with a single query (same ID)

    Hello,
    I want to insert into two tables at the same time (with a single query), provided that both records get inserted with the same id. How do I do this?
    Table Movies
    id
    name
    Table Category
    movie_id
    cat_type
    a) Insert into the first table, retrieve the id (maybe by using my_sequence.currval), and then insert into the other table.
    Issue: this makes three queries to the db, and I am also guessing that when multiple people try to insert at the same time there will be an issue; I might be wrong.
    I don't have any other idea.
    Greatly appreciated!

    Why don't you use a multitable insert? It's available from 9i.
    A sequence.nextval will return the same value in all INTO clauses for a given source row,
    so both records can be inserted with the same id.
    Look at this example:
    DROP TABLE A;
    DROP TABLE B;
    drop sequence a_seq;
    CREATE TABLE A(
      ID NUMBER,
      FIRSTNAME VARCHAR2(50)
    );
    CREATE TABLE B AS
    SELECT id, firstname lastname FROM a;
    CREATE SEQUENCE a_seq
    START WITH 1;
    INSERT ALL
    INTO A(ID, FIRSTNAME) VALUES(A_SEQ.NEXTVAL, FNAME)
    INTO B(ID, LASTNAME) VALUES(A_SEQ.NEXTVAL, LNAME)
    SELECT 'fname ' || LEVEL FNAME, 'lname ' || LEVEL LNAME
    FROM DUAL
    CONNECT BY LEVEL < 10;
    COMMIT;
    SELECT * FROM A;
    SELECT * FROM b;
    DROP TABLE A succeeded.
    DROP TABLE B succeeded.
    drop sequence a_seq succeeded.
    CREATE TABLE succeeded.
    CREATE TABLE succeeded.
    CREATE SEQUENCE succeeded.
    18 rows inserted
    commited
    ID                     FIRSTNAME                                         
    3                      fname 1                                           
    4                      fname 2                                           
    5                      fname 3                                           
    6                      fname 4                                           
    7                      fname 5                                           
    8                      fname 6                                           
    9                      fname 7                                           
    10                     fname 8                                           
    11                     fname 9                                           
    9 rows selected
    ID                     LASTNAME                                          
    3                      lname 1                                           
    4                      lname 2                                           
    5                      lname 3                                           
    6                      lname 4                                           
    7                      lname 5                                           
    8                      lname 6                                           
    9                      lname 7                                           
    10                     lname 8                                           
    11                     lname 9                                           
    9 rows selected
