Temp tables and deferred updates

Does anyone know why the following updates to #test and #test1 are deferred, while the same update to the permanent table inputtable is direct?
I haven't found any documentation that explains this.
@@version is Adaptive Server Enterprise/15.7.0/EBF 22305 SMP SP61 /P/Sun_svr4/OS 5.10/ase157sp6x/3341/64-bit/FBO/Fri Feb 21 11:55:38 2014
create proc proctest
as
begin
    -- inputtable.fiId is int not null
    -- Why is this a deferred update?
    select fiId into #test from inputtable
    update #test set fiId = 0
    -- Why is this a deferred update?
    create table #test1(fiId int not null)
    insert #test1 select fiId from inputtable
    update #test1 set fiId = 0
    -- Yay. This is a direct update.
    update inputtable set fiId = 0
end
go
set showplan on
go
exec proctest
go
       |ROOT:EMIT Operator (VA = 2)
       |
       |   |UPDATE Operator (VA = 1)
       |   |  The update mode is deferred.
       |   |
       |   |   |SCAN Operator (VA = 0)
       |   |   |  FROM TABLE
       |   |   |  #test
       |   |   |  Table Scan.
       |   |   |  Forward Scan.
       |   |   |  Positioning at start of table.
       |   |   |  Using I/O Size 16 Kbytes for data pages.
       |   |   |  With LRU Buffer Replacement Strategy for data pages.
       |   |
       |   |  TO TABLE
       |   |  #test
       |   |  Using I/O Size 2 Kbytes for data pages.
       |ROOT:EMIT Operator (VA = 2)
       |
       |   |UPDATE Operator (VA = 1)
       |   |  The update mode is deferred.
       |   |
       |   |   |SCAN Operator (VA = 0)
       |   |   |  FROM TABLE
       |   |   |  #test1
       |   |   |  Table Scan.
       |   |   |  Forward Scan.
       |   |   |  Positioning at start of table.
       |   |   |  Using I/O Size 16 Kbytes for data pages.
       |   |   |  With LRU Buffer Replacement Strategy for data pages.
       |   |
       |   |  TO TABLE
       |   |  #test1
       |   |  Using I/O Size 2 Kbytes for data pages.
       |ROOT:EMIT Operator (VA = 2)
       |
       |   |UPDATE Operator (VA = 1)
       |   |  The update mode is direct.
       |   |
       |   |   |SCAN Operator (VA = 0)
       |   |   |  FROM TABLE
       |   |   |  inputtable
       |   |   |  Table Scan.
       |   |   |  Forward Scan.
       |   |   |  Positioning at start of table.
       |   |   |  Using I/O Size 16 Kbytes for data pages.
       |   |   |  With LRU Buffer Replacement Strategy for data pages.
       |   |
       |   |  TO TABLE
       |   |  inputtable
       |   |  Using I/O Size 2 Kbytes for data pages.

I don't have a documentation reference, but the optimizer appears to default to deferred mode when the #table is created and the follow-on DML operation is compiled in the same batch (i.e., the optimizer makes a 'safe' guess during optimization based on limited details of the #table's schema).
You can get the queries to operate in direct mode by forcing the optimizer to (re)compile the UPDATEs after the #tables have been created, e.g. (see the sketch after this list):
- create the #table outside the proc; during proc creation/execution the #tables already exist, so the optimizer can choose direct mode
- perform the UPDATEs within an exec() construct; exec() calls are processed in a separate/subordinate context, i.e., the #table is known at the time the exec() call is compiled, so direct mode can be chosen; the obvious downside is the overhead of the exec() call and its associated compilation phase ... which may still be an improvement over a) executing the UPDATE in deferred mode and/or b) recompiling the proc (see next bullet), ymmv
- induce a schema change on the #table so the proc is recompiled (with the #table details known during the recompile), thus allowing use of direct mode; while adding/dropping indexes/constraints/columns will suffice, these also add extra processing overhead; I'd suggest a fairly benign schema change that has little/no effect on the table (e.g., alter table #test replace fiId default null); the obvious downside to this approach is the forced recompilation of the stored proc, which could add considerably to proc run times depending on the volume/complexity of the queries in the rest of the proc
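For illustration, here is a minimal sketch of the exec() workaround applied to the proc from the question (table and column names are taken from the question; I haven't verified the resulting showplan output myself):

create proc proctest
as
begin
    select fiId into #test from inputtable
    -- the UPDATE is compiled inside exec() in a subordinate context,
    -- after #test already exists, so the optimizer can pick direct mode
    exec('update #test set fiId = 0')
end
go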

Similar Messages

  • Difference between Temp table and Variable table and which one is better performance wise?

    Hello,
    Could anyone explain the difference between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
    Which one is recommended for better performance?
    Also, is it possible to create clustered and nonclustered indexes on a table variable?
    In my case, 1-2 days of transactional data comes to more than 3-4 million rows. I tried both a # table and a table variable and found the table variable to be faster.
    Does a table variable use memory or disk space?
    Thanks, Shiven :)

    Check the following link to see the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
    Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
    Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records, table variables are well suited.
    On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
    But it also depends upon the specific scenario you are dealing with. Can you share it?
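    To make the indexing difference concrete, here is a minimal sketch (the object names are invented for illustration; behaviour as of the SQL Server versions discussed in this thread):

        -- temp table: explicit indexes are allowed
        CREATE TABLE #orders (emp_id INT, amt MONEY);
        CREATE NONCLUSTERED INDEX ix_orders_emp ON #orders (emp_id);

        -- table variable: no explicit CREATE INDEX, but declaring a
        -- PRIMARY KEY implicitly creates a clustered index
        DECLARE @orders TABLE (emp_id INT PRIMARY KEY, amt MONEY);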
    ~manoj | email: http://scr.im/m22g
    http://sqlwithmanoj.wordpress.com
    MCCA 2011

  • Query is taking too much time for inserting into a temp table and for spooling

    Hi,
    I am working on a query optimization project where I have found a query which takes a hell of a lot of time to execute.
    Temp table is defined as follows:
    DECLARE @CastSummary TABLE (CastID INT, SalesOrderID INT, ProductionOrderID INT, Actual FLOAT,
        ProductionOrderNo NVARCHAR(50), SalesOrderNo NVARCHAR(50), Customer NVARCHAR(MAX), Targets FLOAT)
    -- the INSERT that populates the table variable (implied by the question's
    -- mention of "Table Insert") presumably precedes this SELECT:
    INSERT INTO @CastSummary
    SELECT
        C.CastID,
        SO.SalesOrderID,
        PO.ProductionOrderID,
        F.CalculatedWeight,
        PO.ProductionOrderNo,
        SO.SalesOrderNo,
        SC.Name,
        SO.OrderQty
    FROM
        CastCast C
        JOIN Sales.Production PO ON PO.ProductionOrderID = C.ProductionOrderID
        JOIN Sales.ProductionDetail d ON d.ProductionOrderID = PO.ProductionOrderID
        LEFT JOIN Sales.SalesOrder SO ON d.SalesOrderID = SO.SalesOrderID
        LEFT JOIN FinishedGoods.Equipment F ON F.CastID = C.CastID
        JOIN Sales.Customer SC ON SC.CustomerID = SO.CustomerID
    WHERE
        (C.CreatedDate >= @StartDate AND C.CreatedDate < @EndDate)
    It takes almost 33% of the cost for the Table Insert when I insert the data into the temp table, and then 67% for spooling. I removed 2 LEFT JOINs from the above query, changing them to JOINs, and tried again. Query execution became a bit faster, but it still needs improvement.
    How can I improve it further? Will it be good enough if I create indexes on the temp table's columns, or what if I use derived tables? Please suggest.
    -Pep

    How can I improve it further? Will it be good enough if I create indexes on the temp table's columns, or what if I use derived tables?
    I suggest you start with index tuning. Specifically, make sure the columns specified in the WHERE and JOIN clauses are properly indexed (ideally clustered or covering, and unique when possible); a sketch follows below. Changing outer joins to inner joins is appropriate if you don't need the outer joins in the first place.
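    As a starting point, here is a minimal sketch of the kind of indexes this advice points at, using the table and column names from the question; whether these exact indexes help depends on the real schema and data:

        -- supports the date-range filter in the WHERE clause
        CREATE NONCLUSTERED INDEX ix_CastCast_CreatedDate
            ON CastCast (CreatedDate) INCLUDE (CastID, ProductionOrderID);
        -- supports the join from CastCast to Sales.Production
        CREATE NONCLUSTERED INDEX ix_Production_ProductionOrderID
            ON Sales.Production (ProductionOrderID);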
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Temp table, and gather table stats

    One of my developers is generating a report from Oracle. He loads a subset of the data he needs into a temp table, then creates an index on the temp table, and then runs his report from the temp table (which is a lot smaller than the original table).
    My question is: Is it necessary to gather table statistics for the temp table, and the index on the temp table, before querying it ?

    It depends. Just yesterday I had a very bad experience with stats: one of my tables had NUM_ROWS = 300 while count(*) returned 7 million, and the database version was 9.2.0.6 (bad, with plenty of optimizer bugs), so queries started breaking, with a lot of buffer busy and latch free waits. It took a while to figure out, but once I deleted the stats everything came back under control. What I mean to say is that statistics are both good and bad: once you start collecting them, you should keep an eye on them (see the sketch after this reply).
    Thanks.
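    For reference, gathering stats on such a table would look something like this generic DBMS_STATS sketch (the table name is invented, not taken from the thread):

        BEGIN
          -- gather stats on the table and, via cascade, its index
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname => USER,
            tabname => 'REPORT_WORK',   -- hypothetical temp table name
            cascade => TRUE);
        END;
        /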

  • Insert data from a tabular form to a temp table and fetching the columns

    Hi guys,
    I am working in APEX 3.2, where on one page I have data from various tables displayed in tabular form. I then have to insert the tabular form data into a temp table, fetch the data from the temp table, and insert it into my main table. I think I have to use a cursor to fetch the data from the temp table and insert it into the main table, but I haven't found a good example of doing this. Can anyone help me sort it out?
    Thanks and regards,
    Balaji

    Hi,
    Follow this scenario.
    Your Query:
    SELECT t1.col1, t1.col2, t2.col1, t2.col2, t3.col1
    FROM table1 t1, table2 t2, table3 t3
    (where some join conditions);
    On the insert button click, call this process:
    DECLARE
    temp1 VARCHAR2(100);
    temp2 VARCHAR2(100);
    temp3 VARCHAR2(100);
    temp4 VARCHAR2(100);
    temp5 VARCHAR2(100);
    BEGIN
         FOR i IN 1..apex_application.g_f01.COUNT
         LOOP
              temp1    := apex_application.g_f01(i);
              temp2    := apex_application.g_f02(i);
              temp3    := apex_application.g_f03(i);
              temp4    := apex_application.g_f04(i);
              temp5    := apex_application.g_f05(i);
              INSERT INTO table1(col1, col2) VALUES(temp1, temp2);
              INSERT INTO table2(col1, col2) VALUES(temp3, temp4);
              INSERT INTO table3(col1) VALUES(temp5);
         END LOOP;
    END;
    You don't even need temp tables and a cursor to insert into different tables.
    Thanks,
    Ramesh P.

  • Global temp table and edit

    Hi all,
    Can someone tell me why, when I create a GTT and insert the data like the following, I get an "inserted 14 rows" message, but when I do a select statement from SQL Workshop, sometimes I get the data and sometimes I don't? My understanding is that this data is supposed to stay for the duration of my logon session and then get cleaned out when I exit the session.
    I am developing a screen in APEX and will use this temp table for the user to do some editing work. Once the editing is done, I save the data into a static table. Can this be done? So far my every attempt to update the temp table results in 0 rows updated, and the temp table reverts back to 0 rows. Can you help me?
    CREATE GLOBAL TEMPORARY TABLE "EMP_SESSION"
    (     "EMPNO" NUMBER NOT NULL ENABLE,
         "ENAME" VARCHAR2(10),
         "JOB" VARCHAR2(9),
         "MGR" NUMBER,
         "HIREDATE" DATE,
         "SAL" NUMBER,
         "COMM" NUMBER,
         "DEPTNO" NUMBER
    ) ON COMMIT PRESERVE ROWS;
    insert into emp_session (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    select * from emp;
    select * from emp_session;
    -- sometimes I get 14 rows, sometimes 0 rows
    Thanks.
    Tai

    Tai,
    To say that Apex doesn't support GTTs is not quite correct. In order to understand why it is not working for you, and how they may be of use in an Apex application, you have to understand the concept of a session in Apex as opposed to a conventional database session.
    In a conventional database session, such as when you are connected with SQL*Plus, you have what is known as a dedicated session, or a synchronous connection. Temporary objects such as GTTs and package variables can persist across calls to the database. A session in Apex, however, is asynchronous by nature, and a connection to the database is made through some sort of server, such as the Oracle HTTP Server or the Apex Listener, which in effect maintains a pool of connections to the database; calls by your application aren't guaranteed to get the same connection for each call.
    To get over this, the guys who developed Apex came up with various methods to maintain session state and global objects that are persistent within the context of an Apex session. One of these is Apex collections, which are a device for maintaining collection-like (array-like) data that is persistent within an Apex session. These are Apex-session-specific objects, in that they are local to the session that creates and maintains them.
    With this knowledge, you can see why the GTT is not working for you, and also how a GTT may be of use in an Apex application, provided you don't expect the data to persist across a call, as in a PL/SQL procedure. You should note, though, that unless you are dealing with very large datasets, a regular Oracle collection is preferable.
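    As a concrete illustration of the collections alternative, here is a minimal sketch using the documented APEX_COLLECTION API (the collection name and query are invented):

        BEGIN
          -- (re)create a collection scoped to the APEX session;
          -- its rows persist across page requests for that session
          IF apex_collection.collection_exists('EMP_EDIT') THEN
            apex_collection.delete_collection('EMP_EDIT');
          END IF;
          apex_collection.create_collection_from_query(
            p_collection_name => 'EMP_EDIT',
            p_query           => 'select empno, ename, sal from emp');
        END;
        /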
    I hope this explains your issue.
    Regards
    Andre

  • Temp tables and transaction log

    Hi All,
    I am on SQL 2000.
    When I am inserting (or updating, or deleting) data to/from temp tables (i.e., # tables), are those DML operations written to the transaction log?
    The process is: we have a huge input dataset to process, so we insert subset(s) of the input data into a temp table, treat that as our input set, and do the processing in parts. Can I avoid transaction log generation for these intermediate steps?
    Soon we will be moving to 2008 R2. Are there any features in 2008 which can help me avoid this transaction logging?
    Thanks in advance

    Every DML operation is logged in the log file. Is it possible to insert the data in small chunks? (See the sketch below.)
    http://www.dfarber.com/computer-consulting-blog/2011/1/14/processing-hundreds-of-millions-records-got-much-easier.aspx
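    To illustrate the chunking idea, here is a minimal SQL 2000-compatible sketch (the table, column, and variable names are invented); each small batch keeps individual transactions short, even though the operations remain fully logged:

        DECLARE @batch INT, @fromid INT, @maxid INT
        SELECT @batch = 10000, @fromid = 1, @maxid = MAX(id) FROM dbo.huge_input
        WHILE @fromid <= @maxid
        BEGIN
            -- copy one key range per iteration into the temp table
            INSERT INTO #work (id, payload)
            SELECT id, payload
            FROM dbo.huge_input
            WHERE id >= @fromid AND id < @fromid + @batch
            SET @fromid = @fromid + @batch
        END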
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • CDC and deferred updates

    I am using CDC to generate events based on the entries in the CDC tables. I need to distinguish between insert, update, and delete events, but when SQL chooses to do a deferred update, it changes an update into a delete plus an insert. Is there a way I can affect this behaviour so that any update command results in an update record in the CDC tables? (A sketch of how the events are read is below.)
    I am aware of DBCC TRACEON (8207, -1), but this seems to apply only to update commands affecting one record which don't affect fields used by unique constraints; my update commands will affect multiple records and fields with unique constraints applied to them.
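    For context, reading the events out of CDC typically looks like this minimal sketch (the capture instance dbo_MyTable is invented); __$operation is 1 = delete, 2 = insert, 3 = update (before image), 4 = update (after image), which is why a deferred update surfacing as delete + insert breaks the event mapping:

        DECLARE @from binary(10), @to binary(10);
        SELECT @from = sys.fn_cdc_get_min_lsn('dbo_MyTable'),
               @to   = sys.fn_cdc_get_max_lsn();
        SELECT __$operation, __$update_mask, *
        FROM   cdc.fn_cdc_get_all_changes_dbo_MyTable(@from, @to, N'all');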

    This problem is caused by a bug in the fn_cdc_get_net_changes_<capture_instance> functions. The bug works in two ways. The first is identified in this thread: an additional row is exported with __$operation = 1. A second problem, resulting from the same bug, is that some rows with __$operation = 1 are incorrectly suppressed. These missing rows are less easily spotted, and thus this incarnation of the problem is reported nowhere.
    The bug was reported on Connect as ID 690476 back in 2011 already. Below is a copy of the corrected cdc.fn_get_net_changes_dbo_NETTEST function in my test database. The fix can easily be extracted from this sample code. I would suggest you do not adapt the functions yourself in a production environment; instead we should put all our combined powers into getting Microsoft to fix this issue. Please vote, and if possible have the case reopened as soon as possible.
    create function [cdc].[fn_cdc_get_net_changes_dbo_NETTEST]
    ( @from_lsn binary(10),
      @to_lsn binary(10),
      @row_filter_option nvarchar(30)
    )
    returns table
    return
    select NULL as __$start_lsn,
    NULL as __$operation,
    NULL as __$update_mask, NULL as [ID], NULL as [A]
    where ( [sys].[fn_cdc_check_parameters]( N'dbo_NETTEST', @from_lsn, @to_lsn, lower(rtrim(ltrim(@row_filter_option))), 1) = 0)
    union all
    select __$start_lsn,
    case __$count_23BAE034
    when 1 then __$operation
    else
    case __$min_op_23BAE034
    when 2 then 2
    when 4 then
    case __$operation
    when 1 then 1
    else 4
    end
    else
    case __$operation
    when 2 then 4
    when 4 then 4
    else 1
    end
    end
    end as __$operation,
    null as __$update_mask , [ID], [A]
    from
    select t.__$start_lsn as __$start_lsn, __$operation,
    case __$count_23BAE034
    when 1 then __$operation
    else
    ( select top 1 c.__$operation
    from [cdc].[dbo_NETTEST_CT] c with (nolock)
    where ( (c.[ID] = t.[ID]) )
    and ((c.__$operation = 2) or (c.__$operation = 4) or (c.__$operation = 1))
    and (c.__$start_lsn <= @to_lsn)
    and (c.__$start_lsn >= @from_lsn)
    order by c.__$seqval) end __$min_op_23BAE034, __$count_23BAE034, t.[ID], t.[A]
    from [cdc].[dbo_NETTEST_CT] t with (nolock) inner join
    ( select r.[ID], max(r.__$seqval) as __$max_seqval_23BAE034,
    count(*) as __$count_23BAE034
    from [cdc].[dbo_NETTEST_CT] r with (nolock)
    where (r.__$start_lsn <= @to_lsn)
    and (r.__$start_lsn >= @from_lsn)
    group by r.[ID]) m
    on t.__$seqval = m.__$max_seqval_23BAE034 and
    ( (t.[ID] = m.[ID]) )
    where lower(rtrim(ltrim(@row_filter_option))) = N'all'
    and ( [sys].[fn_cdc_check_parameters]( N'dbo_NETTEST', @from_lsn, @to_lsn, lower(rtrim(ltrim(@row_filter_option))), 1) = 1)
    and (t.__$start_lsn <= @to_lsn)
    and (t.__$start_lsn >= @from_lsn)
    and ((t.__$operation = 2) or (t.__$operation = 4) or
    ((t.__$operation = 1) and not exists (
    select top(1) *
    from [cdc].[dbo_NETTEST_CT] c with (nolock)
    where ( (c.[ID] = t.[ID]) )
    and c.__$operation = 2
    and c.__$start_lsn = t.__$start_lsn
    and c.__$seqval = t.__$seqval
    --(2 not in
    -- ( select top 1 c.__$operation
    -- from [cdc].[dbo_NETTEST_CT] c with (nolock)
    -- where ( (c.[ID] = t.[ID]) )
    -- and ((c.__$operation = 2) or (c.__$operation = 4) or (c.__$operation = 1))
    -- and (c.__$start_lsn <= @to_lsn)
    -- and (c.__$start_lsn >= @from_lsn)
    -- order by c.__$operation desc
    and t.__$operation = (
    select
    max(mo.__$operation)
    from
    [cdc].[dbo_NETTEST_CT] as mo with (nolock)
    where
    mo.__$seqval = t.__$seqval
    and
    ( (t.[ID] = mo.[ID]) )
    group by
    mo.__$seqval
    ) Q
    union all
    select __$start_lsn,
    case __$count_23BAE034
    when 1 then __$operation
    else
    case __$min_op_23BAE034
    when 2 then 2
    when 4 then
    case __$operation
    when 1 then 1
    else 4
    end
    else
    case __$operation
    when 2 then 4
    when 4 then 4
    else 1
    end
    end
    end as __$operation,
    case __$count_23BAE034
    when 1 then
    case __$operation
    when 4 then __$update_mask
    else null
    end
    else
    case __$min_op_23BAE034
    when 2 then null
    else
    case __$operation
    when 1 then null
    else __$update_mask
    end
    end
    end as __$update_mask , [ID], [A]
    from
    select t.__$start_lsn as __$start_lsn, __$operation,
    case __$count_23BAE034
    when 1 then __$operation
    else
    ( select top 1 c.__$operation
    from [cdc].[dbo_NETTEST_CT] c with (nolock)
    where ( (c.[ID] = t.[ID]) )
    and ((c.__$operation = 2) or (c.__$operation = 4) or (c.__$operation = 1))
    and (c.__$start_lsn <= @to_lsn)
    and (c.__$start_lsn >= @from_lsn)
    order by c.__$seqval) end __$min_op_23BAE034, __$count_23BAE034,
    m.__$update_mask , t.[ID], t.[A]
    from [cdc].[dbo_NETTEST_CT] t with (nolock) inner join
    ( select r.[ID], max(r.__$seqval) as __$max_seqval_23BAE034,
    count(*) as __$count_23BAE034,
    [sys].[ORMask](r.__$update_mask) as __$update_mask
    from [cdc].[dbo_NETTEST_CT] r with (nolock)
    where (r.__$start_lsn <= @to_lsn)
    and (r.__$start_lsn >= @from_lsn)
    group by r.[ID]) m
    on t.__$seqval = m.__$max_seqval_23BAE034 and
    ( (t.[ID] = m.[ID]) )
    where lower(rtrim(ltrim(@row_filter_option))) = N'all with mask'
    and ( [sys].[fn_cdc_check_parameters]( N'dbo_NETTEST', @from_lsn, @to_lsn, lower(rtrim(ltrim(@row_filter_option))), 1) = 1)
    and (t.__$start_lsn <= @to_lsn)
    and (t.__$start_lsn >= @from_lsn)
    and ((t.__$operation = 2) or (t.__$operation = 4) or
    ((t.__$operation = 1) and not exists (
    select top(1) *
    from [cdc].[dbo_NETTEST_CT] c with (nolock)
    where ( (c.[ID] = t.[ID]) )
    and c.__$operation = 2
    and c.__$start_lsn = t.__$start_lsn
    and c.__$seqval = t.__$seqval
    --(2 not in
    -- ( select top 1 c.__$operation
    -- from [cdc].[dbo_NETTEST_CT] c with (nolock)
    -- where ( (c.[ID] = t.[ID]) )
    -- and ((c.__$operation = 2) or (c.__$operation = 4) or (c.__$operation = 1))
    -- and (c.__$start_lsn <= @to_lsn)
    -- and (c.__$start_lsn >= @from_lsn)
    -- order by c.__$operation desc
    and t.__$operation = (
    select
    max(mo.__$operation)
    from
    [cdc].[dbo_NETTEST_CT] as mo with (nolock)
    where
    mo.__$seqval = t.__$seqval
    and
    ( (t.[ID] = mo.[ID]) )
    group by
    mo.__$seqval
    ) Q
    union all
    select t.__$start_lsn as __$start_lsn,
    case t.__$operation
    when 1 then 1
    else 5
    end as __$operation,
    null as __$update_mask , t.[ID], t.[A]
    from [cdc].[dbo_NETTEST_CT] t with (nolock) inner join
    ( select r.[ID], max(r.__$seqval) as __$max_seqval_23BAE034
    from [cdc].[dbo_NETTEST_CT] r with (nolock)
    where (r.__$start_lsn <= @to_lsn)
    and (r.__$start_lsn >= @from_lsn)
    group by r.[ID]) m
    on t.__$seqval = m.__$max_seqval_23BAE034 and
    ( (t.[ID] = m.[ID]) )
    where lower(rtrim(ltrim(@row_filter_option))) = N'all with merge'
    and ( [sys].[fn_cdc_check_parameters]( N'dbo_NETTEST', @from_lsn, @to_lsn, lower(rtrim(ltrim(@row_filter_option))), 1) = 1)
    and (t.__$start_lsn <= @to_lsn)
    and (t.__$start_lsn >= @from_lsn)
    and ((t.__$operation = 2) or (t.__$operation = 4) or
    ((t.__$operation = 1) and not exists (
    select top(1) *
    from [cdc].[dbo_NETTEST_CT] c with (nolock)
    where ( (c.[ID] = t.[ID]) )
    and c.__$operation = 2
    and c.__$start_lsn = t.__$start_lsn
    and c.__$seqval = t.__$seqval
    --(2 not in
    -- ( select top 1 c.__$operation
    -- from [cdc].[dbo_NETTEST_CT] c with (nolock)
    -- where ( (c.[ID] = t.[ID]) )
    -- and ((c.__$operation = 2) or (c.__$operation = 4) or (c.__$operation = 1))
    -- and (c.__$start_lsn <= @to_lsn)
    -- and (c.__$start_lsn >= @from_lsn)
    -- order by c.__$operation desc
    and t.__$operation = (
    select
    max(mo.__$operation)
    from
    [cdc].[dbo_NETTEST_CT] as mo with (nolock)
    where
    mo.__$seqval = t.__$seqval
    and
    ( (t.[ID] = mo.[ID]) )
    group by
    mo.__$seqval
    SQL expert for JF Hillebrand IT BV - The Netherlands.

  • What are these DR$TEMP % tables and can they be deleted?

    We are generating PL/SQL using ODMr and then periodically running the model in batch mode. Out of 26,542 objects in the DMUSER1 schema, 25,574 are DR$TEMP% tables. Should the process be cleaning itself up or is this supposed to be a manual process? Is the cleanup documented somewhere?
    Thanks

    Hi Doug,
    The only DR$ tables/indexes built are the ones generated by the Build, Apply, and Test Activities. I confirmed that they are deleted in ODMr 10.2.0.3. As I noted earlier, there was a bug in ODMr 10.2.0.2 which would lead to leakage when deleting Activities. You will have DR$ tables around for existing Activities, so do not delete these without validating that they are no longer part of an existing Activity.
    You can track down the DR$ objects associated to an Activity by viewing the text step in the activity and finding the table generated for the text data. This table will have a text index created on it. The name of that text index is used as a base name for several tables which Oracle text utilizes.
    Again, all of these are deleted when you delete an Activity with ODMr 10.2.0.3.
    Thanks, Mark

  • Creating a table and content updates in the Java Stack

    Hi All
    I have a requirement to capture the information about deployed content in a deploy log file present in the directory of a J2EE Engine, and to store it in the Java persistence stack. I should not use any SAP tables, so the JCo and RFC approach is not helpful in my case. Is it possible to create a table in the Java persistence layer (NetWeaver DB partition for Java)?
    So can anybody help me with how to create a table and update the content? As of now I am able to capture the log info.
    Somebody suggested using Open SQL for Java, but I am very new to this area.
    Any suggestions and code would help me resolve this issue.
    Regards
    Kalyan

    Yes, your assumption is correct.
    What you need is a foreach loop based on the ADO.NET enumerator, which iterates through an object variable created in SSIS.
    You will populate the object variable inside an Execute SQL Task using the query below:
    SELECT Col1, Col2
    FROM Table2
    Have two variables inside the loop to receive each iterated value of Col1 and Col2.
    Then, inside the loop, have a data flow task with an OLE DB source and a flat file destination.
    Inside the OLE DB source use this query:
    SELECT *
    FROM Table1
    WHERE col1 = ?
    Map the parameter to the Col1 variable inside the loop.
    Now link this to the flat file destination.
    Have a variable that generates the filename using the expression below:
    @[User::Col2] + (DT_STR,1,1252) "\\" + (DT_STR,10,1252) @[User::Col1] + ".txt"
    Map this filename variable to the ConnectionString property of the flat file connection manager.
    Once executed you will get the desired output.
    Visakh

  • Polling the master detail table and to update the LAST_UPDATED with SYSDATE

    Hi
    The requirement is to poll the master-detail tables where read_flag is null and update LAST_UPDATED with SYSDATE in both tables.
    I referred to the MasterDetail and PollingPureSQLSysdateLogicalDelete samples of SOA Suite.
    I used the delete polling strategy in the polling process and modified the generated TopLink descriptor as follows:
    set the TopLink -> Custom SQL tab -> Delete tab with the following query
    for master table (RECEIVER_DEPT) :
    update RECEIVER_DEPT set READ_FLAG= 'S' , LAST_UPDATED=sysdate where DEPTNO=#DEPTNO
    set the TopLink -> Custom SQL tab -> Delete tab with the following query
    for Detail table (RECEIVER_EMP):
    update RECEIVER_EMP set LAST_UPDATED=sysdate where EMPNO=#EMPNO
    After deploying the BPEL process, data is updated in the master (RECEIVER_DEPT) table with LAST_UPDATED as sysdate and read_flag as 'S';
    however, data is deleted from the detail (RECEIVER_EMP) table instead of the records being updated.

    Xtanto,
    I suggest using JSP / Struts. UIX will be replaced by ADF Faces in JDeveloper 10.1.3, and thus I wouldn't suggest starting new developments with UIX unless time doesn't allow waiting for ADF Faces. In that case, develop UIX in an MVC1 model, using the UIX events for navigation, because this model seems more likely to be migratable, according to the UIX statement of direction on OTN.
    Back to your question. You can create a search form in JSP that forwards the request to a Struts DataAction to set the scope of the result set. The read-only table can have a link or a button to call the detail page, passing the RowKey as a string.
    Have a look at the Oracle by Example (OBE) tutorials, which contain similar examples.
    Frank

  • HOW TO STORE FETCH DATA IN TEMP TABLE AND HOW CAN I USE THAT FURTHER

    I want to store the fetched SUM value in a temp table, and then use this value in other code. Can you help me do this?
    SELECT SUM(SIGNEDDATA) 
    FROM FACPLAN
    WHERE TIMEID IN
    (SELECT TIMEID FROM Time 
    WHERE ID IN
    (SELECT CURRENT_MONTH FROM mbrVERSION WHERE CURRENT_MONTH!=''))

    If you want to assign it to a variable:
    DECLARE @SUMAMOUNT INT -- you may change the datatype as required
    SET @SUMAMOUNT = (SELECT SUM(SIGNEDDATA)
    FROM FACPLAN
    WHERE TIMEID IN
    (SELECT TIMEID FROM Time 
    WHERE ID IN
    (SELECT CURRENT_MONTH FROM mbrVERSION WHERE CURRENT_MONTH!='')))
    And you can use @SUMAMOUNT for further processing.
    If you want to store it in a table:
    SELECT SUM(SIGNEDDATA)  as SUMAMOUNT into #Temp
    FROM FACPLAN
    WHERE TIMEID IN
    (SELECT TIMEID FROM Time 
    WHERE ID IN
    (SELECT CURRENT_MONTH FROM mbrVERSION WHERE CURRENT_MONTH!=''))
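    Presumably the stored value would then be read back in the later code, along the lines of:

        -- reuse the materialised sum in subsequent statements
        SELECT SUMAMOUNT FROM #Temp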

  • On submit perform an insert on one table and an update on another table

    I am trying to perform an insert on one table (the wizard created my form, and the insert goes against the table that I created using the wizard), and on the form there is one field that is also in another table. Therefore, I am trying to perform an update on one attribute of one table and an insert into another table. How do I do this in APEX?

    If you have used the wizard to create the form, then you should see a process of type 'Automatic Row Processing (DML)' in your page, which performs the INSERT/UPDATE/DELETE on your form table. There you can see that APEX performs the INSERT only when REQUEST is in 'INSERT, CREATE, CREATE_AGAIN, CREATEAGAIN'.
    So create one more PL/SQL page process which executes 'on Submit after validations' and write the update process as follows:
    begin
    -- pseudo table/columns
    update tbl_second
    set col1 = :p1_item
    where pk_col = :p1_pk_item;
    end;
    Make this process conditional so that it performs the UPDATE only when the request value is in 'INSERT, CREATE, CREATE_AGAIN, CREATEAGAIN' (i.e., only when you are inserting into your form table).
    Cheers,
    Hari
    p.s. I think you may also need to update the second table when some-one updates your form table.

  • Difference between temp table and CTE performance-wise?

    Hi Techies,
    Can anyone explain CTEs and temp tables performance-wise? Which is the better object to use when implementing DML operations?
    Thanks in advance.
    Regards
    Cham bee

    Welcome to the world of performance tuning in SQL Server! The standard answer to this kind of question is:
    It depends.
    A CTE is a logical construct, which specifies the logical computation order for the query. The optimizer is free to recast the computation order in such a way that the intermediate result from the CTE never exists during the calculation. Take for instance this query:
    WITH aggr AS (
        SELECT account_no, SUM(amt) AS amt
        FROM   transactions
        GROUP  BY account_no
    )
    SELECT account_no, amt
    FROM   aggr
    WHERE  account_no BETWEEN 199 AND 399
    Transactions is a big table, but there is an index on account_no. In this example, the optimizer will use that index and only compute the total amount for the accounts in the range. If you were to make a temp table of the CTE, SQL Server would have no choice but to scan the entire table.
    But there are also situations when it is better to use a temp table. This is often a good strategy when the CTE appears multiple times in the query. The optimizer is not able to pick a plan where the CTE is computed once, so it may compute the CTE multiple times. (To muddle the waters further, the optimizers in some competing products do have this capability.)
    Even if the CTE is only referred to once, it may help to materialise the CTE. The temp table has statistics, and those statistics may help the optimizer to compute a better plan for the rest of the query.
    For the case you have at hand, it's a little difficult to tell, because it is not clear to me whether the conditions are the same for points 1, 2, and 3 or whether they are different. But the second one, removing duplicates, can be quite difficult with a temp table, yet is fairly simple using a CTE with row_number().
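    To illustrate the materialisation point, here is a minimal sketch based on the example above (assuming the same transactions table):

        -- materialise the CTE's result once; the temp table gets
        -- statistics and can be indexed for the rest of the query
        SELECT account_no, SUM(amt) AS amt
        INTO   #aggr
        FROM   transactions
        GROUP  BY account_no;
        CREATE UNIQUE CLUSTERED INDEX ix_aggr ON #aggr (account_no);

        SELECT account_no, amt
        FROM   #aggr
        WHERE  account_no BETWEEN 199 AND 399;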
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Trees, temp tables and apex

    Hello,
    Has anyone had any luck building trees that go against temp tables? My tree works great with a regular table but behaves erratically when I change the table to a temp table. Is this a limitation of APEX?
    Thanks in advance,
    Sam

    Temporary tables that belong to a database session are not reliably accessible across Application Express page requests. You should look at apex collections for temporary storage that will be persistent for the life of the apex session.
    Scott
