Updating a table with need of table joins
Hi,
I want to update a table (via PL/SQL process) but to do it I need to include joins - and my code throws an error...
How would I go about getting this to work?
UPDATE employees a, action b
SET a.met_employee = '0000'
WHERE b.date_met NOT BETWEEN QA1 AND QA2
AND b.date_met NOT BETWEEN QB1 AND QB2
AND b.date_met NOT BETWEEN QC1 AND QC2
AND b.date_met BETWEEN QD1 AND QD2
AND b.emp_id = P12_ID
AND a.emp_id = P12_ID;
Where QA1, QA2, QB1, QB2, QC1, QC2, QD1, QD2 are variable values of pre-defined dates and P12_ID is the ID variable.
The problem occurs because I need to use the two tables - how can I solve this?
Thanks,
Si
The employee table has multiple actions (action table)
We want to set certain values, i.e. the '0000', for certain employees that have had actions on certain dates (the defined variables).
So it needs to check all the actions within the set dates and give the value '0000' only to employees matching these criteria.
with the following...
update employees
set met_employee = '0000'
where emp_id = P12_ID;
that will always set the employee to '0000'. However, '0000' should only be set for employees that had actions on certain dates.
Cheers
Si
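A common workaround in Oracle, since UPDATE cannot name two tables like that, is to update only employees and push the action checks into a correlated EXISTS subquery. A sketch along those lines, reusing the columns and date variables from the post (and assuming QB2 was intended on the second BETWEEN, per the variable list):

```sql
UPDATE employees a
SET    a.met_employee = '0000'
WHERE  a.emp_id = P12_ID
AND    EXISTS (SELECT 1
               FROM   action b
               WHERE  b.emp_id = a.emp_id
               AND    b.date_met NOT BETWEEN QA1 AND QA2
               AND    b.date_met NOT BETWEEN QB1 AND QB2
               AND    b.date_met NOT BETWEEN QC1 AND QC2
               AND    b.date_met     BETWEEN QD1 AND QD2);
```

This way only employees that actually have a qualifying action row get the '0000' value, which matches the follow-up requirement.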
Similar Messages
-
Update Multiple Column with Single Command with Join between Two tables.
This is our command where we are trying to update a table's two columns by pulling values from another table after joining. This query takes too much time to execute. Any suggestions ?
UPDATE raw_alc_rmfscombinednew
SET (BSC, CELL_NAME) =
(SELECT distinct raw_alc_R110combinednew.BSC, raw_alc_R110combinednew.CELL_NAME
FROM raw_alc_R110Combinednew
WHERE
raw_alc_R110Combinednew.OMC_ID = raw_alc_rmfscombinednew.OMC_ID
AND raw_alc_R110Combinednew.CELL_CI = raw_alc_rmfscombinednew.CI
AND RAW_ALC_R110COMBINEDNEW.START_TIME = RAW_ALC_RMFSCOMBINEDNEW.START_TIME)
Results of Last execution:
Elapsed Time (sec) 4,476.56
CPU Time (sec) 4,471.88
Wait Time (sec) 4.68
Executions that Fetched all Rows (%) 100.00
Average Persistent Mem (KB) 32.43
Average Runtime Mem (KB) 31.59
Your update requires a full execution of the subquery for each row in the destination table (raw_alc_rmfscombinednew). Try rewriting as a MERGE. Tons faster.
MERGE INTO raw_alc_rmfscombinednew D
USING ( SELECT distinct
BSC,
CELL_NAME,
OMC_ID,
CELL_CI,
START_TIME
FROM raw_alc_R110Combinednew ) S
ON ( S.OMC_ID = D.OMC_ID
AND S.CELL_CI = D.CI
AND S.START_TIME = D.START_TIME )
WHEN MATCHED THEN
UPDATE
SET D.BSC = S.BSC,
D.CELL_NAME = S.CELL_NAME
-
Updating a table with a query that return multiple values
Hi,
I'm trying to update a table which contains these fields: ItemID, InventoryID, total amounts
with a query that returns these values - ItemID, InventoryID and total amounts - for each item.
Mind you, not all the rows in the table need to be updated, only a few.
This is what I wrote, but it doesn't work since the query returns multiple values, so I can't assign them to JournalAmounts.
UPDATE [bmssa].[etshortagetemp]
SET JournalAmounts = (SELECT sum(b.BomQty) FROM [bmssa].[Bom] b
JOIN [bmssa].[SalesLine] sl ON sl.ItemBomId = b.BomId
JOIN [bmssa].[SalesTable] st ON st.SalesId = sl.SalesId
WHERE st.SalesType = 0 AND (st.SalesStatus IN (0,1,8,12,13)) AND st.DataAreaId = 'sdi'
GROUP BY b.itemid, b.inventdimid)
Any advice on how to do this task?
Remember that link to the documentation posted above that explains exactly how to do this. When you read it, which part exactly were you having trouble with?
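For later readers: since the bracketed [bmssa] names suggest SQL Server, one common pattern is to aggregate in a derived table and join it to the target, so each target row gets exactly one sum and rows without a match are left untouched. The join columns below (ItemID/InventoryID against b.ItemId/b.InventDimId) are assumptions - the post never shows how etshortagetemp relates to Bom:

```sql
UPDATE t
SET t.JournalAmounts = x.TotalQty
FROM [bmssa].[etshortagetemp] t
JOIN (SELECT b.ItemId, b.InventDimId, SUM(b.BomQty) AS TotalQty
      FROM [bmssa].[Bom] b
      JOIN [bmssa].[SalesLine] sl ON sl.ItemBomId = b.BomId
      JOIN [bmssa].[SalesTable] st ON st.SalesId = sl.SalesId
      WHERE st.SalesType = 0
        AND st.SalesStatus IN (0, 1, 8, 12, 13)
        AND st.DataAreaId = 'sdi'
      GROUP BY b.ItemId, b.InventDimId) x
  ON x.ItemId = t.ItemID
 AND x.InventDimId = t.InventoryID;   -- assumed matching columns
```

Because the join is an inner join, rows with no matching aggregate are simply not updated, which fits the "only a few rows" requirement.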
-
URGENT update a table with a text that has a single quote in it
Hello, I am trying to update a table with a text that has a single quote in it. I believe I need to use two singles quotes but I am not sure how.
For example:
UPDATE TEST
SET DESCRLONG='Aux fins d'exportations'
WHERE etc...
Should I put 2 singles quotes before the quote in the text?
UPDATE TEST
SET DESCRLONG='Aux fins d'''exportations'
WHERE etc...
Thank you very much :)
The best way depends on the version of Oracle.
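For what it's worth, on Oracle 10g and later there is also the quoting-literal (q-quote) syntax, which avoids doubling quotes entirely:

```sql
UPDATE TEST
SET DESCRLONG = q'[Aux fins d'exportations]'
WHERE etc...
```

Any delimiter pair can be used after the q, as long as it doesn't appear in the text itself.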
But, the quick and universal answer is to use two single quotes
SQL> connect test/test
Connected.
SQL> create table test (descrlong varchar2(128));
Table created.
SQL> insert into test values ('This is a string with a '' single quote');
1 row created.
SQL> select * from test;
DESCRLONG
This is a string with a ' single quote
SQL> update test set descrlong='Aux fins d''exportations'
2 where descrlong like 'T%';
1 row updated.
SQL> select * from test;
DESCRLONG
Aux fins d'exportations
SQL>
-
Hi,
--I need to update a table with only year 2010 data, it looks like this:
update abc_tab set dept_cd = replace(dept_cd, 'HRE','HW')
where dept_cd in (select dept_cd from edu_tab e, edu_lkp l
where l.dept_rnd_cd =e.dept_rnd_cd
and l.fy_cd ='2010');
--when I run the above query, it updates all the data in the column, not just 2010 data, but if I do the following
(select dept_cd from edu_tab e, edu_lkp l
where l.dept_rnd_cd =e.dept_rnd_cd
and l.fy_cd ='2010');
-- I got only 2010 data, so what did I do wrong on the update statement?
Thanks a lot.
Wendy
The data is not correlated based on DEPT_CD,
try this:
update abc_tab abc set abc.dept_cd = replace(abc.dept_cd, 'HRE','HW')
where abc.dept_cd in (select e.dept_cd
from edu_tab e
,edu_lkp l
where l.dept_rnd_cd = e.dept_rnd_cd
and e.dept_cd = abc.dept_cd
and l.fy_cd = '2010');
commit
/
-
1. How to create an explain plan with rowsource statistics for a complex query that include multiple table joins ?
When multiple tables are involved and the actual number of rows returned is more than what the explain plan tells, how can I find out what change is needed in the plan statistics?
2. Do rowsource statistics give some kind of understanding of Extended stats?
You can get Row Source Statistics only *after* the SQL has been executed. An Explain Plan cannot give you row source statistics beforehand.
To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, OR use the hint "gather_plan_statistics" in the SQL being executed.
Then use dbms_xplan.display_cursor
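For example (the query itself is just a placeholder):

```sql
SELECT /*+ gather_plan_statistics */ *
FROM   employees
WHERE  department_id = 80;

SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

The ALLSTATS LAST format shows estimated rows (E-Rows) alongside actual rows (A-Rows) for the last execution, which is exactly the comparison being asked about.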
Hemant K Chitale
-
Issue with updating partitioned table
Hi,
Has anyone seen this bug with updating partitioned tables?
It's very esoteric - it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), when the update involves multiple joins, you're updating the partitioning column, that column isn't the first column in the primary key, and the table contains a bit field. We've tried changing just one of these features and the bug disappears.
We've tested this on 15.5 and 15.7 SP122 and the error occurs in both of them.
Here's the test case - it does the same operation on a partitioned table and a non-partitioned table, but the partitioned table shows an error of "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
I'd be interested if anyone has seen this and has a version of Sybase without the issue.
Unfortunately when it happens on a replicated table - it takes down rep server.
CREATE TABLE #table1
( PK char(8) null,
FileDate date,
changed bit
)
CREATE TABLE partitioned (
PK char(8) NOT NULL,
ValidFrom date DEFAULT current_date() NOT NULL,
ValidTo date DEFAULT '31-Dec-9999' NOT NULL
)
LOCK DATAROWS
PARTITION BY RANGE (ValidTo)
( p2014 VALUES <= ('20141231') ON [default],
p2015 VALUES <= ('20151231') ON [default],
pMAX VALUES <= (MAX) ON [default]
)
CREATE UNIQUE CLUSTERED INDEX pk
ON partitioned(PK, ValidFrom, ValidTo)
LOCAL INDEX
CREATE TABLE unpartitioned (
PK char(8) NOT NULL,
ValidFrom date DEFAULT current_date() NOT NULL,
ValidTo date DEFAULT '31-Dec-9999' NOT NULL,
)
LOCK DATAROWS
CREATE UNIQUE CLUSTERED INDEX pk
ON unpartitioned(PK, ValidFrom, ValidTo)
insert partitioned
select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
insert unpartitioned
select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
insert #table1
select "ET00jPzh", "Jan 15 2015", 1
union all
select "ET00jPzh", "Jan 15 2015", 1
go
update partitioned
set ValidTo = dateadd(dd,-1,FileDate)
from #table1 t
inner join partitioned p on (p.PK = t.PK)
where p.ValidTo = '99991231'
and t.changed = 1
go
update unpartitioned
set ValidTo = dateadd(dd,-1,FileDate)
from #table1 t
inner join unpartitioned u on (u.PK = t.PK)
where u.ValidTo = '99991231'
and t.changed = 1
go
drop table #table1
go
drop table partitioned
drop table unpartitioned
go
wrt to replication - it is a bit unclear, as not enough information has been stated to point out what happened. I also am not sure that your DBAs are accurately telling you what happened - and they may have made the problem worse by not knowing themselves what to do - e.g. 'losing' the log points to the fact that someone doesn't know what they should be doing. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
wrt to ASE versions, I suspect if there are any differences, it may have to do with endian-ness and not the version of ASE itself. There may be other factors.....but I would suggest the best thing would be to open a separate message/case on it.
Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
-- testing with tinyint
1> use demo_db
1>
2> CREATE TABLE #table1
3> ( PK char(8) null,
4> FileDate date,
5> -- changed bit
6> changed tinyint
7> )
8>
9> CREATE TABLE partitioned (
10> PK char(8) NOT NULL,
11> ValidFrom date DEFAULT current_date() NOT NULL,
12> ValidTo date DEFAULT '31-Dec-9999' NOT NULL
13> )
14>
15> LOCK DATAROWS
16> PARTITION BY RANGE (ValidTo)
17> ( p2014 VALUES <= ('20141231') ON [default],
18> p2015 VALUES <= ('20151231') ON [default],
19> pMAX VALUES <= (MAX) ON [default]
20> )
21>
22> CREATE UNIQUE CLUSTERED INDEX pk
23> ON partitioned(PK, ValidFrom, ValidTo)
24> LOCAL INDEX
25>
26> CREATE TABLE unpartitioned (
27> PK char(8) NOT NULL,
28> ValidFrom date DEFAULT current_date() NOT NULL,
29> ValidTo date DEFAULT '31-Dec-9999' NOT NULL,
30> )
31> LOCK DATAROWS
32>
33> CREATE UNIQUE CLUSTERED INDEX pk
34> ON unpartitioned(PK, ValidFrom, ValidTo)
35>
36> insert partitioned
37> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
38>
39> insert unpartitioned
40> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
41>
42> insert #table1
43> select "ET00jPzh", "Jan 15 2015", 1
44> union all
45> select "ET00jPzh", "Jan 15 2015", 1
(1 row affected)
(1 row affected)
(2 rows affected)
1>
2> update partitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join partitioned p on (p.PK = t.PK)
6> where p.ValidTo = '99991231'
7> and t.changed = 1
Msg 2601, Level 14, State 6:
Server 'PHILLY_ASE', Line 2:
Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
Command has been aborted.
(0 rows affected)
1>
2> update unpartitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join unpartitioned u on (u.PK = t.PK)
6> where u.ValidTo = '99991231'
7> and t.changed = 1
(1 row affected)
1>
2> drop table #table1
1>
2> drop table partitioned
3> drop table unpartitioned
-- duplicating with 'int'
1> use demo_db
1>
2> CREATE TABLE #table1
3> ( PK char(8) null,
4> FileDate date,
5> -- changed bit
6> changed int
7> )
8>
9> CREATE TABLE partitioned (
10> PK char(8) NOT NULL,
11> ValidFrom date DEFAULT current_date() NOT NULL,
12> ValidTo date DEFAULT '31-Dec-9999' NOT NULL
13> )
14>
15> LOCK DATAROWS
16> PARTITION BY RANGE (ValidTo)
17> ( p2014 VALUES <= ('20141231') ON [default],
18> p2015 VALUES <= ('20151231') ON [default],
19> pMAX VALUES <= (MAX) ON [default]
20> )
21>
22> CREATE UNIQUE CLUSTERED INDEX pk
23> ON partitioned(PK, ValidFrom, ValidTo)
24> LOCAL INDEX
25>
26> CREATE TABLE unpartitioned (
27> PK char(8) NOT NULL,
28> ValidFrom date DEFAULT current_date() NOT NULL,
29> ValidTo date DEFAULT '31-Dec-9999' NOT NULL,
30> )
31> LOCK DATAROWS
32>
33> CREATE UNIQUE CLUSTERED INDEX pk
34> ON unpartitioned(PK, ValidFrom, ValidTo)
35>
36> insert partitioned
37> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
38>
39> insert unpartitioned
40> select "ET00jPzh", "Jan 7 2015", "Dec 31 9999"
41>
42> insert #table1
43> select "ET00jPzh", "Jan 15 2015", 1
44> union all
45> select "ET00jPzh", "Jan 15 2015", 1
(1 row affected)
(1 row affected)
(2 rows affected)
1>
2> update partitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join partitioned p on (p.PK = t.PK)
6> where p.ValidTo = '99991231'
7> and t.changed = 1
Msg 2601, Level 14, State 6:
Server 'PHILLY_ASE', Line 2:
Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
Command has been aborted.
(0 rows affected)
1>
2> update unpartitioned
3> set ValidTo = dateadd(dd,-1,FileDate)
4> from #table1 t
5> inner join unpartitioned u on (u.PK = t.PK)
6> where u.ValidTo = '99991231'
7> and t.changed = 1
(1 row affected)
1>
2> drop table #table1
1>
2> drop table partitioned
3> drop table unpartitioned
-
Need Help on Joining multiple tables in Golden Gate
Hi,
Can you please help me with some examples of joining multiple tables in GoldenGate? i.e., my requirement is to join Table 1 & Table 2 in Source and load them into Target with 10 fields from Table 1 & 5 fields from Table 2, based on the join condition Table1.key = Table2.key.
I have been trying to do that using SQLEXEC command in Golden Gate. But, is there a way I can do this in the Extract parameter file?
Thanks for your time
Regards
Suresh
Hi,
Thanks a lot for the prompt reply. I am able to do that for the below scenario
Source.T1.Field1
Source.T1.Field2
Source.T2.Field1
Source.T2.Field2
Target Table
T1.Field1, T1.Field2, T2.Field1, T2.Field2.
But, if I already have T2.Field1 in T1, then T1.Field1 takes precedence and gets loaded. i.e., I want to join Table 1 & Table 2 and, based on the matching condition, populate the data from either T1 or T2.
Hope you got my requirement.
Below are the data pump (Extract) file & Replicat file.
EXTRACT dpump
USERID ********, PASSWORD ********
RMTHOST *******, MGRPORT 7809
RMTTRAIL /oracle/gg/dirdat/rt
--PASSTHRU
TABLE TABLE1,
SQLEXEC (ID LOOKUP,
QUERY "SELECT FIELD1 FROM SOURCE.TABLE2 WHERE FIELD1 = :v_field1",
PARAMS ( v_field1 = field1 )),
TOKENS (tk_field_1 = @GETVAL (lookup.field1));
Replicat file
REPLICAT repjoin
ASSUMETARGETDEFS
HANDLECOLLISIONS
USERID *******, PASSWORD ********
MAP SOURCE.T1, TARGET TARGET.GG_TABLE_T1,
COLMAP ( USEDEFAULTS ,
field1 = @token ("tk_field_1"));
I eventually wanted to join like below.
select t1.field1, t1.field2, t2.field1 from t1, t2
where t1.field1 = t2.field1;
Thanks for your time again
Regards
Suresh -
Best way to update a table with disinct values
Hi, I would really appreciate some advice:
I need to regularly perform a task where I update one table with all the new data that has been entered in another table. I can't perform a complete insert as this would create duplicate data every time it runs, so the only way I can think of is using cursors as per the script below:
CREATE OR REPLACE PROCEDURE update_new_mem IS
tmpVar NUMBER;
CURSOR c_mem IS
SELECT member_name,member_id
FROM gym.members;
crec c_mem%ROWTYPE;
BEGIN
OPEN c_mem;
LOOP
FETCH c_mem INTO crec;
EXIT WHEN c_mem%NOTFOUND;
BEGIN
UPDATE gym.lifts
SET name = crec.member_name
WHERE member_id = crec.member_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
IF SQL%NOTFOUND THEN
BEGIN
INSERT INTO gym.lifts
(name,member_id)
VALUES (crec.member_name,crec.member_id);
END;
END IF;
END LOOP;
CLOSE c_mem;
END update_new_mem;
This method works but is there an easier (faster) way to update another table with new data only?
Many thanks
>
This method works but is there an easier (faster) way to update another table with new data only?
>
Almost anything would be better than that slow-by-slow loop processing.
You don't need a procedure; you should just use MERGE for that. See the examples in the MERGE section of the SQL Language doc:
http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm
MERGE INTO bonuses D
USING (SELECT employee_id, salary, department_id FROM employees
WHERE department_id = 80) S
ON (D.employee_id = S.employee_id)
WHEN MATCHED THEN UPDATE SET D.bonus = D.bonus + S.salary*.01
DELETE WHERE (S.salary > 8000)
WHEN NOT MATCHED THEN INSERT (D.employee_id, D.bonus)
VALUES (S.employee_id, S.salary*.01)
WHERE (S.salary <= 8000);
-
[ADF BC | ADF Faces] Updating a table with values updated in transaction
Summary: A table is based on a view with a bind variable. Based on some action, I update rows in the view so that they no longer satisfy the query. How do I update the table to reflect this?
I have an ADF Faces page with a multi-select table component on it, mapped to an ADF Business Components view object. The view object has a bind variable (:Status), which is used in the query to only retrieve rows with a specific status.
The user selects rows in the table that should have their status changed, and then hits a button to perform the action. In the backing bean for the page, the button click is processed and the selected rows are updated to a different status. At this point the transaction is not committed.
Changing the status of these rows now means that they no longer match the where clause of the view object, yet they still appear in the table.
My question is, how can I update the table so that rows which don't match the where clause are removed?
This is a simplified version of the problem I am working on - this needs to work when rows that aren't in the table are being updated so that they now MATCH the where clause, and rows may also be added or completely removed.
Quite an interesting little problem....
I have created a simple test case to illustrate your scenario - perhaps Steve M could comment...
My scenario:
1). create an EO based on the HR.EMPLOYEES table
2). create an updatable VO based upon the aforementioned EO. Add "COMMISSION_PCT IS NULL" to the where clause.
3). Add the VO to an app module as usual.
4). Create an ADF Faces page by dropping the VO from the data control as an ADF table. Show only first name, last name, and commission pct.
5). Add a command button to the page so I can call various code from the backing bean.
Now, when I update the commission for the first record to ".1" and press the submit button (the one in the table actions facet), the update "takes" - I assume that it's just in the EO cache at this point. The record still shows on the screen, even though it no longer meets the where clause criteria.
When I add some code to the command button (#5) to perform a commit - the record STILL shows on the screen after committing to the DB. I can make the record disappear by either (in the same backing-bean method that does the commit)
a). Re-execute the iterator after the commit
b). Call "clearVOCaches" on the application module (forces the iterator to re-execute)
I even tried programmatically adding the where clause back in (via addWhereClause) and re-executing the iterator - no go. I suspect that the only way to do this would be to post the changes to the DB (without committing them) and then re-execute the query. This, however, would be a killer from a scalability perspective.
John
-
How to update a table with huge data
Hi,
I have a scenario where I need to update tables that hold huge amounts of data (each table has more than 10,00000 rows).
I am writing this update in PLSQL block.
Is there any way to improve the performance of this update statement? Please suggest...
Thanks.
user10403630 wrote:
I am storing all tables and columns that need to be updated in tab_list and forming a dynamic qry.
for i in (select * from tab_list)
loop
v_qry := 'update '||i.table_name||' set '||i.column_name||' = '''||new_value||''' where '||i.column_name||' = '''||old_value||'''';
execute immediate v_qry;
end loop;
Sorry to say so, but this code is awful!
The only thing that would make it even slower would be to add a commit inside the loop.
Some advice follows, though I'm not sure which option works in your case.
The fastest way to update a million rows is: write a single update statement. On typical systems this should only run for like a couple of minutes.
if you need to update several tables then write a single update for each table.
If you have different values that need to be updated then find a way how to consider those different values in a single update or merge statement. Either by joining another table or by using some in-lists.
e.g.
update myTable
set mycolumn = decode(mycolumn
,'oldValue1','newvalue1'
,'oldValue2','newvalue2'
,'oldValue3','newvalue3')
where mycolumn in ('oldValue1','oldvalue2','oldvalue3'....);
If you need to do this in pl/sql then
1) use bind variables to avoid hard parsing the same statement again and again
2) use bulk binding to avoid pl/sql context switches -
Need Fields KDAUF, KDPOS, BWART update in table GLPCA for movement type 633
Hi All,
I need following field update in Table GLPCA for movement type 633 (Consignment Issue):
KDAUF : sale document no.
KDPOS : Order Line Otem
BWART : Movement Type
The same fields are updated for movement type 601 (Standard order). Also I checked with Table MSEG and find the same as above.
Thanks in advance.
AT
You should use the BAPI BAPI_GOODSMVT_CREATE instead of BDC.
http://www.sap-img.com/abap/bapi-goodsmvt-create-to-post-goods-movement.htm
Welcome to SDN. Please be sure to award points for any helpful answers and mark your post as solved when solved completely. Thanks.
REgards,
Rich Heilman -
3 Table Joins -- Need a more efficient Query
I need a 3 table join but need to do it more efficiently than I am currently doing. The query is taking too long to execute (in excess of 20 mins. These are huge tables with 10 mil + records). Here is what the query looks like right now. I need 100 distinct acctnum from the below query with all the conditions as requirements.
THANKS IN ADVANCE FOR HELP!!!
SELECT /*+ parallel */ *
FROM (SELECT /*+ parallel */ DISTINCT (a.acctnum),
a.acctnum_status,
a.sys_creation_date,
a.sys_update_date,
c.comp_id,
c.comp_lbl_type,
a.account_sub_type
FROM account a
LEFT JOIN
company c
ON a.comp_id = c.comp_id AND c.comp_lbl_type = 'IND',
subaccount s
WHERE a.account_type = 'I'
AND a.account_status IN ('O', 'S')
and s.subaccount_status in ('A','S')
AND a.account_sub_type NOT IN ('G', 'V')
AND a.SYS_update_DATE <= SYSDATE - 4 / 24)
where ROWNUM <= 100;
Hi,
Whenever you have a question, post CREATE TABLE and INSERT statements for a little sample data, and the results you want from that data. Explain how you get those results from that data.
Simplify the problem, if possible. If you need 100 distinct rows, post a problem where you only need, say, 3 distinct rows. Just explain that you really need 100, and you'll get a solution that works for either 3 or 100.
Always say which version of Oracle you're using (e.g. 11.2.0.3.0).
See the forum FAQ: https://forums.oracle.com/message/9362002
For tuning problems, also see https://forums.oracle.com/message/9362003
Are you sure the query you posted is even doing what you want? You're cross-joining s to the other tables, producing all possible combinations of rows, and then picking 100 of those in no particular order (not even random order). That's not necessarily wrong, but it certainly is suspicious.
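To make that concrete: if subaccount does relate to account - say by acctnum, which is only a guess since the post never shows a join column - the accidental cross join would be replaced with an explicit join, something like:

```sql
SELECT DISTINCT a.acctnum,
       a.acctnum_status,
       c.comp_id
FROM account a
LEFT JOIN company c
  ON a.comp_id = c.comp_id
 AND c.comp_lbl_type = 'IND'
JOIN subaccount s
  ON s.acctnum = a.acctnum      -- hypothetical join column
WHERE a.account_type = 'I'
  AND a.account_status IN ('O', 'S')
  AND s.subaccount_status IN ('A', 'S');
```

With the join condition in place, each account row pairs only with its own subaccount rows instead of every row in subaccount.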
If you're only interested in 100 rows, there's probably some way to write the query so that it picks 100 rows from the big tables first.
-
Updating a table with billion rows
It was an interview question: what's the best way to update a table with 10 billion rows? Give me your suggestions. Thanks in advance.
svk
The best way to answer questions such as this is NOT with an absolute and specific answer. Instead, discuss your strategy for approaching the problem. The first step is to understand your exact requirement. It is surprising how often people
write update statements with an under-qualified where clause. NEVER update a row that does not need to be updated. For example, a statement like:
update mytable set cola = 'ABC' where id in (1, 45, 212);
Assuming id is unique for the table and the specified values exist in the table, we know 3 rows will be updated. Do all of those rows need to be updated? Think about it. If cola is already set to 'ABC' for any of those rows, we could ignore
those rows and make the update more efficient. To do that, you need to add "and cola <> 'ABC' " to the where clause. That is just one example of understanding exactly what you need to do - and doing only that which needs to be done.
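As a sketch, the improved statement would read:

```sql
update mytable
set    cola = 'ABC'
where  id in (1, 45, 212)
and    cola <> 'ABC';  -- skip rows already at the target value
```

(If cola is nullable you would also want "or cola is null" in that last predicate, since NULL <> 'ABC' does not evaluate to true.)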
Once you understand exactly what you need to do, you need to analyze the impact of the update and identify any potential issues. Updating a lot of rows can take a lot of time and consume large amounts of log and disk space. What else is using
the table? Can you afford to lock the table for the duration of the update? Are there concurrency issues, regardless of whether you update in batches or in one single statement? When using a batch approach, is there an issue if someone runs
a query against the table (i.e., the result is different from that of the same query run after all updates have been completed)? Are you changing something which is included in an index? Are you changing part of the clustered index?
Ultimately, every question you are asked is (or should be) designed to test your problem-solving skills and your skillset. IMO, it is relatively easy to improve your skillset of any particular tool, language, or environment. The other - not so much
and that is why they are more valuable IMO.
-
Updating database table with DUPLICATE keys
i have an internal table having data as follows.
emp_id name date proj_id activity_id Hours Remarks
101 Pavan 09.10.2007 123 1 2.00 Coding
101 Pavan 09.10.2007 124 2 1.00 Documentation
102 Raj 09.10.2007 123 3 6.00 Testing
Now I need to update a Ztable with the above-mentioned data.
The structure of the Ztable is as follows.
Mandt emp_id name date proj_id activity_id Hours Remarks
Note: I have ticked both check boxes for the field MANDT in the table;
for the rest I didn't select the check boxes.
I believe the field MANDT alone is now the primary key for the z-table.
Now I have tried UPDATE/INSERT statements to update the database table.
But instead of inserting all the rows, the system is overwriting the same emp_id row.
I even tried statements like INSERT INTO <Ztable> VALUES <Internal table> ACCEPTING DUPLICATE KEYS.
But it's not getting inserted as separate rows in the table.
Requirement is to insert the multiple rows in the database table without any over writing activity.
Can anyone give me the code to do this?
Regards
Pavan
Hi Pavan,
Please let me know what the key fields in your Ztable are. Try the code below; it may help you. Change "ZTABLENAME" to your database table name and I_INTERNALTABLE to your internal table name. If you are still facing the problem, please let me know.
Lock the custom table before updating it:
CALL FUNCTION 'ENQUEUE_E_TABLE'
EXPORTING
MODE_RSTABLE = 'E'
TABNAME = 'ZTABLENAME'
VARKEY =
X_TABNAME = ' '
X_VARKEY = ' '
_SCOPE = '2'
_WAIT = ' '
_COLLECT = ' '
EXCEPTIONS
FOREIGN_LOCK = 1
SYSTEM_FAILURE = 2
OTHERS = 3.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ELSE.
INSERT ZTABLENAME FROM TABLE I_INTERNALTABLE ACCEPTING DUPLICATE KEYS.
COMMIT WORK.
ENDIF.
Unlock the custom table after the update is done:
CALL FUNCTION 'DEQUEUE_E_TABLE'
EXPORTING
MODE_RSTABLE = 'E'
TABNAME = 'ZTABLENAME'
VARKEY =
X_TABNAME = ' '
X_VARKEY = ' '
_SCOPE = '3'
_SYNCHRON = ' '
_COLLECT = ' '.