Updating a big table
Hello,
I want to update a column in a table (set it to NULL), but the table is very big, and if I write something like
update my_table set column_name = null
it takes a very long time (probably hours).
How can I optimize that? Is there any workaround to reduce the time? Can I update the first 10000 records, then commit, then the next 10000 and commit, and so on? (I think that is a solution, but I am not sure.) But I want to avoid CREATE TABLE AS SELECT.
Thanks
I'm not sure an index is actually going to help here. You mentioned you are updating 80% of the rows in the table; adding an index would make this worse, since you would be updating both the table and the index values.
Why is the time taken a problem? Is it causing a performance problem or errors in the application when you run this?
One alternative would be to use DBMS_REDEFINITION to create a new table with the column set to null. This is a more complex solution and is likely to take even longer in terms of effort and elapsed time, but the actual switchover to the newly defined table (with the null column) would be much shorter. You would need to test this to determine the impact in terms of performance and space usage; how you deal with constraints and indexes may also depend on the version of Oracle you are running.
Chris
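For reference, the batch-and-commit approach the original poster asks about could be sketched like this in PL/SQL (table and column names are placeholders; each pass updates up to 10000 rows and commits):

```sql
BEGIN
  LOOP
    -- Update one batch of rows that still need clearing
    UPDATE my_table
       SET column_name = NULL
     WHERE column_name IS NOT NULL
       AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to update
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```

Note that committing in batches trades atomicity for smaller undo usage per transaction, and the repeated scans can make this slower overall than a single UPDATE; it mainly helps when undo/rollback space is the constraint.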
Similar Messages
-
How to UPDATE a big table in Oracle via Bulk Load
Hi all,
in a target datastore on Oracle 11g, I have a big table with 300 million records; the structure is one integer key + 10 attribute columns.
In the IQ source I have the same table with the same size; the structure is one integer key + 1 attribute column.
What I need to do is UPDATE that single field in Oracle from the values stored in IQ.
Any idea how to organize the dataflow and the target writing mode efficiently? Bulk load? API?
thank you
Maurizio
Hi,
You cannot do a bulk load when you need to UPDATE a field, because all a bulk load does is add records to your table.
Since you have to UPDATE a field, I would suggest going for SCD with
source > TC > MO > KG > target
Arun -
Snapshot too old when deleting from a "big" table
Hello.
I think this is a basic question (release 8.1.7.4). I must admit I don't know how rollback segments really work.
A table, where new records are continuously inserted and the old ones are updated in short transactions, should be purged every day by deleting old records.
This purge has never been done, and as a result the table now has almost 4 million records. When I launch the stored procedure that deletes the old records I get the "snapshot too old" error because of read consistency.
If I launch the procedure after stopping the application that inserts and updates the table, I don't get the error. I guess the problem is that while the procedure is executing, other transactions also need to use rollback segments, so the rollback segment space that the snapshot needs isn't enough. Do you think this is the problem?
If this is the case, then I suppose the only solution is increasing the size of the only datafile of the only tablespace for my 4 rollback segments. Am I wrong?
(Three more questions:
- Could the problem be solved by locking some rollback segments for the snapshot? How could I do that?
- What is a discrete transaction?
I'm a developer, not a DBA, but please don't tell me to ask my DBA, because it isn't that easy. Thanks in advance.
"Snapshot too old indicates the undo tablespace does not have enough free space for a long running query" — what does this mean? Why do I get the same error in two different databases, when in the first the undo tablespace datafile is 2 GB while in the second it is only 2 MB? How can I know how big the datafile has to be?
One possible solution could be deleting not the whole table at once but only a few records at a time? Would this work? And why, when I try "select count(*) from my_table where rownum = 1", do I also get "snapshot too old" while other transactions are running? -
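The batch-delete idea raised in the previous message could be sketched as follows (table name and purge condition are placeholders; each pass deletes up to 10000 old rows in its own short transaction, so no single transaction needs much rollback):

```sql
BEGIN
  LOOP
    DELETE FROM my_table
     WHERE created_date < SYSDATE - 30   -- hypothetical purge condition
       AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing old left to delete
    COMMIT;
  END LOOP;
END;
/
```

Because each restarted DELETE is a fresh, short query, this reduces (but does not fully eliminate) the exposure to ORA-01555 while the application keeps running.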
I have Always On synchronization on some databases. We want to change from asynchronous mode to synchronous mode, and I tested the performance difference between the two. I found that in general the SELECT statements keep the same duration with and without synchronous mode, and most UPDATE/INSERT statements take around 4 times as long as in asynchronous mode, BUT some updates take much longer. I would like to understand why some things take MUCH longer (I understand that synchronous mode is a two-phase commit and should take about 4 times as long, but I can't understand why some of them take 50-100 times more).
1. One query is an update on a table with few records (up to 40 records). This update runs 250K times a day, and a very simple update on this table takes 98 times as long as without synchronous mode.
2. An update on a big table with 2 million records, using a query that specifies the unique column in the primary clustered key, takes 50 times as long.
What are the factors that have a dramatic influence on performance when synchronous (two-phase) commit is used?
I've never even looked at the details for SQL Server, but on any kind of system doing synchronous updates, you have to figure there is a fixed cost, a variable cost, and a queuing cost. The fixed cost always hits whether it's a big transaction or a small one; let's guess it's about 0.1 seconds for a round trip. The variable cost depends on how much is updated; let's say it's linear, maybe 0.05 seconds for a couple of small rows. The queuing cost varies from 0 to huge.
So if you have a small transaction on a local system that takes 0.05 seconds (50 milliseconds), then a single one will incur the local 0.05, plus 0.10, plus 0.05, and zero queuing cost, so 0.05 -> 0.20, about 4x.
But if you have something that runs 250k times a day, I hope it's faster; maybe it's only 0.01 seconds locally, and that turns into 0.01 + 0.10 + 0.01 + q, so 0.01 -> 0.12 + q, which is at *least* 12x slower. If you get 100 of them in the same second you start incurring queuing delays as well; in fact you may have similar queuing delays on the local and remote systems besides any communications queuing. And if just the synchronization system has some queuing limits and the queuing cost gets to 0.30 seconds, then 0.01 -> 0.42, or 42x, and you start seeing what can happen. If your local transaction is only 0.001 seconds when not synchronized, you'd have a 100x slowdown just on the fixed overhead!
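The back-of-envelope model above can be sketched in a few lines; the fixed, variable, and queuing costs are the guessed values from the text, not measurements:

```python
# Hypothetical cost model for a synchronously replicated commit.
# fixed_s  : round-trip overhead (guessed ~0.1 s above)
# variable_s: payload-dependent cost (guessed ~0.05 s for small rows)
# queue_s  : queuing delay, anywhere from 0 to huge
def synchronized_time(local_s, fixed_s=0.10, variable_s=0.05, queue_s=0.0):
    """Return (elapsed seconds, slowdown factor vs. the local-only time)."""
    total = local_s + fixed_s + variable_s + queue_s
    return total, total / local_s

# A 50 ms local transaction becomes ~0.20 s, i.e. about 4x slower.
elapsed, slowdown = synchronized_time(0.05)
```

The model makes the asymmetry obvious: the faster the local transaction, the larger the relative penalty of the fixed round-trip cost.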
HTH,
Josh -
Updating a mysql table from several machines
Hi,
I have a big table with about 100,000 records (fileID, etc.), and I need to get the fileIDs from this table, then analyze the files and make new tables based on the results.
Since the table is huge, I want to do the analysis on multiple machines.
Currently, I use "LIMIT start, offset" to select 1000 fileIDs per SELECT and process them on one machine.
The problem is that if I run the same code on another machine, it may select duplicate records.
Is there a way I can "lock" the selected rows from the other machines (not only lock the UPDATE, but also lock the READ, so the other machines will not do redundant work)?
Thanks for your attention.
Iper
I think it would be better to multi-thread your processing solution instead of running on multiple machines.
If you really must run on different machines then you'll have to come up with a better way. Something whereby you fetch the result set on only one machine and then divide it up from there would be one way. -
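One common pattern for the work-claiming problem in the previous message is to have each worker mark a batch of rows as its own before processing. This is a hedged sketch only: it assumes an added `claimed_by` column, and the table/column names are placeholders:

```sql
-- Worker atomically claims up to 1000 unclaimed rows...
UPDATE files
   SET claimed_by = 'worker-1'
 WHERE claimed_by IS NULL
 LIMIT 1000;

-- ...then processes only the rows it claimed.
SELECT fileID
  FROM files
 WHERE claimed_by = 'worker-1';
```

Because the UPDATE is a single atomic statement, two workers running it concurrently cannot claim the same row, which avoids the duplicate work without explicit read locks.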
Very Big Table (36 Indexes, 1000000 Records)
Hi
I have a very big table (76 columns, 1,000,000 records). These 76 columns include 36 foreign key columns; each FK has an index on the table, and only one of these FK columns has a value at any given time while all the others are NULL. All these FK columns are of type NUMBER(20,0).
I am facing a performance problem which I want to resolve, taking into consideration that this table is used with DML (Insert, Update, Delete) as well as query (Select) operations; all these operations and queries run daily. I want to improve this table's performance, and I am considering these scenarios:
1- Replace all these 36 FK columns with 2 columns (ID, TABLE_NAME) (ID for master table ID value, and TABLE_NAME for master table name) and create only one index on these 2 columns.
2- partition the table using its YEAR column, keep all FK columns but drop all indexes on these columns.
3- partition the table using its YEAR column, and drop all FK columns, create (ID,TABLE_NAME) columns, and create index on (TABLE_NAME,YEAR) columns.
Which way has more efficiency?
Do I have to take "master-detail" relations in mind when building Forms on this table?
Are there any other suggestions?
I am using Oracle 8.1.7 database.
Please help.
Hi everybody,
I would like to thank you for your cooperation, and I will try to answer your questions. Please note that I am a developer in the first place and I am new to Oracle database administration, so please forgive me if I make any mistakes.
Q: Have you gathered statistics on the tables in your database?
A: No I did not. And if I must do it, must I do it for all database tables or only for this big table?
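For reference, gathering statistics on a single table can be done with DBMS_STATS; this is a minimal sketch (the schema and table names are placeholders, and available options vary by Oracle version):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'MYSCHEMA',      -- placeholder schema
    tabname => 'MY_BIG_TABLE',  -- placeholder table
    cascade => TRUE);           -- also gather statistics on its indexes
END;
/
```

Starting with the one problem table is reasonable, but the cost-based optimizer generally works best when all tables involved in the slow queries have current statistics.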
Q: Actually, tracing the session with 10046 level 8 will give a clearer idea of where your query is waiting.
A: Actually I do not know what you mean by "10046 level 8".
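For reference, "10046 level 8" refers to Oracle's extended SQL trace, which records wait events for each statement. A minimal sketch of enabling it for one's own session:

```sql
-- Level 8 includes wait events in the trace file.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

-- ... run the slow query here ...

ALTER SESSION SET EVENTS '10046 trace name context off';
```

The trace file appears in the instance's user_dump_dest directory and is usually formatted with the tkprof utility before reading.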
Q: what OS and what kind of server (hardware) are you using
A: I am using Windows2000 Server operating system, my server has 2 Intel XEON 500MHz + 2.5GB RAM + 4 * 36GB Hard Disks(on RAID 5 controller).
Q: how many concurrent user do you have an how many transactions per hour
A: I have 40 concurrent users and an average of 100 transactions per hour, but the peak can go up to 1000 transactions per hour.
Q: How fast should your queries be executed
A: I want the queries to execute in about 10 to 15 seconds, or else everybody here will complain. Please note that because this table is heavily used, there is a very good chance that 2 or more transactions exist at the same time, one performing a query and the other performing a DML operation. Some of these queries are used in reports, and a report query can be long (e.g. retrieving a summary of 50000 records).
Q: Please show us the explain plans of these queries.
A: If I understand your question, you are asking me to show you the explain plans of those queries. Well, first, I do not know how, and second, I think it is a big question because I cannot collect every kind of query that has been written against this table (some of them exist in server packages, and others are issued by Forms or Reports). -
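For the "I do not know how" part of the previous answer, a hedged sketch of producing an execution plan on 8.1.7 (the statement and names are placeholders; later releases would use DBMS_XPLAN instead):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM my_big_table WHERE year_col = 2001;  -- placeholder query

-- On 8.1.7 there is no DBMS_XPLAN; read PLAN_TABLE directly:
SELECT LPAD(' ', 2 * level) || operation || ' ' || options
       || ' ' || object_name AS plan_step
  FROM plan_table
 START WITH id = 0
CONNECT BY PRIOR id = parent_id;
```

This assumes a PLAN_TABLE already exists in the schema (it is created by the utlxplan.sql script shipped with the database).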
How to Slice big table in chunks
I am trying to derive a piece of generic logic that would cut in chunks of definite size any big table. The goal is to perform update in chunks and avoid rollback too small issues. The full table scan on the update is unavoidable, since the update target every row of the table.
The BIGTABLE has 63 millions rows. The purpose of the bellow SQL to give the ROWID every two million rows. So I am using the auto row numering field 'rownum' and perfrom a test to see I could. I expected the fist chunk to have 2 millons rows but in fact it is not the case:
Here is the code +(NOTE I had many problems with quotes, so some ROWID appears without their enclosing quotes or they disappear from current output here)+:
select rn, mod, frow, rownum from (
select rowid rn , rownum frow, mod(rownum, 2000000) mod
from bigtable order by rn) where mod = 0
SQL> /
RN MOD FROW ROWNUM
AAATCjAA0AAAKAVAAd 0 4000000 1
AAATCjAA0AAAPUEAAv 0 10000000 2
AAATCjAA0AAAbULAAx 0 6000000 3
AAATCjAA0AAAsIeAAC 0 14000000 4
AAATCjAA0AAAzhSAAp 0 8000000 5
AAATCjAA0AABOtGAAa 0 26000000 6
AAATCjAA0AABe24AAE 0 16000000 7
AAATCjAA0AABjVgAAQ 0 30000000 8
AAATCjAA0AABn4LAA3 0 32000000 9
AAATCjAA0AAB3pdAAh 0 20000000 10
AAATCjAA0AAB5dmAAT 0 22000000 11
AAATCjAA0AACrFuAAW 0 36000000 12
AAATCjAA6AAAXpOAAq 0 2000000 13
AAATCjAA6AAA8CZAAO 0 18000000 14
AAATCjAA6AABLAYAAj 0 12000000 15
AAATCjAA6AABlwbAAg 0 52000000 16
AAATCjAA6AACBEoAAM 0 38000000 17
AAATCjAA6AACCYGAA1 0 24000000 18
AAATCjAA6AACKfBABI 0 28000000 19
AAATCjAA6AACe0cAAS 0 34000000 20
AAATCjAA6AAFmytAAf 0 62000000 21
AAATCjAA6AAFp+bAA6 0 60000000 22
AAATCjAA6AAF6RAAAQ 0 44000000 23
AAATCjAA6AAHJjDAAV 0 40000000 24
AAATCjAA6AAIR+jAAL 0 42000000 25
AAATCjAA6AAKomNAAE 0 48000000 26
AAATCjAA6AALdcMAA3 0 46000000 27
AAATCjAA9AAACuuAAl 0 50000000 28
AAATCjAA9AABgD6AAD 0 54000000 29
AAATCjAA9AADiA2AAC 0 56000000 30
AAATCjAA9AAEQMPAAT 0 58000000 31
31 rows selected.
SQL> select count(*) from BIGTABLE where rowid < AAATCjAA0AAAKAVAAd ;
COUNT(*)
518712 <-- expected around 2 000 000
SQL> select count(*) from BIGTABLE where rowid < AAATCjAA0AAAPUEAAv ;
COUNT(*)
1218270 <-- expected around 4 000 000
SQL> select count(*) from BIGTABLE where rowid < AAATCjAA0AAAbULAAx ;
COUNT(*)
2685289 <-- expected around 6 000 000
Amazingly, this code works perfectly for small tables but fails for big ones. Does anybody have an explanation and possibly a solution to this?
Here is the full code of the SQL that is supposed to generate all the predicates I need to add to the UPDATE statements in order to cut them into pieces:
select line from (
with v as (select rn, mod, rownum frank from (
select rowid rn , mod(rownum, 2000000) mod
from BIGTABLE order by rn ) where mod = 0),
v1 as (
select rn , frank, lag(rn) over (order by frank) lag_rn from v ),
v0 as (
select count(*) cpt from v)
select 1, case
when frank = 1 then ' and rowid < ''' || rn || ''''
when frank = cpt then ' and rowid >= ''' || lag_rn ||''' and rowid < ''' ||rn || ''''
else ' and rowid >= ''' || lag_rn ||''' and rowid <'''||rn||''''
end line
from v1, v0
union
select 2, case
when frank = cpt then ' and rowid >= ''' || rn || ''''
end line
from v1, v0 order by 1)
and rowid < AAATCjAA0AAAKAVAAd
and rowid >= 'AAATCjAA0AAAKAVAAd' and rowid < 'AAATCjAA0AAAPUEAAv''
and rowid >= 'AAATCjAA0AAAPUEAAv' and rowid < 'AAATCjAA0AAAbULAAx''
and rowid >= 'AAATCjAA0AAAbULAAx' and rowid < 'AAATCjAA0AAAsIeAAC''
and rowid >= 'AAATCjAA0AAAsIeAAC' and rowid < 'AAATCjAA0AAAzhSAAp''
and rowid >= 'AAATCjAA0AAAzhSAAp' and rowid < 'AAATCjAA0AABOtGAAa''
and rowid >= 'AAATCjAA0AAB3pdAAh' and rowid < 'AAATCjAA0AAB5dmAAT''
and rowid >= 'AAATCjAA0AAB5dmAAT' and rowid < 'AAATCjAA0AACrFuAAW''
and rowid >= 'AAATCjAA0AABOtGAAa' and rowid < 'AAATCjAA0AABe24AAE''
and rowid >= 'AAATCjAA0AABe24AAE' and rowid < 'AAATCjAA0AABjVgAAQ''
and rowid >= 'AAATCjAA0AABjVgAAQ' and rowid < 'AAATCjAA0AABn4LAA3''
and rowid >= 'AAATCjAA0AABn4LAA3' and rowid < 'AAATCjAA0AAB3pdAAh''
and rowid >= 'AAATCjAA0AACrFuAAW' and rowid < 'AAATCjAA6AAAXpOAAq''
and rowid >= 'AAATCjAA6AAA8CZAAO' and rowid < 'AAATCjAA6AABLAYAAj''
and rowid >= 'AAATCjAA6AAAXpOAAq' and rowid < 'AAATCjAA6AAA8CZAAO''
and rowid >= 'AAATCjAA6AABLAYAAj' and rowid < 'AAATCjAA6AABlwbAAg''
and rowid >= 'AAATCjAA6AABlwbAAg' and rowid < 'AAATCjAA6AACBEoAAM''
and rowid >= 'AAATCjAA6AACBEoAAM' and rowid < 'AAATCjAA6AACCYGAA1''
and rowid >= 'AAATCjAA6AACCYGAA1' and rowid < 'AAATCjAA6AACKfBABI''
and rowid >= 'AAATCjAA6AACKfBABI' and rowid < 'AAATCjAA6AACe0cAAS''
and rowid >= 'AAATCjAA6AACe0cAAS' and rowid < 'AAATCjAA6AAFmytAAf''
and rowid >= 'AAATCjAA6AAF6RAAAQ' and rowid < 'AAATCjAA6AAHJjDAAV''
and rowid >= 'AAATCjAA6AAFmytAAf' and rowid < 'AAATCjAA6AAFp+bAA6''
and rowid >= 'AAATCjAA6AAFp+bAA6' and rowid < 'AAATCjAA6AAF6RAAAQ''
and rowid >= 'AAATCjAA6AAHJjDAAV' and rowid < 'AAATCjAA6AAIR+jAAL''
and rowid >= 'AAATCjAA6AAIR+jAAL' and rowid < 'AAATCjAA6AAKomNAAE''
and rowid >= 'AAATCjAA6AAKomNAAE' and rowid < 'AAATCjAA6AALdcMAA3''
and rowid >= 'AAATCjAA6AALdcMAA3' and rowid < 'AAATCjAA9AAACuuAAl''
and rowid >= 'AAATCjAA9AAACuuAAl' and rowid < 'AAATCjAA9AABgD6AAD''
and rowid >= 'AAATCjAA9AABgD6AAD' and rowid < 'AAATCjAA9AADiA2AAC''
and rowid >= 'AAATCjAA9AADiA2AAC' and rowid < 'AAATCjAA9AAEQMPAAT''
and rowid >= 'AAATCjAA9AAEQMPAAT''
33 rows selected.
SQL> select count(*) from BIGTABLE where 1=1 and rowid < AAATCjAA0AAAKAVAAd ;
COUNT(*)
518712
SQL> select count(*) from BIGTABLE where 1=1 and rowid >= 'AAATCjAA9AAEQMPAAT'' ;
COUNT(*)
1846369
Nice, but not accurate...
Yes, it works as intended now (ROWNUM is now assigned after the inner ORDER BY, so the numbering follows rowid order): (still this annoying issue of quotes, so some ROWIDs appear without enclosing quotes)
select rn, mod, frow, rownum from (
select rn, rownum frow, mod(rownum, 2000000) mod
from (select rowid rn from BIGTABLE order by rn)
order by rn )
where mod = 0
SQL> /
RN MOD FROW ROWNUM
AAATCjAA0AAAVNlAAQ 0 2000000 1
AAATCjAA0AAAlxyAAS 0 4000000 2
AAATCjAA0AAA2CRAAQ 0 6000000 3
AAATCjAA0AABFcoAAn 0 8000000 4
AAATCjAA0AABVIDAAi 0 10000000 5
AAATCjAA0AABoSEAAU 0 12000000 6
AAATCjAA0AAB3YrAAf 0 14000000 7
AAATCjAA0AACE+oAAS 0 16000000 8
AAATCjAA0AACR6dAAR 0 18000000 9
AAATCjAA0AACe8AAAa 0 20000000 10
AAATCjAA0AACt3CAAS 0 22000000 11
AAATCjAA6AAAPXrAAT 0 24000000 12
AAATCjAA6AAAgO4AA5 0 26000000 13
AAATCjAA6AAAwKfAAu 0 28000000 14
AAATCjAA6AABAQBAAH 0 30000000 15
AAATCjAA6AABREdAA9 0 32000000 16
AAATCjAA6AABhFIAAT 0 34000000 17
AAATCjAA6AABxyZAAj 0 36000000 18
AAATCjAA6AACA5CAAm 0 38000000 19
AAATCjAA6AACNJBAAN 0 40000000 20
AAATCjAA6AACbLgAAV 0 42000000 21
AAATCjAA6AACoukAAD 0 44000000 22
AAATCjAA6AAFsS8AAO 0 46000000 23
AAATCjAA6AAF36JAAa 0 48000000 24
AAATCjAA6AAHJzoAAv 0 50000000 25
AAATCjAA6AAKMCHAAv 0 52000000 26
AAATCjAA6AAL2RbAAT 0 54000000 27
AAATCjAA9AABLLbAAH 0 56000000 28
AAATCjAA9AACmSyAAA 0 58000000 29
AAATCjAA9AAEO/nAAe 0 60000000 30
AAATCjAA9AAEbC7AAI 0 62000000 31
31 rows selected.
SQL> select count(*) cpt from BIGTABLE where rowid < AAATCjAA0AAAVNlAAQ ;
CPT
1999999
SQL> select count(*) cpt from BIGTABLE where rowid >= 'AAATCjAA6AAAgO4AA5' and rowid < AAATCjAA6AAAwKfAAu ;
CPT
2000000 -
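As an aside to the chunking exercise above, the boundary rowids can also be derived with the NTILE analytic function, which splits the ordered rowids into equal buckets. A hedged sketch (bucket count and table name are placeholders):

```sql
-- Split BIGTABLE's rowids into 32 roughly equal chunks and report the
-- first and last rowid of each chunk.
SELECT grp,
       MIN(rid) AS first_rowid,
       MAX(rid) AS last_rowid
  FROM (SELECT rowid AS rid,
               NTILE(32) OVER (ORDER BY rowid) AS grp
          FROM bigtable)
 GROUP BY grp
 ORDER BY grp;
```

Each (first_rowid, last_rowid) pair then becomes a `rowid BETWEEN ... AND ...` predicate for one chunked UPDATE, avoiding the hand-built quoting in the generated predicates above.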
Jython error while updating a oracle table based on file count
Hi,
I have a Jython procedure for counting records in a flat file.
Here is the code (taken from odiexperts and modified), and I am getting errors; could somebody take a look and let me know what the SQL exception in this code is?
COMMAND on target: Jython
Command on source: Oracle -- and specified the logical schema
Without connecting to the database using the JDBC connection I can see the output successfully, but I want to update the Oracle table with the count. Any help is greatly appreciated.
---------------------------------Error-----------------------------
org.apache.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 45, in ?
java.sql.SQLException: ORA-00936: missing expression
---------------------------------------Code--------------------------------------------------
import java.sql.Connection
import java.sql.Statement
import java.sql.DriverManager
import java.sql.ResultSet
import java.sql.ResultSetMetaData
import os
import string
import java.sql as sql
import java.lang as lang
import re
filesrc = open('c:\mm\xyz.csv', 'r')
first = filesrc.readline()
lines = 0
while first:
    # get the no of lines in the file
    lines += 1
    first = filesrc.readline()
# print lines
## THE ABOVE PART OF THE PROGRAM IS TO COUNT THE NUMBER OF LINES
## AND STORE IT INTO THE VARIABLE `LINES`
def intWithCommas(x):
    if type(x) not in [type(0), type(0L)]:
        raise TypeError("Parameter must be an integer.")
    if x < 0:
        return '-' + intWithCommas(-x)
    result = ''
    while x >= 1000:
        x, r = divmod(x, 1000)
        result = ",%03d%s" % (r, result)
    return "%d%s" % (x, result)
## THE ABOVE PROGRAM IS TO DISPLAY THE NUMBERS
sourceConnection = odiRef.getJDBCConnection("SRC")
sqlstring = sourceConnection.createStatement()
sqlstmt="update tab1 set tot_coll_amt = to_number( "#lines ") where load_audit_key=418507"
sqlstring.executeQuery(sqlstmt)
sourceConnection.close()
s0 = ' \n\nThe Number of Lines in the File are ->> '
s1 = str(intWithCommas(lines))
s2 = ' \n\nand the First Line of the File is ->> '
filesrc.seek(0)
s3 = str(filesrc.readline())
final = s0 + s1 + s2 + s3
filesrc.close()
raise final
I changed it as you advised, Ankit, and am getting the following error now:
org.apache.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 37, in ?
java.sql.SQLException: ORA-00911: invalid character
here is the modified code
sourceConnection = odiRef.getJDBCConnection("SRC")
sqlstring = sourceConnection.createStatement()
sqlstmt="update tab1 set tot_coll_amt = to_number('#lines') where load_audit_key=418507;"
result=sqlstring.executeUpdate(sqlstmt)
sourceConnection.close()
Any ideas?
Edited by: Sunny on Dec 3, 2010 1:04 PM -
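For reference, a hedged sketch of a statement that would avoid both errors in the previous message: ORA-00936 came from the count not actually being concatenated into the SQL string, and ORA-00911 from the trailing semicolon (statements sent through JDBC must not end with `;`). The table name and key value are taken from the post; the count here is a placeholder:

```python
lines = 42  # placeholder for the line count computed earlier in the script

# Interpolate the Python value into the SQL text and omit the trailing ';'.
sqlstmt = ("update tab1 set tot_coll_amt = " + str(lines)
           + " where load_audit_key = 418507")

# Then, on the JDBC statement object from the post:
# rows = sqlstring.executeUpdate(sqlstmt)  # executeUpdate, not executeQuery
```

Using executeUpdate (rather than executeQuery) is the conventional JDBC call for DML and returns the number of rows updated.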
The currency is not getting updated in the table VBAP
Hi ,
The currency is not getting updated in table VBAP. The currency is supposed to be copied from the header table VBAK for a sales order. When the user creates a sales order, the currency WAERK is not shown in the VBAP table. VBAK-WAERK is in EUR. Does anyone know why this is happening?
Currency is maintained in the Customer Master, Material Master and Sales Org. Any suggestions?.
Also, it happened for only one line item in a set of line items; the other line items did display the currency field.
The net Value has data in it .
The system is ECC 5.0
Regards,
Senthil
Dear Senthil,
Please apply the following notes (if they apply to your support pack level) and retest:
1460621 VBAP-WAERK is deleted after the sold-to party is changed
1426441 VBAP-WAERK deleted for subitems
1493998 VBAP-WAERK deleted for subitems
This should resolve the issue. I hope this helps.
Best regards,
Ian Kehoe -
Data is not getting updated in DB table
hi all
I am doing an IDoc to JDBC scenario.
I am triggering the IDoc from R/3 and the data goes into the DB table.
sender side: ZVendorIdoc
receiver side:
DT_testVendor
Table
tblVendor
action UPDATE_INSERT
access 1:unbounded
cVendorName 1
cVendorCode 1
fromdate 1
todate 1
Key
cVendorName 1
If I trigger the IDoc for, for example, vendors 2005, 2006 and 2010, the data gets updated in the table.
But if I trigger the IDoc again for the same vendor numbers, the data does not get updated in the DB table, although the message is successful in both moni and RWB.
Please suggest if any change needs to be done to update the data.
Regards
sandeep sharma
Hi Ravi,
You are right, vendor no is my key field. The problem is that when I send the data again it should UPDATE the data, but it is not updating it.
I did one experiment with this: I deleted all the records from the table and then triggered the IDoc for vendors 2005, 2006 and 2010; after this the data was updated in the table. I then deleted the rows for vendors 2006 and 2010 and kept the row for vendor 2005.
Then I again triggered the IDoc for vendors 2005, 2006 and 2010. Now this should update vendor 2005 and insert rows for vendors 2006 and 2010, but I am surprised it is not updating the data.
Thanks
sandeep -
TDS amount not getting updated in the table under the field QBSHB
Dear Friends,
The TDS amount entered while booking vendor invoices through the MIRO T-code is not getting updated in table BSEG under the field QBSHB.
Kindly let me know the reason for this and guide me to correct it.
TIA.
Regards,
Vincent
Hi Vincent,
The BSEG-QBSHB field is relevant for classic withholding tax (WT), and I assume you are using extended withholding tax (EWT).
Hence, if you post a document through MIRO it will not update the field (whereas if you post the document with FB60 it will update it, but wrongly).
The reason is that the MIRO document is posted through an interface, so SAP suggests not referring to BSEG-QBSHB and the related fields.
Refer only to the WITH_ITEM table.
Please refer to the reply below from SAP and review note 363309 for a detailed explanation:
BSEG-QBSHB is designed to be filled for classic withholding tax, and extended withholding tax information is stored exclusively in table WITH_ITEM.
You can check the fields in table BSEG and you will find that the system does NOT update field BSEG-QBSHB.
In your line layout you define the field BSEG-QBSHB, but actually that field of the vendor/customer line item is filled with zero from FI. Thus, it shows zero in the line item display.
And as note 363309 says,
"Remove the field which contains the withholding tax information
from your display variant.
If you want to display the withholding tax information, double-click on
the document number and subsequently choose 'Withholding tax' button."
The fields BSEG-QSSKZ, BSEG-QSSHB and BSEG-QBSHB are not relevant for extended withholding tax and are not supposed to be used in report FBL1N.
It basically does not make any sense to use the withholding tax fields of the document line items (BSEG-QSSKZ, BSEG-QSSHB, BSEG-QBSHB) with activated extended withholding tax.
regards
Madhu M
Edited by: M Madhu on Jan 31, 2011 1:19 PM -
Can't update a sql-table with a space
Hello,
In a transaction I get some values from an SAP ERP system via JCo.
I update a SQL table with these values using a SQL-Query command.
But sometimes the values I get from SAP ERP are empty (a space) and I am not able to update the SQL table because of a null-value exception (the column doesn't allow null values). It seems that MII treats null and space as the same.
I tried something like this when passing the value to the SQL-Query parameter, but it didn't work:
stringif( Repeater_Result.Output{/item/SCHGT} == "X", "X", " ")
This works, but I don't want to have a "_":
stringif( Repeater_Result.Output{/item/SCHGT} == "X", "X", "_")
Any suggestions?
thank you.
Matthias
The problem is that Oracle doesn't know the space function. But it knows a similar function, NVL, which replaces a null value with something else. So this statement works fine for me:
update marc set
LGort = '[Param.3]',
dispo = '[Param.4]',
schgt = NVL('[Param.5]', ' '),
dismm = '[Param.6]',
sobsl = NVL('[Param.7]',' '),
fevor = '[Param.8]'
where matnr = '[Param.1]' and werks = '[Param.2]'
If Param.5 or Param.7 is null, Oracle replaces it with a space; in every other case it uses the parameter itself.
Christian, thank you for your hint about the space function; it reminded me of the NVL function.
Regards
Matthias -
where is it at?.....its July 20, and i dont see nada??? what in the **** bobby??? umm my iphone os updates the big ones anyway used to come out at midnight.......yea?
Then you need to sharpen your search skills. A Google search for "Lion ships today" brings up many hits, with this one mentioning a time: http://isource.com/2011/07/19/confirmed-os-x-lion-ships-tomorrow/
-
where is it at?.....its July 20, and i dont see nada??? what in the **** bobby??? umm my iphone os updates the big ones anyway used to come out at midnight
FWIW, this is a user forum. We are all users like you. No Apple employees here and Apple doesn't follow these forums. You aren't speaking to Apple here.
-
Update or delete table from XML
Is it possible to update or delete a table's rows from an XML file?
Thanks
Prasanta De
Hi Steve,
Thanks for your reply, but I could not find any example in the documentation for update-request or delete-request. I need your help in this regard.
1. I have emp table with many rows and the simple structure like this
DEPTNO NUMBER(2)
EMPNO NUMBER(2)
EMPNAME VARCHAR2(20)
EMPSAL NUMBER(8,2)
Key is defined on deptno and empno
2. I have a xml file like this
<?xml version = '1.0'?>
<ROWSET>
<ROW num="1">
<DEPTNO>1</DEPTNO>
<EMPNO>11</EMPNO>
<EMPSAL>1111.11</EMPSAL>
</ROW>
<ROW num="2">
<DEPTNO>1</DEPTNO>
<EMPNO>12</EMPNO>
<EMPSAL>2222.22</EMPSAL>
</ROW>
<ROW num="3">
<DEPTNO>1</DEPTNO>
<EMPNO>13</EMPNO>
<EMPSAL>3333.33</EMPSAL>
</ROW>
</ROWSET>
3. I want the XSQL servlet to read this XML file and update the EMPSAL column depending on the values of DEPTNO and EMPNO from the XML file.
Please let me know how I should use update-request in xsql page.
Thanks
Prasanta De
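For reference, a hedged sketch of doing such an update from the canonical ROWSET/ROW format via the XML SQL Utility's PL/SQL interface (DBMS_XMLSAVE). The table and column names are taken from the post; treat the exact calls as assumptions to verify against your release's documentation:

```sql
DECLARE
  ctx      DBMS_XMLSAVE.ctxType;
  xml_doc  CLOB := '...';  -- the ROWSET document shown above
  num_rows NUMBER;
BEGIN
  ctx := DBMS_XMLSAVE.newContext('EMP');
  DBMS_XMLSAVE.setKeyColumn(ctx, 'DEPTNO');    -- key columns drive the WHERE clause
  DBMS_XMLSAVE.setKeyColumn(ctx, 'EMPNO');
  DBMS_XMLSAVE.setUpdateColumn(ctx, 'EMPSAL'); -- only EMPSAL is updated
  num_rows := DBMS_XMLSAVE.updateXML(ctx, xml_doc);
  DBMS_XMLSAVE.closeContext(ctx);
END;
/
```

The key columns match the composite key defined on (DEPTNO, EMPNO) in step 1, so each ROW element updates exactly one row.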