Performance for ALTER TABLE statements
Hi,
I'd like to improve performance for scripts running several ALTER TABLE statements. I have two questions regarding this.
This is the original code:
ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
QTY_TO_INVOICE NUMBER NULL );
ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
QTY_INVOICED NUMBER NULL );
1. Would I gain any performance by making the following changes?
ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
QTY_TO_INVOICE NUMBER NULL,
QTY_INVOICED NUMBER NULL );
These columns are later filled with values and then made NOT NULL.
2. Would I gain anything by making these columns NOT NULL with a DEFAULT value in the first statement and then inserting the values later?
/Roland Bali
1. Yes, combining the two ALTER TABLE statements into one should help: the table is locked and processed once instead of twice.
2. You can only add a NOT NULL column (without a DEFAULT) to an existing table if the table is empty; with a DEFAULT value the column can be added to a populated table as well.
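A minimal sketch combining both ideas (assuming Oracle; the DEFAULT value 0 is illustrative):
ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
QTY_TO_INVOICE NUMBER DEFAULT 0 NOT NULL,
QTY_INVOICED NUMBER DEFAULT 0 NOT NULL );
Note that on a populated table the DEFAULT makes the ALTER update every existing row, so this trades the later UPDATE pass for a longer ALTER.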
Naveen
Similar Messages
-
SAP HANA - How to run alter table statement in HANA procedure?
I am trying to run an ALTER TABLE statement in a procedure. HANA gives an error saying:
SAP DBTech JDBC: [257] (at 1338): sql syntax error: ALTER TABLE is not allowed in SQLScript: line 36 col 8 (at pos 1338)
How can I run ALTER TABLE statements in a procedure?
Thanks,
Suren.

Hi Rich Heilman,
Thanks for your response. I have tried dynamic SQL. I am trying to add partitions to a non-partitioned table.
EXECUTE IMMEDIATE 'ALTER TABLE ' || :SCHEMA_NAME || '.TARGET_TABLE PARTITION BY RANGE (TARGET_TYPE_ID) (PARTITION VALUE = 1, PARTITION VALUE = 2, PARTITION VALUE = 3, PARTITION VALUE = 4, PARTITION OTHERS)';
Execution fails with the error:
Could not execute 'CALL PARTITION_TARGET_TABLE('SUREN_TEST')' in 1.160 seconds .
[129]: transaction rolled back by an internal error: [129] "SUREN_TEST"."PARTITION_TARGET_TABLE": line 53 col 3 (at pos 2173): [129] (range 3)
Any reasons for this error?
Thanks,
Suren.
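For reference, the basic pattern SQLScript does accept is wrapping the DDL in dynamic SQL inside the procedure body, exactly as attempted above. A minimal sketch (procedure and column names here are hypothetical):
CREATE PROCEDURE ADD_AUDIT_COLUMN (IN schema_name NVARCHAR(256))
LANGUAGE SQLSCRIPT AS
BEGIN
    -- a plain ALTER TABLE is rejected by the SQLScript compiler; dynamic SQL is not
    EXECUTE IMMEDIATE 'ALTER TABLE "' || :schema_name || '"."TARGET_TABLE" ADD (AUDIT_TS TIMESTAMP)';
END;
The [129] error above is raised at runtime from inside the dynamic statement, so the partition clause itself and the privileges of the procedure's definer are the likely suspects.
-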
Alter Table Statements in Designer
Hi Guys,
Just wondering if anyone knows if it's possible to store Alter Table statements in Designer 9i?
Anton

Hi,
Designer generates ALTER statements for you.
For example: you have a table emp with empno NUMBER(8) in Designer and in your application database. You change empno to NUMBER(12) in Designer and generate DDL pointing at your application database; Designer then generates an ALTER TABLE script for you.
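A sketch of the kind of statement such a script would contain (using the example above):
ALTER TABLE emp MODIFY (empno NUMBER(12));
-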
Poor performance after altering tables to InnoDB
I have an application using CF MX, IIS, and MySQL 5.0.37
running on Microsoft Windows Server 2003.
When I originally built the application, access from login to
start page and page to page was very good. But, I started getting
errors because tables were sometimes getting records added or
deleted and sometimes not. I thought the "cftransaction" statements
were protecting my transactions. Then I found out about MyISAM (the
default) vs InnoDB.
So, using MySQLAdmin, I altered the tables to InnoDB. Now,
the transactions work correctly on commits and rollbacks, but the
performance of the application stinks. It now takes 20 seconds to
log in.
The first page involves a fairly involved select statement,
but it hasn't changed at all. It just runs very slowly. Updates
also run slowly.
Is there something else I was supposed to do in addition to
the "alter table" in this environment? The data tables used to be
in /data/saf_data. Now the ibdata file and log files are in /data
and only the ".frm" files are still in saf_data.
I realize I'm asking this question in a CF forum. But, people
here are usually very knowledgable and helpful and I'm desperate.
This is a CF application. Is there anything I need to do for a CF
app to work well with MySQL InnoDB tables? Any configuration or
location stuff to know about?
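One thing worth checking (an assumption on my part, not a diagnosis): MySQL 5.0's default InnoDB settings are very conservative, so a freshly converted table can run far slower than it did under MyISAM. A my.cnf sketch with illustrative values; note that changing innodb_log_file_size on 5.0 requires a clean shutdown and removal of the old ib_logfile* files before restarting:
[mysqld]
# size to the RAM you can spare; the 5.0 default is only 8M
innodb_buffer_pool_size = 512M
innodb_log_file_size = 128M
# 2 = flush the log to the OS on each commit, to disk once a second
innodb_flush_log_at_trx_commit = 2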
Help, and thanks!

The program was also ported in earlier versions; 1.5 years ago we used Forte 6.2 and the performance was OK.
Possibly the program design was based on Windows features that are inappropriate for Unix.
The principal design didn't change. The only thing is that we switched to the Boost libraries, where we use the thread, regex, filesystem and date-time libraries.
Have you tried any other Unix-like system? Linux, AIX, HP-UX, etc.? If so, how does the performance compare to Solaris?
Not at the moment, because the order is customer-driven, but HP-UX and Linux are also options.
Also consider machine differences. For example, your old Ultra-80 system at 450 MHz will not keep up with a modern x86 or x64 system at 3+ GHz. The clock speed alone could account for a factor of 6.
That was my first thought too, but as I wrote in an earlier post, the performance test case needs the same time on a 6x1 GHz machine (a Sun Fire T1000).
Also, how much real memory does the SPARC system have?
4 GB, and during the test run the machine uses less than 30% of that memory.
If the program is not multithreaded, the additional processors on the Ultra-80 won't help.
But it is!
If it is multithreaded, the default libthread or libpthread on Solaris 8 does not give the best performance. You can link with the alternative LWP-based thread library on Solaris 8 by adding the link-time option -R /usr/lib/lwp (for 32-bit applications) or -R /usr/lib/lwp/64 (for 64-bit applications).
The running application uses both the thread and the pthread library; can that be a problem? Is it right that the lwp path includes only the normal thread library?
Is there a particular reason why you are using the obsolete Solaris 8 and the old Sun Studio 10?
Because we have customers who do not upgrade. Can we develop on Solaris 10 with Sun Studio 11 and deploy on Solaris 8 without risk?
Regards
Arno -
Commit after alter table statement or not?
Hi,
Is it necessary to put a COMMIT after the following statement, or is it committed automatically?
Alter table tab_name drop column col_name;
Thanks

Khurram,
Isn't Eric you? I mean, isn't it your synonym? :)

Erm... simple answer: no, we are not the same person. I just know that Eric, like yourself, makes good contributions to these threads, and then someone comes onto the forums trying to make himself look better by putting down the regular contributors, which isn't really on, is it? I think you'll agree.
CREATE PUBLIC SYNONYM Eric FOR Blushadow;
hehe.
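To answer the original question: no, no explicit COMMIT is needed; DDL statements such as ALTER TABLE perform an implicit commit in Oracle. A minimal sketch (table and column names hypothetical):
CREATE TABLE t (a NUMBER, b NUMBER);
INSERT INTO t VALUES (1, 1);
ALTER TABLE t DROP COLUMN b; -- DDL: implicitly commits the INSERT
ROLLBACK;                    -- has no effect, the row is already committed
SELECT COUNT(*) FROM t;      -- returns 1
-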
Difference between alter table statements to add primary keys
Hi,
Can someone explain what the difference between these two statements is, and if and when one should be used over the other?
Also, are the brackets around the column name necessary?
Thanks!
ALTER TABLE xyz ADD CONSTRAINT id_pk PRIMARY KEY (id);
ALTER TABLE xyz ADD PRIMARY KEY (id);

Hi,
As everyone has explained, there is no difference in the actual functioning of the two statements, except that the first creates the primary key constraint with the user-defined name id_pk, whereas the second creates it with a system-generated name like SYS_C... (And yes, the parentheses around the column name are required syntax in both forms.)
Constraint names are mainly needed when you work with the constraint later: enabling, disabling, or dropping it. For a primary key the name matters least, because a table can have only one primary key: you can drop, disable, or enable it without giving its name.
But, as the earlier post says, a name is better than nothing; for a primary key the two forms are otherwise practically the same, and an explicit name mostly buys easier identification.
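A short sketch of the difference in day-to-day use (object names from the example above):
-- with a system-generated name you first have to look the constraint up:
SELECT constraint_name FROM user_constraints
WHERE table_name = 'XYZ' AND constraint_type = 'P';
-- with an explicit name you can reference it directly:
ALTER TABLE xyz DISABLE CONSTRAINT id_pk;
-- a primary key can always be addressed without a name, as there is only one:
ALTER TABLE xyz DROP PRIMARY KEY;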
Regards -
How to use a single PERFORM for various tables?
PERFORM test TABLES t_header.

FORM test TABLES t_header.
  SELECT konh~knumh
         konh~datab
         konh~datbi
         konp~kbetr
         konp~konwa
         konp~kpein
         konp~kmein
         konp~krech
    FROM konh INNER JOIN konp
      ON konp~knumh = konh~knumh
    INTO TABLE itabxxx          " any temporary internal table
    FOR ALL ENTRIES IN t_header
    WHERE konh~kschl = t_header-kschl
      AND konh~knumh = t_header-knumh.
ENDFORM.
How can I use the above PERFORM for various internal tables of different line types that all contain the fields KSCHL and KNUMH?

You can use a single PERFORM.
See this example; I hope this is what you are expecting:
TABLES: pa0001.
PARAMETERS: p_pernr LIKE pa0001-pernr.
DATA: itab1 LIKE pa0001 OCCURS 0 WITH HEADER LINE.
DATA: itab2 LIKE pa0002 OCCURS 0 WITH HEADER LINE.

PERFORM get_data TABLES itab1 itab2.

IF NOT itab1[] IS INITIAL.
  LOOP AT itab1.
    WRITE: / itab1-pernr.
  ENDLOOP.
ENDIF.
IF NOT itab2[] IS INITIAL.
  LOOP AT itab2.
    WRITE: / itab2-pernr.
  ENDLOOP.
ENDIF.

*&---------------------------------------------------------------------*
*&      Form  get_data
*&---------------------------------------------------------------------*
FORM get_data TABLES itab1 STRUCTURE pa0001
                     itab2 STRUCTURE pa0002.
  SELECT * FROM pa0001 INTO TABLE itab1
    WHERE pernr = p_pernr AND begda LE sy-datum AND endda GE sy-datum.
  SELECT * FROM pa0002 INTO TABLE itab2
    WHERE pernr = p_pernr AND begda LE sy-datum AND endda GE sy-datum.
ENDFORM.                    " get_data
Regards
vasu -
Multiple Alter Table Statements in one batch
Hi Team,
In one of our upcoming releases, two columns are being added to a table that has over 20 million records and 14 indexes.
We needed to add two columns to the table, both NOT NULL (bit). Because it was taking a while to add the columns, we thought that putting the two ALTER statements in one batch would speed up the operation significantly, but to my surprise it did not.
Conclusion from my test: individual ALTER statements and batched ALTER statements take the same time.
Here are my tests and results; the tables Order1 and Order2 have exactly the same structure and data.
Test case 1:
===================
ALTER TABLE Order1
ADD OR_N BIT DEFAULT 0 NOT NULL
go
ALTER TABLE AccountTradeConfirmation_Alter1
ADD OR_S BIT DEFAULT 0 NOT NULL
Go
Elapsed Time: 2 hrs
Mar 18 2015 5:56PM
(1 row affected)
Non-clustered index (index id = 3) is being rebuilt.
Non-clustered index (index id = 4) is being rebuilt.
Non-clustered index (index id = 5) is being rebuilt.
Non-clustered index (index id = 6) is being rebuilt.
Non-clustered index (index id = 7) is being rebuilt.
Non-clustered index (index id = 8) is being rebuilt.
Non-clustered index (index id = 9) is being rebuilt.
Non-clustered index (index id = 10) is being rebuilt.
Non-clustered index (index id = 11) is being rebuilt.
Non-clustered index (index id = 12) is being rebuilt.
Non-clustered index (index id = 13) is being rebuilt.
Non-clustered index (index id = 14) is being rebuilt.
(21777920 rows affected)
Non-clustered index (index id = 3) is being rebuilt.
Non-clustered index (index id = 4) is being rebuilt.
Non-clustered index (index id = 5) is being rebuilt.
Non-clustered index (index id = 6) is being rebuilt.
Non-clustered index (index id = 7) is being rebuilt.
Non-clustered index (index id = 8) is being rebuilt.
Non-clustered index (index id = 9) is being rebuilt.
Non-clustered index (index id = 10) is being rebuilt.
Non-clustered index (index id = 11) is being rebuilt.
Non-clustered index (index id = 12) is being rebuilt.
Non-clustered index (index id = 13) is being rebuilt.
Non-clustered index (index id = 14) is being rebuilt.
(21777920 rows affected)
Mar 18 2015 7:52PM
Test case 2:
===================
ALTER TABLE Order2
ADD OR_N BIT DEFAULT 0 NOT NULL, OR_S BIT DEFAULT 0 NOT NULL
go
2 hrs elapsed time
Mar 20 2015 11:10AM
(1 row affected)
Non-clustered index (index id = 3) is being rebuilt.
Non-clustered index (index id = 4) is being rebuilt.
Non-clustered index (index id = 5) is being rebuilt.
Non-clustered index (index id = 6) is being rebuilt.
Non-clustered index (index id = 7) is being rebuilt.
Non-clustered index (index id = 8) is being rebuilt.
Non-clustered index (index id = 9) is being rebuilt.
Non-clustered index (index id = 10) is being rebuilt.
Non-clustered index (index id = 11) is being rebuilt.
Non-clustered index (index id = 12) is being rebuilt.
Non-clustered index (index id = 13) is being rebuilt.
Non-clustered index (index id = 14) is being rebuilt.
(21777920 rows affected)
Mar 20 2015 1:12PM

Hi Kiran,
I have read your response a few times and was not able to follow your angle. Based on the results of my test, I assume Sybase processes the ALTER statements as follows:
ALTER TABLE Order2
ADD OR_N BIT DEFAULT 0 NOT NULL, OR_S BIT DEFAULT 0 NOT NULL
go
process alter ADD OR_N BIT
--> make a copy of the table
--> alter the original table
--> put the data back in
process alter ADD OR_S BIT
--> make a copy of the table
--> alter the original table
--> put the data back in
rebuild indexes
My expectation was that it would make a copy of the table only once and process the two ALTER statements in that single pass. Also, when doing the ALTERs separately (test 1) it rebuilt the indexes twice, whereas with the batch the indexes were rebuilt once (at least, the messages were displayed only once).
Regards. -
Replicating DDL, but only for ALTER TABLE/CREATE TABLE
We're looking to use Streams to replicate our database for warehouse use. We're not looking to do any ETL, but rather to copy the table structures and data over identically as they appear in the source database. We'll add indexes in a custom step. Otherwise, though, we don't need TRIGGERs, PROCs, VIEWs, etc.
Is this possible in Streams? Asking whether it's possible is probably the wrong question... Is this something that is normally done, or does it bring a significant amount of complexity with it, plus some extra things to watch for (like "don't forget TRUNCATE also...")?
Thanks,
Chuck

Sorry to bump this one...
So, if our intent is to copy over all tables in a specified list of schemas, in their current form, and then we wanted to capture:
INSERTs
UPDATEs
DELETEs
TRUNCATE TABLE
ALTER TABLE (but excluding anything related to CONSTRAINTS)
(... I'm thinking that's all we'd need to keep a copy of the main db in the warehouse, without a DBA having to "retouch" the warehouse to keep it in sync ...)
Would that be considered a complicated configuration? The ALTER TABLE piece sounds picky enough that it could be a headache... but I suspect the Oracle reps were overstating the effort needed to set this environment up.
--=Chuck
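A hedged sketch of the usual starting point (all names illustrative): schema-level rules can capture DML and DDL together, and narrowing the DDL down to just CREATE/ALTER/TRUNCATE TABLE typically means layering a custom rule condition or a DDL handler on top of this.
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name  => 'APP_SCHEMA',
    streams_type => 'capture',
    streams_name => 'wh_capture',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => TRUE);
END;
/
-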
Why does the APPEND operation not work for hashed tables?
Could you please explain why APPEND does not work for a hashed table while it works for standard and sorted tables?
Moderator Message: Interview-type questions are not allowed. Read the Rules of Engagement of these forum to avoid getting your ID deleted.
Edited by: kishan P on Mar 1, 2012 11:25 AM

Hello,
Hashed tables do not support index operations the way standard and sorted tables do; their individual entries are accessed by key. The hashed internal table is implemented with a hashing algorithm, so there is no row index to append to. In other words, the APPEND statement does not work on hashed internal tables, only on standard tables.
Processing of hashed tables is done via a KEY, whereas for a standard table you may access the contents with or without the key.
For more info you can refer to following link below -
[http://help.sap.com/saphelp_nw70/helpdata/en/fc/eb35de358411d1829f0000e829fbfe/content.htm]
Hope this helps! -
Table compression and alter table statement
Friends,
I am trying to add columns to a compressed table. Since Oracle appears to treat compressed tables as object tables (see ORA-22856 below), I cannot add columns directly, so I tried to uncompress the table first and then add the columns. That doesn't seem to work either.
What could be the issue?
Thanks,
Vishal V.
The script to reproduce is here and the results are below.
-- Test1 => add columns to uncompressed table -> Success
DROP TABLE TAB_COMP;
CREATE TABLE TAB_COMP(ID NUMBER) NOCOMPRESS;
ALTER TABLE TAB_COMP ADD (NAME VARCHAR2(10));
-- Test2 => try adding columns to a compressed table, uncompress it, and then try again -> Fails
DROP TABLE TAB_COMP;
CREATE TABLE TAB_COMP(ID NUMBER) COMPRESS;
ALTER TABLE TAB_COMP ADD (NAME VARCHAR2(10));
ALTER TABLE TAB_COMP move NOCOMPRESS;
ALTER TABLE TAB_COMP ADD (NAME VARCHAR2(10));
SQL> -- Test1 => add columns to uncompressed table -> Success
SQL> DROP TABLE TAB_COMP;
Table dropped.
SQL> CREATE TABLE TAB_COMP(ID NUMBER) NOCOMPRESS;
Table created.
SQL> ALTER TABLE TAB_COMP ADD (NAME VARCHAR2(10));
Table altered.
SQL>
SQL> -- Test2 => try adding columns to a compressed table, uncompress it, and then try again -> Fails
SQL> DROP TABLE TAB_COMP;
Table dropped.
SQL> CREATE TABLE TAB_COMP(ID NUMBER) COMPRESS;
Table created.
SQL> ALTER TABLE TAB_COMP ADD (NAME VARCHAR2(10));
ALTER TABLE TAB_COMP ADD (NAME VARCHAR2(10))
ERROR at line 1:
ORA-22856: cannot add columns to object tables
SQL> ALTER TABLE TAB_COMP move NOCOMPRESS;
Table altered.
SQL> ALTER TABLE TAB_COMP ADD (NAME VARCHAR2(10));
ALTER TABLE TAB_COMP ADD (NAME VARCHAR2(10))
ERROR at line 1:
ORA-22856: cannot add columns to object tables

Which version of Oracle are you using? It works here:
1* create table test1234(a number) compress
SQL> /
Table created.
Elapsed: 00:00:00.02
SQL> alter table test1234 add(b varchar2(200));
Table altered.
Elapsed: 00:00:00.02
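If MOVE NOCOMPRESS still leaves the table flagged as compressed on your version, a hedged workaround sketch is to rebuild the table outright (names from the thread; note that CTAS does not carry over indexes, constraints, grants, or triggers):
CREATE TABLE tab_comp_new AS SELECT * FROM tab_comp;
DROP TABLE tab_comp;
RENAME tab_comp_new TO tab_comp;
ALTER TABLE tab_comp ADD (name VARCHAR2(10));
-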
Funktion for "Create Table" Statement
Dear all,
I am looking for a function to create a "CREATE TABLE" SQL statement from an existing SAP Dictionary table. Does anybody know an ABAP function to do this? With the SQL statement I want to create the table in an external database.
Kind regards,
Roman Becker

Hi, please enter DB_CREATE* in SE37 and pick the function you need.
Here are a few function modules for you:
DB_CREATE_TABLE
DB_CREATE_TABLE_AS_SELECT
DB_CREATE_TABLE_AS_SELECT_S
DB_CREATE_TABLE_S (like DB_CREATE_TABLE, but also returns the generated statements)
Satish -
How to improve performance for Azure Table Storage bulk loads
Hello all,
Would appreciate your help, as we are facing a challenge.
We are trying to bulk load Azure Table Storage. We have a file that contains nearly 2 million rows.
We need to reach a point where we can bulk load 100,000-150,000 entries per minute. Currently it takes more than 10 hours to process the file.
We have tried Parallel.ForEach, but it doesn't help. Today I discovered partitioning in PLINQ. Would that be the way to go?
Any ideas? I have spent nearly two days trying to optimize it using PLINQ, but I am still not sure what the best thing to do is.
Kindly note that we shouldn't be using SQL/Azure SQL for this.
I would really appreciate your help.
Thanks

I'd think you're just pooling the parallel connections to Azure if you do it all on one system. You also have the bottleneck of the round-trip time from you, through the internet, to Azure and back again.
You could speed it up by moving the data file into the cloud and processing it with a cloud worker role. That way you'd be inside the datacenter, which is a much faster, more optimized network.
Or, if that's not fast enough, and you can split the data so multiple worker roles each process part of the file, you can scale out to enough machines that it gets done quickly.
Darin R. -
Will an ALTER TABLE table_name MOVE statement change the storage parameters?
Hi there,
I have a question about table reorganization. Will an ALTER TABLE table_name MOVE statement change the table's storage parameters, or keep them the same as the original ones? If I want to use an ALTER TABLE statement to defragment the table and change the INITIAL and NEXT storage parameters, how should I write this SQL statement?
Thanks in advance.

Thanks. My table has 5000 extents, each of which is 64 KB, so I think I need to defragment the table to improve performance. If I use ALTER TABLE table_name MOVE without a storage clause or tablespace name, the table is rebuilt in its current tablespace and adopts that tablespace's storage parameters, which are still 64 KB. After that, if I issue an ALTER TABLE table_name STORAGE (INITIAL 50M NEXT 50M) command, will it change the table's storage parameters and decrease the total number of extents? I ask because I used the OEM 2.2 Tuning Pack reorg wizard to generate the job script for the defragmentation, but the job script contains no new storage parameters; it only generates an ALTER TABLE table_name MOVE statement. So I wonder: can I leave the job script unmodified, let it execute, and then issue the ALTER TABLE table_name STORAGE command manually? Will that solve my problem, or must I modify the job script and add the new storage parameters to it?
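For what it's worth, a hedged sketch (table and tablespace names hypothetical): an ALTER TABLE ... STORAGE issued after the fact only affects future extent allocation and cannot change INITIAL for an existing segment, so it will not shrink the 5000 extents. Specifying the storage clause on the MOVE itself rebuilds the segment with the new parameters in one step:
ALTER TABLE my_table MOVE
  TABLESPACE users
  STORAGE (INITIAL 50M NEXT 50M);
-- a move marks the table's indexes UNUSABLE, so rebuild them afterwards:
ALTER INDEX my_table_pk REBUILD;
-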
Keeping stats up to date for partitioned tables
Hi,
Oracle version 10.2.0.4
I have a partitioned table. I would like to keep its statistics up to date.
Can I just run a single command to update the table, index, and partition statistics, please?
exec dbms_stats.gather_table_stats(user, 'TABLE', cascade=>true)
or do I also need to run
exec dbms_stats.gather_table_stats(user, 'TABLE', granularity=>'PARTITION')
thanks,
Ashok
Edited by: 902986 on 27-Oct-2012 11:06
Edited by: 902986 on 27-Oct-2012 11:07

Thanks.
Yes, there were many indexes on the original non-partitioned table, and I have created another, partitioned table which I am now populating with the data from the original one. The new table is partitioned on a date-range column: one partition for all years before 2012, then one each for 2012, 2013, and so forth.
The indexes are all created locally, bar a unique index (as per the original table) created globally to enforce uniqueness across the whole table. The searches always look at the year to date, say 1 Jan 2012 till today, for risk analysis. The partitioning is on that date column, and there is also a local index on that date column to avoid a full table scan (tested by disabling that index; predictably it did a full scan and was less efficient).
In a DW environment I don't see much value in a global index, bar the primary key/unique constraint. I do realise that if a query crosses more than one partition, say two, there will be two b-tree local index scans rather than one, but that would be rare, given the way they query the table.
My plan is therefore to perform a full table stats gather with cascade=>true, measure the time it takes, and do the same again whenever the maintenance window allows it.
Thanks again for your help.
Edited by: 902986 on 28-Oct-2012 13:24
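A hedged sketch of the single call (10.2 parameter names; the table name is hypothetical): with granularity ALL, one call gathers global, partition, and index statistics together, so the separate granularity=>'PARTITION' run should not be needed.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'MY_PART_TABLE',
    granularity => 'ALL',
    cascade     => TRUE);
END;
/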