Loading 3 million records into a database via external table
I am loading 3+ million records into a database using external tables. It is a very slow process. How can I make it faster?
Hi,
1. Break the file down into several smaller files, let's say 10 files (300,000 records each).
2. Disable all indexes on the target table if possible.
3. Disable foreign keys if possible; besides, you can check them later using an exceptions table.
4. Make sure FREELISTS and INITRANS are 10 for the target table, if the table resides in a tablespace with manual segment space management.
5. Create 10 processes, each reading from its own file, run the 10 processes concurrently, and use the error-logging facility with an unlimited reject limit so the insert continues until it finishes.
Hope this helps.
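Step 1 above can be sketched in Python; this is a minimal illustration (the chunk-file naming scheme is invented), not part of any loader tool:

```python
import itertools

def split_file(path, n_chunks, lines_per_chunk):
    """Write consecutive blocks of lines_per_chunk lines to
    path.0, path.1, ... so each loader process gets its own file."""
    chunk_paths = []
    with open(path) as src:
        for i in range(n_chunks):
            block = list(itertools.islice(src, lines_per_chunk))
            if not block:        # source exhausted before n_chunks
                break
            out = f"{path}.{i}"  # hypothetical chunk-file name
            with open(out, "w") as dst:
                dst.writelines(block)
            chunk_paths.append(out)
    return chunk_paths
```

Each chunk file can then be pointed at its own external table (or SQL*Loader control file) and loaded in parallel.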
Similar Messages
-
Loading some records into Custom-R/3 table manually
Hi,
I created a custom R/3 table. A friend of mine loaded some records into that custom table manually. I want to load some more records manually.
How can I load some records into the custom R/3 table manually? Please let me know.
I'll assign the points.
Kumar

Hi,
you can do it with SE16 / SE16N or, if it exists, via the maintenance tool SM30.
From an Excel sheet with the same structure, you can copy the entries completely by cut & paste. -
Editting data in a database via a table in the UI
Hi,
I have an application where I am using a Table component to display data from a database. Now the problem is that the database might be updated from time to time. In that case, how do I ensure that my table displays the updated data? Because currently I am retrieving data from my database by specifying the table name from the servers window - which I guess will not capture any changes I make to the database.
Another problem: how do you change the data in the database using the table component? The user is supposed to be able to change stuff in the table from the UI, and that should override the database.
Any help will be appreciated. Thanks in advance.
Asif

Hi,
your problem is not complicated: you simply have to refresh the DB connection used by the interface.
I experienced this problem when programming with ASP.NET. What I used to do was look for the auto-generated DB connectivity code, extract the connectivity part, and put it in a method; every time the page is loaded (which happens automatically on any submit action on the page), the method is called and the DB connection is refreshed, displaying the right results.
Hope I was clear,
good luck -
Insert multiple records into database through internal table
Hi
please help me with how to insert multiple records into a database table from an internal table.

Hi,
Welcome to SCN.
Press F1 on the INSERT statement and you will see the syntax as well as the documentation for it. -
Is it OK if we have 42 million records in a single fact table?
Hello,
We have three outstanding fact tables and need to add one more fact type. We were considering whether to create two different fact tables, or to put the values into the existing similar fact table, which would grow to 42 million records if we add them. So my question is: a single fact table with all records, or breaking it down into two different ones? Thanks!

I am not sure what an "outstanding fact" or a "fact type" is. A 42-million-row fact table doesn't necessarily indicate you are doing something wrong, although it does sound odd. I would expect most facts to be small, as they should hold aggregated measures to speed up reports. In some cases you may want to drill down to the detailed transaction level, in which case you may find these large facts. But care should be taken not to let users query this fact without using the transaction ID, which obviously should be indexed and should guarantee that queries will be quick.
Guessing from your post (as it is not clear or descriptive enough), it would seem you are adding a new dimension to your fact, and that will cause the fact's row count to grow to 42 million. That probably means you are changing the granularity of the fact. That may or may not be correct, depending on your model. -
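The point about keeping facts small via aggregated measures can be sketched in a few lines of Python (field names here are invented for illustration): many detail transactions roll up to one summary row per key, which is what a compact aggregate fact table stores.

```python
from collections import defaultdict

def aggregate(transactions):
    """Roll detail rows (key, amount) up to one summary row per key,
    the way an aggregate fact table keeps row counts small."""
    totals = defaultdict(float)
    for key, amount in transactions:
        totals[key] += amount
    return dict(totals)
```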
Create Store Procedure to Count Number of records in Database for each table
Hello,
I have created code which counts the number of records for each table in a database. However, I want to create a stored procedure from this code, so that whenever a user executes the stored procedure they can provide the database name, and as a result
the number of records in each table will be displayed.
Below you will find the working code:
CREATE TABLE #TEMPCOUNT
(TABLENAME NVARCHAR(128),
RECORD_COUNT BIGINT) -- Creating a TEMP table
EXEC sp_msforeachtable 'insert #tempcount select ''?'', count(*) from ? with (nolock)'
SELECT * FROM #TEMPCOUNT
ORDER BY TABLENAME
DROP TABLE #TEMPCOUNT
This code needs to be converted into a stored procedure, so that the user can give the database name when executing the procedure in order to count the records.
Looking forward to your support.
Thanks.
SharePoint_Consultant_EMEA

Something like:
set quoted_identifier off
go
create procedure usp_TableCounts
@DBName as varchar(1000)
as
set nocount on
declare @SQLToExecute varchar(1000)
CREATE TABLE #TEMPCOUNT (TABLENAME NVARCHAR(128), RECORD_COUNT BIGINT) -- Creating a TEMP table
set @SQLToExecute = @DBName + ".dbo.sp_msforeachtable 'insert into #tempcount select ''?'', count(*) from ? with (nolock)'"
print @SQLToExecute
exec (@SQLToExecute)
SELECT * FROM #TEMPCOUNT
ORDER BY TABLENAME
DROP TABLE #TEMPCOUNT
GO
Satish Kartan www.sqlfood.com -
Issue of handling 10 million records
Dears,
The program is executed online and needs to handle 10 million records.
If I select the data from the database directly, a TSV_TNEW_PAGE_ALLOC_FAILED short dump occurs.
ex: select xxx
from xxx
into table xxx.
If I select 1 million records from the database each time and then process them, a TIME_OUT short dump occurs.
ex: select xxx
from xxx
into table xxx
package size 10000000.
perform xxxx " handle 1 million records,
endselect.
Could you please suggest a solution?
Thanks in advance.
Best regards
Lily

Please start with basic ABAP training before you try to SELECT 1 million records.
If it is good training, you will not SELECT 1 million records afterwards.
I would recommend starting with the chapter about WHERE conditions.
And please forget the parallel cursor; it comes at the end of the advanced training under the title 'special and obsolete commands'.
Siegfried -
Hello Friends,
The background is that I work as a conversion manager: we move the data from Oracle to SQL Server using SSMA, then apply the conversion logic and move the data to System Test, UAT, and Production.
Scenario:
Moving the 80 million records from the conversion database to the System Test database (just for one transaction table) is taking too long. Both databases are on the same server.
Questions are…
What is best option?
If we use SSIS it's very slow, taking 17 hours (sometimes it gets stuck and won't allow us to do any processing).
Using my own script (stored procedure) takes only 1 hour 40 min. I would like to know whether there is any better process to speed this up, and why SSIS is taking so long.
When we move the data using SSIS, does it commit after a particular count, or does it commit all the records together after writing to the transaction log?
Thanks
Karthikeyan Jothi

http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
Processing hundreds of millions of records can be done in less than an hour.
Best Regards,
Uri Dimant (SQL Server MVP)
http://sqlblog.com/blogs/uri_dimant/ -
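The commit-interval question is the crux: committing every row forces a log flush per row, while committing once per batch amortizes that cost. A minimal sqlite3 sketch of the idea (table name and batch size are made up; SQL Server/SSIS behave analogously with their own batch/commit settings):

```python
import sqlite3

def batched_insert(conn, rows, batch_size=10_000):
    """Insert rows, committing once per batch instead of once per row."""
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        cur.executemany("INSERT INTO t(val) VALUES (?)",
                        rows[i:i + batch_size])
        conn.commit()  # one log flush per batch, not per row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val INTEGER)")
batched_insert(conn, [(i,) for i in range(25_000)], batch_size=10_000)
```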
Hi,
I am trying to load a csv file as a single record into a CLOB column (one row for the entire csv file) via an external table, but am unable to do so. The following is the syntax I tried:
create table testext_tab2
( file_data clob )
organization external
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY dir_n1
access parameters
( RECORDS DELIMITED BY NEWLINE
BADFILE DIR_N1:'lob_tab_%a_%p.bad'
LOGFILE DIR_N1:'lob_tab_%a_%p.log'
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
( clob_filename CHAR(100) )
COLUMN TRANSFORMS (file_data FROM LOBFILE (clob_filename) FROM (DIR_N1) CLOB)
)
LOCATION ('emp.txt')
)
REJECT LIMIT UNLIMITED
-- It reports that the table is created, but the table does not have any rows (select count(*) from testext_tab2 returns 0 rows),
-- and the logfile has entries like the following:
Fields in Data Source:
CLOB_FILENAME CHAR (100)
Terminated by ","
Trim whitespace same as SQL Loader
Column Transformations
FILE_DATA
is set from a LOBFILE
directory is from constant DIR_N1
directory object list is ignored
file is from field CLOB_FILENAME
file contains character data
in character set WE8ISO8859P1
KUP-04001: error opening file /oracle/dba/dir_n1/7369
KUP-04017: OS message: No such file or directory
KUP-04065: error processing LOBFILE for field FILE_DATA
KUP-04101: record 1 rejected in file /oracle/dba/dir_n1/emp.txt
KUP-04001: error opening file /oracle/dba/dir_n1/7499
KUP-04017: OS message: No such file or directory
KUP-04065: error processing LOBFILE for field FILE_DATA
KUP-04101: record 2 rejected in file /oracle/dba/dir_n1/emp.txt
KUP-04001: error opening file /oracle/dba/dir_n1/7521
KUP-04017: OS message: No such file or directory
KUP-04065: error processing LOBFILE for field FILE_DATA
KUP-04101: record 3 rejected in file /oracle/dba/dir_n1/emp.txt
and also the file to be loaded (emp.txt) has data like this:
7369,SMITH,CLERK,7902,12/17/1980,800,null,20
7499,ALLEN,SALESMAN,7698,2/20/1981,1600,300,30
7521,WARD,SALESMAN,7698,2/22/1981,1250,500,30
7566,JONES,MANAGER,7839,4/2/1981,2975,null,20
7654,MARTIN,SALESMAN,7698,9/28/1981,1250,1400,30
7698,BLAKE,MANAGER,7839,5/1/1981,2850,null,30
7782,CLARK,MANAGER,7839,6/9/1981,2450,null,10
7788,SCOTT,ANALYST,7566,12/9/1982,3000,null,20
7839,KING,PRESIDENT,null,11/17/1981,5000,null,10
7844,TURNER,SALESMAN,7698,9/8/1981,1500,0,30
7876,ADAMS,CLERK,7788,1/12/1983,1100,null,20
7900,JAMES,CLERK,7698,12/3/1981,950,null,30
7902,FORD,ANALYST,7566,12/3/1981,3000,null,20
7934,MILLER,CLERK,7782,1/23/1982,1300,null,10
I will be thankful for help on this. Also, I read on the AskTom site (http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1669379500346411993)
that LOBs are not supported for external tables, but there are other sites with examples of CLOBs being loaded by external tables.
With regards,
Orausern

CMcM wrote:
Hi all
We have an application that runs fine on 10.2.0.4 on most platforms, but a customer has reported an error when running 10.2.0.3 on HP. We have since reproduced the error on 10.2.0.3 on XP but have failed to reproduce it on Solaris or Linux.
The exact error is within a set of procedures, but the simplest reproducible form of the error is pasted in below.

Except that you haven't pasted any output to show us what the actual error is. Are we supposed to guess?
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> ed
Wrote file afiedt.buf
1 declare
2 vstrg clob:= 'A';
3 Thisstrg varchar2(32000);
4 begin
5 for i in 1..31999 loop
6 vstrg := vstrg||'A';
7 end loop;
8 ThisStrg := vStrg;
9* end;
SQL> /
PL/SQL procedure successfully completed.
SQL>

Works OK for me on 10.2.0.1 (Windows 2003 Server). -
I am designing a table which I load from several other tables via joins. It has a Status column with about 16 different statuses drawn from different tables; for each case there is a condition, and if it is satisfied, that particular status appears in the Status column, so the query has to be written as 16 different cases.
Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data reaches the table quickly? The source data mostly comes from big tables of about 7 million records, and if the logic is written as a CASE expression, it will scan the table once for each of the roughly 16 cases. How can I make this faster? Can anyone help me out?

Here is the code I have written to get the data from temp tables, which draw from the 7-million-record table filtered to year 2013. It takes more than an hour to run. I am posting the part of the code that runs slowly, mainly
the Status column.
SELECT
z.SYSTEMNAME
--,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
--else NULL
--End AS SubSystemName
, CASE
WHEN z.TAX_ID IN
(SELECT DISTINCT zxc.TIN
FROM .dbo.SQS_Provider_Tracking zxc
WHERE zxc.[SubSystem Name] <> 'NULL')
THEN
(SELECT DISTINCT [Subsystem Name]
FROM .dbo.SQS_Provider_Tracking zxc
WHERE z.TAX_ID = zxc.TIN)
End As SubSYSTEMNAME
,z.PROVIDERNAME
,z.STATECODE
,z.TAX_ID
,z.SRC_PAR_CD
,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
, CASE
WHEN z.SRC_PAR_CD IN ('E','O','S','W')
THEN 'Nonpar Waiver'
-- --Is Puerto Rico of Lifesynch
WHEN z.TAX_ID IN
(SELECT DISTINCT a.TAX_ID
FROM .dbo.SQS_NonPar_PR_LS_TINs a
WHERE a.Bucket <> 'Nonpar')
THEN
(SELECT DISTINCT a.Bucket
FROM .dbo.SQS_NonPar_PR_LS_TINs a
WHERE a.TAX_ID = z.TAX_ID)
--**Amendment Mailed**
WHEN z.TAX_ID IN
(SELECT DISTINCT b.PROV_TIN
FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN
(SELECT DISTINCT b.Mailing
FROM .dbo.SQS_Mailed_TINs_010614 b
WHERE z.TAX_ID = b.PROV_TIN)
-- --**Amendment Mailed Wave 3-5**
WHEN z.TAX_ID In
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (3rd Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (3rd Wave)'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (4th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (4th Wave)'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (5th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (5th Wave)'
-- --**Top Objecting Systems**
WHEN z.SYSTEMNAME IN
('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
THEN 'Top Objecting Systems'
WHEN z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Top Objector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H'
THEN 'Top Objecting Systems'
-- --**Other Objecting Hospitals**
WHEN (z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Objector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Other Objecting Hospitals'
-- --**Objecting Physicians**
WHEN (z.TAX_ID IN
(SELECT DISTINCT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj
WHERE obj.[Objector?] in ('Objector','Top Objector')
and z.TAX_ID = obj.TIN
and z.Hosp_Ind = 'P'))
THEN 'Objecting Physicians'
--****Rejecting Hospitals****
WHEN (z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Rejecting Hospitals'
--****Rejecting Physciains****
WHEN
(z.TAX_ID IN
(SELECT DISTINCT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj
WHERE z.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector')
and z.Hosp_Ind = 'P')
THEN 'Rejecting Physicians'
----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
-- --**Non-Objecting Hospitals**
WHEN z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
WHERE
(z.TAX_ID = h.TAX_ID)
OR h.SMG_ID IS NOT NULL)
and z.Hosp_Ind = 'H'
THEN 'Non-Objecting Hospitals'
-- **Outstanding Contracts for Review**
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Non-Objecting Bilateral Physicians'
AND z.TAX_ID = qz.PROV_TIN)
Then 'Non-Objecting Bilateral Physicians'
When z.TAX_ID in
(select distinct
p.TAX_ID
from dbo.SQS_CoC_Potential_Mail_List p
where p.amendmentrights <> 'Unilateral'
AND z.TAX_ID = p.TAX_ID)
THEN 'Non-Objecting Bilateral Physicians'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'More Research Needed'
AND qz.PROV_TIN = z.TAX_ID)
THEN 'More Research Needed'
WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
THEN 'ERROR'
else 'Market Review/Preparing to Mail'
END AS [STATUS Column]
Please suggest on this -
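One way to avoid rescanning the lookup tables once per CASE branch is to materialize each lookup once and probe it per row (in SQL this typically becomes LEFT JOINs onto pre-built lookup tables, with the CASE reduced to priority checks on joined columns). A Python sketch of the idea, with made-up TIN and status data, only a few of the 16 branches, and hypothetical names throughout:

```python
def classify(rows, waiver_codes, mailed, objectors):
    """Assign each (tax_id, src_par_cd) row the first matching status,
    probing each pre-built lookup once per row instead of rescanning
    the source tables per CASE branch."""
    out = []
    for tax_id, src_par_cd in rows:
        if src_par_cd in waiver_codes:
            status = "Nonpar Waiver"
        elif tax_id in objectors:
            status = "Objecting Physicians"
        elif tax_id in mailed:
            status = mailed[tax_id]  # e.g. which amendment wave was mailed
        else:
            status = "Market Review/Preparing to Mail"
        out.append((tax_id, status))
    return out
```

The key property is that each lookup structure is built once (one scan per source table) and every row is classified in a single pass.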
Display data in Web DynPro table from database via EJB
I have a JavaBeans model which has a method populateDataToTable()to retrieve data from database via Session bean (calling entity bean, returning ArrayList of data) and the data needed to be display in the Web DynPro table.
User Interface (Web DynPro) <-> JavaBeans Model <-> Busineess Logic (session bean) <-> Persistence (Entity Bean)<-> DB table.
The context bindiing and table part is ok. How do i load the data to the table ? what the coding to put in wdDoInit() ?
Any help would be appreciated.

In wdDoInit():
Collection col = new ArrayList();
try {
MyCommandBean bean = new MyCommandBean();
col = bean.getDataFromDbViaEJB();
wdContext.nodeMyCommandBean().bind(col);
} catch (Exception ex) {
ex.printStackTrace();
}
In your JavaBean model class, the MyCommandBean getDataFromDbViaEJB() method:
Collection col = new ArrayList();
Collection newcol = new ArrayList();
// include your own context initialization etc...
col = local.getDataViaSessionBean(param);
// if the returned result is also a bean class, reassign it to the current MyCommandBean
for (Iterator iterator = col.iterator(); iterator.hasNext();) {
MyOtherBean otherBean = (MyOtherBean) iterator.next();
MyCommandBean bean = new MyCommandBean();
bean.attribute1 = otherBean.getAttribute1();
// get other attributes
newcol.add(bean);
}
return newcol; -
Internal Table with 22 Million Records
Hello,
I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
Any tips on how I can optimize my coding? I have attached the Short-Dump.
Thanks,
SD
DATA: ls_source TYPE y_source_fields,
ls_target TYPE y_target_fields.
DATA: it_source_tmp TYPE yt_source_fields,
et_target_tmp TYPE yt_target_fields.
TYPES: BEGIN OF IT_TAB1,
BPARTNER TYPE /BI0/OIBPARTNER,
DATEBIRTH TYPE /BI0/OIDATEBIRTH,
ALTER TYPE /GKV/BW01_ALTER,
ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
END OF IT_TAB1.
DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
WITH NON-UNIQUE KEY BPARTNER,
WA_XX_TAB1 TYPE IT_TAB1.
it_source_tmp[] = it_source[].
SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
DELETE ADJACENT DUPLICATES FROM it_source_tmp
COMPARING /B99/S_BWPKKD.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
LOOP AT it_source INTO ls_source.
READ TABLE IT_XX_TAB1
INTO WA_XX_TAB1
WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
IF sy-subrc = 0.
ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
ENDIF.
MOVE-CORRESPONDING ls_source TO ls_target.
APPEND ls_target TO et_target.
CLEAR ls_target.
ENDLOOP.

Hi SD,
Please put the SELECT query inside the condition shown below:
IF it_source_tmp[] IS NOT INITIAL.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
ENDIF.
This will solve your performance issue. Previously, when the internal table it_source_tmp had no records, the SELECT with FOR ALL ENTRIES fetched all the records from the database. With this condition, no records are selected when the driver table is empty.
Regards,
Pravin -
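The pitfall above generalizes beyond ABAP: an empty filter list must short-circuit to an empty result, not fall through to an unfiltered scan. A Python sketch of the guard (the table data and field name are made up):

```python
def select_for_all_entries(table, keys):
    """Mimic ABAP FOR ALL ENTRIES with the IS NOT INITIAL guard:
    an empty key set must return nothing, not the whole table."""
    if not keys:  # the IF it_source_tmp[] IS NOT INITIAL check
        return []
    keyset = set(keys)
    return [row for row in table if row["BPARTNER"] in keyset]
```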
Problem with Fetching Million Records from Table COEP into an Internal Table
Hi Everyone ! Hope things are going well.
Table : COEP has 6 million records.
I am trying to get records based on certain criteria, that is, there are atleast 5 conditions in the WHERE clause.
I've noticed it takes about 15 minutes to populate the internal table. How can I improve the performance so that fetching 500 records from a database set of 6 million takes less than a minute?
Regards,
Owais...

The first obvious suggestion would be to use the proper indexes. I had a similar issue with COVP, which is a join of COEP and COBK. I got a substantial performance improvement by adding "WHERE lednr EQ '00'" to the WHERE clause.
Here is my select:
SELECT kokrs
belnr
buzei
ebeln
ebelp
wkgbtr
refbn
bukrs
gjahr
FROM covp CLIENT SPECIFIED
INTO TABLE i_coep
FOR ALL ENTRIES IN i_objnr
WHERE mandt EQ sy-mandt
AND lednr EQ '00'
AND objnr = i_objnr-objnr
AND kokrs = c_conarea. -
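Whether a predicate column is actually served by an index can be checked cheaply before touching a large table. A sqlite3 sketch (schema and names invented purely for illustration; on Oracle you would inspect the execution plan instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coep_demo (lednr TEXT, objnr TEXT, wkgbtr REAL)")
conn.execute("CREATE INDEX idx_objnr ON coep_demo (objnr)")

# EXPLAIN QUERY PLAN reveals whether the WHERE clause uses the index
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM coep_demo WHERE objnr = ?", ("OR123",)
).fetchall()
plan_text = " ".join(str(row) for row in plan)
```

If the plan shows a full table scan instead of an index search, the WHERE clause and index definitions need another look, which is exactly the situation the LEDNR restriction fixed above.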
Handling internal table with more than 1 million record
Hi All,
We are facing a dump caused by wrongly set storage parameters.
Basically, the dump occurs because an internal table holds more than 1 million records. We have increased the storage parameter size from 512 to 2048, but the dump still happens.
Please advise whether there is any other way to handle this kind of internal table.
P.S. We have tried using a hashed table; that does not suit our scenario.
Thanks and Regards,
Vijay

Hi,
your problem can be solved by populating the internal table in chunks. For that you have to use the database cursor concept.
Hope this code helps.
G_PACKAGE_SIZE = 50000.
* Using DB Cursor to fetch data in batch.
OPEN CURSOR WITH HOLD DB_CURSOR FOR
SELECT *
FROM ZTABLE.
DO.
FETCH NEXT CURSOR DB_CURSOR
INTO CORRESPONDING FIELDS OF TABLE IT_ZTABLE
PACKAGE SIZE G_PACKAGE_SIZE.
IF SY-SUBRC NE 0.
CLOSE CURSOR DB_CURSOR.
EXIT.
ENDIF.
* process the current package of IT_ZTABLE here
ENDDO. -
Hi Guys,
I have to add a new BIGINT column to an existing table in production which holds 700 million records, and would like to know the impact.
I have been told by one of my colleagues that the last time they tried adding a column to the same table during working hours, it locked the table and impacted the users.
Please suggest/share If any one had similar experience.
Thanks, Shiven :) If the answer is helpful, please vote.

If you add a new column to a table using an ALTER TABLE ADD command and specify that the new column allows NULLs and you do not define a default value, then it will take a table lock. However, once it gets the table lock, it will essentially run instantly
and then free the table lock. That will add this new column as the last column in the table, for example
ALTER TABLE MyTable ADD MyNewColumn bigint NULL;
But if your change adds a new column with a default value, or you do something like using the table designer to add the new column in the middle of the current list of columns, then SQL will have to rewrite the table. So it will get a table lock, rewrite
the whole table and then free the table lock. That will take a considerable amount of time and the table lock will be held for that whole period of time.
But, no matter how you make the change, if at all possible, I would not alter a table schema on a production database during working hours. Do it when nothing else is going on.
Tom
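The nullable-no-default case can be seen in miniature with sqlite3, which likewise adds such a column without rewriting existing rows (all names here are invented; this only illustrates the behavior Tom describes, not SQL Server itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER)")
conn.executemany("INSERT INTO big_table VALUES (?)", [(i,) for i in range(5)])

# Nullable column, no default: existing rows are not rewritten;
# they simply read back NULL for the new column.
conn.execute("ALTER TABLE big_table ADD COLUMN my_new_column BIGINT")
nulls = conn.execute(
    "SELECT COUNT(*) FROM big_table WHERE my_new_column IS NULL"
).fetchone()[0]
```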