1 table of 10 million vs. 10 tables of 1 million
Suppose you had a Parts table with a column Part_Color, there were 10 colors, and for every color there were about 1 million different records (i.e. about 10 million different parts in total). Would anyone on this board consider breaking that table into ten tables, one for each color?
I've seen a little about this stuff. The headaches and the amount of extra code are obvious: having to address 10 tables separately instead of as one. But could there be significant advantages to this approach, such as speed, or the ability to perform write operations on different colors simultaneously?
The column Part_Color is already there, and for every color (ten of them) there are 1 million records, so the OP indeed has 10 million records in the Parts table as I understand the problem explanation. Suppose, for example, that the color code is part of the part_id, and take this as a fact (the design is not to be questioned). The OP asked: 1 table of 10 million vs. 10 tables of 1 million, one for each color. At this point, having no other information, I prefer 1 table partitioned by Part_Color, to take advantage of partition elimination, if queries where the color is known are predominant (a condition yet to be confirmed by the OP).
Regards
Etbin

I realize my problem is very general. But yes, let's say the Part_Color is always known. And yes, let's say the primary key would be Part_ID, and that Color is an attribute of a Part.
Maybe I was unclear: there are 10 million different parts (1 million of each color).
What is the advantage of the Partition by?
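To illustrate, here is a minimal Oracle sketch of list partitioning by color (the table and column names come from the thread; everything else is assumed):

-- One Parts table, list-partitioned by color.
CREATE TABLE parts (
  part_id    NUMBER PRIMARY KEY,
  part_color VARCHAR2(10) NOT NULL,
  part_name  VARCHAR2(100)
)
PARTITION BY LIST (part_color) (
  PARTITION p_red   VALUES ('RED'),
  PARTITION p_blue  VALUES ('BLUE'),
  PARTITION p_other VALUES (DEFAULT)  -- the remaining colors
);

-- With the color in the predicate, the optimizer prunes to a single
-- partition (about 1 million rows) instead of scanning all 10 million:
SELECT part_id, part_name
FROM   parts
WHERE  part_color = 'RED';

This gives the per-color access pattern of ten separate tables while keeping a single table to code against, and individual partitions can still be loaded and maintained independently.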
Similar Messages
-
How to write a cursor to check every row of a table which has millions of rows
Hello everyone.
I need help, please. Below is the script (sample data); you can run it directly in SQL Server Management Studio.
Here we need to update the PPTA_Status column in the Donation table. There will be 3 statuses: A1, A2 and Q.
We need to update the PPTA_status of January donations only, and we need to write a cursor. As this is sample data we have only a few donations (rows), but the real table has millions of rows, and we need to check every row.
If I run the cursor for January, the cursor should process every row of January, row by row.
We have donations in the don_sample table; I need to check the test results in the result_sample table for those donations and update the PPTA_status column.
We need to check all the donations of January one by one. For every donation, we need to check its 2 previous donations, in the following way: to find the previous donations of a donation, first look up the donor of that donation, then find that donor's previous donations. Like this we need to check the 2 previous donations.
If there are 2 previous donations and both have test results, we need to update the PPTA_STATUS column of this donation to 'Q'. If the 2 previous donation numbers have test_code values (9, 10, 11) in the result_sample table, it means those donations have results.
Donor BWX72 in the sample data is an example of the above scenario.
For the donation we are checking, if it has only 1 previous donation and that donation has a result in the result_sample table, then set this donation's status to A2, after checking the result of this donation as well.
Donor ZBW24 in the sample data is an example of the above scenario.
For the donation we are checking, if it has only 1 previous donation and that donation does NOT have a result in the result_sample table, then set this donation's status to A1, after checking the result of this donation as well.
Donor PGH56 in the sample data is an example of the above scenario.
Like this we need to check all the donations in the don_sample table; it has millions of rows per month.
We need to join don_sample and result_sample on donation_number and check the test_code column for results.
-- creating table
CREATE TABLE [dbo].[DON_SAMPLE](
[donation_number] [varchar](15) NOT NULL,
[donation_date] [datetime] NULL,
[donor_number] [varchar](12) NULL,
[ppta_status] [varchar](5) NULL,
[first_time_donation] [bit] NULL,
[days_since_last_donation] [int] NULL
) ON [PRIMARY]
--inserting values
Insert into [dbo].[DON_SAMPLE] ([donation_number],[donation_date],[donor_number],[ppta_status],[first_time_donation],[days_since_last_donation])
Select '27567167','2013-12-11 00:00:00.000','BWX72','A',1,0
Union ALL
Select '36543897','2014-12-26 00:00:00.000','BWX72','A',0,32
Union ALL
Select '47536542','2014-01-07 00:00:00.000','BWX72','A',0,120
Union ALL
Select '54312654','2014-12-09 00:00:00.000','JPZ41','A',1,0
Union ALL
Select '73276321','2014-12-17 00:00:00.000','JPZ41','A',0,64
Union ALL
Select '83642176','2014-01-15 00:00:00.000','JPZ41','A',0,45
Union ALL
Select '94527541','2014-12-11 00:00:00.000','ZBW24','A',0,120
Union ALL
Select '63497874','2014-01-13 00:00:00.000','ZBW24','A',1,0
Union ALL
Select '95786348','2014-12-17 00:00:00.000','PGH56','A',1,0
Union ALL
Select '87234156','2014-01-27 00:00:00.000','PGH56','A',1,0
--- creating table
CREATE TABLE [dbo].[RESULT_SAMPLE](
[test_result_id] [int] IDENTITY(1,1) NOT NULL,
[donation_number] [varchar](15) NOT NULL,
[donation_date] [datetime] NULL,
[test_code] [varchar](5) NULL,
[test_result_date] [datetime] NULL,
[test_result] [varchar](50) NULL,
[donor_number] [varchar](12) NULL
) ON [PRIMARY]
SET IDENTITY_INSERT dbo.[RESULT_SAMPLE] ON -- required because the inserts below supply explicit test_result_id values
---- inserting values
Insert into [dbo].RESULT_SAMPLE( [test_result_id], [donation_number], [donation_date], [test_code], [test_result_date], [test_result], [donor_number])
Select 278453,'27567167','2013-12-11 00:00:00.000','0009','2014-01-20 00:00:00.000','N','BWX72'
Union ALL
Select 278454,'27567167','2013-12-11 00:00:00.000','0010','2014-01-20 00:00:00.000','NEG','BWX72'
Union ALL
Select 278455,'27567167','2013-12-11 00:00:00.000','0011','2014-01-20 00:00:00.000','N','BWX72'
Union ALL
Select 387653,'36543897','2014-12-26 00:00:00.000','0009','2014-01-24 00:00:00.000','N','BWX72'
Union ALL
Select 387654,'36543897','2014-12-26 00:00:00.000','0081','2014-01-24 00:00:00.000','NEG','BWX72'
Union ALL
Select 387655,'36543897','2014-12-26 00:00:00.000','0082','2014-01-24 00:00:00.000','N','BWX72'
UNION ALL
Select 378245,'73276321','2014-12-17 00:00:00.000','0009','2014-01-30 00:00:00.000','N','JPZ41'
Union ALL
Select 378246,'73276321','2014-12-17 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','JPZ41'
Union ALL
Select 378247,'73276321','2014-12-17 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','JPZ41'
UNION ALL
Select 561234,'83642176','2014-01-15 00:00:00.000','0081','2014-01-19 00:00:00.000','N','JPZ41'
Union ALL
Select 561235,'83642176','2014-01-15 00:00:00.000','0082','2014-01-19 00:00:00.000','NEG','JPZ41'
Union ALL
Select 561236,'83642176','2014-01-15 00:00:00.000','0083','2014-01-19 00:00:00.000','NEG','JPZ41'
Union ALL
Select 457834,'94527541','2014-12-11 00:00:00.000','0009','2014-01-30 00:00:00.000','N','ZBW24'
Union ALL
Select 457835,'94527541','2014-12-11 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','ZBW24'
Union ALL
Select 457836,'94527541','2014-12-11 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','ZBW24'
Union ALL
Select 587345,'63497874','2014-01-13 00:00:00.000','0009','2014-01-29 00:00:00.000','N','ZBW24'
Union ALL
Select 587346,'63497874','2014-01-13 00:00:00.000','0010','2014-01-29 00:00:00.000','NEG','ZBW24'
Union ALL
Select 587347,'63497874','2014-01-13 00:00:00.000','0011','2014-01-29 00:00:00.000','NEG','ZBW24'
Union ALL
Select 524876,'87234156','2014-01-27 00:00:00.000','0081','2014-02-03 00:00:00.000','N','PGH56'
Union ALL
Select 524877,'87234156','2014-01-27 00:00:00.000','0082','2014-02-03 00:00:00.000','N','PGH56'
Union ALL
Select 524878,'87234156','2014-01-27 00:00:00.000','0083','2014-02-03 00:00:00.000','N','PGH56'
select * from DON_SAMPLE
order by donor_number
select * from RESULT_SAMPLE
order by donor_number

You didn't mention the version of SQL Server. It's important, because SQL Server 2012 makes the job much easier (and will also run much faster, by dodging a self join). (As Kalman said, the OVER clause contributes to this answer.)
Both approaches below avoid needing the cursor at all. (There was part of your explanation I didn't understand fully, but I think these suggestions work regardless)
Here's a SQL 2012 answer, using LAG() to look up the previous 1 and 2 donation codes by donor. (EDIT: I overlooked a couple of things in this post; please refer to my follow-up post for the final, fixed answer. I'm leaving this post, with my overlooked items, for posterity.)
With Results_Interim as
(
Select *
, count('x') over(partition by donor_number) as Ct_Donations
, Lag(test_code, 1) over(partition by donor_number order by donation_date) as PrevDon1
, Lag(test_code, 2) over(partition by donor_number order by donation_date) as PrevDon2
from RESULT_SAMPLE
)
Select *
, case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
when PrevDon1 in (9, 10, 11) then 'A2'
when PrevDon1 is not null then 'A1'
End as NEWSTATUS
from Results_Interim
Where Test_result_Date >= '2014-01-01' and Test_result_Date < '2014-02-01'
Order by Donor_Number, donation_date
And a SQL 2005 or greater version, not using SQL 2012 new features
With Results_Temp as
(
Select *
, count('x') over(partition by donor_number) as Ct_Donations
, Row_Number() over(partition by donor_number order by donation_date) as RN_Donor
from RESULT_SAMPLE
)
, Results_Interim as
(
Select R1.*, P1.test_code as PrevDon1, P2.Test_Code as PrevDon2
From Results_Temp R1
left join Results_Temp P1 on P1.Donor_Number = R1.Donor_Number and P1.Rn_Donor = R1.RN_Donor - 1
left join Results_Temp P2 on P2.Donor_Number = R1.Donor_Number and P2.Rn_Donor = R1.RN_Donor - 2
)
Select *
, case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
when PrevDon1 in (9, 10, 11) then 'A2'
when PrevDon1 is not null then 'A1'
End as NEWSTATUS
from Results_Interim
Where Test_result_Date >= '2014-01-01' and Test_result_Date < '2014-02-01'
Order by Donor_Number, donation_date -
Time-out error: 1.6 million internal table rows are computed using FMs
Hi, all...
The report runs fine on the server with fewer than 300 records, but the production server, which has 1.6 million records, gives a time-out error and takes a huge amount of time.
Please suggest urgently.
I have tried a few things.
The internal table has 1.6 million rows to compute, so I have taken them in batches:
loop at i_cinfo into s_cinfo from 1 to 100000.
and so on. I have also applied all the performance-related measures, like indexes, work areas, deleting adjacent duplicates, and FOR ALL ENTRIES, wherever applicable.
Please suggest.
The report is as below:
REPORT ZUSOTCBD_CREDIT_REPORT .
TABLES : KNA1, " General Data Customer Master
KNB1, " Customer Master (Company Code)
KNC1, " Customer master (transaction figures)
KNKK, " Customer master credit management: Control area data
T009, " Fiscal Year Variants P
T009Y, " Shortened fiscal years in Asset Accounting P
T001, " Co. Codes
T001CM, " Permitted Credit Control Areas per Company Code
RF42B, " Structure to hold credit data.
RF035, " Structure to hold credit managment fields
RF02L, " Structure to hold credit data.
TRAS, " Interval for Days in Arrears P
T000CM. " Data for DSO calculation.
* Types
TYPES:
BEGIN OF type_final,
string(50) TYPE c, " String Value for Title
END OF type_final.
data:wa_final TYPE type_final. " Work Area to hold Title Data
DATA: RASID TYPE RF035-RASID value 'R03N'. " For Days in interval
DATA: MONAT(2) TYPE N.
DATA: GJAHR TYPE KNC1-GJAHR.
DATA: LD_PERIODS(32) TYPE N
VALUE '01020304050607080910111213141516'.
DATA sytabix type sy-tabix.
DATA LAND TYPE KNA1-LAND1 VALUE 'US'. " Country Key
DATA: LD_PERIOD TYPE BSID-MONAT, " Fiscal Year Variant
LD_GJAHR TYPE KNC1-GJAHR,
LD_COUNTER TYPE SY-TABIX.
* Internal Tables
* Internal table to hold Title Data
DATA:
i_final TYPE STANDARD TABLE OF type_final.
**Internal Table Permitted Credit Control Areas per Company Code
DATA: BEGIN OF TCMTAB OCCURS 10.
INCLUDE STRUCTURE T001CM.
DATA: END OF TCMTAB.
*Internal table to store Customer no.
DATA : BEGIN OF ICUST OCCURS 0,
KUNNR TYPE KNA1-KUNNR, " Customer No.
END OF ICUST.
DATA: BEGIN OF BUKTAB OCCURS 20,
KKBER LIKE T001-KKBER, " Credit Control Area
BUKRS LIKE T001-BUKRS, " Co. Code
WAERS LIKE T001-WAERS, " Currency
PERIV LIKE T001-PERIV, " Fiscal Year Variant
BUTXT LIKE T001-BUTXT,
END OF BUKTAB.
* Internal table to store fiscal year data
DATA: BEGIN OF GJATAB OCCURS 5,
PERIV LIKE T001-PERIV, " Fiscal Year Variant
GJAHR LIKE KNC1-GJAHR, " Fiscal Year
MONAT LIKE T009-ANZBP, "
ANZBP LIKE T009-ANZBP, " Number of posting periods
END OF GJATAB.
*Main Output internal table to be used to store credit history Information
DATA : BEGIN OF I_CINFO occurs 0,
KUNNR TYPE KNB1-KUNNR, " Customer
KNKLI TYPE KNKK-KNKLI, " Customer's account number with credit limit reference
KKBER TYPE KNKK-KKBER, " Credit Control Area
CTLPC TYPE KNKK-CTLPC, " Risk Category
KLIMK TYPE KNKK-KLIMK, " Credit Limit
SBGRP TYPE KNKK-SBGRP, " Credit representative group for credit management
ERDAT TYPE KNKK-ERDAT, " Created On
DTREV TYPE KNKK-DTREV, " Last Internal Review
REVDB TYPE KNKK-REVDB, " Last External Review
SALDO TYPE RF42B-SALDO, " Balance
DSOIN TYPE RF02L-DSOIN, " DSO
H06SA TYPE RF035-H06SA, " Highest Balance at the end of 6 Months
H06JA TYPE RF035-H06JA, " Year highest Balance 6 Months
H06MO TYPE RF035-H06MO, " Month OF hihest Balance 6 Months
H12SA TYPE RF035-H12SA, " Highest Balance at the end of 12 Months
H12JA TYPE RF035-H12JA, " Year highest Balance 12 Months
H12MO TYPE RF035-H12MO, " Month OF hihest Balance 12 Months
UMP2U TYPE RF42B-UMP2U, " Sales from the current Year
UMP1U TYPE RF42B-UMP1U, " Sales from the Previous Year
SFAEL TYPE RF035-SFAEL, " Total Past Due Open Item
SFAE1 TYPE RF035-SFAE1, " Aging buckets 0-15
SFAE2 TYPE RF035-SFAE2, " Aging buckets 16-30
SFAE3 TYPE RF035-SFAE3, " Aging buckets 31-60
SFAE4 TYPE RF035-SFAE4, " Aging buckets 60-90
SFAE5 TYPE RF035-SFAE5, " Aging buckets Over 90
END Of I_CINFO.
DATA : BEGIN OF S_CINFO ,
KUNNR TYPE KNB1-KUNNR, " Customer
KNKLI TYPE KNKK-KNKLI, " Customer's account number with credit limit reference
KKBER TYPE KNKK-KKBER, " Credit Control Area
CTLPC TYPE KNKK-CTLPC, " Risk Category
KLIMK TYPE KNKK-KLIMK, " Credit Limit
SBGRP TYPE KNKK-SBGRP, " Credit representative group for credit management
ERDAT TYPE KNKK-ERDAT, " Created On
DTREV TYPE KNKK-DTREV, " Last Internal Review
REVDB TYPE KNKK-REVDB, " Last External Review
SALDO TYPE RF42B-SALDO, " Balance
DSOIN TYPE RF02L-DSOIN, " DSO
H06SA TYPE RF035-H06SA, " Highest Balance at the end of 6 Months
H06JA TYPE RF035-H06JA, " Year highest Balance 6 Months
H06MO TYPE RF035-H06MO, " Month OF hihest Balance 6 Months
H12SA TYPE RF035-H12SA, " Highest Balance at the end of 12 Months
H12JA TYPE RF035-H12JA, " Year highest Balance 12 Months
H12MO TYPE RF035-H12MO, " Month OF hihest Balance 12 Months
UMP2U TYPE RF42B-UMP2U, " Sales from the current Year
UMP1U TYPE RF42B-UMP1U, " Sales from the Previous Year
SFAEL TYPE RF035-SFAEL, " Total Past Due Open Item
SFAE1 TYPE RF035-SFAE1, " Aging buckets 0-15
SFAE2 TYPE RF035-SFAE2, " Aging buckets 16-30
SFAE3 TYPE RF035-SFAE3, " Aging buckets 31-60
SFAE4 TYPE RF035-SFAE4, " Aging buckets 60-90
SFAE5 TYPE RF035-SFAE5, " Aging buckets Over 90
END Of s_cinfo.
*Internal table to hold month-wise balance.
DATA: BEGIN OF SALTAB OCCURS 12,
LNUMM(2) TYPE N, " Month
SALDO LIKE RF42B-SALDO, " Balance
END OF SALTAB.
*Internal table used for computing the Balance fields
DATA: BEGIN OF SALDO,
UML01 LIKE KNC1-UM01S,
UML02 LIKE KNC1-UM01S,
UML03 LIKE KNC1-UM01S,
UML04 LIKE KNC1-UM01S,
UML05 LIKE KNC1-UM01S,
UML06 LIKE KNC1-UM01S,
UML07 LIKE KNC1-UM01S,
UML08 LIKE KNC1-UM01S,
UML09 LIKE KNC1-UM01S,
UML10 LIKE KNC1-UM01S,
UML11 LIKE KNC1-UM01S,
UML12 LIKE KNC1-UM01S,
END OF SALDO.
* Structure to hold balance fields
DATA: BEGIN OF SKNKK,
KUNNR LIKE KNA1-KUNNR, " Customer Number 1: Debitor
KONTO LIKE KNKK-KUNNR,
SFAE1 LIKE RF035-SFAE1, " Aging buckets 0-15
SFAE2 LIKE RF035-SFAE2, " Aging buckets 16-30
SFAE3 LIKE RF035-SFAE3, " Aging buckets 30-60
SFAE4 LIKE RF035-SFAE4, " Aging buckets 60-90
SFAE5 LIKE RF035-SFAE5, " Aging buckets Over 90
SFAEL LIKE RF035-SFAEL, " Total Due of Items
UML01 LIKE KNC1-UM01S,
UML02 LIKE KNC1-UM01S,
UML03 LIKE KNC1-UM01S,
UML04 LIKE KNC1-UM01S,
UML05 LIKE KNC1-UM01S,
UML06 LIKE KNC1-UM01S,
UML07 LIKE KNC1-UM01S,
UML08 LIKE KNC1-UM01S,
UML09 LIKE KNC1-UM01S,
UML10 LIKE KNC1-UM01S,
UML11 LIKE KNC1-UM01S,
UML12 LIKE KNC1-UM01S,
UMP1U LIKE RF42B-UMP1U, " Sales from the Previous Year
UMP2U LIKE RF42B-UMP2U, " Sales from the current Year
SALDO LIKE RF42B-SALDO, " Balance
END OF SKNKK.
DATA : BEGIN OF ICUST1 OCCURS 0,
KUNNR TYPE KNA1-KUNNR, "For Customer Filter.
END OF ICUST1.
**Internal table to hold fiscal varriants
DATA: BEGIN OF LT_PERIODS OCCURS 12,
PERIOD LIKE BSID-MONAT,
GJAHR LIKE KNC1-GJAHR,
END OF LT_PERIODS.
**Constants
constants : BUKRS1 TYPE KNB1-BUKRS VALUE '1000',
BUKRS2 TYPE KNB1-BUKRS VALUE '1031',
Recs Type i value '200',
B_count type i value '2'.
* INITIALIZATION
INITIALIZATION.
IF RASID IS INITIAL.
SELECT * FROM TRAS.
EXIT.
ENDSELECT.
IF SY-SUBRC = 0.
RASID = TRAS-RASID.
ENDIF.
ENDIF.
* SELECTION-SCREEN
parameters : p_path type rlgrap-filename default 'C:\Documents and Settings\C890971\Desktop\Credit_history.XLS'.
* Start of selection processing
START-OF-SELECTION.
**Get Customers for Co. Code 1000 & 1031.
PERFORM GET_CUST.
* Get / compute credit information data for company codes 1000 & 1031.
PERFORM GET_CREDIT_DATA.
* End of selection processing
*END-OF-SELECTION.
* Listing credit history data
PERFORM DOWNLOAD_CREDIT_DATA.
* S U B R O U T I N E S
*& Form GET_CUST
*       text
*  -->  p1   text
*  <--  p2   text
FORM GET_CUST .
* Get US-only customers.
Refresh icust.
Select kunnr from kna1 appending table icust
where Land1 = land.
**Delete duplicate records
Delete Adjacent duplicates from icust comparing kunnr.
if icust[] is not initial.
* Limit the selection some more to company codes 1000 & 1031,
* as a join would cost more overhead in comparison.
Select kunnr from knb1 into table icust1
for all entries in icust
where kunnr = icust-kunnr
and bukrs = bukrs1
OR bukrs = bukrs2.
**Delete duplicate records
Delete Adjacent duplicates from icust1 comparing kunnr.
endif.
**Free memory.
Free icust.
* Credit control area
select * from T001CM into table TCMTAB
where bukrs = bukrs1 OR
bukrs = bukrs2.
ENDFORM. " GET_CUST
*& Form GET_CREDIT_DATA
*       text
*  -->  p1   text
*  <--  p2   text
FORM GET_CREDIT_DATA .
DATA : L_TEXT(60) TYPE C ,
Ltext1(50) type C value 'Computing Credit-History Data For',
Ltext2(10) type C value 'Customers',
L_PCT type i value '10',
L_recs type i,
l_batch_recs type i,
l_s_rec type i value 1,
l_recs1(7) type N.
***Fetch data from KNKK table
PERFORM GET_KNKK_DATA .
***Computing credit fields
**Number of customers for whom details are needed.
Describe table I_CINFO lines l_recs.
l_recs1 = l_recs.
Concatenate Ltext1 l_recs1 Ltext2 into l_text separated by ' '.
PERFORM GET_PGRESS_INDICATOR USING l_text l_pct.
***If there are more records than the RECS threshold, process them batch-wise
If l_recs > recs.
l_batch_recs = abs( l_recs / b_count ).
Do b_count times.
loop at i_cinfo INTO S_CINFO from l_S_REC to l_batch_recs.
**Remember the row
sytabix = sy-tabix.
**Compute DSO
PERFORM GET_DSO_FIELD .
* Compute the rest of the credit history data.
PERFORM COMPUTE_SFIELDS.
endloop.
l_S_REC = l_S_REC + l_batch_recs.
l_batch_recs = l_batch_recs + l_batch_recs.
IF l_batch_recs ge l_recs.
l_batch_recs = l_recs.
eNDIF.
* Commit up to here to release the DB locks.
Commit work.
enddo.
else.
loop at i_cinfo INTO S_CINFO.
**Remember the row
sytabix = sy-tabix.
**Compute DSO
PERFORM GET_DSO_FIELD .
* Compute the rest of the credit history data.
PERFORM COMPUTE_SFIELDS.
endloop.
Endif.
ENDFORM. " GET_CREDIT_DATA
*& Form GET_KNKK_DATA
*       text
*  -->  P_ICUST_KUNNR   text
FORM GET_KNKK_DATA .
if icust1[] is not initial.
SELECT KUNNR KNKLI KKBER CTLPC KLIMK
SBGRP ERDAT DTREV REVDB
from KNKK into corresponding fields of table I_Cinfo
for all entries in icust1
where kunnr = icust1-kunnr.
Delete Adjacent duplicates from i_cinfo comparing kunnr.
endif.
**Free Memory for internal table icust1.
Free icust1.
ENDFORM. " GET_KNKK_DATA
**& Form GET_DSO_FIELD
*       text
*  -->  p1   text
*  <--  p2   text
FORM GET_DSO_FIELD .
***Determine DSO Parameter
PERFORM DSO_PARAMETER.
***Compute DSO
CALL FUNCTION 'CUSTOMER_DSO_CALCULATION'
EXPORTING
I_KKBER = s_cinfo-kkber
I_KUNNR = s_cinfo-kunnr
I_ANZBUPER = T000CM-DSOPP
I_XCHILDS = T000CM-DSOCH
I_ACTBALANCE = T000CM-DSOAB
IMPORTING
E_DSOIN = RF02L-DSOIN
EXCEPTIONS
ERROR_MESSAGE = 1.
ENDFORM. " GET_DSO_FIELD
*& Form DSO_PARAMETER
*       text
*  -->  p1   text
*  <--  p2   text
FORM DSO_PARAMETER.
IF T000CM-DSOPP IS INITIAL.
SELECT SINGLE * FROM T000CM.
IF SY-SUBRC EQ 0.
IF T000CM-DSOPP IS INITIAL.
T000CM-DSOPP = '003'.
ENDIF.
ELSE.
T000CM-DSOPP = '003'.
T000CM-DSOCH = ' '.
T000CM-DSOAB = 'X'.
ENDIF.
ENDIF.
ENDFORM. " DSO_PARAMETER
*  -->  p1   text
*  <--  p2   text
FORM PERIODE_ERMITTELN_EXC USING
P03_BUDAT LIKE SYST-DATUM
P03_GJAHR LIKE KNC1-GJAHR
P03_MONAT LIKE MONAT.
CALL FUNCTION 'FI_PERIOD_DETERMINE'
EXPORTING
I_BUDAT = P03_BUDAT
I_PERIV = T001-PERIV
I_BUKRS = T001-BUKRS
I_GJAHR = P03_GJAHR
I_MONAT = P03_MONAT
IMPORTING
E_GJAHR = P03_GJAHR
E_MONAT = P03_MONAT
EXCEPTIONS
ERROR_MESSAGE = 1.
ENDFORM. "PERIODE_ERMITTELN_EXC
*& Form COMPUTE_SFIELDS
*       text
*  -->  p1   text
*  <--  p2   text
FORM COMPUTE_SFIELDS .
**Compute Balance
PERFORM GET_SFIELDS .
S_CINFO-DSOIN = RF02L-DSOIN.
S_CINFO-SALDO = RF035-SALDO.
S_CINFO-H06SA = RF035-H06SA.
S_CINFO-H06JA = RF035-H06JA.
S_CINFO-H06MO = RF035-H06MO.
S_CINFO-H12SA = RF035-H12SA.
S_CINFO-H12JA = RF035-H12JA.
S_CINFO-H12MO = RF035-H12MO.
S_CINFO-UMP2U = RF42B-UMP2U.
S_CINFO-UMP1U = RF42B-UMP1U.
S_CINFO-SFAEL = RF035-SFAEL.
S_CINFO-SFAE1 = RF035-SFAE1.
S_CINFO-SFAE2 = RF035-SFAE2.
S_CINFO-SFAE3 = RF035-SFAE3.
S_CINFO-SFAE4 = RF035-SFAE4.
S_CINFO-SFAE5 = RF035-SFAE5.
* Modify the output table row.
MODIFY I_CINFO FROM S_CINFO INDEX sytabix.
CLEAR: S_CINFO,RF035,RF02L, RF42B.
ENDFORM. " COMPUTE_SFIELDS
*       text
*  -->  P_C_INFO_KUNNR   text
FORM GET_CUST_BAL_INFO.
LOOP AT TCMTAB.
CALL FUNCTION 'FI_COMPANY_CODE_DATA'
EXPORTING
I_BUKRS = TCMTAB-BUKRS
IMPORTING
E_T001 = T001
EXCEPTIONS
ERROR_MESSAGE = 1.
IF SY-SUBRC = 0.
MOVE-CORRESPONDING T001 TO BUKTAB.
BUKTAB-KKBER = TCMTAB-KKBER.
COLLECT BUKTAB.
ENDIF.
ENDLOOP.
LOOP AT BUKTAB WHERE PERIV NE SPACE.
GJATAB-PERIV = BUKTAB-PERIV.
COLLECT GJATAB.
ENDLOOP.
CLEAR: MONAT.
LOOP AT GJATAB.
T001-PERIV = GJATAB-PERIV.
CLEAR: GJAHR, MONAT.
PERFORM PERIODE_ERMITTELN_EXC USING SY-DATLO GJAHR MONAT.
CHECK SY-SUBRC = 0.
GJATAB-GJAHR = GJAHR.
GJATAB-MONAT = MONAT.
SELECT SINGLE * FROM T009 WHERE PERIV = GJATAB-PERIV.
IF SY-SUBRC = 0.
GJATAB-ANZBP = T009-ANZBP.
ENDIF.
MODIFY GJATAB.
ENDLOOP.
LOOP AT BUKTAB.
CHECK NOT ( BUKTAB-PERIV IS INITIAL ).
READ TABLE GJATAB WITH KEY BUKTAB-PERIV.
CHECK SY-SUBRC = 0
AND NOT ( GJATAB-GJAHR IS INITIAL ).
CALL FUNCTION 'CUSTOMER_BALANCE'
EXPORTING
KUNNR = S_cinfo-kunnr
BUKRS = BUKTAB-BUKRS
GJAHR = GJATAB-GJAHR
MONAT = GJATAB-MONAT
PERIV = GJATAB-PERIV
ANZBP = GJATAB-ANZBP
XH6MON = 'X'
XH12MON = 'X'
IMPORTING
UMP2U = RF42B-UMP2U
VMP2U = RF42B-UMP1U
SALDO = RF035-SALDO
UML01 = SALDO-UML01
UML02 = SALDO-UML02
UML03 = SALDO-UML03
UML04 = SALDO-UML04
UML05 = SALDO-UML05
UML06 = SALDO-UML06
UML07 = SALDO-UML07
UML08 = SALDO-UML08
UML09 = SALDO-UML09
UML10 = SALDO-UML10
UML11 = SALDO-UML11
UML12 = SALDO-UML12
EXCEPTIONS
NO_BALANCE = 4.
IF SY-SUBRC = 0.
SKNKK-UMP1U = SKNKK-UMP1U + RF42B-UMP1U.
SKNKK-UMP2U = SKNKK-UMP2U + RF42B-UMP2U.
SKNKK-SALDO = SKNKK-SALDO + RF035-SALDO.
SKNKK-UML01 = SKNKK-UML01 + SALDO-UML01.
SKNKK-UML02 = SKNKK-UML02 + SALDO-UML02.
SKNKK-UML03 = SKNKK-UML03 + SALDO-UML03.
SKNKK-UML04 = SKNKK-UML04 + SALDO-UML04.
SKNKK-UML05 = SKNKK-UML05 + SALDO-UML05.
SKNKK-UML06 = SKNKK-UML06 + SALDO-UML06.
SKNKK-UML07 = SKNKK-UML07 + SALDO-UML07.
SKNKK-UML08 = SKNKK-UML08 + SALDO-UML08.
SKNKK-UML09 = SKNKK-UML09 + SALDO-UML09.
SKNKK-UML10 = SKNKK-UML10 + SALDO-UML10.
SKNKK-UML11 = SKNKK-UML11 + SALDO-UML11.
SKNKK-UML12 = SKNKK-UML12 + SALDO-UML12.
ENDIF.
ENDLOOP.
ENDFORM. "
*& Form GET_SFIELDS
*       text
*  -->  p1   text
*  <--  p2   text
FORM GET_SFIELDS .
sknkk-kunnr = S_CINFO-KUNNR.
**Clear target to store computed values
CLEAR: RF035.
**Compute Balance fields
PERFORM GET_CUST_BAL_INFO.
REFRESH: SALTAB.
SALTAB-LNUMM = '01'. SALTAB-SALDO = SKNKK-UML01. APPEND SALTAB.
SALTAB-LNUMM = '02'. SALTAB-SALDO = SKNKK-UML02. APPEND SALTAB.
SALTAB-LNUMM = '03'. SALTAB-SALDO = SKNKK-UML03. APPEND SALTAB.
SALTAB-LNUMM = '04'. SALTAB-SALDO = SKNKK-UML04. APPEND SALTAB.
SALTAB-LNUMM = '05'. SALTAB-SALDO = SKNKK-UML05. APPEND SALTAB.
SALTAB-LNUMM = '06'. SALTAB-SALDO = SKNKK-UML06. APPEND SALTAB.
SALTAB-LNUMM = '07'. SALTAB-SALDO = SKNKK-UML07. APPEND SALTAB.
SALTAB-LNUMM = '08'. SALTAB-SALDO = SKNKK-UML08. APPEND SALTAB.
SALTAB-LNUMM = '09'. SALTAB-SALDO = SKNKK-UML09. APPEND SALTAB.
SALTAB-LNUMM = '10'. SALTAB-SALDO = SKNKK-UML10. APPEND SALTAB.
SALTAB-LNUMM = '11'. SALTAB-SALDO = SKNKK-UML11. APPEND SALTAB.
SALTAB-LNUMM = '12'. SALTAB-SALDO = SKNKK-UML12. APPEND SALTAB.
READ TABLE SALTAB INDEX 1.
RF035-H06SA = SALTAB-SALDO.
RF035-H06MO = SALTAB-LNUMM.
RF035-H12SA = SALTAB-SALDO.
RF035-H12MO = SALTAB-LNUMM.
* ------ SALTAB ------
LOOP AT SALTAB.
IF SALTAB-SALDO > RF035-H06SA
AND SY-TABIX < 7.
RF035-H06SA = SALTAB-SALDO.
RF035-H06MO = SALTAB-LNUMM.
ENDIF.
IF SALTAB-SALDO > RF035-H12SA
AND SY-TABIX < 13.
RF035-H12SA = SALTAB-SALDO.
RF035-H12MO = SALTAB-LNUMM.
ENDIF.
ENDLOOP.
* ------ Period ------
REFRESH LT_PERIODS.
CLEAR LD_COUNTER.
READ TABLE BUKTAB INDEX 1.
IF SY-SUBRC = 0.
READ TABLE GJATAB WITH KEY BUKTAB-PERIV.
DO GJATAB-MONAT TIMES
VARYING LD_PERIOD FROM LD_PERIODS(2) NEXT LD_PERIODS+2(2)
RANGE LD_PERIODS.
LT_PERIODS-GJAHR = GJATAB-GJAHR.
LT_PERIODS-PERIOD = LD_PERIOD.
LD_COUNTER = LD_COUNTER + 1.
APPEND LT_PERIODS.
ENDDO.
IF LD_COUNTER LT 12.
LD_GJAHR = GJATAB-GJAHR - 1.
CLEAR T009Y.
SELECT SINGLE * FROM T009Y WHERE PERIV = GJATAB-PERIV
AND GJAHR = LD_GJAHR.
DO GJATAB-ANZBP TIMES
VARYING LD_PERIOD FROM LD_PERIODS(2) NEXT LD_PERIODS+2(2)
RANGE LD_PERIODS.
IF T009Y-ANZBP > 0.
CHECK SY-INDEX <= T009Y-ANZBP.
ENDIF.
LD_COUNTER = LD_COUNTER + 1.
LT_PERIODS-GJAHR = LD_GJAHR.
LT_PERIODS-PERIOD = LD_PERIOD.
APPEND LT_PERIODS.
ENDDO.
ENDIF.
IF LD_COUNTER LT 12.
LD_GJAHR = LD_GJAHR - 1.
DO GJATAB-ANZBP TIMES
VARYING LD_PERIOD FROM LD_PERIODS(2) NEXT LD_PERIODS+2(2)
RANGE LD_PERIODS.
LD_COUNTER = LD_COUNTER + 1.
LT_PERIODS-GJAHR = LD_GJAHR.
LT_PERIODS-PERIOD = LD_PERIOD.
APPEND LT_PERIODS.
ENDDO.
ENDIF.
SORT LT_PERIODS BY GJAHR ASCENDING PERIOD ASCENDING.
LD_COUNTER = LD_COUNTER - 12.
DO LD_COUNTER TIMES.
DELETE LT_PERIODS INDEX 1.
ENDDO.
SORT LT_PERIODS BY GJAHR DESCENDING PERIOD DESCENDING.
READ TABLE LT_PERIODS INDEX RF035-H06MO.
RF035-H06MO = LT_PERIODS-PERIOD.
RF035-H06JA = LT_PERIODS-GJAHR.
READ TABLE LT_PERIODS INDEX RF035-H12MO.
RF035-H12MO = LT_PERIODS-PERIOD.
RF035-H12JA = LT_PERIODS-GJAHR.
ENDIF.
**Compute Due Dates fields
PERFORM GET_AGING_BUCKETS .
RF035-SFAE1 = SKNKK-SFAE1.
RF035-SFAE2 = SKNKK-SFAE2.
RF035-SFAE3 = SKNKK-SFAE3.
RF035-SFAE4 = SKNKK-SFAE4 .
RF035-SFAE5 = SKNKK-SFAE5.
ENDFORM. " GET_SFIELDS
*& Form GET_AGING_BUCKETS
*       text
*  -->  p1   text
*  <--  p2   text
FORM GET_AGING_BUCKETS .
DATA: BEGIN OF LT_BUKRS OCCURS 0,
BUKRS LIKE T001-BUKRS,
END OF LT_BUKRS.
DATA: BEGIN OF LT_BUKTAB OCCURS 0,
BUKRS LIKE T001-BUKRS,
WAERS LIKE T001-WAERS,
KKBER LIKE T014-KKBER,
END OF LT_BUKTAB.
DATA: LD_LINES LIKE SY-TABIX.
*...performance optimization: for more than one company codes check....*
*...if balances exist and avoid selection of open items, if they don't.*
*...exist..............................................................*
REFRESH LT_BUKRS.
REFRESH LT_BUKTAB.
DESCRIBE TABLE BUKTAB LINES LD_LINES.
IF LD_LINES GT 1.
SELECT DISTINCT BUKRS APPENDING CORRESPONDING FIELDS
OF TABLE LT_BUKRS
FROM KNC1 FOR ALL ENTRIES IN BUKTAB
WHERE BUKRS = BUKTAB-BUKRS AND
KUNNR = Sknkk-KUNNR.
SELECT DISTINCT BUKRS APPENDING CORRESPONDING FIELDS
OF TABLE LT_BUKRS
FROM KNC3 FOR ALL ENTRIES IN BUKTAB
WHERE BUKRS = BUKTAB-BUKRS AND
KUNNR = Sknkk-KUNNR.
SORT LT_BUKRS.
DELETE ADJACENT DUPLICATES FROM LT_BUKRS.
LOOP AT LT_BUKRS.
LOOP AT BUKTAB WHERE BUKRS = LT_BUKRS-BUKRS.
MOVE-CORRESPONDING BUKTAB TO LT_BUKTAB.
APPEND LT_BUKTAB.
ENDLOOP.
ENDLOOP.
ELSE.
READ TABLE BUKTAB INDEX 1.
MOVE-CORRESPONDING BUKTAB TO LT_BUKTAB.
APPEND LT_BUKTAB.
ENDIF.
*...process company codes for customer given by interface..........*
LOOP AT LT_BUKTAB WHERE KKBER = s_cinfo-KKBER.
CALL FUNCTION 'CUSTOMER_DUE_DATE_ANALYSIS'
EXPORTING BUKRS = LT_BUKTAB-BUKRS
KKBER = s_cinfo-KKBER
KUNNR = Sknkk-KUNNR
RASID = RASID
IMPORTING SFAE1 = RF035-SFAE1
SFAE2 = RF035-SFAE2
SFAE3 = RF035-SFAE3
SFAE4 = RF035-SFAE4
SFAE5 = RF035-SFAE5
SFAEL = RF035-SFAEL
EXCEPTIONS NO_OPEN_ITEMS = 4.
IF SY-SUBRC = 0.
*-- RF035 -
SKNKK-SFAE1 = SKNKK-SFAE1 + RF035-SFAE1.
SKNKK-SFAE2 = SKNKK-SFAE2 + RF035-SFAE2.
SKNKK-SFAE3 = SKNKK-SFAE3 + RF035-SFAE3.
SKNKK-SFAE4 = SKNKK-SFAE4 + RF035-SFAE4.
SKNKK-SFAE5 = SKNKK-SFAE5 + RF035-SFAE5.
SKNKK-SFAEL = SKNKK-SFAEL + RF035-SFAEL.
ENDIF.
ENDLOOP.
ENDFORM. " GET_AGING_BUCKETS
*& Form header
* This subroutine gets data for displaying the title.
* There are no interface parameters to be passed to this subroutine.
FORM header .
wa_final-string = text-000. APPEND wa_final TO i_final.
wa_final-string = text-001. APPEND wa_final TO i_final.
wa_final-string = text-002. APPEND wa_final TO i_final.
wa_final-string = text-003. APPEND wa_final TO i_final.
wa_final-string = text-004. APPEND wa_final TO i_final.
wa_final-string = text-005. APPEND wa_final TO i_final.
wa_final-string = text-006. APPEND wa_final TO i_final.
wa_final-string = text-007. APPEND wa_final TO i_final.
wa_final-string = text-008. APPEND wa_final TO i_final.
wa_final-string = text-009. APPEND wa_final TO i_final.
wa_final-string = text-010. APPEND wa_final TO i_final.
wa_final-string = text-011. APPEND wa_final TO i_final.
wa_final-string = text-012. APPEND wa_final TO i_final.
wa_final-string = text-013. APPEND wa_final TO i_final.
wa_final-string = text-014. APPEND wa_final TO i_final.
wa_final-string = text-015. APPEND wa_final TO i_final.
wa_final-string = text-016. APPEND wa_final TO i_final.
wa_final-string = text-017. APPEND wa_final TO i_final.
wa_final-string = text-018. APPEND wa_final TO i_final.
wa_final-string = text-019. APPEND wa_final TO i_final.
wa_final-string = text-020. APPEND wa_final TO i_final.
wa_final-string = text-021. APPEND wa_final TO i_final.
wa_final-string = text-022. APPEND wa_final TO i_final.
wa_final-string = text-023. APPEND wa_final TO i_final.
wa_final-string = text-024. APPEND wa_final TO i_final.
ENDFORM. " header
*& Form DOWNLOADCREDITDATA
*       text
*  -->  P_P_PATH   text
FORM DOWNLOADCREDITDATA USING P_PATH.
DATA:
lw_file2 TYPE string . " File Path
lw_file2 = p_PATH.
CALL FUNCTION 'GUI_DOWNLOAD'
EXPORTING
*   BIN_FILESIZE = BIN_FILESIZE   " variable not declared; optional parameter omitted
filename = lw_file2
filetype = 'DBF'
APPEND = ' '
write_field_separator = ' '
HEADER = '00'
TRUNC_TRAILING_BLANKS = 'X'
WRITE_LF = 'X'
COL_SELECT = 'X'
COL_SELECT_MASK = ' '
DAT_MODE = 'X'
CONFIRM_OVERWRITE = ' '
NO_AUTH_CHECK = ' '
CODEPAGE = ' '
IGNORE_CERR = ABAP_TRUE
REPLACEMENT = '#'
WRITE_BOM = ' '
TRUNC_TRAILING_BLANKS_EOL = 'X'
WK1_N_FORMAT = '0'
WK1_N_SIZE = ' '
WK1_T_FORMAT = ' '
WK1_T_SIZE = ' '
*   IMPORTING
*     FILELENGTH = FILELENGTH     " variable not declared; optional parameter omitted
TABLES
data_tab = I_CINFO
fieldnames = i_final
EXCEPTIONS
file_write_error = 1
no_batch = 2
gui_refuse_filetransfer = 3
invalid_type = 4
no_authority = 5
unknown_error = 6
header_not_allowed = 7
separator_not_allowed = 8
filesize_not_allowed = 9
header_too_long = 10
dp_error_create = 11
dp_error_send = 12
dp_error_write = 13
unknown_dp_error = 14
access_denied = 15
dp_out_of_memory = 16
disk_full = 17
dp_timeout = 18
file_not_found = 19
dataprovider_exception = 20
control_flush_error = 21.
IF sy-subrc <> 0.
*   MESSAGE handling goes here
ENDIF. " IF sy-subrc EQ 0
ENDFORM. " DOWNLOADCREDITDATA
*& Form DOWNLOAD_CREDIT_DATA
text
--> p1 text
<-- p2 text
FORM DOWNLOAD_CREDIT_DATA .
PERFORM HEADER.
PERFORM DOWNLOADCREDITDATA USING P_PATH.
ENDFORM. " DOWNLOAD_CREDIT_DATA
*& Form GET_PGRESS_INDICATOR
text
-->P_L_TEXT text
-->P_L_PCT text
FORM GET_PGRESS_INDICATOR USING L_TEXT
L_PCT.
CALL FUNCTION 'SAPGUI_PROGRESS_INDICATOR'
EXPORTING
PERCENTAGE = l_pct
TEXT = l_TEXT.
ENDFORM. " GET_PGRESS_INDICATORIf you are just Downloading to a Flat file then why dont you have logic in place for the program to dump the data read into the file to that point depending on any criteria like accounts or customer then clear the internal table and run it in the back ground.
Also, try using a cursor to read the records from the table; that will make it a bit more efficient than a plain SELECT statement. -
Inserting 10 million rows into a table hangs
Hi, through TOAD I am using a simple for loop to insert 10 million rows into a table, by saying:
for i in 1 ......10000000
insert.................
It hangs for a long time.
Is there a better way to insert the rows into the table?
I have to test for performance, and I have to insert 50 million rows into its child table.
Practically, when the code moves to production it will have this many rows (maybe more); that's why I have to test with this many rows.
Please suggest a better way to do this.
Regards
raj

Must be a 'hardware thing'.
My ancient desktop (Pentium IV, 1.8 GHz, 512 MB), running XE, needs:
MHO%xe> desc t
Name Null? Type
N NUMBER
A VARCHAR2(10)
B VARCHAR2(10)
MHO%xe> insert /*+ APPEND */ into t
2 with my_data as (
3 select level n, 'abc' a, 'def' b from dual
4 connect by level <= 10000000
5 )
6 select * from my_data;
10000000 rows created.
Elapsed: 00:04:09.71
MHO%xe> drop table t;
Table dropped.
Elapsed: 00:00:31.50
MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
Table created.
Elapsed: 00:00:01.04
MHO%xe> insert into t
2 with my_data as (
3 select level n, 'abc' a, 'def' b from dual
4 connect by level <= 10000000
5 )
6 select * from my_data;
10000000 rows created.
Elapsed: 00:02:44.12
MHO%xe> drop table t;
Table dropped.
Elapsed: 00:00:09.46
MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
Table created.
Elapsed: 00:00:00.15
MHO%xe> insert /*+ APPEND */ into t
2 with my_data as (
3 select level n, 'abc' a, 'def' b from dual
4 connect by level <= 10000000
5 )
6 select * from my_data;
10000000 rows created.
Elapsed: 00:01:03.89
MHO%xe> drop table t;
Table dropped.
Elapsed: 00:00:27.17
MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
Table created.
Elapsed: 00:00:01.15
MHO%xe> insert into t
2 with my_data as (
3 select level n, 'abc' a, 'def' b from dual
4 connect by level <= 10000000
5 )
6 select * from my_data;
10000000 rows created.
Elapsed: 00:01:56.10

Yeah, it cached it a bit (of course ;) )
But the APPEND hint still seems to shave about 50 seconds off anyway (using NO indexes at all) on my 'configuration'. -
I am designing a table for which I am loading data from different tables using joins. I have a Status column, for which there are about 16 different statuses coming from different tables; for each case there is a condition, and if it is satisfied, the corresponding status should appear in the Status column. Written that way, the query needs 16 different cases.
Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data still gets to the table quickly? The data we are getting comes mostly from big tables of about 7 million records, and if we write the logic as a CASE, it will scan the table for each case, about 16 times. How can I do this faster? Can anyone help me out?

Here is the code I have written to get the data from temp tables, which take records from the 7-million-row tables filtered to year 2013. This takes more than an hour to run. I am posting the part of the code which is running slow, mainly the part for the Status column (see the join sketch after the query below).
SELECT
z.SYSTEMNAME
--,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
--else NULL
--End AS SubSystemName
, CASE
WHEN z.TAX_ID IN
(SELECT DISTINCT zxc.TIN
FROM .dbo.SQS_Provider_Tracking zxc
WHERE zxc.[SubSystem Name] <> 'NULL')
THEN
(SELECT DISTINCT [Subsystem Name]
FROM .dbo.SQS_Provider_Tracking zxc
WHERE z.TAX_ID = zxc.TIN)
End As SubSYSTEMNAME
,z.PROVIDERNAME
,z.STATECODE
,z.TAX_ID
,z.SRC_PAR_CD
,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
, CASE
WHEN z.SRC_PAR_CD IN ('E','O','S','W')
THEN 'Nonpar Waiver'
-- --Is Puerto Rico of Lifesynch
WHEN z.TAX_ID IN
(SELECT DISTINCT a.TAX_ID
FROM .dbo.SQS_NonPar_PR_LS_TINs a
WHERE a.Bucket <> 'Nonpar')
THEN
(SELECT DISTINCT a.Bucket
FROM .dbo.SQS_NonPar_PR_LS_TINs a
WHERE a.TAX_ID = z.TAX_ID)
--**Amendment Mailed**
WHEN z.TAX_ID IN
(SELECT DISTINCT b.PROV_TIN
FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN
(SELECT DISTINCT b.Mailing
FROM .dbo.SQS_Mailed_TINs_010614 b
WHERE z.TAX_ID = b.PROV_TIN)
-- --**Amendment Mailed Wave 3-5**
WHEN z.TAX_ID In
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (3rd Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (3rd Wave)'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (4th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (4th Wave)'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (5th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (5th Wave)'
-- --**Top Objecting Systems**
WHEN z.SYSTEMNAME IN
('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
THEN 'Top Objecting Systems'
WHEN z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Top Objector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H'
THEN 'Top Objecting Systems'
-- --**Other Objecting Hospitals**
WHEN (z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Objector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Other Objecting Hospitals'
-- --**Objecting Physicians**
WHEN (z.TAX_ID IN
(SELECT DISTINCT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj
WHERE obj.[Objector?] in ('Objector','Top Objector')
and z.TAX_ID = obj.TIN
and z.Hosp_Ind = 'P'))
THEN 'Objecting Physicians'
--****Rejecting Hospitals****
WHEN (z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Rejecting Hospitals'
--****Rejecting Physciains****
WHEN
(z.TAX_ID IN
(SELECT DISTINCT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj
WHERE z.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector')
and z.Hosp_Ind = 'P')
THEN 'Rejecting Physicians'
----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
-- --**Non-Objecting Hospitals**
WHEN z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
WHERE
(z.TAX_ID = h.TAX_ID)
OR h.SMG_ID IS NOT NULL)
and z.Hosp_Ind = 'H'
THEN 'Non-Objecting Hospitals'
-- **Outstanding Contracts for Review**
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Non-Objecting Bilateral Physicians'
AND z.TAX_ID = qz.PROV_TIN)
Then 'Non-Objecting Bilateral Physicians'
When z.TAX_ID in
(select distinct
p.TAX_ID
from dbo.SQS_CoC_Potential_Mail_List p
where p.amendmentrights <> 'Unilateral'
AND z.TAX_ID = p.TAX_ID)
THEN 'Non-Objecting Bilateral Physicians'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'More Research Needed'
AND qz.PROV_TIN = z.TAX_ID)
THEN 'More Research Needed'
WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
THEN 'ERROR'
else 'Market Review/Preparing to Mail'
END AS [STATUS Column]
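One common way to avoid re-scanning the lookup tables once per CASE branch is to LEFT JOIN each lookup table a single time and test the joined columns inside the CASE. A minimal sketch of the idea, using two of the tables from the query above (z_source stands in for the driving table, and the branch logic is heavily simplified):

-- Each lookup table is joined once; each CASE branch then tests an
-- already-joined column instead of running its own subquery.
SELECT  z.TAX_ID,
        CASE
            WHEN pt.TIN IS NOT NULL THEN pt.[Subsystem Name]
            WHEN z.SRC_PAR_CD IN ('E','O','S','W') THEN 'Nonpar Waiver'
            WHEN pr.TAX_ID IS NOT NULL THEN pr.Bucket
            ELSE 'Market Review/Preparing to Mail'
        END AS [STATUS Column]
FROM    z_source z
LEFT JOIN dbo.SQS_Provider_Tracking pt
       ON pt.TIN = z.TAX_ID AND pt.[SubSystem Name] <> 'NULL'
LEFT JOIN dbo.SQS_NonPar_PR_LS_TINs pr
       ON pr.TAX_ID = z.TAX_ID AND pr.Bucket <> 'Nonpar';

If a lookup table can hold several rows per TIN, pre-aggregate it into a derived table (or an indexed temp table) first, so the joins stay one-to-one.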
Please suggest on this -
Insert/select one million rows at a time from source to target table
Hi,
Oracle 10.2.0.4.0
I am trying to insert around 10 million rows into table target from source as follows:
INSERT /*+ APPEND NOLOGGING */ INTO target
SELECT *
FROM source f
WHERE
NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
There is a unique index on the target table on (col1, col2).
I was having issues with undo, and now I am getting the following error with temp space:
ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
I believe it would be easier if I did a bulk insert of one million rows at a time and committed.
I appreciate any advice on this, please.
Thanks,
Ashok

902986 wrote:
NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
I don't know if it has any bearing on the case, but is that WHERE clause on purpose or a typo? Should it be:
NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.COL1 and f.col2 = m.col2);
Anyway, how much of your data already exists in target compared to source?
Do you have 10 million in source and very few in target, so most of source will be inserted into target?
Or do you have 9 million already in target, so most of source will be filtered away and only few records inserted?
And what is the explain plan for your statement?
INSERT /*+ APPEND NOLOGGING */ INTO target
SELECT *
FROM source f
WHERE
NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
As your error has to do with TEMP, your statement might be trying to do a lot of work in TEMP to materialize the result set, or parts of it, perhaps to use in a hash join before inserting.
So perhaps you can work towards an explain plan that allows the database to do the inserts "along the way" rather than calculate the whole thing in temp first.
That probably will go much slower (for example using nested loops for each row to check the exists), but that's a tradeoff - if you can't have sufficient TEMP then you may have to optimize for less usage of that resource at the expense of another resource ;-)
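As a sketch of that tradeoff (Oracle syntax; the NL_AJ anti-join hint is version-dependent, so treat it as an assumption to verify in the explain plan):

-- Drive the anti-join with nested loops: each source row probes the
-- unique index on target(col1, col2) instead of hashing or sorting the
-- whole target in TEMP.
INSERT /*+ APPEND */ INTO target
SELECT f.*
FROM   source f
WHERE  NOT EXISTS (SELECT /*+ NL_AJ */ 1
                   FROM   target m
                   WHERE  m.col1 = f.col1
                   AND    m.col2 = f.col2);
COMMIT;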
Alternatively ask your DBA to allocate more room in TEMP tablespace. Or have the DBA check if there are other sessions using a lot of TEMP in which case maybe you just have to make sure your session is the only one using lots of TEMP at the time you execute. -
How to purge 13 million rows from a table?
Hi,
I have a table which is 10 GB in size and has 1.6 crore (16 million) rows.
Based on some conditions i want to remove the data.
Since there are other processes running simultaneously, I am scared to even query for the dataset that I require.
Can you suggest something?
The table has 2 XML columns, so that scares me even more :)
The first approach which I thought of (the one normally practised for purging) is:
create a copy of the table containing only the data that needs to be retained;
secondly, purge (truncate) the original table;
third, rename the copy back to the original name (sketched below).
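A hedged sketch of that copy-and-swap approach (Oracle; the table name and the keep-condition are placeholders):

-- 1. Copy only the rows to keep; direct-path CTAS with NOLOGGING keeps
--    undo/redo small.
CREATE TABLE big_table_keep NOLOGGING AS
  SELECT * FROM big_table WHERE <condition to retain>;

-- 2. Swap the tables (a short outage window is needed here).
RENAME big_table TO big_table_old;
RENAME big_table_keep TO big_table;

-- 3. Re-create indexes, constraints, grants and triggers on the new
--    table, then drop big_table_old once everything is verified.

Any DML hitting the old table between the copy and the swap is lost, which is why this approach normally requires at least a brief downtime.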
My concern is that if I query this 2.6 crore (26 million) row data set to get the desired result, it will take a lot of time and will also hamper other processing.
So... what approach should I take?
Also, do you feel that this will require downtime?
I am using Oracle 11g.
Edited by: user8731258 on Aug 21, 2012 6:32 AM
Edited by: user8731258 on Aug 22, 2012 5:57 AM

user8731258 wrote:
I guess people need not waste time on these silly things. I am sorry, but even you need to show respect for other people's language.
Don't unnecessarily raise these issues. This forum is to share knowledge.
Thanks.

What you call silly isn't silly, but perfectly legitimate. This being an international forum, people not belonging to (or unaware of) the Asian subcontinent would not be able to understand jargon such as lakhs and crores. In order to help people understand your problem, you should use globally accepted terminology, e.g. use "1 million" instead of "10 lakhs", etc.
How would you react if someone posted a question using terminology that you did not understand?
Moreover, if you are asking for help on forums, you should provide sufficient information so that volunteers do not have to Google the terms used in your question. You are only harming yourself otherwise.
Now, focusing on your original problem, if you can query out the Table and let us know how many records you wish to retain or delete, people are more than willing to help you out on it. RP and others have already provided you a short solution. -
Deleting records from a table with 12 million records
We need to delete some records on this table.
SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
Name Null? Type
CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
CLM_PMT_CHCK_AMT NUMBER(9,2)
CLM_PMT_CHCK_DT DATE
CLM_PMT_PAYEE_NAME VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
CLM_PMT_PAYEE_CITY VARCHAR2(19)
CLM_PMT_PAYEE_STATE_CD CHAR(2)
CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
CLM_PMT_SUM_CHCK_IND CHAR(1)
CLM_PMT_PAYEE_TYPE_CD CHAR(1)
CLM_PMT_CHCK_STTS_CD CHAR(2)
SYSTEM_INSERT_DT DATE
SYSTEM_UPDATE_DT
I only need to delete the records based on this condition
select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
This table has 12 million records.
Please advise
Regards,
Narayan

user7202581 wrote:
I only need to delete the records based on this condition
select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
This table has 12 million records.
Please advise.

DELETE from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018; -
Reading data from a table which can have millions of records
Hi,
I need to write a query in my report that can fetch records from a Z table that has around 20 lakh (= 2 million) records. As far as I know, we cannot handle such a huge amount of data in our internal tables, because the size of an internal table is fixed and can only be increased by the Basis people.
So can anybody tell me an approach I can follow to split the data from this table?
Any other approach is also welcome.
Please reply ASAP when you have the time.
Thanks And Regards,
Mayank
Edited by: Thomas Zloch on Mar 6, 2010 9:25 PM

Hi Mayank,
reduce the data selected to the fields you really need and avoid SELECT *. See the online help on SELECT, addition PACKAGE SIZE.
select field1 field2 field3
into corresponding fields of table lt_table package size nnn WHERE ...
* process lt_table
endselect.
Regards,
Clemens
Edited by: Rob Burbank on Mar 9, 2010 12:29 PM -
Is it OK if we have 42 million records in a single fact table?
Hello,
We have three outstanding fact tables, and we need to add one more fact type. We were wondering whether we should create two different fact tables, or whether we can put the values into one of the existing similar fact tables; the record count would go up to 42 million if we add them. So my question is: a single fact table with all the records, or breaking it down into two different ones? Thanks!

I am not sure what an "outstanding fact" or a "fact type" is. A 42-million-row fact table doesn't necessarily indicate you are doing something wrong, although it does sound odd. I would expect most facts to be small, as they should contain aggregated measures to speed up reports. In some cases you may want to drill down to the detailed transaction level, in which case you may find these large facts. But care should be taken not to allow users to query this fact without using the "transaction ID", which obviously should be indexed and should guarantee that queries will be quick.
Guessing from your post (as it is not clear or descriptive enough), it would seem that you are adding a new dimension to your fact, and that will cause the fact to increase its row count to 42 million. That probably means you are changing the granularity of the fact. That may or may not be correct, depending on your model.
Update Millions of records in a table
Hi ,
My DB is 10.2.0
This is a very generic question, hence I am not adding the Plan.
I have to update a column in a table with value of sum of another column from the same table.
The table has 19 million records.
UPDATE transactions a SET rest_amount = Nvl((SELECT Sum(trans_amount) FROM transactions b
WHERE b.trans_date=a.trans_date AND trans_ind < 2),trans_amount)
WHERE trans_grp < 6999 and trans_ind < 2;This query takes 10 hours to run. There are indexes on trans_date and trans_grp. There is no index on rest_amount column.
As per tom in
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6407993912330
He suggested disabling all indexes and constraints and then re-enabling them. Is that applicable only for INSERT, or for all DML?
Only during an insert will the constraints be validated and the index refreshed with the new row. But in my case, I am updating a column which is not part of any index. Should I follow Tom's suggestion?

The table has 19 million records.
How many rows (that's rows) are actually updated?
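For what it's worth, here is a sketch of an alternative that computes the per-date sum once with an analytic function instead of running a correlated SUM for every row (Oracle 10g syntax; verify the semantics against your data before trusting it):

-- One pass computes the per-trans_date sum; the MERGE then updates only
-- the rows the original statement targeted.
MERGE INTO transactions t
USING (SELECT rowid AS rid, trans_grp, trans_ind, trans_amount,
              SUM(CASE WHEN trans_ind < 2 THEN trans_amount END)
                OVER (PARTITION BY trans_date) AS date_sum
       FROM   transactions) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN UPDATE
  SET   t.rest_amount = NVL(s.date_sum, s.trans_amount)
  WHERE s.trans_grp < 6999 AND s.trans_ind < 2;

This reads the table once for the aggregation rather than once per updated row, though updating a large share of 19 million rows will still generate substantial undo.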
-
Internal Table with 22 Million Records
Hello,
I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
Any tips on how I can optimize my coding? I have attached the Short-Dump.
Thanks,
SD
DATA: ls_source TYPE y_source_fields,
ls_target TYPE y_target_fields.
DATA: it_source_tmp TYPE yt_source_fields,
et_target_tmp TYPE yt_target_fields.
TYPES: BEGIN OF IT_TAB1,
BPARTNER TYPE /BI0/OIBPARTNER,
DATEBIRTH TYPE /BI0/OIDATEBIRTH,
ALTER TYPE /GKV/BW01_ALTER,
ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
END OF IT_TAB1.
DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
WITH NON-UNIQUE KEY BPARTNER,
WA_XX_TAB1 TYPE IT_TAB1.
it_source_tmp[] = it_source[].
SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
DELETE ADJACENT DUPLICATES FROM it_source_tmp
COMPARING /B99/S_BWPKKD.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
LOOP AT it_source INTO ls_source.
READ TABLE IT_XX_TAB1
INTO WA_XX_TAB1
WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
IF sy-subrc = 0.
ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
ENDIF.
MOVE-CORRESPONDING ls_source TO ls_target.
APPEND ls_target TO et_target.
CLEAR ls_target.
ENDLOOP.

Hi SD,
please put the SELECT query inside the condition marked below.
IF it_source_tmp[] IS NOT INITIAL.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
ENDIF.
This will solve your performance issue. When the internal table it_source_tmp had no records, the SELECT was fetching all the records from the database; with this condition in place it will not select any records if the table is empty.
Regards,
Pravin -
How to export a table with half a million rows?
I need to export a table that has 535,000 rows. I tried to export to Excel and it exported only 65,535 rows. I tried to export to a text file and it said it was using the clipboard (?) and 65,000 rows was the maximum. Surely there has to be a way to export the entire table. I've been able to import much bigger CSV files than this, with millions of rows.

What version of Access are you using? Are you attempting to copy and paste records, or are you using Access's export functionality from the menu/ribbon? I'm using Access 2010 and just exported a million-record table to both a text file and to Excel
(.xlsx format). Excel 2003 (using .xls 97-2003 format) does have a limit of 65,536 rows but the later .xlsx format does not.
-Bruce -
Problem with Fetching Million Records from Table COEP into an Internal Table
Hi Everyone ! Hope things are going well.
Table : COEP has 6 million records.
I am trying to get records based on certain criteria; there are at least 5 conditions in the WHERE clause.
I've noticed it takes about 15 minutes to populate the internal table. How can I improve the performance to less than a minute for a fetch of 500 records from a database set of 6 million?
Regards,
Owais...

The first obvious suggestion would be to use the proper indexes. I had a similar issue with COVP, which is a join of COEP and COBK. I got substantial performance improvement by adding "where LEDNR EQ '00'" to the WHERE clause.
Here is my select:
SELECT kokrs
belnr
buzei
ebeln
ebelp
wkgbtr
refbn
bukrs
gjahr
FROM covp CLIENT SPECIFIED
INTO TABLE i_coep
FOR ALL ENTRIES IN i_objnr
WHERE mandt EQ sy-mandt
AND lednr EQ '00'
AND objnr = i_objnr-objnr
AND kokrs = c_conarea. -
1 million row full table scan in less than one minute?
All,
a customer has a table which is around 1 million rows.
The table size is 300 Mo.
The customer wants to calculate regular arithmetic operations and synthetic data on the whole table. He will use 'group by' and 'sort by' statements.
He wants to know if it would be possible to get the results in
less than one minute, and what would be the necessary
architecture in order to achieve this?
He currently has a machine with 4 GB RAM and 4 processors (I think...).
It's a 7.3 database.
Would an upgrade to 8i or 9i help? Would partitioning help?
Would parallel query help? What else?
There will be no other concurrent requests on the system.
Thanks!

Hi,
If you have four processors, parallel query will increase the speed of executing your statement. It will work better with partitions. In version 7.3 it is not very easy to manage partitions, because you do not have partition tables, only partition views. The speed may be the same, but management of partition views is not so easy, so in versions 8i and 9i it will be better.
If you have an index on the GROUP BY columns and they are NOT NULL, that is an additional way of increasing speed, because you will eliminate the sort operation.
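As a rough illustration of both points (hypothetical table and column names; verify the hint syntax on your version):

-- Parallel full scan for the aggregation, spread over the 4 CPUs:
SELECT /*+ PARALLEL(t, 4) */
       region, SUM(amount), AVG(amount)
FROM   big_table t
GROUP  BY region
ORDER  BY region;

-- If the GROUP BY column is NOT NULL and indexed, the optimizer can
-- sometimes read the rows in order from the index and skip the sort:
CREATE INDEX big_table_region_ix ON big_table (region);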
Regards -
Select query on a table with 13 million rows
Hi guys,
I have been trying to perform a select query on a table which has 13 million entries; however, it took around 58 minutes to complete.
The table has 8 columns with a 4-column primary key, as below:
(PK) SegmentID > INT
(PK) IPAddress > VARCHAR (45)
MAC Address > VARCHAR (45)
(PK) Application Name > VARCHAR (45)
Total Bytes > INT
Dates > VARCHAR (45)
Times > VARCHAR (45)
(PK) DateTime > DATETIME
The SQL query format is:
select ipaddress, macaddress, sum(totalbytes), applicationname , dates,
times from appstat where segmentid = 1 and datetime between '2011-01-03
15:00:00.0' and '2011-01-04 15:00:00.0' group by ipaddress,
applicationname order by applicationname, sum(totalbytes) desc
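For reference, one thing worth checking besides my.cnf (a hypothetical sketch; the index name is made up, the columns come from the table definition above) is a composite index that matches the WHERE clause, with the equality column first and the range column second:

-- Lets the optimizer seek to segmentid = 1 and then range-scan the
-- datetime window, instead of touching all 13 million rows.
ALTER TABLE appstat
  ADD INDEX ix_segmentid_datetime (segmentid, `datetime`);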
Is there a way I can improve this query to be faster (through my.conf or any other method)?
Any feedback is welcomed.
Thank you.
MusTolls wrote:
What db is this?
You never said.
Anyway, it looks like it's using the Primary Key to find the correct rows.
Is that the correct number of rows returned?
5 million?
Sorted?

I am using MySQL. By the way, the query time has been much faster (22 sec) since I changed the configuration file (based on my-huge.cnf).
The number of rows returned is 7999 rows.
This is some portion of the my.cnf
# The MySQL server
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 800M
max_allowed_packet = 1M
table_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size= 16M
log = /var/log/mysql.log
log-slow-queries = /var/log/mysqld.slow.log
long_query_time=10
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 6
Is there anything else I need to tune so it can be faster ?
Thanks a bunch.
Edited by: user578505 on Jan 17, 2011 6:47 PM