Select query is taking a lot of time to fetch data.
SELECT a~lgnum a~tanum a~bdatu a~bzeit a~bname a~benum b~matnr b~maktx b~qdatu b~qzeit b~vlenr b~nlenr b~vltyp b~vlber b~vlpla
b~nltyp b~nlber b~nlpla b~vsola b~vorga INTO TABLE it_final FROM ltak AS a
INNER JOIN ltap AS b ON b~tanum EQ a~tanum AND a~lgnum EQ b~lgnum
WHERE a~lgnum = p_whno
AND a~tanum IN s_tono
AND a~bdatu IN s_tocd
AND a~bzeit IN s_bzeit
AND a~bname IN s_uname
AND a~betyp = 'P'
AND b~matnr IN s_mno
AND b~vorga <> 'ST'.
Moderator message: Please Read before Posting in the Performance and Tuning Forum
Edited by: Thomas Zloch on Mar 27, 2011 12:05 PM
Hi Shiva,
I am using two more select queries in the same manner.
Here are the other two select queries:
***************1************************
SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
FROM ztftelpt LEFT JOIN ztfzberep
ON ztfzberep~gjahr = st_input-gjahr
AND ztfzberep~poper = st_input-poper
AND ztfzberep~cntr = ztftelpt~rprctr
WHERE rldnr = c_telstra_projects
AND rrcty = c_actual
AND rvers = c_ver_001
AND rbukrs = st_input-bukrs
AND racct = st_input-saknr
AND ryear = st_input-gjahr
and rzzlstar in r_lstar
AND rpmax = c_max_period.
and the second one is
*************************2************************
SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
FROM ztftelnt LEFT JOIN ztfzberep
ON ztfzberep~gjahr = st_input-gjahr
AND ztfzberep~poper = st_input-poper
AND ztfzberep~cntr = ztftelnt~rprctr
WHERE rldnr = c_telstra_networks
AND rrcty = c_actual
AND rvers = c_ver_001
AND rbukrs = st_input-bukrs
AND racct = st_input-saknr
AND ryear = st_input-gjahr
and rzzlstar in r_lstar
AND rpmax = c_max_period.
For both of the above queries the program takes very little time, although the tables used in them hold a similar amount of data. And I cannot remove the APPENDING CORRESPONDING FIELDS, because I have to append the data after fetching it from each table; if I don't use it, the data fetched earlier is lost.
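To make the APPENDING point concrete, here is a minimal Python analogue (illustrative only; fetch_rows stands in for a SELECT): assigning the result replaces the target table, the way INTO TABLE does, while extending it keeps the earlier rows, the way APPENDING does.

```python
def fetch_rows(source):
    """Stand-in for a SELECT: returns the rows of one source table."""
    return list(source)

# INTO TABLE semantics: each fetch REPLACES the accumulator.
summary = fetch_rows([("2010", "ztftelpt-row")])
summary = fetch_rows([("2010", "ztftelnt-row")])        # earlier rows are lost
assert summary == [("2010", "ztftelnt-row")]

# APPENDING ... TABLE semantics: each fetch EXTENDS the accumulator,
# which is why APPENDING CORRESPONDING FIELDS cannot simply be dropped here.
summary = fetch_rows([("2010", "ztftelpt-row")])
summary.extend(fetch_rows([("2010", "ztftelnt-row")]))  # both result sets kept
assert summary == [("2010", "ztftelpt-row"), ("2010", "ztftelnt-row")]
```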
Thanks in advance.
Sourabh
Similar Messages
-
SELECT is taking a lot of time to fetch data from cluster table BSET
<Modified the subject line>
Hi experts,
I want to fetch data for some fields from the BSET table, but it is taking a lot of time as BSET is a cluster table.
Can you please suggest another way to fetch data from a cluster table? I am using native SQL to fetch the data.
Regards,
SURYA
Edited by: Suhas Saha on Jun 29, 2011 1:51 PM
Hi Suhas,
As per your suggestion I am now using a normal SQL statement to select data from BSET, but it is still taking a long time.
My SQL statement is :
SELECT BELNR
GJAHR
BUZEI
MWSKZ
HWBAS
KSCHL
KNUMH FROM BSET INTO CORRESPONDING FIELDS OF TABLE IT_BSET
FOR ALL ENTRIES IN IT_BKPF
WHERE BELNR = IT_BKPF-BELNR
AND BUKRS = IT_BKPF-BUKRS.
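One thing worth ruling out with this pattern (sketched below in Python as a model of the FOR ALL ENTRIES semantics, not SAP code): if the driver table IT_BKPF is ever empty, the restriction is dropped entirely and the whole of BSET is read.

```python
def for_all_entries(db_rows, driver, key):
    """Mimic ABAP FOR ALL ENTRIES: with an empty driver table the
    restriction is dropped and EVERY row is returned."""
    if not driver:                        # empty driver -> full table read!
        return list(db_rows)
    wanted = {d[key] for d in driver}     # duplicate driver keys collapse
    return [r for r in db_rows if r[key] in wanted]

bset = [{"belnr": "100"}, {"belnr": "200"}, {"belnr": "300"}]

# Normal case: only the matching documents come back.
assert for_all_entries(bset, [{"belnr": "100"}], "belnr") == [{"belnr": "100"}]

# The dangerous case: an empty IT_BKPF fetches all of BSET.
assert for_all_entries(bset, [], "belnr") == bset
```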
<Added code tags>
Can you suggest anything more?
Regards,
SURYA
Edited by: Suhas Saha on Jun 29, 2011 4:16 PM -
Select query is taking a lot of time.
Hi Experts,
Here is the query i have used.
It is taking a lot of time and sometimes it times out.
SELECT mvgr1 "Line of Business
werks "Plant
lgort "Storage Location
charg "Batch
matnr "Material Number
kwmeng "Ordered quantity
posnr "Item
vbeln "Sales Order Number
FROM vbap
INTO TABLE gt_salesdatatemp
FOR ALL ENTRIES IN gt_matmerge
WHERE matnr EQ gt_matmerge-matnr
AND werks EQ gt_matmerge-werks
AND mvgr1 EQ gt_matmerge-mvgr1
AND lgort EQ gt_matmerge-lgort
AND charg EQ gt_matmerge-charg
AND abgru EQ space.
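Since FOR ALL ENTRIES fires one read per distinct driver key, a cheap first step is deduplicating the driver table, the SORT / DELETE ADJACENT DUPLICATES idiom. A Python model of that idiom (field names are illustrative):

```python
def dedup_driver(rows, key_fields):
    """Model of SORT itab BY ... / DELETE ADJACENT DUPLICATES COMPARING ...:
    keep one row per key combination so each key is read only once."""
    seen = set()
    out = []
    for row in sorted(rows, key=lambda r: tuple(r[f] for f in key_fields)):
        key = tuple(row[f] for f in key_fields)
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

driver = [
    {"matnr": "M1", "werks": "P1"},
    {"matnr": "M1", "werks": "P1"},   # duplicate: would cost a second read
    {"matnr": "M2", "werks": "P1"},
]
assert len(dedup_driver(driver, ["matnr", "werks"])) == 2
```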
I didn't use the primary keys of the VBAP table. Could you please suggest how to improve the performance of this query?
Thanks & Regards,
Krishna Reddy.T
There are more than 4 lakh (400,000) records in GT_MATMERGE. -
Select query taking too much time to fetch data from pool table a005
Dear all,
I am using two pool tables, A005 and A006, in my program, and a select query to fetch data from these tables; an example is below.
select * from a005 into table t_a005 for all entries in it_itab
where vkorg in s_vkorg
and matnr in s_matnr
and aplp in s_aplp
and kmunh = it_itab-kmunh.
Here I can't create an index either, as these are pool tables. If there is any solution, please help me with this.
Thanks.
It would be helpful to know what other fields are in the internal table you are using for the FOR ALL ENTRIES.
In general, you should code the order of your fields in the select in the same order as they appear in the database. If you do not supply the top key field, then the entire database table is read. If it's large, then it's going to take a lot of time. The more key fields from the beginning of the key that you can supply, the faster the retrieval.
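Brent's point about leading key fields can be modelled in a few lines of Python (a toy model, not database internals): an index behaves like a sorted list of key tuples, and only a supplied prefix of the key narrows the scan.

```python
import bisect

# A toy "index": key tuples sorted by (mandt, knumv, kposn).
index = sorted([
    ("100", "0000000001", "000010"),
    ("100", "0000000001", "000020"),
    ("100", "0000000002", "000010"),
    ("100", "0000000003", "000010"),
])

def range_scan(idx, prefix):
    """With the leading key fields given, binary search narrows the scan."""
    lo = bisect.bisect_left(idx, prefix)
    hi = bisect.bisect_right(idx, prefix + ("\xff",))
    return idx[lo:hi]

def full_scan(idx, kposn):
    """Without the leading fields, every entry must be inspected."""
    return [t for t in idx if t[2] == kposn]

# Leading fields supplied: touches only the matching slice.
assert range_scan(index, ("100", "0000000001")) == index[:2]
# Only a trailing field supplied: the whole index is read.
assert len(full_scan(index, "000010")) == 3
```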
Regards,
Brent -
Select statement is taking a lot of time for the first execution?
Hi Experts,
I am facing the following issue. I am using one select statement to retrieve all the contracts from the table CACS_CTRTBU, restricted via FOR ALL ENTRIES.
if p_lt_zcacs[] is not initial.
SELECT
appl ctrtbu_id version gpart
busi_begin busi_end tech_begin tech_end
flg_cancel_obj flg_cancel_vers int_title
FROM cacs_ctrtbu INTO TABLE lt_cacs FOR ALL ENTRIES IN p_lt_zcacs
WHERE
appl EQ gv_appl
AND ctrtbu_id EQ p_lt_zcacs-ctrtbu_id
AND ( flg_cancel_vers EQ '' OR version EQ '000000' )
AND flg_cancel_obj EQ ''
AND busi_begin LE p_busbegin
AND busi_end GT p_busbegin.
endif.
The WHERE condition is in order with the available Index. The index has APPL,CTRTBU_ID,FLG_CANCEL_VERS and FLG_CANCEL_OBJ.
The technical settings of table CACS_CTRTBU says that the "Buffering is not allowed"
Now the problem is: for the first execution of this select statement, with 1.5 lakh (150,000) entries in P_LT_ZCACS, the select statement takes 3 minutes.
If I execute this select statement again, in another run with exactly the same parameter values and number of entries in P_LT_ZCACS (i.e. 1.5 lakh entries), it executes in 3-4 seconds.
What can be the issue in this case? Why does the first execution take longer? Or is there any way to modify the select statement to get better performance?
Thanks in advance
Sreejith A P
Hi,
>
sree jith wrote:
> What can be the issue in this case? Why first execution takes longer time?..
> Sreejith A P
Sounds like caching or buffering in some layer down the I/O stack. Your first execution
seems to do the physical I/O, whereas your following executions can use the caches/buffers
that were filled by your first execution.
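The cold-versus-warm effect described here can be illustrated with a small Python cache model (an analogy only; the real buffering happens in the database and OS layers):

```python
from functools import lru_cache

physical_reads = 0

@lru_cache(maxsize=None)
def read_block(block_no):
    """First access per block is 'physical I/O'; repeats hit the cache."""
    global physical_reads
    physical_reads += 1
    return f"data-{block_no}"

# First execution: every block is read from "disk".
for b in range(5):
    read_block(b)
first_run_reads = physical_reads

# Second execution with identical parameters: served from the buffer.
for b in range(5):
    read_block(b)

assert first_run_reads == 5
assert physical_reads == 5   # no additional physical reads on the warm run
```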
>
sree jith wrote:
> Or is there any way to modify the Select statemnt to get better performance.
> Sreejith A P
Whether modifying your SELECT statement or your indexes could help depends on your access details:
does your internal table P_LT_ZCACS contain duplicates?
what do your indexes look like?
how does your execution plan look like?
what are your execution figures in ST05 - Statement Summary?
(nr. of executions, records in total, total time, time per execution, records per execution, time per record, ...)
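The ST05 figures listed above boil down to a few ratios; a sketch of deriving them from the totals (the numbers below are made up for illustration):

```python
def summarize(executions, total_records, total_time_s):
    """Derive per-execution and per-record figures from ST05-style totals."""
    return {
        "records_per_execution": total_records / executions,
        "time_per_execution_s": total_time_s / executions,
        "time_per_record_ms": total_time_s / total_records * 1000,
    }

# Made-up totals: 150,000 executions returning 450,000 records in 180 s.
stats = summarize(executions=150_000, total_records=450_000, total_time_s=180.0)
assert stats["records_per_execution"] == 3.0
assert abs(stats["time_per_execution_s"] - 0.0012) < 1e-12
assert abs(stats["time_per_record_ms"] - 0.4) < 1e-9
```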
Kind regards,
Hermann -
Hi
I have a delete statement which is taking a lot of time. If I run the corresponding select, only 500 records come back, but the delete takes a lot of time.
Please advise.
delete from whs_bi.TRACK_PLAY_EVENT a
where a.time_stamp >=to_date('5/27/2013','mm/dd/yyyy')
and a.time_stamp < to_date('5/28/2013','mm/dd/yyyy');
Thanks in adv.
KPR
Let's check the wait events.
Open 2 sessions: one for running the delete command and another for monitoring wait events. In the session where you will run the DELETE, find the SID of that session ( SELECT userenv('SID') FROM dual ).
Now run the delete in that session (for which we have already found the SID).
Run following query in another session
select w.sid sid,
p.spid PID,
w.event event,
substr(s.username,1,10) username,
substr(s.osuser, 1,10) osuser,
w.state state,
w.wait_time wait_time,
w.seconds_in_wait wis,
substr(w.p1text||' '||to_char(w.P1)||'-'||
w.p2text||' '||to_char(w.P2)||'-'||
w.p3text||' '||to_char(w.P3), 1, 45) P1_P2_P3_TEXT
from v$session_wait w, v$session s, v$process p
where s.sid=w.sid
and p.addr = s.paddr
and w.event not in ('SQL*Net message from client', 'pipe get')
and s.username is not null
and s.sid = &your_SID
While the DELETE is running in the other session, run the above query in the second session 5-6 times, with a gap of (say) 10 seconds. If you can give us the output of the monitoring query (from all 5-6 runs), that might throw more light on what is going on under the hood. -
Function Module Extraction from KONV Table taking a lot of time for extraction
Hi
I have a requirement wherein I need to get records from the KONV table (Conditions (Transaction Data)); I need the data corresponding to Application (KAPPL) = 'F'.
For this I have written a function module, but it is taking a lot of time (about 2.5 hrs) to fetch the records, as there is a large number of records in the KONV table.
I am pasting the Function Module code for reference.
Kindly guide me as to how the extraction performance can be improved.
Function Module Code:
FUNCTION ZBW_SHPMNT_COND.
*"Local interface:
*" IMPORTING
*" VALUE(I_REQUNR) TYPE SBIWA_S_INTERFACE-REQUNR
*" VALUE(I_ISOURCE) TYPE SBIWA_S_INTERFACE-ISOURCE OPTIONAL
*" VALUE(I_MAXSIZE) TYPE SBIWA_S_INTERFACE-MAXSIZE OPTIONAL
*" VALUE(I_INITFLAG) TYPE SBIWA_S_INTERFACE-INITFLAG OPTIONAL
*" VALUE(I_UPDMODE) TYPE SBIWA_S_INTERFACE-UPDMODE OPTIONAL
*" VALUE(I_DATAPAKID) TYPE SBIWA_S_INTERFACE-DATAPAKID OPTIONAL
*" VALUE(I_PRIVATE_MODE) OPTIONAL
*" VALUE(I_CALLMODE) LIKE ROARCHD200-CALLMODE OPTIONAL
*" TABLES
*" I_T_SELECT TYPE SBIWA_T_SELECT OPTIONAL
*" I_T_FIELDS TYPE SBIWA_T_FIELDS OPTIONAL
*" E_T_DATA STRUCTURE ZBW_SHPMNT_COND OPTIONAL
*" E_T_SOURCE_STRUCTURE_NAME OPTIONAL
*" EXCEPTIONS
*" NO_MORE_DATA
*" ERROR_PASSED_TO_MESS_HANDLER
* The input parameter I_DATAPAKID is not supported yet!
TABLES: KONV.
* Auxiliary selection criteria structure
DATA: l_s_select TYPE sbiwa_s_select.
* Maximum number of lines for DB table
STATICS: l_maxsize TYPE sbiwa_s_interface-maxsize.
STATICS: S_S_IF TYPE SRSC_S_IF_SIMPLE,
* counter
S_COUNTER_DATAPAKID LIKE SY-TABIX,
* cursor
S_CURSOR TYPE CURSOR.
* Select ranges
RANGES: L_R_KNUMV FOR KONV-KNUMV,
L_R_KSCHL FOR KONV-KSCHL,
L_R_KDATU FOR KONV-KDATU.
* Internal table holding only the needed KONV fields
DATA: BEGIN OF I_KONV OCCURS 0,
MANDT LIKE konv-mandt,
KNUMV LIKE konv-knumv,
KPOSN LIKE konv-kposn,
STUNR LIKE konv-stunr,
ZAEHK LIKE konv-zaehk,
KAPPL LIKE konv-kappl,
KSCHL LIKE konv-kschl,
KDATU LIKE konv-kdatu,
KBETR LIKE konv-kbetr,
WAERS LIKE konv-waers,
END OF I_KONV.
* Initialization mode (first call by SAPI) or data transfer mode
* (following calls)?
IF i_initflag = sbiwa_c_flag_on.
* Initialization: check input parameters,
* buffer input parameters,
* prepare data selection
* The input parameter I_DATAPAKID is not supported yet!
* Invalid second initialization call -> error exit
IF NOT g_flag_interface_initialized IS INITIAL.
IF 1 = 2.
MESSAGE e008(r3).
ENDIF.
log_write 'E' "message type
'R3' "message class
'008' "message number
' ' "message variable 1
' '. "message variable 2
RAISE error_passed_to_mess_handler.
ENDIF.
* Check InfoSource validity
CASE i_isource.
WHEN 'X'.
WHEN 'Y'.
WHEN 'Z'.
WHEN OTHERS.
IF 1 = 2. MESSAGE e009(r3). ENDIF.
log_write 'E' "message type
'R3' "message class
'009' "message number
i_isource "message variable 1
' '. "message variable 2
RAISE error_passed_to_mess_handler.
ENDCASE.
* Check for supported update mode
CASE i_updmode.
* For full upload
WHEN 'F'.
WHEN 'D'.
WHEN OTHERS.
IF 1 = 2. MESSAGE e011(r3). ENDIF.
log_write 'E' "message type
'R3' "message class
'011' "message number
i_updmode "message variable 1
' '. "message variable 2
RAISE error_passed_to_mess_handler.
ENDCASE.
APPEND LINES OF i_t_select TO g_t_select.
* Fill parameter buffer for data extraction calls
g_s_interface-requnr = i_requnr.
g_s_interface-isource = i_isource.
g_s_interface-maxsize = i_maxsize.
g_s_interface-initflag = i_initflag.
g_s_interface-updmode = i_updmode.
g_s_interface-datapakid = i_datapakid.
g_flag_interface_initialized = sbiwa_c_flag_on.
* Fill field list table for an optimized select statement
* (in case that there is no 1:1 relation between InfoSource fields
* and database table fields this may be far from being trivial)
APPEND LINES OF i_t_fields TO g_t_fields.
* Interpretation of date selection for generic extraction
CALL FUNCTION 'RSA3_DATE_RANGE_CONVERT'
TABLES
i_t_select = g_t_select.
ELSE. "Initialization mode or data extraction ?
CASE g_s_interface-updmode.
WHEN 'F' OR 'C' OR 'I'.
* First data package -> OPEN CURSOR
IF g_counter_datapakid = 0.
L_MAXSIZE = G_S_INTERFACE-MAXSIZE.
LOOP AT g_t_select INTO l_s_select WHERE fieldnm = 'KNUMV'.
MOVE-CORRESPONDING l_s_select TO l_r_knumv.
APPEND l_r_knumv.
ENDLOOP.
LOOP AT g_t_select INTO l_s_select WHERE fieldnm = 'KSCHL'.
MOVE-CORRESPONDING l_s_select TO l_r_kschl.
APPEND l_r_kschl.
ENDLOOP.
Loop AT g_t_select INTO l_s_select WHERE fieldnm = 'KDATU'.
MOVE-CORRESPONDING l_s_select TO l_r_kdatu.
APPEND l_r_kdatu.
ENDLOOP.
* In case of full upload
* Fill field list table for an optimized select statement
* (in case that there is no 1:1 relation between InfoSource fields
* and database table fields this may be far from being trivial)
APPEND LINES OF I_T_FIELDS TO S_S_IF-T_FIELDS.
OPEN CURSOR G_CURSOR FOR
SELECT MANDT
KNUMV
KPOSN
STUNR
ZAEHK
KAPPL
KSCHL
KDATU
KBETR
WAERS
FROM KONV
WHERE KNUMV IN l_r_knumv
AND KSCHL IN l_r_kschl
AND KDATU IN l_r_kdatu
AND KAPPL EQ 'F'.
ENDIF.
Refresh I_KONV.
FETCH NEXT CURSOR G_CURSOR
APPENDING CORRESPONDING FIELDS OF TABLE I_KONV
PACKAGE SIZE S_S_IF-MAXSIZE.
IF SY-SUBRC <> 0.
CLOSE CURSOR G_CURSOR.
RAISE NO_MORE_DATA.
ENDIF.
LOOP AT I_KONV.
IF I_KONV-KAPPL EQ 'F'.
CLEAR :E_T_DATA.
E_T_DATA-MANDT = I_KONV-MANDT.
E_T_DATA-KNUMV = I_KONV-KNUMV.
E_T_DATA-KPOSN = I_KONV-KPOSN.
E_T_DATA-STUNR = I_KONV-STUNR.
E_T_DATA-ZAEHK = I_KONV-ZAEHK.
E_T_DATA-KAPPL = I_KONV-KAPPL.
E_T_DATA-KSCHL = I_KONV-KSCHL.
E_T_DATA-KDATU = I_KONV-KDATU.
E_T_DATA-KBETR = I_KONV-KBETR.
E_T_DATA-WAERS = I_KONV-WAERS.
APPEND E_T_DATA.
ENDIF.
ENDLOOP.
g_counter_datapakid = g_counter_datapakid + 1.
ENDIF.
ENDFUNCTION.
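The OPEN CURSOR / FETCH NEXT CURSOR ... PACKAGE SIZE loop in this function module streams KONV in fixed-size packages. The same pattern, sketched generically in Python (a model of the packaging idea, not the SAPI protocol itself):

```python
def fetch_packages(cursor, package_size):
    """Yield fixed-size packages from a cursor-like iterator until exhausted,
    mirroring FETCH NEXT CURSOR ... PACKAGE SIZE / RAISE NO_MORE_DATA."""
    package = []
    for row in cursor:
        package.append(row)
        if len(package) == package_size:
            yield package
            package = []
    if package:                      # final, short package
        yield package

rows = iter(range(7))                # stand-in for the open database cursor
packages = list(fetch_packages(rows, package_size=3))
assert packages == [[0, 1, 2], [3, 4, 5], [6]]
```

Note, incidentally, that the SELECT above already restricts KAPPL EQ 'F' in its WHERE clause, so the extra KAPPL check inside the LOOP never filters anything out.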
Thanks in Advance
Regards
Swapnil.
Hi,
one option to investigate is to select the data with a condition on KNUMV (the primary index).
Since shipment costs are stored in VFKP, I would investigate whether all your 'F' condition records are used in this table (field VFKP-KNUMV).
If this is the case, then something like
SELECT *
FROM KONV
WHERE KNUMV IN (SELECT DISTINCT KNUMV FROM VFKP)
or
SELECT DISTINCT KNUMV
INTO CORRESPONDING FIELDS OF <itab>
FROM VFKP
and then
SELECT *
FROM KONV
FOR ALL ENTRIES IN <itab>
WHERE...
will definitely speed it up.
hope this helps....
Olivier -
hi
The following query is taking too much time (more than 30 minutes), working with 11g.
The table has three columns (rid, ida, geometry) and an index has been created on all columns.
The table has around 5,40,000 (540,000) records of point geometries.
Please help me with your suggestions. I want to select the duplicate point geometries where ida = 'CORD'.
SQL> select a.rid, b.rid from totalrecords a, totalrecords b where a.ida='CORD' and b.ida='CORD' and
sdo_equal(a.geometry, b.geometry)='TRUE' and a.rid != b.rid order by 1,2;
regards
I have removed some AND conditions that were not necessary. It's just that Oracle can see, for example, in
a.ida='CORD' AND
b.ida='CORD' AND
a.rid != b.rid AND
sdo_equal(a.geometry, b.geometry)='TRUE'
ORDER BY 1,2;
that if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest of the conditions, because it's all AND'ed together, and TRUE AND FALSE = FALSE.
So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions), this will give you a small performance benefit per row. Small, but across 540,000 records it should be noticeable.
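The short-circuit behaviour described here is easy to demonstrate (a Python sketch; the call counter stands in for the expensive sdo_equal evaluation):

```python
expensive_calls = 0

def sdo_equal_stub(a, b):
    """Stand-in for the costly spatial comparison."""
    global expensive_calls
    expensive_calls += 1
    return a == b

rows = [{"ida": "CORD", "geom": 1}, {"ida": "OTHER", "geom": 1},
        {"ida": "CORD", "geom": 2}]

# Cheap equality filters first: the expensive check only runs for
# candidate pairs that survived them (Python's `and` short-circuits
# just like the AND chain in the SQL predicate).
pairs = [(a, b) for a in rows for b in rows
         if a["ida"] == "CORD" and b["ida"] == "CORD"
         and a is not b and sdo_equal_stub(a["geom"], b["geom"])]

assert expensive_calls == 2   # only the CORD/CORD pairs reached the stub
assert pairs == []            # the two CORD points have different geometries
```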
and I have set layer_gtype=POINT.
Good, that will help. I forgot about that one (thanks Luc!).
Now I am facing the problem of DELETING the duplicate point geometries. The following query is taking too much time.
What is too much time? Do you need to delete these duplicate points on a daily or hourly basis? Or is this a one-time cleanup action? If it's a one-time cleanup operation, does it really matter if it takes half an hour?
And if this is a daily or even hourly operation, then why don't you prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
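The "prevent duplicates from entering in the first place" idea can be modelled as a key set guarding the insert (a sketch; in Oracle this would typically be a unique constraint or unique index on the key):

```python
class PointTable:
    """Toy table that rejects duplicate (ida, geometry) rows on insert,
    so no cleanup pass is ever needed afterwards."""
    def __init__(self):
        self.rows = []
        self._keys = set()

    def insert(self, rid, ida, geometry):
        key = (ida, geometry)
        if key in self._keys:
            return False              # duplicate rejected at the door
        self._keys.add(key)
        self.rows.append((rid, ida, geometry))
        return True

t = PointTable()
assert t.insert(1, "CORD", (10.0, 20.0)) is True
assert t.insert(2, "CORD", (10.0, 20.0)) is False   # same point, refused
assert len(t.rows) == 1
```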
Lastly: can you post an explain plan for your queries? Those might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
[ c o d e ]
<code/results here>
[ / c o d e ]
that way the original formatting is kept and it makes things much easier to read.
Regards,
Stefan -
When a query is taking too long
When a query is taking too long, where and how do you start tuning it?
Here I've listed a few things that need to be considered, out of my knowledge and understanding:
1. What the SQL is waiting for (wait events)
2. Parameter modification to be done at system/session level
3. The query has to be tuned (using hints)
4. Gathering/deleting statistics
List out any other things that need to be taken into account?
Which approach must be followed, and on what basis must that approach be chosen?
When a query is taking too long, where and how do you start tuning it?
An explain plan will be a good start; a trace also.
Here i've listed few things need to be considered,out of my knowledge and understanding
1. What the SQL is waiting for (wait events)
When Oracle executes an SQL statement, it is not constantly executing. Sometimes it has to wait for a specific event to happen before it can proceed.
Read
http://www.adp-gmbh.ch/ora/tuning/event.html
2. Parameter modification to be done at system/session level
It depends on the parameter; a trace, for example, is done at session level.
3. The query has to be tuned (using hints)
Hints could help you, but you must know how to use them.
4. Gathering/deleting statistics
Do it in non-working hours; it will impact database performance, but it's good.
List out any other things that need to be taken into account?
Which account? You could use a lot of tools: trace, AWR. -
Why is this query taking much longer than expected?
Hi,
I need experts support on the below mentioned issue:
Why is this query taking much longer than expected? Sometimes I am getting a connection timeout error. Is there any better way to achieve the result in the shortest time? Below, please find the DDL & DML:
DDL
BHDCollections
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[BHDCollections](
[BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
[GroupMemberid] [int] NOT NULL,
[BHDDate] [datetime] NOT NULL,
[BHDShift] [varchar](10) NULL,
[SlipValue] [decimal](18, 3) NOT NULL,
[ProcessedValue] [decimal](18, 3) NOT NULL,
[BHDRemarks] [varchar](500) NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
([BHDCollectionid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
BHDCollectionsDet
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[BHDCollectionsDet](
[CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
[BHDCollectionid] [bigint] NOT NULL,
[Currencyid] [int] NOT NULL,
[Denomination] [decimal](18, 3) NOT NULL,
[Quantity] [int] NOT NULL,
CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
([CollectionDetailid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Banks
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Banks](
[Bankid] [int] IDENTITY(1,1) NOT NULL,
[Bankname] [varchar](50) NOT NULL,
[Bankabbr] [varchar](50) NULL,
[BankContact] [varchar](50) NULL,
[BankTel] [varchar](25) NULL,
[BankFax] [varchar](25) NULL,
[BankEmail] [varchar](50) NULL,
[BankActive] [bit] NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
([Bankid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
Groupmembers
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[GroupMembers](
[GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
[Groupid] [int] NOT NULL,
[BAID] [int] NOT NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
([GroupMemberid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[GroupMembers] WITH CHECK ADD CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
REFERENCES [dbo].[BankAccounts] ([BAID])
GO
ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
GO
ALTER TABLE [dbo].[GroupMembers] WITH CHECK ADD CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
REFERENCES [dbo].[Groups] ([Groupid])
GO
ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
BankAccounts
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[BankAccounts](
[BAID] [int] IDENTITY(1,1) NOT NULL,
[CustomerID] [int] NOT NULL,
[Locationid] [varchar](25) NOT NULL,
[Bankid] [int] NOT NULL,
[BankAccountNo] [varchar](50) NOT NULL,
CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
([BAID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[BankAccounts] WITH CHECK ADD CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
REFERENCES [dbo].[Banks] ([Bankid])
GO
ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
GO
ALTER TABLE [dbo].[BankAccounts] WITH CHECK ADD CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
REFERENCES [dbo].[Locations] ([Locationid])
GO
ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
Currency
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Currency](
[Currencyid] [int] IDENTITY(1,1) NOT NULL,
[CurrencyISOCode] [varchar](20) NOT NULL,
[CurrencyCountry] [varchar](50) NULL,
[Currency] [varchar](50) NULL,
CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
([Currencyid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
CurrencyDetails
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[CurrencyDetails](
[CurDenid] [int] IDENTITY(1,1) NOT NULL,
[Currencyid] [int] NOT NULL,
[Denomination] [decimal](15, 3) NOT NULL,
[DenominationType] [varchar](25) NOT NULL,
CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
([CurDenid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
QUERY
WITH TEMP_TABLE AS (
SELECT 0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
(BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
FROM BHDCollections INNER JOIN
BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
Banks ON BankAccounts.Bankid = Banks.Bankid
GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
HAVING (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
(CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
UNION ALL
SELECT BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
(BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
FROM BHDCollections INNER JOIN
BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
Banks ON BankAccounts.Bankid = Banks.Bankid
GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
HAVING (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
(CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
TEMP_TABLE2 AS (
SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS FROM TEMP_TABLE GROUP BY CollectionDate, DSLIPS, Bankname
)
SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
HAVING COUNT(DSLIPS) <> 0;
Without seeing an execution plan of the query it is hard to suggest something useful. Try inserting the result of the UNION ALL into a temporary table and then performing the aggregation on that table, not on a CTE.
Just
SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS FROM
#tmp Group By CollectionDate,DSLIPS,Bankname
HAVING COUNT(DSLIPS)<>0;
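The suggested shape, materialise the fine-grained aggregation and then re-aggregate, looks like this in outline (a Python model of the two GROUP BY levels; the rows are made up for illustration):

```python
from collections import defaultdict

# Stage 1 (the #tmp / TEMP_TABLE level): sum notes and coins
# per (date, slip, bank).
detail = [
    ("2013-05-27", 101, "BankA", 5, 0),   # (date, dslip, bank, bn, coins)
    ("2013-05-27", 101, "BankA", 0, 3),
    ("2013-05-27", 102, "BankA", 2, 0),
]
stage1 = defaultdict(lambda: [0, 0])
for date, dslip, bank, bn, coins in detail:
    stage1[(date, dslip, bank)][0] += bn
    stage1[(date, dslip, bank)][1] += coins

# Stage 2 (the final SELECT): count slips and sum totals per (date, bank).
stage2 = defaultdict(lambda: [0, 0, 0])           # [slips, bn, coins]
for (date, dslip, bank), (bn, coins) in stage1.items():
    stage2[(date, bank)][0] += 1
    stage2[(date, bank)][1] += bn
    stage2[(date, bank)][2] += coins

assert stage2[("2013-05-27", "BankA")] == [2, 7, 3]
```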
Best Regards,
Uri Dimant
SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
Program is taking a lot of time to execute
Hi all,
TYPES: BEGIN OF TY_MARA,
MATNR TYPE MATNR, " Material Number
ZZBASE_CODE TYPE ZBASE_CODE, " Base Code
END OF TY_MARA,
BEGIN OF TY_MAKT,
MATNR TYPE MATNR, " Material
MAKTX TYPE MAKTX, " Material Description
END OF TY_MAKT,
BEGIN OF TY_MARC,
MATNR TYPE MATNR , " Material Number
WERKS TYPE WERKS_D, " Plant
END OF TY_MARC,
BEGIN OF TY_QMAT,
MATNR TYPE MATNR, " Material Number
ART TYPE QPART, " Inspection Type
END OF TY_QMAT,
BEGIN OF TY_MAPL,
MATNR TYPE MATNR, " Material
PLNTY TYPE PLNTY, " Task List Type
END OF TY_MAPL,
BEGIN OF TY_PLKO,
PLNTY TYPE PLNTY, " Task List Type
VERWE TYPE PLN_VERWE, " Task list usage
END OF TY_PLKO,
BEGIN OF TY_KLAH,
CLASS TYPE KLASSE_D, " Class Number
END OF TY_KLAH,
BEGIN OF TY_FINAL,
MATNR TYPE MATNR, " Material Number
MAKTX TYPE MAKTX, " Material Description
ZZBASE_CODE TYPE ZBASE_CODE, " Base Code
WERKS TYPE WERKS_D, " Plant
CLASS TYPE KLASSE_D, " Class Number
ART TYPE QPART, " Inspection Type
VERWE TYPE PLN_VERWE, " Task list usage
MESSAGE TYPE STRING, " Message
END OF TY_FINAL.
DATA: I_MARA TYPE STANDARD TABLE OF TY_MARA ,
I_MAKT TYPE STANDARD TABLE OF TY_MAKT ,
I_MARC TYPE STANDARD TABLE OF TY_MARC ,
I_QMAT TYPE STANDARD TABLE OF TY_QMAT ,
I_MAPL TYPE STANDARD TABLE OF TY_MAPL ,
I_PLKO TYPE STANDARD TABLE OF TY_PLKO ,
I_KLAH TYPE STANDARD TABLE OF TY_KLAH ,
I_FINAL TYPE STANDARD TABLE OF TY_FINAL ,
WA_MARA TYPE TY_MARA,
WA_MAKT TYPE TY_MAKT,
WA_MARC TYPE TY_MARC,
WA_QMAT TYPE TY_QMAT,
WA_MAPL TYPE TY_MAPL,
WA_PLKO TYPE TY_PLKO,
WA_KLAH TYPE TY_KLAH,
WA_FINAL TYPE TY_FINAL.
DATA: V_MTART TYPE MARA-MTART,
V_MATNR TYPE MARA-MATNR,
V_ZZBASE_CODE TYPE MARA-ZZBASE_CODE,
V_WERKS TYPE T001W-WERKS,
V_BESKZ TYPE MARC-BESKZ.
*selection-screen
SELECTION-SCREEN SKIP 1.
SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
SELECT-OPTIONS: S_MTART FOR V_MTART DEFAULT 'halb' TO 'zraw',
S_MATNR FOR V_MATNR,
S_ZZBASE FOR V_ZZBASE_CODE,
S_WERKS FOR V_WERKS OBLIGATORY,
S_BESKZ FOR V_BESKZ.
SELECTION-SCREEN END OF BLOCK B1.
START-OF-SELECTION.
SELECT MATNR
ZZBASE_CODE
FROM MARA INTO TABLE I_MARA
WHERE MTART IN S_MTART "Material Type
AND MATNR IN S_MATNR "Material
AND ZZBASE_CODE IN S_ZZBASE."Base Code
IF NOT I_MARA IS INITIAL.
SELECT MATNR
MAKTX
FROM MAKT INTO TABLE I_MAKT FOR ALL ENTRIES IN I_MARA
WHERE MATNR = I_MARA-MATNR.
ENDIF.
IF NOT I_MARA IS INITIAL.
SELECT MATNR
WERKS
FROM MARC INTO TABLE I_MARC FOR ALL ENTRIES IN I_MARA
WHERE MATNR = I_MARA-MATNR
AND WERKS IN S_WERKS "plant
AND BESKZ IN S_BESKZ."Procurement Type
ENDIF.
IF NOT I_MARA IS INITIAL.
SELECT MATNR
ART
FROM QMAT INTO TABLE I_QMAT FOR ALL ENTRIES IN I_MARA
WHERE MATNR = I_MARA-MATNR
AND WERKS IN S_WERKS.
ENDIF.
IF NOT I_MARA IS INITIAL.
SELECT MATNR
PLNTY FROM MAPL INTO TABLE I_MAPL FOR ALL ENTRIES IN I_MARA
WHERE MATNR = I_MARA-MATNR.
ENDIF.
IF NOT I_MAPL IS INITIAL.
SELECT PLNTY
VERWE
FROM PLKO INTO TABLE I_PLKO FOR ALL ENTRIES IN I_MAPL
WHERE PLNTY = I_MAPL-PLNTY.
ENDIF.
LOOP AT I_MARA INTO WA_MARA.
CALL FUNCTION 'CLFC_BATCH_ALLOCATION_TO_CLASS'
EXPORTING
MATERIAL = WA_MARA-MATNR
PLANT = WA_MARC-WERKS
CLASSTYPE = '023'
I_IGNORE_MATMASTER = ' '
* I_BATCHES_ONLY =
I_IGNORE_BUFFER = ' '
IMPORTING
* CLASSTYPE =
CLASS = WA_KLAH-CLASS
EXCEPTIONS
WRONG_FUNCTION_CALL = 1
NO_CLASS_FOUND = 2
NO_CLASSTYPE_FOUND = 3
OTHERS = 4 .
*IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
*ENDIF.
APPEND WA_KLAH TO I_KLAH.
ENDLOOP.
LOOP AT I_MARA INTO WA_MARA.
WA_FINAL-MATNR = WA_MARA-MATNR.
WA_FINAL-ZZBASE_CODE = WA_MARA-ZZBASE_CODE.
APPEND WA_FINAL TO I_FINAL.
SORT I_MAKT BY MATNR.
READ TABLE I_MAKT INTO WA_MAKT WITH KEY MATNR = WA_MARA-MATNR BINARY
SEARCH.
IF SY-SUBRC EQ 0.
WA_FINAL-MAKTX = WA_MAKT-MAKTX.
ENDIF.
APPEND WA_FINAL TO I_FINAL.
SORT I_MARC BY MATNR.
READ TABLE I_MARC INTO WA_MARC WITH KEY MATNR = WA_MARA-MATNR BINARY
SEARCH.
IF SY-SUBRC EQ 0.
WA_FINAL-WERKS = WA_MARC-WERKS.
ENDIF.
APPEND WA_FINAL TO I_FINAL.
SORT I_QMAT BY MATNR.
READ TABLE I_QMAT INTO WA_MARC WITH KEY MATNR = WA_MARA-MATNR BINARY
SEARCH.
IF SY-SUBRC EQ 0.
WA_FINAL-ART = WA_QMAT-ART.
ENDIF.
APPEND WA_FINAL TO I_FINAL.
SORT I_MAPL BY MATNR.
READ TABLE I_MAPL INTO WA_MAPL WITH KEY MATNR = WA_MARA-MATNR BINARY
SEARCH.
IF SY-SUBRC EQ 0.
SORT I_PLKO BY PLNTY.
READ TABLE I_PLKO INTO WA_PLKO WITH KEY PLNTY = WA_MAPL-PLNTY BINARY
SEARCH.
ENDIF.
WA_FINAL-VERWE = WA_PLKO-VERWE.
APPEND WA_FINAL TO I_FINAL.
ENDLOOP.
LOOP AT I_KLAH INTO WA_KLAH.
WA_FINAL-CLASS = WA_KLAH-CLASS.
APPEND WA_FINAL TO I_FINAL.
ENDLOOP.
LOOP AT I_FINAL INTO WA_FINAL.
WRITE:/ WA_FINAL-MATNR,
WA_FINAL-MAKTX,
WA_FINAL-ZZBASE_CODE,
WA_FINAL-WERKS,
WA_FINAL-CLASS,
WA_FINAL-ART,
WA_FINAL-VERWE.
ENDLOOP.
This is my program. It produces the correct output, but it takes a long time to execute. What might be the problem? Please let me know.
Thanks.

Hi Mythily,
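The main cost in your version is that the lookup tables are sorted inside the LOOP, so each table is re-sorted once per material before every BINARY SEARCH, and wa_final is appended several times per material. Before the full rewrite below, here is the first fix in miniature (a sketch only, reusing your table and field names):

```abap
* Sort each lookup table ONCE, before the loop ...
SORT i_makt BY matnr.
SORT i_marc BY matnr.
SORT i_qmat BY matnr.
SORT i_mapl BY matnr.
SORT i_plko BY plnty.

* ... so the sort order never changes mid-loop and
* BINARY SEARCH stays valid on every iteration.
LOOP AT i_mara INTO wa_mara.
  READ TABLE i_makt INTO wa_makt
       WITH KEY matnr = wa_mara-matnr BINARY SEARCH.
  IF sy-subrc EQ 0.
    wa_final-maktx = wa_makt-maktx.
  ENDIF.
ENDLOOP.
```

Even better is to declare the lookup tables as SORTED or HASHED tables, as in the rewrite below, so no explicit SORT or BINARY SEARCH is needed at all.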
Try the following code.
TYPES: BEGIN OF ty_mara,
matnr TYPE matnr, " Material Number
zzbase_code TYPE zbase_code, " Base Code
END OF ty_mara,
BEGIN OF ty_makt,
matnr TYPE matnr, " Material
maktx TYPE maktx, " Material Description
END OF ty_makt,
BEGIN OF ty_marc,
matnr TYPE matnr , " Material Number
werks TYPE werks_d, " Plant
END OF ty_marc,
BEGIN OF ty_qmat,
art TYPE qpart , " Inspection Type
matnr TYPE matnr , " Material Number
werks TYPE werks_d, "Plant
END OF ty_qmat,
BEGIN OF ty_mapl,
matnr TYPE matnr , " Material
werks TYPE werks_d , "Plant
plnty TYPE plnty , " Task List Type
plnnr TYPE plnnr , " Key for Task List Group
plnal TYPE plnal , " Group Counter
zkriz TYPE dzkriz , " Counter for additional criteria
zaehl TYPE cim_count, " Internal counter
END OF ty_mapl,
BEGIN OF ty_plko,
plnty TYPE plnty , " Task List Type
plnnr TYPE plnnr , " Key for Task List Group
plnal TYPE plnal , " Group Counter
zaehl TYPE cim_count, " Internal counter
verwe TYPE pln_verwe, " Task list usage
END OF ty_plko,
BEGIN OF ty_klah,
class TYPE klasse_d, " Class Number
END OF ty_klah,
BEGIN OF ty_final,
matnr TYPE matnr, " Material Number
maktx TYPE maktx, " Material Description
zzbase_code TYPE zbase_code, " Base Code
werks TYPE werks_d, " Plant
class TYPE klasse_d, " Class Number
art TYPE qpart, " Inspection Type
verwe TYPE pln_verwe, " Task list usage
message TYPE string, " Message
END OF ty_final.
DATA: i_mara TYPE STANDARD TABLE OF ty_mara ,
i_makt TYPE HASHED TABLE OF ty_makt
WITH UNIQUE KEY matnr,
i_marc TYPE SORTED TABLE OF ty_marc
WITH NON-UNIQUE KEY matnr,
i_qmat TYPE SORTED TABLE OF ty_qmat
WITH NON-UNIQUE KEY matnr werks,
i_mapl TYPE SORTED TABLE OF ty_mapl
WITH NON-UNIQUE KEY matnr werks,
i_mapl_tmp TYPE STANDARD TABLE OF ty_mapl ,
i_plko TYPE SORTED TABLE OF ty_plko
WITH NON-UNIQUE KEY plnty
plnnr
plnal,
i_final TYPE STANDARD TABLE OF ty_final,
wa_mara TYPE ty_mara,
wa_makt TYPE ty_makt,
wa_marc TYPE ty_marc,
wa_qmat TYPE ty_qmat,
wa_mapl TYPE ty_mapl,
wa_plko TYPE ty_plko,
wa_klah TYPE ty_klah,
wa_final TYPE ty_final.
DATA: v_mtart TYPE mara-mtart,
v_matnr TYPE mara-matnr,
v_zzbase_code TYPE mara-zzbase_code,
v_werks TYPE t001w-werks,
v_beskz TYPE marc-beskz.
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
SELECT-OPTIONS: s_mtart FOR v_mtart DEFAULT 'HALB' TO 'ZRAW',
s_matnr FOR v_matnr,
s_zzbase FOR v_zzbase_code,
s_werks FOR v_werks OBLIGATORY,
s_beskz FOR v_beskz.
SELECTION-SCREEN END OF BLOCK b1.
START-OF-SELECTION.
SELECT matnr
zzbase_code
FROM mara
INTO TABLE i_mara
WHERE mtart IN s_mtart "Material Type
AND matnr IN s_matnr "Material
AND zzbase_code IN s_zzbase. "Base Code
IF NOT i_mara[] IS INITIAL.
SELECT matnr
maktx
FROM makt
INTO TABLE i_makt
FOR ALL ENTRIES IN i_mara
WHERE matnr EQ i_mara-matnr
AND spras EQ sy-langu.
SELECT matnr
werks
FROM marc
INTO TABLE i_marc
FOR ALL ENTRIES IN i_mara
WHERE matnr EQ i_mara-matnr
AND werks IN s_werks "plant
AND beskz IN s_beskz."Procurement Type
IF sy-subrc EQ 0.
SELECT art
matnr
werks
FROM qmat
INTO TABLE i_qmat
FOR ALL ENTRIES IN i_marc
WHERE matnr EQ i_marc-matnr
AND werks EQ i_marc-werks.
SELECT matnr
werks
plnty
plnnr
plnal
zkriz
zaehl
FROM mapl
INTO TABLE i_mapl
FOR ALL ENTRIES IN i_marc
WHERE matnr EQ i_marc-matnr
AND werks EQ i_marc-werks.
IF NOT i_mapl[] IS INITIAL.
i_mapl_tmp[] = i_mapl[].
SORT i_mapl_tmp BY plnty
plnnr
plnal.
DELETE ADJACENT DUPLICATES FROM i_mapl_tmp
COMPARING
plnty
plnnr
plnal.
SELECT plnty
plnnr
plnal
zaehl
verwe
FROM plko
INTO TABLE i_plko
FOR ALL ENTRIES IN i_mapl_tmp
WHERE plnty EQ i_mapl_tmp-plnty
AND plnnr EQ i_mapl_tmp-plnnr
AND plnal EQ i_mapl_tmp-plnal.
ENDIF.
ENDIF.
ENDIF.
LOOP AT i_mara INTO wa_mara.
wa_final-matnr = wa_mara-matnr.
wa_final-zzbase_code = wa_mara-zzbase_code.
READ TABLE i_makt INTO wa_makt
WITH KEY matnr = wa_mara-matnr
TRANSPORTING
maktx.
IF sy-subrc EQ 0.
wa_final-maktx = wa_makt-maktx.
ENDIF.
LOOP AT i_marc INTO wa_marc
WHERE matnr EQ wa_mara-matnr.
wa_final-werks = wa_marc-werks.
CLEAR wa_klah-class.
CALL FUNCTION 'CLFC_BATCH_ALLOCATION_TO_CLASS'
EXPORTING
material = wa_mara-matnr
plant = wa_marc-werks
classtype = '023'
IMPORTING
class = wa_klah-class
EXCEPTIONS
wrong_function_call = 1
no_class_found = 2
no_classtype_found = 3
OTHERS = 4.
IF sy-subrc EQ 0.
wa_final-class = wa_klah-class.
ENDIF.
READ TABLE i_qmat
WITH KEY matnr = wa_mara-matnr
werks = wa_marc-werks
TRANSPORTING NO FIELDS.
IF sy-subrc EQ 0.
LOOP AT i_qmat INTO wa_qmat
WHERE matnr EQ wa_mara-matnr
AND werks EQ wa_marc-werks.
wa_final-art = wa_qmat-art.
READ TABLE i_mapl
WITH KEY matnr = wa_marc-matnr
werks = wa_marc-werks
TRANSPORTING NO FIELDS.
IF sy-subrc EQ 0.
LOOP AT i_mapl INTO wa_mapl
WHERE matnr EQ wa_marc-matnr
AND werks EQ wa_marc-werks.
LOOP AT i_plko INTO wa_plko
WHERE plnty EQ wa_mapl-plnty
AND plnnr EQ wa_mapl-plnnr
AND plnal EQ wa_mapl-plnal.
wa_final-verwe = wa_plko-verwe.
APPEND wa_final TO i_final.
ENDLOOP.
ENDLOOP.
ELSE.
APPEND wa_final TO i_final.
ENDIF.
ENDLOOP.
ELSE.
LOOP AT i_mapl INTO wa_mapl
WHERE matnr EQ wa_marc-matnr
AND werks EQ wa_marc-werks.
LOOP AT i_plko INTO wa_plko
WHERE plnty EQ wa_mapl-plnty
AND plnnr EQ wa_mapl-plnnr
AND plnal EQ wa_mapl-plnal.
wa_final-verwe = wa_plko-verwe.
APPEND wa_final TO i_final.
ENDLOOP.
ENDLOOP.
ENDIF.
ENDLOOP.
CLEAR wa_final.
ENDLOOP.
LOOP AT i_final INTO wa_final.
WRITE:/ wa_final-matnr ,
wa_final-maktx ,
wa_final-zzbase_code,
wa_final-werks ,
wa_final-class ,
wa_final-art ,
wa_final-verwe .
ENDLOOP. -
Partition switching taking lots of time
I have a partitioned table. All the partitions of this table are mapped to a single filegroup.
When I try to switch a partition, it takes a long time: more than 5 minutes for a single partition.
I can see that there is a file in the filegroup and its size is very large: 234 GB.
Can partition switching take more time due to file size, or is there some other issue?
I need help finding out what the problem is.
Thanks.

So you are running ALTER TABLE SWITCH? Check for blocking. The operation requires an Sch-M lock, which means that it is blocked by any query that is running against the table, including queries using NOLOCK.
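To see whether the SWITCH is in fact waiting on that lock, one quick check is a sketch along these lines (standard SQL Server DMVs; run it while the ALTER TABLE ... SWITCH is hanging):

```sql
-- Show blocked requests, who blocks them, and the statement text.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text      AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

If the session running the SWITCH shows a LCK_M_SCH_M wait, the file size is not the issue: some other session holds a conflicting lock on the table, and the switch will complete once that blocker finishes.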
Erland Sommarskog, SQL Server MVP, [email protected] -
Web-I Report taking lots of time to refresh when filters changed
In BO XI 3.1, a Web-I report takes a long time to refresh when any changes are made to the filters in edit-query mode.
When the query is run for the first time it runs well, but if we make any changes in the filters pane it takes a long time to refresh, and when we cancel the query the Web-I server does not release its resources (CPU, memory).
Has anyone faced this kind of problem and resolved it?
Please let me know your thoughts.
Thank you.

Hi,
Why do you need 100K rows in your reports? Is there a way to consolidate/aggregate your data in the database?
Comparing sqlplus and BOBJ is a little unfair. Let me explain why:
sqlplus starts displaying data as soon as the query returns the first results. I am sure that if you measured the time sqlplus needs to display ALL rows, it would be far more than 10 seconds.
On the other hand, BOBJ will display something as soon as the first page can be created. Depending on how your report is structured, the WebI server may first have to fetch ALL rows before it can display the first page. This is the case, for example, if you display the total number of pages, show total sums in the report header, or place a chart based on your data in the header. After the first call the report is cached (and thus fast to display) until you change the parameter values of your query again.
Since you do not display the total number of pages in your report (or do you use this function in a variable or a formula, maybe?), I can only assume that you are either trying to display total sums (or other aggregated values) or a chart on the first page of your report. Can you confirm this?
Regards,
Stratos -
Taking lot of time for loading
Hi,
We are loading data from HR to BI. The connection between HR and BI was set up recently. When I try to load data from HR to BI it takes a very long time: for example, loading 5 records takes 8 hours. It is the same for every DataSource. Should we change any settings to make the IDocs work properly?
Thanks.

You have to isolate the part that is taking the time.
- Is R/3 extraction quick? (You can check with RSA3 and see how long it takes.)
- If R/3 extraction is slow, is the selection using an index? How many records are there in the table / view?
- Is there a user exit? Is the user exit slow?
You can find out using monitor screen:
- After R/3 extraction completed, how long did it take to insert into PSA?
- How long did it take in the update rules?
- How long did it take to activate?
Once you isolate the problem area, post your findings here and someone will help you. -
Background job is taking a lot of time to process.
One background job, which processes IDocs, runs for more than 2000 seconds, although it does complete.
This has been happening for the past few days. Is there any way to troubleshoot, or to find out from the logs of this completed job why it took so long to process?
Can you please tell me the steps for analyzing / troubleshooting why it takes so long every day?
Regards,
Satish.

Hi Satish,
Run update statistics from DB13; it can improve performance.
Check the number of IDocs. You can send the data in parts instead of sending it all at once.
Check the SM58 logs.
Suman