Performance Issue in Query Using >= and <=
Hi,
I have a performance issue using a condition like:
SELECT * FROM A WHERE ITEM_NO >= 'M-1130' AND ITEM_NO <= 'M-9999'
ITEM_NO is a VARCHAR2 field that contains numeric as well as string values.
Can anyone help solve this issue?
Thanks and Regards
How can you say it is a performance issue with the condition? Do you have an execution plan? If yes, post it between [pre] and [/pre] tags, like this:
[pre]SQL> explain plan for
2 select sysdate
3 from dual
4 /
Explained.
SQL> select * from table(dbms_xplan.display)
2 /
PLAN_TABLE_OUTPUT
--------------------------------------------------------------
Plan hash value: 1546270724
--------------------------------------------------------------
| Id | Operation        | Name | Rows | Cost (%CPU)| Time     |
--------------------------------------------------------------
|  0 | SELECT STATEMENT |      |    1 |     2   (0)| 00:00:01 |
|  1 |  FAST DUAL       |      |    1 |     2   (0)| 00:00:01 |
--------------------------------------------------------------
8 rows selected.
SQL>
[/pre]
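Applied to the query in question, that would look like this (a sketch only; table name A as posted, and the plan will show whether an index on ITEM_NO is used or a full scan happens):
[pre]SQL> explain plan for
2 select * from a where item_no >= 'M-1130' and item_no <= 'M-9999'
3 /
Explained.
SQL> select * from table(dbms_xplan.display)
2 /
[/pre]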
Similar Messages
-
RE: Case 59063: performance issues w/ C TLIB and Forte 3M
Hi James,
Could you give me a call, I am at my desk.
I had meetings all day and couldn't respond to your calls earlier.
-----Original Message-----
From: James Min [mailto:jminbrio.forte.com]
Sent: Thursday, March 30, 2000 2:50 PM
To: Sharma, Sandeep; Pyatetskiy, Alexander
Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
Hello,
I just want to reiterate that we are very committed to working on
this issue, and that our goal is to find out the root of the problem. But
first I'd like to narrow down the avenues by process of elimination.
Open Cursor is something that is commonly used in today's RDBMS. I
know that you must test your query in ISQL using some kind of execute
immediate, but Sybase should be able to handle an open cursor. I was
wondering if your Sybase expert commented on the fact that the server is
not responding to a commonly used command like 'open cursor'. According to
our developer, we are merely following the API from Sybase, and open cursor
is not something that particularly slows down a query for several minutes
(except maybe the very first time). The logs show that Forte is waiting for
a status from the DB server. Actually, using prepared statements and open
cursor ends up being more efficient in the long run.
Some questions:
1) Have you tried to do a prepared statement with open cursor in your ISQL
session (see the sketch after these questions)? If so, did it have the same slowness?
2) How big is the table you are querying? How many rows are there? How many
are returned?
3) When there is a hang in Forte, is there disk-spinning or CPU usage in
the database server side? On the Forte side? Absolutely no activity at all?
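For question 1, this is the kind of ISQL sketch we have in mind (table and column names here are placeholders, not your schema; the LIKE predicate mirrors the case title):
declare c1 cursor for
select * from your_table where your_col like 'M-%'
go
open c1
fetch c1
close c1
deallocate cursor c1
go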
We actually have a Sybase set-up here, and if you wish, we could test out
your database and Forte PEX here. Since your queries seem to be running
off of only one table, this might be the best option, as we could look at
everything here, in house. To do this:
a) BCP out the data into a flat file. (character format to make it portable)
b) we need a script to create the table and indexes.
c) the Forte PEX file of the app to test this out.
d) the SQL statement that you issue in ISQL for comparison.
If the situation warrants, we can give a concrete example of
possible errors/bugs to a developer. Dial-in is still an option, but to be
able to look at the TOOL code, database setup, etc. without the limitations
of dial-up may be faster and more efficient. Please let me know if you can
provide this, as well as the answers to the above questions, or if you have
any questions.
Regards,
At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
James, Ken:
FYI, see attached response from our Sybase expert, Dani Sasmita. She has
already tried what you suggested and results are enclosed.
++
Sandeep
-----Original Message-----
From: SASMITA, DANIAR
Sent: Wednesday, March 29, 2000 6:43 PM
To: Pyatetskiy, Alexander
Cc: Sharma, Sandeep; Tenerelli, Mike
Subject: Re: FW: Case 59063: Select using LIKE has performance
issues
w/ CTLIB and Forte 3M
We did that trick already.
When it is hanging, I can see what is doing.
It is doing OPEN CURSOR. But not clear the exact statement of the cursor
it is trying to open.
When we run the query directly to Sybase, not using Forte, it is clearly
not opening any cursor.
And running it directly to Sybase many times, the response is always
consistently fast.
It is just when the query runs from Forte to Sybase, it opens a cursor.
But again, in the Forte code, Alex is not using any cursor.
In trying to capture the query, we even tried to audit any statement coming
to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
==============================================
James Min
Technical Support Engineer - Forte Tools
Sun Microsystems, Inc.
1800 Harrison St., 17th Fl.
Oakland, CA 94612
james.min@sun.com
510.869.2056
==============================================
Support Hotline: 510-451-5400
CUSTOMERS open a NEW CASE with Technical Support:
http://www.forte.com/support/case_entry.html
CUSTOMERS view your cases and enter follow-up transactions:
http://www.forte.com/support/view_calls.html
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
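In its simplest form, tracing one run boils down to this (a sketch; dbms_monitor is the standard 10g tracing interface, and with_pipeline is the procedure named above, so adjust to taste):
alter session set tracefile_identifier = 'pipe_test';
exec dbms_monitor.session_trace_enable(waits => TRUE, binds => FALSE);
exec with_pipeline;
exec dbms_monitor.session_trace_disable;
rem then format the trace file with: tkprof <trace_file> pipe_test.out sys=no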
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
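Once that harness is installed, comparing the two procedures looks roughly like this (a sketch; runstats_pkg and its rs_start/rs_middle/rs_stop procedures come from the download above):
exec runstats_pkg.rs_start;
exec no_pipeline;
exec runstats_pkg.rs_middle;
exec with_pipeline;
exec runstats_pkg.rs_stop(1000);
-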
Performance Issue in Dashboard using SAP BW NetWeaver Connection
Hi Experts,
We developed a dashboard based on BW queries; however, it takes a considerable amount of time to execute.
We are using BO Dashboards SP2, BO 4.1 and BI system 7.01.
We are looking for a few clarifications on SAP BO Xcelsius Dashboards.
Though we know the limitations on component count and data volume that can badly affect dashboard performance, we do have a requirement to handle huge data volumes and multiple components. Our source data lies in the SAP BI system and we are using BICS connectivity / WebI with QaaWS for updating data in the BO dashboard.
Our requirement is complex: users expect a complete view of 75 KPIs in a single dashboard.
We have scenarios like YTD, QTD and MTD plus other complex calculations in the BEx query.
Here are my questions,
Is there any way to provide complete functionality using large data sets to the users with the current architecture without any performance issues?
Are there any third party tools which can be used with BO Dashboard for the performance improvement and handling huge volumes?
Do you suggest any alternate solution for complete functionality?
Many thanks for your inputs in advance!
Regards
Venkat
Hi Rajesh,
Thank you so much for your response.
I have tried the way you suggested. But my issue is that I have a prompt in my WebI report based on month selection, and it is a single-value prompt.
So I was able to schedule my report for only one month, whereas my dashboard needs to show values for one year of data based on the months the user selects in the dashboard.
Though I use the WebI instance data in the dashboard, I get only one month's value, and I am not able to associate the selected month with the WebI instance in the dashboard.
Is there any option to schedule the WebI report for different months so that the dashboard picks up the 12 monthly instances and the combo box selection in the dashboard is associated with them?
Please help me with your thoughts. -
Performance issue with high CPU and IO
Hi guys,
I am encountering huge user response times on a production system and I don't know how to solve it.
Doing some extra tests and using the instrumentation we have in the code, we concluded that the DB is the bottleneck.
We generated some AWR reports and noticed CPU was among the top wait events. We also noticed that, in a random manner, some simple SQL statements take a long time to execute. We activated SQL trace on the system and noticed that very simple SQLs (unique index access on one table) have huge execution times: 9s.
In the trace file the huge time is in the fetch phase: 9.1s CPU and 9.2s elapsed,
and no or very small waits for this specific SQL.
It seems the bottleneck is on the CPU, but at that point there were very few processes running on the DB. Why would we have such a big CPU time on a simple select? This is a machine with 128 cores. We have quicker responses on machines smaller and busier than this.
We noticed that we had a huge db_cache_size (12G), and after we scaled it down we saw some improvement, but not enough. How can I prove that there is a link between high CPU and a big cache size? (There was no wait involved in the SQL execution.) What can we do if we need a big DB cache size?
The second issue: I tried to execute a SQL statement on a big table (FTS on a big table, no join). On the smaller machine it runs in 30 seconds; on this machine it runs in 1038 seconds.
Also generated a trace for this SQL on the problematic machine:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    402.08    1038.31    1842916    6174343          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    402.08    1038.32    1842916    6174343          0           1

Event waited on                          Times Waited  Max. Wait  Total Waited
---------------------------------------- ------------ ---------- -------------
db file sequential read                         12419       0.21         40.02
i/o slave wait                                 135475       0.51        613.03
db file scattered read                         135475       0.52        675.15
log file switch completion                          5       0.06          0.18
latch: In memory undo latch                         6       0.00          0.00
latch: object queue header operation                1       0.00          0.00
********************************************************************************
The high CPU is present here also, but here I have a huge wait on db file scattered read.
Looking at the session running the select, the average wait for db file scattered read was 0.5s; on the other machine it is around 0.07s.
I thought this was an IO issue. I did some IO tests at OS level and it seems the read and write operations are very fast... much faster than on the machine that has the smaller average wait. Why the difference in waits?
One difference between these two DBs is that the problematic one has db block size = 16k and the other one has 8k.
I received some reports done at OS level on CPU and IO usage on the problematic machine (in normal operation). It seems the CPU is heavily used while IO stays very low.
On the other machine, the smaller and faster one, it is the other way around.
What is the problem here? How can I test further? Can I link the high CPU to low/slow IO?
We have 10g on Sun OS with ASM.
Thanks in advance.
Yes, there are many things you can and should do to isolate this. But first check that MOS Poor Performance With Oracle9i and 10g Releases When Using Dynamic Intimate Shared Memory (DISM) [ID 1018855.1] isn't messing you up to start.
Also, be sure and post exact patch levels for both Oracle and OS.
Be sure and check all your I/O settings and see what MOS has to say about those.
Are you using ASSM? See Long running update
Since it got a little better with shrinking the SGA size, that might indicate (wild speculation here) something like: one of the problems is simply too much thrashing within the SGA, as Oracle decides "small" objects being full-scanned in memory are faster than range scans (or whatever) from disk, overloading the CPU and not allowing it to ask for other full scans from I/O. Possibly made worse by row-level locking, or some other app issue that just does too much CPU.
You probably have more than one thing wrong. High fetch count might mean you need to adjust the array size on the clients.
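For example, in SQL*Plus (a sketch; the default arraysize is 15, and JDBC/OCI clients have their own fetch-size settings):
SET ARRAYSIZE 500
rem re-run the slow select and compare the Fetch count and elapsed time in a new trace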
Now that that is all out of the way, if you still haven't found the problem, go through http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Edit: Oh, see Solaris 10 memory management conflicts with Automatic PGA Memory Management [ID 460424.1] too.
Edited by: jgarry on Nov 15, 2011 1:45 PM -
Performance issue on query. Help needed.
This is mainly a performance issue. I hope someone can help me with this.
Basically I have four tables: Master (150000 records), Child1 (100000+ records), Child2 (50 million records!), Child3 (10000+ records)
(please pardon the aliases).
Now every record in master has more than one corresponding record in each of the child tables (one to many).
Also there may not be any record in any or all of the tables for a particular master record.
Now, I need to fetch the max of last_updated_date for every master record in each of the 3 child tables and then find the maximum of
the three last_updated_dates obtained from the 3 tables.
eg: for Master ID 100, I need to query Child1 for all the records of Master ID 100 and get the max last_updated_date.
Same for the other 2 tables and then get the maximum of these three values.
(I also need to take care of cases where no record may be found in a child table for a Master ID)
Writing a procedure that uses cursors to fetch the value from each of the child tables hurts performance
badly. And the thing is, I need to find the last_updated_date for every Master record (all 150000 of them). It would probably take days to do this.
SELECT MAX (C1.LAST_UPDATED_DATE)
,MAX (C2.LAST_UPDATED_DATE)
,MAX (C3.LAST_UPDATED_DATE)
FROM CHILD1 C1
,CHILD2 C2
,CHILD3 C3
WHERE C1.MASTER_ID = 100
OR C2.MASTER_ID = 100
OR C3.MASTER_ID = 100
I tried the above but I got a temp tablespace error. I don't think the query is good enough at all.
(The OR clause is to take care of no records in any child table. With an AND, the join and hence the select would
fail even if there is no record in one child table but valid values in the other 2 tables.)
Thanks a lot.
Edited by: user773489 on Dec 16, 2008 11:49 AM
Not sure I understand the problem. The max you are getting from the above is already the greatest out of the three - that's why we do the UNION ALL.
Here's sample code without output, maybe this will clear it up:
with a as (
select 10 MASTER_ID, to_date('12/15/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 20 MASTER_ID, to_date('12/01/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 30 MASTER_ID, to_date('12/02/2008', 'MM/DD/YYYY') LAST_DTE from dual
),
b as (
select 10 MASTER_ID, to_date('12/14/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 20 MASTER_ID, to_date('12/02/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 40 MASTER_ID, to_date('11/15/2008', 'MM/DD/YYYY') LAST_DTE from dual
),
c as (
select 10 MASTER_ID, to_date('12/07/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 30 MASTER_ID, to_date('11/29/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 40 MASTER_ID, to_date('12/13/2008', 'MM/DD/YYYY') LAST_DTE from dual
)
select MASTER_ID, MAX(LAST_DTE)
FROM
(select MASTER_ID, LAST_DTE from a UNION ALL
select MASTER_ID, LAST_DTE from b UNION ALL
select MASTER_ID, LAST_DTE from c)
group by MASTER_ID;
MASTER_ID MAX(LAST_DTE)
30 02-DEC-08
40 13-DEC-08
20 02-DEC-08
10 15-DEC-08
4 rows selected
Edited by: tk-7381344 on Dec 16, 2008 12:38 PM
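Applied to the real tables, the same UNION ALL idea might look like this (a sketch only; Master/Child1/Child2/Child3 and last_updated_date are the names from the question, and pre-aggregating each child avoids the cross-join that blew out your temp tablespace):
select m.master_id, max(x.last_dte) last_updated
from master m
left join
(select master_id, max(last_updated_date) last_dte from child1 group by master_id
union all
select master_id, max(last_updated_date) from child2 group by master_id
union all
select master_id, max(last_updated_date) from child3 group by master_id) x
on x.master_id = m.master_id
group by m.master_id;
-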
MacBook Pro performance issues w/2nd monitor and FCP7
I have this MacBook Pro bought brand-new in January 2010:
Model Name: MacBook Pro
Model Identifier: MacBookPro5,2
Processor Name: Intel Core 2 Duo
Processor Speed: 3.06 GHz
Number Of Processors: 1
Total Number Of Cores: 2
L2 Cache: 6 MB
Memory: 8 GB
Bus Speed: 1.07 GHz
and until today had never attached a second monitor to it. Today I hooked up my Samsung 24" to do some dual-screen editing in Final Cut 7.0.3. I was unable to play back my video at full speed on the second monitor, and after a few seconds of skippy playback I'd get that error message about being unable to play back video at full speed and to check my RT settings. I was using a Mini DisplayPort to DVI adapter. My computer has no issues playing the video on the laptop's monitor at any resolution and any quality settings (I've never changed the RT settings or anything else in the menu ever, but I tried every combination this time). I then tried using my TV as a 2nd monitor with an HDMI adapter. Same performance issues. I then tried my friend's newer 13" MBP 8,1 and it performed flawlessly with the same project & footage. I feel like my $3,000 computer should outperform a $1,200 one even if mine is a year and a half older. Any advice?
Chris
Wow, you posted this perfectly to coincide with an identical problem, albeit using Logic Pro 9.1.5 rather than FCP.
Last week, I purchased a 23" external monitor to use alongside my "flagship" 2011 15" hi-res, 2.3 i7 Macbook Pro with 8Gb of RAM.
It is connected via a mini-DVI to D-sub analog adapter (not that that should matter?) and all appeared fine.
The first issue I had was with my MBP's fan now running CONSTANTLY, when I have the second monitor attached. Even when the machine is completely idle.
When using the machine to record audio, this is a fairly hefty problem and not something I had anticipated - indeed why would I anticipate such a thing?
What is far, far worse though is that over the last few days I have had repeated problems with performance drop-outs and errors in Logic, and I have been trying to fathom out why. Realising that the only major system change made was the above monitor connection, I ran some tests.
I restarted my MBP, no other apps were running and with my new 23" monitor attached acting as main display with MBP built in display on as secondary
I loaded up a fairly demanding Logic project which was hitting 40% to 60% CPU usage when using the built in MBP display last week
I ran activity monitor and had CPU usage history open
The above project now repeatedly overloads and playback halts in a given 8 bar section - with CPU at 80% most of the time
I disconnected the external display, no shut down, I just let the machine switch to the built in 15".
Started the same project, the same 8 bar section and hey presto - CPU usage back down to 40% to 60%
The above was reflected in the CPU usage history with the graph showing CPU use down by about a half, when running this Logic project WITHOUT the external display.
There is a very useful benchmark Logic project that has been used as a test by many users to gauge Logic performance on given Apple hardware.
The project has about 100 tracks pre-configured with CPU intensive plugins, designed to tax the CPU.
The idea is that you load up the project with tracks muted, press play and then unmute the tracks steadily until Logic is unable to play continuously because of a system performance error.
On my MBP, with the external monitor NOT attached, I can play back around 50 of the audio tracks in this benchmark project.
With the monitor attached, I can get about 22 tracks playing.... which is actually a far worse performance drop (-50% I think!?) than in the first example!
I did also try with just the external monitor attached and not the MBP display and performance was about 10% better than with dual monitors - so still extremely poor, to say the least.
This machine is the flagship MBP and has a dedicated AMD Radeon HD6750 GPU which should take care of most if not ALL graphics processing - I mean it's capable of running some pretty demanding games!
Putting aside the issue of constant fan noise, there is no reason AT ALL, why using an external monitor should tax the i7 CPU this way - it's not as though Logic is graphically demanding... far from it.
I am on 10.6.8, Logic 9.1.5, all apps up to date via "Software Update".
I will of course, be contacting Apple... -
SAP Query Use and Transport Strategy
Anyone wish to share their experience in the use of SAP Query? We generally have an understanding that we don't want to be giving out this tool to end-users in Production. We would like to create queries, and when we wish to give them out we'll attach t-codes to them and roll them out.
However in practice, this is becoming difficult. An example: in our gold client we create queries and then typically transport them to our unit test client. But every export generates a transport request. Before we are done testing we may end up with tens of transports for a single query.
Anyone have some ideas on a transport strategy for SAP Query? How about its use in Production? Our landscape for changes is typically DEV Gold -> DEV Test -> QAS -> PRD. We would ideally like our transport strategy for queries to match what we do for everything else.
Hi,
Query objects are transported in different ways according to the query area in which they were created.
In order to know which transport options are available, you must first understand how query objects are created.
Standard Area
Query objects are stored in the client-specific table AQLDB. They are not connected to the Change and Transport Organizer.
Global Area
Query objects are stored in the cross-client table AQGDB. They are connected to the Change and Transport Organizer.
http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb467f455611d189710000e8322d00/content.htm
Global area objects can be transported into other systems. Standard area query objects can not only be transported to other clients within their own system, but into all clients of other systems as well. In addition, query objects can be transported from the global query area to the standard query area and back within the same system. Transports are normally performed by the system administrator, not by end-users. For this reason, you need the appropriate authorizations.
Check the below links for detailed explanation
Transporting Global Area Objects
http://help.sap.com/saphelp_47x200/helpdata/en/ec/052786a30411d1950a0000e82de14a/content.htm
Transporting Standard Area Objects
http://help.sap.com/saphelp_47x200/helpdata/en/ec/052789a30411d1950a0000e82de14a/content.htm
General Transport Description
http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb4699455611d189710000e8322d00/content.htm
Generating Transporting Datasets
http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb46a6455611d189710000e8322d00/content.htm
Reading Transport Datasets
http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb46e7455611d189710000e8322d00/content.htm
Managing Transport Datasets
http://help.sap.com/saphelp_47x200/helpdata/en/d2/cb46f4455611d189710000e8322d00/content.htm
Transporting Objects between Query Areas
http://help.sap.com/saphelp_47x200/helpdata/en/ec/05278ca30411d1950a0000e82de14a/content.htm
I hope this solves your purpose.
Regards,
Vara
Message was edited by:
varaprasad bhagavatula -
Performance issues while querying data from a table with a large number of records
Hi all,
I have performance issues with queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is as below:
SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
SELECT SUM (B.BASE_TRANSACTION_VALUE)
FROM
MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A
WHERE A.ORGANIZATION_ID = B.ORGANIZATION_ID
AND A.ORGANIZATION_ID = :b1
AND B.REFERENCE_ACCOUNT = A.MATERIAL_ACCOUNT
AND B.TRANSACTION_DATE <= LAST_DAY (TO_DATE (:b2 , 'MON-YY' ) )
AND B.ACCOUNTING_LINE_TYPE != 15
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      3      0.02       0.05          0          0          0           0
Fetch        3    134.74     722.82     847951    1003824          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7    134.76     722.87     847951    1003824          0           2
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer mode: ALL_ROWS
Parsing user id: 193 (APPS)
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
1 1 1 SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
788242 788242 788242 NESTED LOOPS (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
1 1 1 TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
1 1 1 INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
788242 788242 788242 TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
8704356 8704356 8704356 INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
788242 NESTED LOOPS
1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'MTL_PARAMETERS' (TABLE)
1 INDEX MODE: ANALYZED (UNIQUE SCAN) OF
'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
788242 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'MTL_TRANSACTION_ACCOUNTS' (TABLE)
8704356 INDEX MODE: ANALYZED (RANGE SCAN) OF
'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
row cache lock 29 0.00 0.02
SQL*Net message to client 2 0.00 0.00
db file sequential read 847951 0.40 581.90
latch: object queue header operation 3 0.00 0.00
latch: gc element 14 0.00 0.00
gc cr grant 2-way 3 0.00 0.00
latch: gcs resource hash 1 0.00 0.00
SQL*Net message from client 2 0.00 0.00
gc current block 3-way 1 0.00 0.00
********************************************************************************
On a 5-node RAC environment the program completes in 15 hours, whereas on a single-node environment the program completes in 2 hours.
Is there any way I can improve the performance of this query?
Regards
Edited by: mhosur on Dec 10, 2012 2:41 AM
Edited by: mhosur on Dec 10, 2012 2:59 AM
Edited by: mhosur on Dec 11, 2012 10:32 PM
CREATE INDEX mtl_transaction_accounts_n0
ON mtl_transaction_accounts (
transaction_date
, organization_id
, reference_account
, accounting_line_type
)
/
-
Performance issues involving tables S031 and S032
Hello gurus,
I am having some performance issues. The program accesses data from S031 and S032. I have pasted the SELECT statements below. I have read through the forums' past postings regarding performance, but I wanted to know if there is anything that stands out as the culprit of the very poor performance, and how it can be corrected. I am fairly new to SAP, so I apologize if I've missed an obvious error. From debugging the program, it seems the 2nd SELECT statement takes a very long time to process.
GT_S032: approx. 40,000 entries
S031: approx. 90,000 entries
MSEG: approx. 115,000 entries
MKPF: approx. 100,000 entries
MARA: approx. 90,000 entries
SELECT
vrsio "Version
werks "Plan
lgort "Storage Location
matnr "Material
ssour "Statistic(s) origin
FROM s032
INTO TABLE gt_s032
WHERE ssour = space AND vrsio = c_000 AND werks = gw_werks.
IF sy-subrc = 0.
SELECT
vrsio "Version
werks "Plant
spmon "Period to analyze - month
matnr "Material
lgort "Storage Location
wzubb "Valuated stock receipts value
wagbb "Value of valuated stock being issued
FROM s031
INTO TABLE gt_s031
FOR ALL ENTRIES IN gt_s032
WHERE ssour = gt_s032-ssour
AND vrsio = gt_s032-vrsio
AND spmon IN r_spmon
AND sptag = '00000000'
AND spwoc = '000000'
AND spbup = '000000'
AND werks = gt_s032-werks
AND matnr = gt_s032-matnr
AND lgort = gt_s032-lgort
AND ( wzubb <> 0 OR wagbb <> 0 ).
ELSE.
WRITE: 'No data selected'(m01).
EXIT.
ENDIF.
SORT gt_s032 BY vrsio werks lgort matnr.
SORT gt_s031 BY vrsio werks spmon matnr lgort.
SELECT
p~werks "Plant
p~matnr "Material
p~mblnr "Document Number
p~mjahr "Document Year
p~bwart "Movement type
p~dmbtr "Amount in local currency
t~shkzg "Debit/Credit indicator
INTO TABLE gt_scrap
FROM mkpf AS h
INNER JOIN mseg AS p
ON h~mblnr = p~mblnr
AND h~mjahr = p~mjahr
INNER JOIN mara AS m
ON p~matnr = m~matnr
INNER JOIN t156 AS t
ON p~bwart = t~bwart
WHERE h~budat >= gw_duepr-begda
AND h~budat <= gw_duepr-endda
AND p~werks = gw_werks.
Thanks so much for your help,
Jayesh
Issue with table s031 and with for all entries.
Hi,
I have the following code, in which the SELECT on s031 takes a long time and then terminates with a dump. How can I avoid exceeding the execution time limit of the ABAP program?
TYPES:
BEGIN OF TY_MTL, " Material Master
MATNR TYPE MATNR, " Material Code
MTART TYPE MTART, " Material Type
MATKL TYPE MATKL, " Material Group
MEINS TYPE MEINS, " Base unit of Measure
WERKS TYPE WERKS_D, " Plant
MAKTX TYPE MAKTX, " Material description (Short Text)
LIFNR TYPE LIFNR, " vendor code
NAME1 TYPE NAME1_GP, " vendor name
CITY TYPE ORT01_GP, " City of Vendor
Y_RPT TYPE P DECIMALS 3, "Yearly receipt
Y_ISS TYPE P DECIMALS 3, "Yearly Consumption
M_OPG TYPE P DECIMALS 3, "Month opg
M_OPG1 TYPE P DECIMALS 3,
M_RPT TYPE P DECIMALS 3, "Month receipt
M_ISS TYPE P DECIMALS 3, "Month issue
M_CLG TYPE P DECIMALS 3, "Month Closing
D_BLK TYPE P DECIMALS 3, "Block Stock,
D_RPT TYPE P DECIMALS 3, "Today receipt
D_ISS TYPE P DECIMALS 3, "Day issues
TL_FL(2) TYPE C,
STATUS(4) TYPE C,
END OF TY_MTL,
BEGIN OF TY_OPG , " Opening File
SPMON TYPE SPMON, " Period to analyze - month
WERKS TYPE WERKS_D, " Plant
MATNR TYPE MATNR, " Material No
BASME TYPE MEINS,
MZUBB TYPE MZUBB, " Receipt Quantity
WZUBB TYPE WZUBB,
MAGBB TYPE MAGBB, " Issues Quantity
WAGBB TYPE WAGBB,
END OF TY_OPG.
DATA :
T_M TYPE STANDARD TABLE OF TY_MTL INITIAL SIZE 0,
WA_M TYPE TY_MTL,
T_O TYPE STANDARD TABLE OF TY_OPG INITIAL SIZE 0,
WA_O TYPE TY_OPG.
DATA: smonth1 TYPE spmon.
SELECT
a~matnr
a~mtart
a~matkl
a~meins
b~werks
INTO TABLE t_m FROM mara AS a
INNER JOIN marc AS b
ON a~matnr = b~matnr
* WHERE a~mtart EQ s_mtart
WHERE a~matkl IN s_matkl
AND b~werks IN s_werks
AND b~matnr IN s_matnr .
SELECT spmon
werks
matnr
basme
mzubb
WZUBB
magbb
wagbb
FROM s031 INTO TABLE t_o
FOR ALL ENTRIES IN t_m
WHERE matnr = t_m-matnr
AND werks IN s_werks
AND spmon le smonth1
AND basme = t_m-meins. -
Performance issue if we use jar file instead of classes
Hi,
My application uses Tomcat as the web server.
In one case I place my classes in the webapps -> WEB-INF -> classes folder;
in the other case I use a jar file and place it in the WEB-INF -> lib folder of the webapps directory.
There is a huge performance difference.
While using classes the performance is great; while using the jar file the performance is very disappointing.
I am using a file for encryption/decryption also.
I can't really believe that classes vs jars makes a difference, but whatever.
-
Performance issue with query when generated from an ODS
I am generating a query from an ODS. The run time is very high. How do I improve the performance of the query?
Hi Baruah,
Steps:
1. Build a secondary index.
2. Divide the data into 2 ODSs (historical and current data), then build a MultiProvider and build the query on the MultiProvider.
3. Build indexes at the table level (ODS table level).
You cannot get much faster performance out of ODSs, especially with huge data volumes...
The above are just a few of the options...
Hope you understood.
Regards,
Ravi Kanth -
Performance issues with query input variable selection in ODS
Hi everyone
We've upgraded from BW 3.0B to NW04s BI using SP12.
There is a problem with input variable selection. This happens regardless of using BEx (new or old 3.x) or RSRT. When using the F4 search help (or "Select from list" in the BEx context) to list possible values, it takes forever for a large ODS (containing millions of records).
Using ST01 and SM50 to trace the code in the same query, we see a difference here:
NW04s BI SQL command:
SELECT
"P0000"."COMP_CODE" AS "0000000032" ,"T0000"."TXTMD" AS "0000000032_TXTMD"
FROM
( "/BI0/PCOMP_CODE" "P0000" ) LEFT OUTER JOIN "/BI0/TCOMP_CODE" "T0000" ON "P0000"."COMP_CODE" = "T0000
"."COMP_CODE"
WHERE
"P0000"."OBJVERS" = 'A' AND "P0000"."COMP_CODE" IN ( SELECT "O"."COMP_CODE" AS "KEY" FROM
"/BI0/APY_PP_C100" "O" )
ORDER BY
"P0000"."COMP_CODE" ASC#
BW 3.0B SQL command:
SELECT ... WHERE ROWNUM < 500 ....
In 3.0B, ROWNUM is limited to 500, which results in a speedy though limited query. In the new NW04s BI, this renders the selection screen unusable, as ABAP dumps from timeouts occur first due to the large data volume searched using sequential reads.
It is not feasible to create indexes for every single query selection parameter (issues with performance when loading, space required, etc.). Is there a reason why SAP seems to have fallen back on less effective code for this?
I have tried to change the number of selected rows to <500 in the BEx settings, but one must reach a responsive screen in order to get to that setting, and it is not always possible or saved for the next run.
Anyone with similar experience, or who can provide help on this?
There is a reason why the F4 help on ODS was faster in BW 3.x.
In BW 3.x the ODS did not support the read mode "Only values in
InfoProvider". So, comparing the different SQL statements, I propose
to change the F4 mode in the InfoProvider-specific properties to
"About master data". This is the fastest F4 mode.
As an alternative you can define indexes on your ODS to speed up F4.
You would need a non-unique index on InfoObject 0COMP_CODE in your ODS; see the sketch below.
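At the database level, that index amounts to something like the following (a sketch; the table name is taken from the traced SQL above, the index name is made up, and in practice you would create the index via the ODS maintenance screen so the ABAP Dictionary knows about it):
CREATE INDEX "/BI0/APY_PP_C100~Z01" ON "/BI0/APY_PP_C100" ("COMP_CODE");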
Check below for insights
https://forums.sdn.sap.com/click.jspa?searchID=6224682&messageID=2841493
Hope it Helps
Chetan
@CP.. -
Issues with query using joins in 3 tables
I am trying to fetch data from 3 tables (Project, Risk and Issue) using joins. There are risks associated with some projects and issues associated with some projects.
ProjectID is the primary key in the Project table.
RiskID is the primary key in the Risk table. ProjectID is a foreign key.
IssueID is the primary key in the Issue table. ProjectID is a foreign key.
I need the project name and the count of risks and issues for each project. I am joining all 3 tables. The issue is that it gives me double the count of risks and issues for each project.
Please advise how I can get the correct numbers. I have used the below query:
select p.projectname,count(r.riskid),count(i.issueid) from project as p
left outer join risk as r on p.projectid=r.projectid
left outer join issue as i on p.projectid=i.projectid
group by
p.projectname
thanks
Hi All,
I got a new requirement: count the number of high-priority risks as well as high-priority issues along with the other details. I modified the query as below to include the changes, but I am not getting the desired result. Could you please help?
Original query:
select p.projectname,count(distinct r.riskid), count(distinct i.issueid) from project as p
left outer join risk as r on p.projectid=r.projectid
left outer join issue as i on p.projectid=i.projectid
group by p.projectname
Modified query:
select p.projectname,
count(distinct r.riskid),
sum(case when r.riskpriority='high' then 1 else 0 end),
sum(case when i.issuepriority='high' then 1 else 0 end),
count(distinct i.issueid)
from project as p
left outer join risk as r on p.projectid=r.projectid
left outer join issue as i on p.projectid=i.projectid
group by p.projectname
I should get the desired result as: XYZ,8,1,4,4
But I am getting: XYZ,8,4,4,32
thanks for the reply.
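One fanout-safe way to get the high-priority counts is to reuse the DISTINCT trick from the original query on the key columns themselves (a sketch; table and column names as given above, untested against your schema):
select p.projectname,
count(distinct r.riskid) as risk_cnt,
count(distinct case when r.riskpriority='high' then r.riskid end) as high_risk_cnt,
count(distinct case when i.issuepriority='high' then i.issueid end) as high_issue_cnt,
count(distinct i.issueid) as issue_cnt
from project as p
left outer join risk as r on p.projectid=r.projectid
left outer join issue as i on p.projectid=i.projectid
group by p.projectname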
QueryString issues with Query Builder and Search Box
Hi Guys,
I've got the following scenario (Issues are bolded for quicker reading):
I have a search results pages named results.aspx
I have a page, named messages.aspx, with a webpart that displays messages from a specific folder (which can be changed in the webpart properties). On that page, there's a search box which leads to results.aspx and will search only within the specific
folder.
On the other hand, when I'm using Content Search Web Part, I can use the query builder to do the following:
ParentLink:{QueryString.folder}
So all I have left to do is to send the folder as a query string parameter from messages.aspx to results.aspx.
(1) The search box doesn't support sending other QueryString parameters,
which is also known to happen in previous SP versions, so I fixed the onclick events and voila, the folder parameter is transferred to results.aspx.
But something weird happens in results.aspx:
When I'm first redirected from messages.aspx to results.aspx I get the following URL:
/Pages/results.aspx?k=Washington&folder=Home - I've got results
When another query is issued from the search box of results.aspx I'm getting the following URL:
/Pages/results.aspx?k=Washington&folder=Home#k=Indiana - I've got NO results.
Same with paging:
/Pages/results.aspx?k=Washington&folder=Home#s=11
(2) When a QueryString parameter is present on the results.aspx page, and another query (different keywords, paging or anything) is issued using AJAX and hashtags - no results are returned.
My guess is a client-side issue: when a new query is issued, adding a hashtag (#) to the URL, it ignores the QueryString parameter currently present in the URL, and thus the server doesn't receive it in the query packet/HTTP POST XML, which creates a
valid query with no results, in my case: ParentLink:"", which I manually checked and does yield no results.
I've also tried the following with no help:
/Pages/results.aspx#k=Washington#folder=Home - no results; my guess is that folder is now not a QueryString parameter
/Pages/results.aspx?folder=Home#k=Washington - No results.
/Pages/results.aspx#k=Washington&folder=Home - The query text issued was "Washington&folder=Home" - no results of course.
Any ideas any one?
Amir
Hi Amir,
After some months I found a workaround that solves this issue: just add & after your query string value; it will separate your parameter value from other strings that may be added by SharePoint controls.
ie:
if your page is http://spsite/page.aspx?param=value
just call it like this http://spsite/page.aspx?param=value&
It works!
Ciao
Fabio -
Performance issue in query on reguh
Hi,
The following query on table REGUH is taking a lot of time:
SELECT laufd laufi xvorl
zbukr lifnr kunnr
empfg vblnr znme1
znme2 rbetr
FROM reguh
INTO TABLE i_reguh_vendor
FOR ALL ENTRIES IN i_lfa1
WHERE laufd IN s_laufd AND
zbukr EQ i_lfa1-bukrs AND
lifnr EQ i_lfa1-lifnr AND
rzawe IN s_rzawe AND
uzawe IN s_uzawe AND
xvorl IN s_xvorl.
The company code is available on the selection screen as a select-option.
Check out OSS Note numbers 597984 (Create Index)
and 145647.
597984 (Create Index)
Solution
Solve the performance problem by importing Support Packages for Release 4.6C and 4.70. You can implement an advance correction. For this purpose implement the attached program corrections and create the following index in table REGUH:
Transaction SE11
Display Database table REGUH
Goto -> Indexes...
In the standard system no indexes have been defined yet, so answer the displayed question "Do you want to create an index?" with "Yes". If customer-specific indexes have already been created, click on "Create" (F5) to create a new index in the dialog box.
Index Name: PMW
Short description: Logical database FPMF of Payment Medium Workbench
Index flds: MANDT
FPM_KEY
GRPNO
SRTF1
Index -> Activate
Consequently, the index exists in the ABAP Dictionary. Finally, the index must be activated in the database.
If you cannot activate the index via "Utilities -> Database utility -> Create database index", let your database administrator create the index.
Note that follow-up actions can be required depending on the database, so that the new index is used.
For example, with DB2/390 a RUNSTATS must be carried out on table REGUH using Transaction DB13.
Hope this'll help you.
Good Luck !
Thanks
Message was edited by: Saquib Khan