Performance Issue in MIRO Transaction
Dear All,
Our end users have been complaining that MIRO takes too long to post an entry. What could be the reason? Please suggest possible solutions.
Regards,
Alok.
Hi Alok,
First check whether this issue affects all users or is specific to a single user.
Ask your Basis team to activate a trace for that user.
Also check whether any user exit is active that may be consuming a lot of time.
Reward points if useful.
Regards,
Atish
Similar Messages
-
Hi Friends,
While executing tcode MIRO, we are facing a performance issue. The details are as below.
We create a PO, then receive the goods against the PO, and then do invoice verification. In the PO we check the box for GR-Based IV.
We have observed that if we check the box for GR-Based IV, there is a performance issue with MIRO; if we do not check this box, there is no performance issue with MIRO.
Could you tell me how to resolve this issue ?
I will reward every useful response.
Please treat it as urgent.
Thx in Adv.
Bobby
If you check the box for GR-Based IV, then the GR data needs to be picked up from tables MKPF and MSEG for MIRO.
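For context, a rough sketch of the kind of goods-receipt lookup this implies (the field list is illustrative; the actual MIRO access path is more involved):

```sql
-- Sketch only: with GR-based IV, MIRO has to read the goods-receipt
-- documents for the PO from MKPF (header) and MSEG (items), roughly:
SELECT h.mblnr, h.mjahr, p.ebeln, p.ebelp, p.menge
  FROM mkpf h
  JOIN mseg p
    ON p.mblnr = h.mblnr AND p.mjahr = h.mjahr
 WHERE p.ebeln = :purchase_order;
```

MSEG can grow very large, and if it lacks a suitable index for this access the lookup can be slow, which would match the symptom described.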
Check with your Basis team...they may find the exact reason -
Performance issue: Follow up transaction creation taking too much time
Hi,
I am facing an issue where, whenever a user tries to create a follow-up quote from an opportunity, it takes more than 7-8 seconds. Can someone help me understand whether this is a standard time, or suggest ways to reduce the time taken?
We have already implemented SAP's go-live tuning recommendations on the Basis side.
thanks
RH
Hello,
Performance problems can originate at either the client or the server, due to the following:
--> Processing (including rendering time)
--> Memory
One needs to analyze which is the cause and then find a solution.
Hope this helps
Regards
Pramodh -
CK74 Batch Input performance issue
Hi SDNers
I've created a batch input (call transaction) in order to load additive costs for several materials/months. However, I realized that the performance is really poor. While analyzing this issue I found SAP Note 314528, which says that I should use tcode CK74 instead of CK74N. It also says that there are no BAPIs for these transactions.
Has anyone had a performance issue with this transaction? How did you solve it?
Thanks in advance for your help
Best Regards
Leonardo
Hi,
Thanks for your answer. You're right. However, in this case I used tcode CK74.
Has anyone had a similar performance issue?
Regards,
Leonardo -
Performance issues STWB_2 for some users
A small number of users have performance issues when executing transaction STWB_2, selecting a test plan, and going to the message overview.
This action behaves differently for users with the same roles and parameters in their user master record: executing STWB_2, selecting the same test plan, and going to the message overview takes 30 minutes for user 1 and 2 minutes for user 2.
Different behavior for individual users with the same access doing the same selection.
The only "solution" that works is re-creating the user master of the affected user under a different name. This is not really an option, nor a best practice.
Has anybody experienced the same issue and can give me tips and/or tricks on how to solve this?
Hi,
A colleague told me that the affected user may have a user setting that causes the data to be read anew (possibly remotely) instead of from the buffer. To check this, go to the status analysis and select a status icon. Switch to the message tab and click the user-settings icon. Uncheck the flag "Use current message data".
Regards
Andreas -
Performance issues while accessing the Confirm/Goods Services' transaction
Hello
We are using SRM 4.0 , through Enterprise Portal 7.0.
Many of our users are crippled by performance issues when accessing the Confirm/Goods Services tab (transaction BBPCF02).
The system simply shows the hourglass and never displays the screen.
This problem occurs for some users all the time, and for other users some of the time.
It is not related to the user's machine, as others are able to access it quickly from the same machine.
It is also not dependent on the data size (i.e. the number of confirmations created by the user).
We would like to know why only some users suffer more pronouncedly, and why this transaction is generally slower than all others.
Any direction toward finding the probable cause will be highly rewarded.
Thanks
Kedar
Hi Kedar,
Please go through the following OSS Notes:
Note 610805 - Performance problems in goods receipt
Note 885409 - BBPCF02: The search for confirmation and roles is slow
Note 1258830 - BBPCF02: Display/Process confirmation response time is slow
Thanks,
Pradeep -
Performance issue with transaction MC50
I am not sure where to post this question. If I need to post it in another forum, please let me know.
We are having performance issues with transaction MC50. After reviewing SAP Note 457615 we created an index on mseg with the following fields: MANDT, MJAHR, MATNR, WERKS and SOBKZ.
When running an explain on the sql statement, the database is using a different index. This index has the following fields MANDT, MATNR, WERKS, LGORT, BWART and SOBKZ.
The sql statement is ( sql trace from ST05):
SELECT * from mseg WHERE "MANDT" = '400' AND "MJAHR" BETWEEN 2009 AND 2009 AND "MATNR" = '000000000054001121' AND "WERKS" = 'SAT' AND "SOBKZ" IN ( 'K' , 'V' , 'W' , 'O' , ' ' )
Is there any way to force the database to use the newly created index?
Thanks....Tommy
Edited by: Tommy Knight on Dec 8, 2009 2:24 PM
Edited by: Tommy Knight on Dec 8, 2009 3:07 PM
Edited by: Tommy Knight on Dec 8, 2009 3:08 PM
Hello Tommy,
first of all, your database release and patch set are missing.
If you are using Oracle 10g, the advice of Peter is unnecessary, because statistics are automatically collected by a CREATE INDEX.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_5010.htm#i2075657
COMPUTE STATISTICS
In earlier releases, you could use this clause to start or stop the collection of statistics on an index. This clause has been deprecated. Oracle Database now automatically collects statistics during index creation and rebuild. This clause is supported for backward compatibility and will not cause errors.
The other thing is: are you really sure that this SQL statement was executed with literals?
I know that literals are sometimes recommended on MSEG (with histograms), but I just want to be sure.
If you want us to help you, we need much more information. Please check SAP Note 1257075 and upload the information to a web hosting platform so that we can take a look at it.
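As a side note, if the goal is simply to test whether the new index performs better, an optimizer hint can force its use for comparison. A sketch (the index name "MSEG~Z01" is a placeholder for the newly created index; from ABAP the equivalent hint can be passed via %_HINTS ORACLE):

```sql
-- Sketch: force a specific index via an INDEX hint and compare the plans.
-- "MSEG~Z01" stands in for the actual name of the new index.
SELECT /*+ INDEX(MSEG "MSEG~Z01") */ *
  FROM mseg
 WHERE mandt = '400'
   AND mjahr BETWEEN 2009 AND 2009
   AND matnr = '000000000054001121'
   AND werks = 'SAT'
   AND sobkz IN ('K', 'V', 'W', 'O', ' ');
```

A hint only steers the optimizer for that statement; if statistics make the other index look cheaper, the durable fix is usually fresh statistics (possibly with histograms) rather than a permanent hint.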
Regards
Stefan -
Critical: MIRO transaction - pop-up selection for multiple vendor selection
Hi experts ,
I have a requirement in my project: AP (accounts payable) requires a pop-up selection box in MIRO when multiple remit-to addresses exist for the same vendor. Currently AP has no way to see when multiple remit-to accounts exist for a vendor, and this is causing payments to be sent to the wrong place. So the requirement is: 1. Modify the MIRO transaction to determine whether a multiple remit-to vendor scenario exists and issue an info message to the user so that he/she can perform a vendor search.
Modify MIRO to Issue Info Message for Multiple Remit-to Scenario
If possible, find an appropriate BAdi or user exit where an info message can be issued. The message must be issued after entry of the purchase order number.
Determination of a multiple remit-to case is as follows:
- User in transaction MIRO enters a value in the purchase order number field and presses enter or other function key
- In exit code:
o the vendor number is determined from the purchase order entered
o the vendor master is then searched and the following vendor records are returned as matches:
ADBR remit-to vendors defined as partners to the purchase order vendor number
Vendor records with the same tax ID number as the purchase order vendor number
ADVN vendor records with the same value in LFA1-ZZMCOD2 in the first 12 characters only
Include the vendor number defined to the purchase order number entered
o If the search returns more than one vendor record, then issue the info message "Multiple remit-to vendors may exist. Please validate vendor on PO with vendor on the invoice."
Can anybody provide any input on this?
regards
Prasun
I have wondered about this for a while. I was trying something tonight and it seemed to do what you need. I suggest you save your document before trying it.
Select one of your popups.
Copy Style
Select all the popups
Paste Style
With all selected, go to the Inspector and add another item to the list.
All popups should now have the new item. The selected values should not have changed. Copy/paste style is required to make this work.
Please let me know if this works for you. I've tried it a few times and it seems to do the trick. -
MIRO transaction takes a lot of time during Save
Dear Sir,
During the Save operation, the MIRO transaction is taking a lot of time. Kindly guide us on how we can analyze the reason for the excessive time being taken. Is there any trace functionality available?
Kindly guide us through the steps to be followed for the analysis.
With Thanks and Rgds
B Mittal
Note 750644 - MIRO: Performance delivery costs settlement
967291 - MIRO: Performance problem with access to VFKK and VFKP
766477 - MIRO: PO search via shipment number w/o success
1. Check that statistics are being collected properly.
2. Find out the tables and indexes involved and check that their statistics are up to date.
3. Trace the transaction and find out the tables and the execution time for each step and table.
That is the best way to look into performance issues.
Hope it helps.
Amit -
Report Performance Issue - Activity
Hi gurus,
I'm developing an Activity report using the transactional database (online real-time objects).
The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user ID).
In order to fulfill that requirement I've created 2 reports:
1) All activities related to a contact -- Report A
pulls in Activity ID, Activity Type, Status, Contact ID
2) All activities not related to a contact UNION all activities related to a contact (base report) -- Report B
To get the list of activities not related to a contact I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
<Activity ID not equal to any Activity ID in Report B>
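In plain SQL terms, the advanced filter above is an anti-join, which engines often execute poorly when phrased as NOT IN against a second request. A sketch with illustrative table names (not the actual subject area):

```sql
-- Sketch: "Activity ID not equal to any Activity ID in Report B"
-- expressed as NOT EXISTS, which typically optimizes better than NOT IN.
-- Table and column names here are illustrative.
SELECT a.activity_id, a.activity_type, a.status
  FROM activities a
 WHERE NOT EXISTS (SELECT 1
                     FROM report_b_activities b
                    WHERE b.activity_id = a.activity_id);
```

In a hosted analytics tool the SQL is generated for you, so the practical fix is usually to avoid the second report entirely rather than to hand-tune the filter.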
Has anyone encountered a performance issue due to an advanced filter in analytics before?
Any input is really appreciated.
Thanks in advance,
Fina
Fina,
Union is always the last option. If you can get all records in one report, do not use union.
Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
if contact id is null (or = 'Unspecified') then owner name else contact name
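That column logic, written out as a CASE expression (column names illustrative):

```sql
-- Sketch of the suggested single-report column as a CASE expression:
CASE
  WHEN contact_id IS NULL OR contact_id = 'Unspecified'
  THEN owner_name
  ELSE contact_name
END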
Hopefully this helps. -
RE: Case 59063: performance issues w/ C TLIB and Forte3M
Hi James,
Could you give me a call, I am at my desk.
I had meetings all day and couldn't respond to your calls earlier.
-----Original Message-----
From: James Min [mailto:jminbrio.forte.com]
Sent: Thursday, March 30, 2000 2:50 PM
To: Sharma, Sandeep; Pyatetskiy, Alexander
Cc: sophiaforte.com; kenlforte.com; Tenerelli, Mike
Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
Hello,
I just want to reiterate that we are very committed to working on
this issue, and that our goal is to find out the root of the problem. But
first I'd like to narrow down the avenues by process of elimination.
Open Cursor is something that is commonly used in today's RDBMS. I
know that you must test your query in ISQL using some kind of execute
immediate, but Sybase should be able to handle an open cursor. I was
wondering if your Sybase expert commented on the fact that the server is
not responding to commonly used command like 'open cursor'. According to
our developer, we are merely following the API from Sybase, and open cursor
is not something that particularly slows down a query for several minutes
(except maybe the very first time). The logs show that Forte is waiting for
a status from the DB server. Actually, using prepared statements and open
cursor ends up being more efficient in the long run.
Some questions:
1) Have you tried to do a prepared statement with open cursor in your ISQL
session? If so, did it have the same slowness?
2) How big is the table you are querying? How many rows are there? How many
are returned?
3) When there is a hang in Forte, is there disk-spinning or CPU usage in
the database server side? On the Forte side? Absolutely no activity at all?
We actually have a Sybase set-up here, and if you wish, we could test out
your database and Forte PEX here. Since your queries seem to be running
off of only one table, this might be the best option, as we could look at
everything here, in house. To do this:
a) BCP out the data into a flat file. (character format to make it portable)
b) we need a script to create the table and indexes.
c) the Forte PEX file of the app to test this out.
d) the SQL statement that you issue in ISQL for comparison.
If the situation warrants, we can give a concrete example of
possible errors/bugs to a developer. Dial-in is still an option, but to be
able to look at the TOOL code, database setup, etc. without the limitations
of dial-up may be faster and more efficient. Please let me know if you can
provide this, as well as the answers to the above questions, or if you have
any questions.
Regards,
At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
James, Ken:
FYI, see attached response from our Sybase expert, Dani Sasmita. She has
already tried what you suggested and results are enclosed.
++
Sandeep
-----Original Message-----
From: SASMITA, DANIAR
Sent: Wednesday, March 29, 2000 6:43 PM
To: Pyatetskiy, Alexander
Cc: Sharma, Sandeep; Tenerelli, Mike
Subject: Re: FW: Case 59063: Select using LIKE has performance
issues
w/ CTLIB and Forte 3M
We did that trick already.
When it is hanging, I can see what it is doing.
It is doing OPEN CURSOR, but the exact statement of the cursor
it is trying to open is not clear.
When we run the query directly to Sybase, not using Forte, it is clearly
not opening any cursor.
And running it directly to Sybase many times, the response is always
consistently fast.
It is just when the query runs from Forte to Sybase, it opens a cursor.
But again, in the Forte code, Alex is not using any cursor.
In trying to capture the query, we even tried to audit every statement coming
to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
==============================================
James Min
Technical Support Engineer - Forte Tools
Sun Microsystems, Inc.
1800 Harrison St., 17th Fl.
Oakland, CA 94612
james.min@sun.com
510.869.2056
==============================================
Support Hotline: 510-451-5400
CUSTOMERS open a NEW CASE with Technical Support:
http://www.forte.com/support/case_entry.html
CUSTOMERS view your cases and enter follow-up transactions:
http://www.forte.com/support/view_calls.html
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance issues with dynamic action (PL/SQL)
Hi!
I'm having performance issues with a dynamic action that is triggered on a button click.
I have 5 drop down lists to select columns which the users want to filter, 5 drop down lists to select an operation and 5 boxes to input values.
After that, there is a filter button that just submits the page based on the selected filters.
This part works fine, the data is filtered almost instantaneously.
After this, I have 3 column selectors and 3 boxes where users put the values they wish to update the filtered rows to.
There is an update button that calls the dynamic action (the procedure written below).
It should be straightforward; the only performance concern could be the decode section, because I need to cover the cases where the user wants to set a value to null ('@') and where he wants to update fewer than 3 columns (leaving '').
Hence P99_X_UC1 || ' = decode(' || P99_X_UV1 ||','''','|| P99_X_UC1 ||',''@'',null,'|| P99_X_UV1 ||')
However, when I finally click the update button, my browser freezes and nothing happens to the table.
Can anyone help me solve this and improve the speed of the update?
Regards,
Ivan
P.S. The code for the procedure is below:
create or replace
PROCEDURE DWP.PROC_UPD
(P99_X_UC1 in VARCHAR2,
P99_X_UV1 in VARCHAR2,
P99_X_UC2 in VARCHAR2,
P99_X_UV2 in VARCHAR2,
P99_X_UC3 in VARCHAR2,
P99_X_UV3 in VARCHAR2,
P99_X_COL in VARCHAR2,
P99_X_O in VARCHAR2,
P99_X_V in VARCHAR2,
P99_X_COL2 in VARCHAR2,
P99_X_O2 in VARCHAR2,
P99_X_V2 in VARCHAR2,
P99_X_COL3 in VARCHAR2,
P99_X_O3 in VARCHAR2,
P99_X_V3 in VARCHAR2,
P99_X_COL4 in VARCHAR2,
P99_X_O4 in VARCHAR2,
P99_X_V4 in VARCHAR2,
P99_X_COL5 in VARCHAR2,
P99_X_O5 in VARCHAR2,
P99_X_V5 in VARCHAR2,
P99_X_CD in VARCHAR2,
P99_X_VD in VARCHAR2
) IS
l_sql_stmt varchar2(32600);
p_table_name varchar2(30) := 'DWP.IZV_SLOG_DET';
BEGIN
l_sql_stmt := 'update ' || p_table_name || ' set '
|| P99_X_UC1 || ' = decode(' || P99_X_UV1 ||','''','|| P99_X_UC1 ||',''@'',null,'|| P99_X_UV1 ||'),'
|| P99_X_UC2 || ' = decode(' || P99_X_UV2 ||','''','|| P99_X_UC2 ||',''@'',null,'|| P99_X_UV2 ||'),'
|| P99_X_UC3 || ' = decode(' || P99_X_UV3 ||','''','|| P99_X_UC3 ||',''@'',null,'|| P99_X_UV3 ||') where '||
P99_X_COL ||' '|| P99_X_O ||' ' || P99_X_V || ' and ' ||
P99_X_COL2 ||' '|| P99_X_O2 ||' ' || P99_X_V2 || ' and ' ||
P99_X_COL3 ||' '|| P99_X_O3 ||' ' || P99_X_V3 || ' and ' ||
P99_X_COL4 ||' '|| P99_X_O4 ||' ' || P99_X_V4 || ' and ' ||
P99_X_COL5 ||' '|| P99_X_O5 ||' ' || P99_X_V5 || ' and ' ||
P99_X_CD || ' = ' || P99_X_VD ;
--dbms_output.put_line(l_sql_stmt);
EXECUTE IMMEDIATE l_sql_stmt;
END;
Hi Ivan,
I do not think that the decode is performance relevant. Maybe the update hangs because some other transaction has uncommitted changes to one of the affected rows or the where clause is not selective enough and needs to update a huge amount of records.
Besides that - and I might be wrong, because I only know part of your app - the code here looks like it has a huge SQL injection vulnerability. Maybe you should consider rewriting your logic in static SQL. If that is not possible, you should make sure that the user input only contains allowed values, e.g. by white-listing P99_X_On (i.e. make sure they only contain known values like '=', '<', ...), and by using dbms_assert.enquote_name/enquote_literal on the other P99_X_nnn parameters.
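A minimal sketch of that validation idea for one update column and one filter, reusing the parameter names from the posted procedure (the exact checks are illustrative, not a drop-in replacement):

```sql
-- Sketch: whitelist the operator, validate identifiers with DBMS_ASSERT,
-- and pass values as bind variables instead of concatenating them.
DECLARE
  l_sql VARCHAR2(4000);
BEGIN
  IF p99_x_o NOT IN ('=', '<', '>', '<=', '>=', '<>', 'like') THEN
    raise_application_error(-20001, 'Operator not allowed');
  END IF;
  l_sql := 'update ' || dbms_assert.sql_object_name('DWP.IZV_SLOG_DET')
        || ' set '   || dbms_assert.simple_sql_name(p99_x_uc1) || ' = :new_val'
        || ' where ' || dbms_assert.simple_sql_name(p99_x_col)
        || ' '       || p99_x_o || ' :filter_val';
  EXECUTE IMMEDIATE l_sql USING p99_x_uv1, p99_x_v;
END;
```

Bind variables also sidestep the quoting gymnastics of the decode approach, since empty and '@' inputs can be mapped to the intended values in plain PL/SQL before the EXECUTE IMMEDIATE.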
Regards,
Christian -
Performance issues with the Tuxedo MQ Adapter
We are experiencing some performance issues with the MQ Adapter. For example, we are seeing that the MQ Adapter takes from 10 to 100 ms to read a single message from the queue and send it to the Tuxedo service. The Tuxedo service takes 80 ms in its execution, so there is a considerable amount of time lost in the MQ adapter that we cannot explain.
Also, we have seen a lot of rollback transactions on the MQ adapter; for example, we got 980 rollback transactions for 15736 transactions sent, and only the MQ adapter is involved in the rollback. However, the operations are executed properly. The error we get is:
135027.122.hqtux101!MQI_QMTESX01.7636.1.0: gtrid x0 x4ec1491f x25b59: LIBTUX_CAT:376: ERROR: tpabort: xa_rollback returned XA_RBROLLBACK.
I have been looking for information on the Oracle site, but I have not found anything. Could you or someone from your team help me?
Hi Todd,
We have 6 MQI adapters reading from 5 different queues, but in this case we are writing in only one queue.
Someone from Oracle told us that the XA_RBROLLBACK occurs because we have 6 MQ adapters reading from the same queues, and when one adapter finds a message and tries to get it, another MQ adapter may get it first. In that case, the MQ adapter rolls back the transaction. Even when we get some XA_RBROLLBACK errors, we don't lose messages. Also, I read that when XA sends an xa_end call to the MQ adapter, it actually does the rollback, so when the MQ adapter receives the xa_rollback call, it answers with XA_RBROLLBACK. Is that true?
However, I am more worried about the performance. We are putting a request message in an MQ queue and waiting for the reply. In some cases it takes 150 ms, and in other cases it takes much longer (more than 400 ms). The average is 300 ms. The MQ adapter calls a service (txgralms0) which takes 110 ms on average.
This is our configuration:
"MQI_QMTESX01" SRVGRP="g03000" SRVID=3000
CLOPT="-- -C /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg"
RQPERM=0600 REPLYQ=N RPPERM=0600 MIN=6 MAX=6 CONV=N
SYSTEM_ACCESS=FASTPATH
MAXGEN=1 GRACE=86400 RESTART=N
MINDISPATCHTHREADS=0 MAXDISPATCHTHREADS=1 THREADSTACKSIZE=0
SICACHEENTRIESMAX="500"
/tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg:
*SERVER
MINMSGLEVEL=0
MAXMSGLEVEL=0
DEFMAXMSGLEN=4096
TPESVCFAILDATA=Y
*QUEUE_MANAGER
LQMID=QMTESX01
NAME=QMTESX01
*SERVICE
NAME=txgralms0
FORMAT=MQSTR
TRAN=N
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCRQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCPQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPSAQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPINQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KDECQ01
Thanks in advance,
Marling -
Short dump due to performance issue
Hi all,
I am facing a performance issue in my sandbox. Below is the content of the short dump.
Short text
Unable to fulfil request for 805418 bytes of memory space.
What happened?
Each transaction requires some main memory space to process
application data. If the operating system cannot provide any more
space, the transaction is terminated.
What can you do?
Try to find out (e.g. by targeted data selection) whether the
transaction will run with less main memory.
If there is a temporary bottleneck, execute the transaction again.
If the error persists, ask your system administrator to check the
following profile parameters:
o ztta/roll_area (1.000.000 - 15.000.000)
Classic roll area per user and internal mode
usual amount of roll area per user and internal mode
o ztta/roll_extension (10.000.000 - 500.000.000)
Amount of memory per user in extended memory (EM)
o abap/heap_area_total (100.000.000 - 1.500.000.000)
Amount of memory (malloc) for all users of an application
server. If several background processes are running on
one server, temporary bottlenecks may occur.
Please help me resolve this issue.
Regards,
Kalyani
Edited by: kalyani usa on Jan 9, 2008 9:04 PM
Hi Rob Burbank,
I am pasting the transaction I found in the dump
Transaction......... "SESSION_MANAGER "
Transactions ID..... "4783E5B027A73C1EE10000000A200A17"
Program............. "SAPMSYST"
Screen.............. "SAPMSYST 0500"
Screen line......... 16
Also I am pasting the screenshot of ST02
Nametab (NTAB) 0
Table definition 99,22 6.799 3.591 62,97 20.000 12.591 62,96 0 8.761
Field definition 99,06 31.563 345 1,15 20.000 13.305 66,53 244 7.420
Short NTAB 99,22 3.625 2.590 86,33 5.000 3.586 71,72 0 1.414
Initial records 52,50 6.625 3.408 56,80 5.000 249 4,98 817 5.568
0
program 99,58 300.000 1.212 0,42 75.000 67.561 90,08 7.939 46.575
CUA 99,08 3.000 211 8,84 1.500 1.375 91,67 23.050 846
Screen 99,46 4.297 1.842 45,00 2.000 1.816 90,80 81 963
Calendar 100,00 488 401 85,14 200 111 55,50 0 89
OTR 100,00 4.096 3.281 100,00 2.000 2.000 100,00 0
0
Tables 0
Generic Key 99,69 29.297 2.739 9,87 5.000 177 3,54 57 56.694
Single record 89,24 10.000 63 0,64 500 468 93,60 241 227.134
0
Export/import 76,46 50.000 40.980 83,32 2.000 2.676
Exp./ Imp. SHM 97,82 4.096 3.094 94,27 2.000 1.999 99,95 0
SAP Memory Curr.Use % CurUse[KB] MaxUse[KB] In Mem[KB] OnDisk[KB] SAPCurCach HitRatio %
Roll area 0,16 432 18.672 131.072 131.072 IDs 98,11
Page area 0,19 496 187.616 65.536 196.608 Statement 95,00
Extended memory 9,89 151.552 1.531.904 1.531.904 0 0,00
Heap memory 0 0 1.953.045 0
0,00
Regards,
Kalyani -
Many-to-many performance issue
I realize that many-to-many joins have been discussed before (yes, I looked through many threads), but I'm having a slight variation on the issue. Our data warehouse has been functioning for a couple of years now, but we're now experiencing a dramatic degradation in report performance. I'll tell you everything I know and what I've tried. My hope is that someone will have an idea that hasn't occurred to me yet.
The troubling data links deal with accounts and account_types. Each transaction will have one account, but each account can have multiple account_types and each account_type is made up of multiple accounts. It ends up looking like this:
Transaction_cube --< account_dimension >--< account_type_table
Given the many-to-many relationship between account and account_type, this is the only architecture I could come up with that will maintain data integrity in the transaction cube.
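For readers, the architecture described above resolves to a bridge join along these lines (table and column names are illustrative, inferred from the description):

```sql
-- Sketch: each transaction joins to one account, which bridges through the
-- account_type table to potentially many types (and each type to many accounts).
SELECT t.transaction_id, ty.account_type_name
  FROM transaction_cube  t
  JOIN account_dimension a  ON a.account_id  = t.account_id
  JOIN account_type_table ty ON ty.account_id = a.account_id;
```

Because the bridge fans each transaction row out to one row per account type, the row count of the joined result, not the ~15k-row bridge table itself, is often what drives the cost.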
I know that this is the cause of the performance issues because the reports run normally when this is removed. The volume of data obviously increases over time, but the problem appeared very suddenly -- not a gradual degradation that one would expect from a volume issue. The cube is partitioned by year and we're a little below last year's growth.
The other fact to throw in is that the account_type table did increase in size by an additional 30% when we first noticed the problem. However, the business was able to go back and remove half of the account_types (unused types) so now the table has fewer rows than it had before we noticed the problem (~15k rows in the account_type table).
We have tried pinning the table so that it remain in memory, but that did not help. I tried creating a materialized view combining accounts and account_types with a similar lack of improvement. I've tried adding indexes, but there is still a full-table scan. All database objects are analyzed nightly after the data load is completed.
I'm fresh out of ideas at this point. Any suggestions and/or ideas would be greatly appreciated.
I've thought about that. What it would mean is approx. 20 additional columns, one for each of the different account_types. Unfortunately, that would also mean that all the reports that use the account_type would have to have a condition:
WHERE acct_type1='Income Stmt.' OR acct_type2='Income Stmt.' OR ....
Since the account_types are not set up in a hierarchy and there must be only one row per account, I'm not sure that this is a feasible solution.
Thank you for the suggestion.