Fetching from LDB PNP - Performance Issue
Hi,
My apologies if my question sounds too basic, but I have not worked in HR before, so I just wanted to clarify. I researched it on SDN and built a solution, but wanted to check with you whether this is the correct way to do it in HR.
Requirement: Get data from a few infotypes and display a report. The report uses infotypes 0000, 0001, 0002, 0008 and 0102. On the selection screen I have grievance fields from infotype 0102, namely SUBTY (Subtype) and GRSTY (Reason), plus PERNR and employment status.
Issue: There is a performance problem when no PERNR is entered on the selection screen but GRSTY is entered as 04 and SUBTY as 1. Basically, all records (PERNRs) are fetched first, the selection criteria (subtype and reason from 0102) are applied afterwards, and only the filtered ones are displayed, which causes the performance issue (seen at runtime: all PERNRs are fetched).
Solution I Proposed: Before the GET PERNR event, select the PERNRs from table PA0102 that satisfy the selection criteria and store them in the PNPPERNR[] range. The performance improvement is clearly visible.
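For reference, a rough, untested sketch of that approach, assuming the report uses LDB PNP (set in the report attributes) and that PNPPERNR is the LDB's PERNR range as described in the post; the parameter names and the GRSTY field are taken from the question and should be treated as assumptions:

```abap
REPORT z_grievance_list.  " hypothetical report name

TABLES: pernr.
INFOTYPES: 0000, 0001, 0102.

PARAMETERS: p_subty TYPE subty        DEFAULT '1',
            p_grsty TYPE pa0102-grsty DEFAULT '04'.

DATA: lt_pernr TYPE TABLE OF pa0102-pernr.

START-OF-SELECTION.
  " Pre-select only the personnel numbers that have a matching
  " IT0102 record, so the LDB does not loop over every PERNR.
  SELECT DISTINCT pernr FROM pa0102
    INTO TABLE lt_pernr
    WHERE subty = p_subty
      AND grsty = p_grsty.

  " Feed the result into the LDB's PNPPERNR range.
  LOOP AT lt_pernr INTO pnppernr-low.
    pnppernr-sign   = 'I'.
    pnppernr-option = 'EQ'.
    APPEND pnppernr.
  ENDLOOP.

GET pernr.
  " GET PERNR still runs, so the standard HR authorization
  " checks of the LDB continue to apply.
  rp_provide_from_last p0102 p_subty pn-begda pn-endda.
```

This keeps the LDB (and its authorization handling) in place while restricting the expensive per-PERNR loop to the pre-filtered set.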
Question: Have I done it the right way, or is there another way to achieve this? This is what we would do in general ABAP, so I just wanted to check with the experts whether I have done it correctly.
Thanks,
Santosh
Hi Ramesh,
Thanks for your time. Actually, that's the problem: the functional team may want to see all data at once for a particular reason code and, for now, is not agreeing to other options. I tried many options but in the end came up with this approach. Authorization checks are required, which is why I had to keep GET PERNR as well.
How is this handled in general scenarios where there is no filtering criterion on the PERNR structure?
Thanks,
Santosh
Similar Messages
-
Hello,
I am working on an existing 2010 Exchange implementation. The site has one Exchange 2010 server with no other Exchange servers running and/or available. I looked through the application logs and I see multiple Event ID 5013 errors that state,
"The routing group for Exchange server <ServerName>.<DomainName>.LOCAL was not determined in routing tables with timestamp 6/24/2014 8:01:48 PM. Recipients will not be routed to this server."
I would like to know whether having this old Exchange 2003 server in AD could be causing performance issues, or whether it is just a warning message. I located the old server name using ADSIEdit under "CN=Configuration,DC=<DomainName>DC=Local\CN=Services\CN=Microsoft Exchange\CN=First Organization\CN=Administrative Groups\CN=First Administrative Group\CN=Servers". The old Exchange 2003 server is listed in the right window pane, so it could be deleted, but I need to know whether this would affect the Exchange environment beyond removing the error from the event logs.
Thank you for your help,
Michael
Hi,
When you install Exchange 2010 in an existing Exchange 2003 environment, a routing group connector is created automatically. If you remove Exchange 2003 from your environment, you should also delete this routing group connector.
Besides, here is a related article about Event ID 5013 which may help you for your reference.
http://technet.microsoft.com/en-us/library/ff360498(v=exchg.140).aspx
Best regards,
Belinda Ma
TechNet Community Support -
Excluding Members from EPMA application - performance issues?
We are building a new application using dimensions from our Shared Library and I had a question about excluding some members.
For one dimension, I was able to use the "Add to App View" functionality, which was able to bring in only the one tree of members I wanted, without having to exclude anything, which seems to be the best way of doing this.
For another dimension, I need to pick and choose members from within the hierarchy, so I will need to add almost all parents, and then go through and exclude some members. Will this cause any performance issues having excluded members? Will the excluded members get deployed to planning or do they just sit in EPMA?
Any help would be appreciated.
Hi,
Excluded members are not deployed to Planning, therefore would not impact performance negatively.
Cheers,
Alp -
How to improve a performance issue when using the BRM LDB
HI All,
I am facing a performance issue when retrieving data from BKPF and the respective BSEG table. I see that for the fiscal period there are around 6 million (60 lakh) records, and populating the final internal table from these tables takes a very long time.
When I tried to use the BRM LDB with SAP Query/QuickViewer, I hit the same issue.
Please suggest how I can improve the performance.
Thanks in advance
Chakradhar
Moderator message - Please see "Please Read before Posting in the Performance and Tuning Forum" before posting - post locked
Rob -
Performance Issue - higher fetch count
Hi,
The database version is 10.2.0.4.
Below is the tkprof report of an application session having performance issue.
We shared the screens of application team and were able to see the lag in report generation.
It shows an elapsed time of 157 seconds, however the same query when executed in database is taking fractions of a second.
Kindly help and suggest if more detail is needed.
call count cpu elapsed disk query current rows
Parse 149 0.00 0.00 0 0 0 0
Execute 298 0.02 0.02 0 0 0 0
Fetch 298 157.22 156.39 0 38336806 0 298
total 745 157.25 156.42 0 38336806 0 298
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 80
Rows Row Source Operation
2 SORT AGGREGATE (cr=257294 pr=0 pw=0 time=1023217 us)
32 FILTER (cr=257294 pr=0 pw=0 time=6944757 us)
22770 NESTED LOOPS (cr=166134 pr=0 pw=0 time=4691233 us)
22770 NESTED LOOPS (cr=166130 pr=0 pw=0 time=4600141 us)
82910 INDEX FULL SCAN S_LIT_BU_U1 (cr=326 pr=0 pw=0 time=248782 us)(object id 69340)
22770 TABLE ACCESS BY INDEX ROWID S_LIT (cr=165804 pr=0 pw=0 time=559291 us)
82890 INDEX UNIQUE SCAN S_LIT_P1 (cr=82914 pr=0 pw=0 time=247901 us)(object id 69332)
22770 INDEX UNIQUE SCAN S_BU_U2 (cr=4 pr=0 pw=0 time=48958 us)(object id 63064)
20 NESTED LOOPS (cr=91032 pr=0 pw=0 time=268508 us)
22758 INDEX UNIQUE SCAN S_ORDER_P1 (cr=45516 pr=0 pw=0 time=104182 us)(object id 70915)
20 INDEX RANGE SCAN CX_ORDER_LIT_U1 (cr=45516 pr=0 pw=0 time=114669 us)(object id 158158)
20 NESTED LOOPS (cr=128 pr=0 pw=0 time=364 us)
32 INDEX UNIQUE SCAN S_ORDER_P1 (cr=64 pr=0 pw=0 time=144 us)(object id 70915)
20 INDEX RANGE SCAN CX_ORDER_LIT_U1 (cr=64 pr=0 pw=0 time=158 us)(object id 158158)
Rgds,
Sanjay
Edited by: 911847 on Feb 2, 2012 5:53 AM
Edited by: 911847 on Feb 5, 2012 11:50 PM
Hi,
I changed optimizer to first_rows and taken below details.
09:21:31 SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_mode string FIRST_ROWS_100
09:21:51 SQL> ALTER SESSION SET STATISTICS_LEVEL=ALL;
Session altered.
PLAN_TABLE_OUTPUT
SQL_ID fkcs93gkrt2zz, child number 0
SELECT COUNT (*) FROM SIEBEL.S_LIT_BU T1, SIEBEL.S_BU T2, SIEBEL.S_LIT T3
WHERE T3.BU_ID = T2.PAR_ROW_ID AND T1.BU_ID = '0-R9NH' AND T3.ROW_ID = T1.LIT_ID
AND (T3.X_VISIBILITY_BUSCOMP_ORDER = 'Y') AND (T3.ROW_ID = '1-28B0AH' OR T3.ROW_ID =
'1-28B0AF' OR T3.ROW_ID = '1-2V4GCV' OR T3.ROW_ID = '1-2F5USL' OR T3.ROW_ID =
'1-27PFED' OR T3.ROW_ID = '1-1KO7WJ' OR T3.ROW_ID IN ( SELECT SQ1_T1.LIT_ID FROM
SIEBEL.CX_ORDER_LIT SQ1_T1, SIEBEL.S_ORDER SQ1_T2 WHERE ( SQ1_T1.ORDER_ID =
SQ1_T2.ROW_ID) AND (SQ1_T2.ROW_ID = '1-2VVI61')) AND (T3.ROW_ID = '1-28B0AH' OR
T3.ROW_ID = '1-28B0AF' OR T3.ROW_ID = '1-2V4GCV' OR T3.ROW_ID = '1-2F5USL' OR
T3.ROW_ID = '1-27PFED' OR T3.ROW_ID = '1-1KO7WJ' OR T3.ROW_ID IN ( SELECT
SQ1_T1.LIT_ID FROM SIEBEL.CX_ORDER_LIT SQ1_T1, SIEBEL.S_ORDER SQ1_T2 WHERE (
SQ1_T1.ORDER_ID = SQ1_T2.ROW_ID) AND (SQ1_T2.ROW_ID = '1-2VVI61'))))
Plan hash value: 307628812
| Id | Operation | Name | E-Rows | OMem | 1Mem | Used-Mem |
| 1 | SORT AGGREGATE | | 1 | | | |
|* 2 | FILTER | | | | | |
| 3 | NESTED LOOPS | | 7102 | | | |
| 4 | MERGE JOIN | | 7102 | | | |
|* 5 | TABLE ACCESS BY INDEX ROWID| S_LIT | 7102 | | | |
| 6 | INDEX FULL SCAN | S_LIT_P1 | 41408 | | | |
|* 7 | SORT JOIN | | 41360 | 1186K| 567K| 1054K (0)|
|* 8 | INDEX FULL SCAN | S_LIT_BU_U1 | 41360 | | | |
|* 9 | INDEX UNIQUE SCAN | S_BU_U2 | 1 | | | |
| 10 | NESTED LOOPS | | 1 | | | |
|* 11 | INDEX UNIQUE SCAN | S_ORDER_P1 | 1 | | | |
|* 12 | INDEX RANGE SCAN | CX_ORDER_LIT_U1 | 1 | | | |
| 13 | NESTED LOOPS | | 1 | | | |
|* 14 | INDEX UNIQUE SCAN | S_ORDER_P1 | 1 | | | |
|* 15 | INDEX RANGE SCAN | CX_ORDER_LIT_U1 | 1 | | | |
Predicate Information (identified by operation id):
2 - filter((((INTERNAL_FUNCTION("T3"."ROW_ID") OR IS NOT NULL) AND IS NOT NULL)
OR INTERNAL_FUNCTION("T3"."ROW_ID")))
5 - filter("T3"."X_VISIBILITY_BUSCOMP_ORDER"='Y')
7 - access("T3"."ROW_ID"="T1"."LIT_ID")
filter("T3"."ROW_ID"="T1"."LIT_ID")
8 - access("T1"."BU_ID"='0-R9NH')
filter("T1"."BU_ID"='0-R9NH')
9 - access("T3"."BU_ID"="T2"."PAR_ROW_ID")
11 - access("SQ1_T2"."ROW_ID"='1-2VVI61')
12 - access("SQ1_T1"."ORDER_ID"='1-2VVI61' AND "SQ1_T1"."LIT_ID"=:B1)
14 - access("SQ1_T2"."ROW_ID"='1-2VVI61')
15 - access("SQ1_T1"."ORDER_ID"='1-2VVI61' AND "SQ1_T1"."LIT_ID"=:B1)
Note
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level -
Performance issues while query data from a table having large records
Hi all,
I have performance issues with queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is below:
SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
SELECT SUM (B.BASE_TRANSACTION_VALUE)
FROM
MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A
WHERE A.ORGANIZATION_ID = B.ORGANIZATION_ID
AND A.ORGANIZATION_ID = :b1
AND B.REFERENCE_ACCOUNT = A.MATERIAL_ACCOUNT
AND B.TRANSACTION_DATE <= LAST_DAY (TO_DATE (:b2 , 'MON-YY' ) )
AND B.ACCOUNTING_LINE_TYPE != 15
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.02 0.05 0 0 0 0
Fetch 3 134.74 722.82 847951 1003824 0 2
total 7 134.76 722.87 847951 1003824 0 2
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer mode: ALL_ROWS
Parsing user id: 193 (APPS)
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
1 1 1 SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
788242 788242 788242 NESTED LOOPS (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
1 1 1 TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
1 1 1 INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
788242 788242 788242 TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
8704356 8704356 8704356 INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
788242 NESTED LOOPS
1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'MTL_PARAMETERS' (TABLE)
1 INDEX MODE: ANALYZED (UNIQUE SCAN) OF
'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
788242 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'MTL_TRANSACTION_ACCOUNTS' (TABLE)
8704356 INDEX MODE: ANALYZED (RANGE SCAN) OF
'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
row cache lock 29 0.00 0.02
SQL*Net message to client 2 0.00 0.00
db file sequential read 847951 0.40 581.90
latch: object queue header operation 3 0.00 0.00
latch: gc element 14 0.00 0.00
gc cr grant 2-way 3 0.00 0.00
latch: gcs resource hash 1 0.00 0.00
SQL*Net message from client 2 0.00 0.00
gc current block 3-way 1 0.00 0.00
********************************************************************************
On a 5-node RAC environment the program completes in 15 hours, whereas on a single-node environment it completes in 2 hours.
Is there any way I can improve the performance of this query?
Regards
Edited by: mhosur on Dec 10, 2012 2:41 AM
Edited by: mhosur on Dec 10, 2012 2:59 AM
Edited by: mhosur on Dec 11, 2012 10:32 PM
CREATE INDEX mtl_transaction_accounts_n0
  ON mtl_transaction_accounts (
      transaction_date
    , organization_id
    , reference_account
    , accounting_line_type
  );
Returning multiple values from a called tabular form(performance issue)
I hope someone can help with this.
I have a form that calls another form to display a multiple column tabular list of values(needs to allow for user sorting so could not use a LOV).
The user selects one or more records from the list by using check boxes. In order to detect the records selected I loop through the block looking for boxes checked off and return those records to the calling form via a PL/SQL table.
The form displaying the tabular list loads quickly (about 5000 records in the base table). However, when I select one or more values from the table and return to the calling form, it takes a while (about 3-4 minutes) to come back with the selected values.
I guess it is going through the block (all 5000 records) looking for boxes checked off, and that is what is causing the noticeable pause.
Is this normal given the data volumes I have, or are there other, perhaps better techniques or tricks I could use to improve performance? I am using Forms 6i.
Sorry for being so long-winded, and thanks in advance for any help.
Try writing to your PL/SQL table when the user selects (or removing the entry when they deselect) by using a WHEN-CHECKBOX-CHANGED trigger. This will eliminate the need to loop through a block with 5000 records and should improve your performance.
I am not aware of any performance issues with PL/SQL tables in forms, but if you still have slow performance try using a shared record-group instead. I have used these in the past for exactly the same thing and had no performance problems.
Hope this helps,
Candace Stover
Forms Product Management -
Database migrated from Oracle 10g to 11g - Discoverer report performance issue
Hi All,
We are now seeing a Discoverer report performance issue: the report keeps on running after the database was upgraded from 10g to 11g.
On database 10g the report works fine, but the same report does not work in 11g.
I changed the query: I passed the date format 'DD-MON-YYYY' via TO_CHAR and removed the NVL and TRUNC functions from the existing query.
The report now works fine directly against the 11g database, but when I use the same query in Discoverer it does not work and the report keeps on running.
Please advise.
Regards,
Please post exact OS, database and Discoverer versions. After the upgrade, have statistics been updated? Have you traced the Discoverer query to determine where the performance issue is?
How To Find Oracle Discoverer Diagnostic and Tracing Guides [ID 290658.1]
How To Enable SQL Tracing For Discoverer Sessions [ID 133055.1]
Discoverer 11g: Performance degradation after Upgrade to Database 11g [ID 1514929.1]
HTH
Srini -
Performance issue after Upgrade from 4.7 to ECC 6.0 with a select query
Hi All,
There is a performance issue with a select query in a report painter report after an upgrade from 4.7 to ECC 6.0.
The query works fine when executed in the 4.7 system, whereas it runs much longer in ECC 6.0.
The select query is on table COSP.
SELECT (FIELD_LIST)
INTO CORRESPONDING FIELDS OF TABLE I_COSP PACKAGE SIZE 1000
FROM COSP CLIENT SPECIFIED
WHERE GJAHR IN SELR_GJAHR
AND KSTAR IN SELR_KSTAR
AND LEDNR EQ '00'
AND OBJNR IN SELR_OBJNR
AND PERBL IN SELR_PERBL
AND VERSN IN SELR_VERSN
AND WRTTP IN SELR_WRTTP
AND MANDT IN MANDTTAB
GROUP BY (GROUP_LIST).
LOOP AT I_COSP .
COSP = I_COSP .
PERFORM PCOSP USING I_COSP-_COUNTER.
CLEAR: $RWTAB, COSP .
CLEAR CCR1S .
ENDLOOP.
ENDSELECT.
I have checked with the table indexes, they were same as in 4.7 system.
What can be the reson for the difference in execution time. How can this be reduced without adjusting the select query.
Thanks in advance for the responses.
Regards,
Dedeepya.
Hi,
Ohhhh... lots of problems in that select query... this is not the way you should write it.
Some generic comments:
1. Never use SELECT ... ENDSELECT. Use instead:
SELECT ... INTO TABLE ... [FOR ALL ENTRIES IN ...] WHERE ...
and use a PERFORM statement after this selection.
2. Do not use INTO CORRESPONDING FIELDS; use the exact structure type.
3. Use the proper sequence of fields in the WHERE condition so that the database can follow the table's indexes.
e.g in your case
sequence should be
LEDNR
OBJNR
GJAHR
WRTTP
VERSN
KSTAR
HRKFT
VRGNG
VBUND
PARGB
BEKNZ
TWAER
PERBL
The sequence should be the same as defined in the table.
Always keep select query as simple as possible and perform all other calculations etc. afterwords.
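A minimal sketch of what points 1-3 amount to for the COSP select; the field list is shortened to SELECT *, the dynamic GROUP BY is left out (any aggregation would be done in ABAP afterwards), and the variable names are illustrative assumptions:

```abap
DATA: lt_cosp TYPE STANDARD TABLE OF cosp,
      ls_cosp TYPE cosp.

" One array fetch instead of SELECT ... ENDSELECT, into the exact
" structure type, with the WHERE conditions listed in index order
" (LEDNR, OBJNR, GJAHR, ...).
SELECT * FROM cosp
  INTO TABLE lt_cosp
  WHERE lednr = '00'
    AND objnr IN selr_objnr
    AND gjahr IN selr_gjahr
    AND wrttp IN selr_wrttp
    AND versn IN selr_versn
    AND kstar IN selr_kstar
    AND perbl IN selr_perbl.

" All further processing happens after the selection.
LOOP AT lt_cosp INTO ls_cosp.
  PERFORM pcosp USING ls_cosp.
ENDLOOP.
```

The key point is that the database round trip happens once, and the per-record work moves into an internal-table loop.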
I hope it helps.
Regards,
Pranaya -
Performance issue with view selection after migration from oracle to MaxDb
Hello,
After the migration from Oracle to MaxDB we have serious performance issues with many of our table/view selections.
Does anybody know about this problem and how to solve it ??
Best regards !!!
Gert-Jan
Hello Gert-Jan,
most probably you need additional indexes to get better performance.
Using the command monitor you can identify the long running SQL statements and check the optimizer access strategy. Then you can decide which indexes might help.
If this is about an SAP system, you can find additional information about performance analysis in SAP notes 725489 and 819641.
SAP Hosting provides the so-called service 'MaxDB Migration Support' to help you in such cases. The service description can be found here:
http://www.saphosting.de/mediacenter/pdfs/solutionbriefs/MaxDB_de.pdf
http://www.saphosting.com/mediacenter/pdfs/solutionbriefs/maxDB-migration-support_en.pdf.
Best regards,
Melanie Handreck -
Performance issue updating new records from EKKO to my Z table
I'm making changes to an existing program.
In this program I need to pick up any new purchase orders created in EKKO (EBELN) and insert them into my Z table (ZTABLE-EBELN).
I need to update my Z table with the new records created on that particular date.
This is a daily job.
Below is the code I wrote; about 150,000 records go through this loop and I'm hitting a performance issue. Can anyone suggest how to avoid it?
loop at tb_ekko.
at new ebeln.
read table tb_ztable with key ebeln = tb_ekko-ebeln binary search.
if sy-subrc <> 0.
tb_ztable-ebeln = tb_ekko-ebeln.
tb_ztable-zlimit = ' '.
insert ztable from tb_ztable.
endif.
endat.
endloop.
Thanks
Hema.
Modify your code as follows:
loop at tb_ekko.
at new ebeln.
read table tb_ztable with key ebeln = tb_ekko-ebeln binary search.
if sy-subrc <> 0.
tb_ztable_new-ebeln = tb_ekko-ebeln.
tb_ztable_new-zlimit = ' '.
append tb_ztable_new.
clear tb_ztable_new.
endif.
endat.
endloop.
insert ztable from table tb_ztable_new.
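Two details worth making explicit in the suggested rewrite: BINARY SEARCH is only reliable when the lookup table is sorted by the key field, and AT NEW assumes the driving table is ordered by EBELN. A sketch with those preconditions spelled out (table and field names taken from the post):

```abap
" Both preconditions: AT NEW needs tb_ekko ordered by EBELN,
" and BINARY SEARCH needs tb_ztable sorted by EBELN.
SORT tb_ekko   BY ebeln.
SORT tb_ztable BY ebeln.

LOOP AT tb_ekko.
  AT NEW ebeln.
    READ TABLE tb_ztable WITH KEY ebeln = tb_ekko-ebeln
         TRANSPORTING NO FIELDS BINARY SEARCH.
    IF sy-subrc <> 0.
      " Collect new purchase orders instead of inserting row by row.
      tb_ztable_new-ebeln  = tb_ekko-ebeln.
      tb_ztable_new-zlimit = ' '.
      APPEND tb_ztable_new.
      CLEAR tb_ztable_new.
    ENDIF.
  ENDAT.
ENDLOOP.

" One array INSERT instead of up to 150,000 single-row inserts.
INSERT ztable FROM TABLE tb_ztable_new.
```

Moving the database INSERT out of the loop is what removes most of the runtime; the sorts are cheap by comparison.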
Regards,
ravi -
After upgrade LDB pnp entry date issue
I upgraded from 4.6C to ECC 6.0. After the upgrade, many of my reports based on HR data are not showing the entry date, likewise the country effective date, nationality date, etc.
Any idea? Many reports are based on LDB PNP.
Moderator message - Cross post locked
Edited by: Rob Burbank on Nov 23, 2009 10:06 AM
Only infotype 0041, or others too?
Do you use PNP or PNPCE?
Configuration error? different date types?
no central person? OType CP in HRP1001? -
Upgrading from 4.2 to 5.1 Performance Issues
Hi All,
We are finally upgrading from 4.2 to 5.1. Both our 4.2 and 5.1 servers are identical; however, we are noticing major performance issues in one of our applications, which contains 7 account hierarchies and 4 entity hierarchies. Creating a simple any-by-any can take up to 3.5 minutes in 5.1, whereas it takes less than 2 seconds in 4.2. We could split some of the account hierarchies into separate dimensions and applications, but I find it hard to believe that we need to redesign our applications to fit 5.1's handling of multiple hierarchies.
Has anyone else run into similar issues, if so, what was done to overcome them?
Thanks.
Hi,
We had some performance issues with BPC 5.1 SP4; we upgraded to SP9 and followed the recommendations in the How To document below, found here on SDN:
[How to Use MDX Dimension Formulas Article |https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/008d665b-94bf-2a10-78b2-b32ffe04ba73]
As far as I know, only account hierarchies are supported (at least in our case), with hard limitations on hierarchy formulas.
Hope this helps,
Regards,
Carlos -
Performance Issues during Upgrade of EBS from 11.5.10.2 to 12.1.1
Hi,
We're upgrading our EBS , from Rel 11.5.10.2 to 12.1.1.
We're stuck while running the script ar120bnk.sql (it ran for more than 20 hours).
Regarding the tables involved in this process:
select owner , table_name,num_rows,last_analyzed,sample_size
from dba_tables
where table_name in (
'RA_CUSTOMER_TRX_ALL',
'RA_CUST_TRX_LINE_GL_DIST_ALL',
'XLA_UPGRADE_DATES',
'AR_SYSTEM_PARAMETERS_ALL',
'RA_CUST_TRX_TYPES_ALL',
'RA_CUST_TRX_LINE_GL_DIST_ALL',
'XLA_TRANSACTION_ENTITIES_UPG')
AR RA_CUSTOMER_TRX_ALL 55,540,740 04/02/2012 12:41:56 5554074
AR RA_CUST_TRX_LINE_GL_DIST_ALL 380,513,830 04/02/2012 13:54:12 38051383
AR RA_CUST_TRX_TYPES_ALL 90 04/02/2012 14:04:54 90
AR AR_SYSTEM_PARAMETERS_ALL 6 04/02/2012 12:19:49 6
XLA XLA_UPGRADE_DATES 4 05/02/2012 17:12:57 4
As you can see, RA_CUST_TRX_LINE_GL_DIST_ALL has more than 380 million rows,
and RA_CUSTOMER_TRX_ALL more than 55 million rows.
We have more huge tables in the AR schema, and we would like to know whether we are the only customer
with such huge AR schema objects, and if not, why we are getting stuck on the third statement in
the AR schema.
Below is an output of all the objects that have more than 10 million rows in the AR schema:
select owner , table_name,to_char(num_rows,'999,999,999') ,last_analyzed
from dba_tables
where owner = 'AR'
and num_rows > 10000000
order by num_rows desc nulls last
AR AR_DISTRIBUTIONS_ALL 408,567,520 04/02/2012 11:49:57
AR RA_CUST_TRX_LINE_GL_DIST_ALL 380,513,830 04/02/2012 13:54:12
AR MLOG$_AR_CASH_RECEIPTS_ALL 310,777,690 04/02/2012 12:30:33
AR RA_CUSTOMER_TRX_LINES_ALL 260,211,090 04/02/2012 13:30:26
AR AR_RECEIVABLE_APPLICATIONS_ALL 166,834,930 04/02/2012 12:16:54
AR MLOG$_RA_CUSTOMER_TRX_ALL 150,962,980 04/02/2012 12:33:23
AR AR_CASH_RECEIPT_HISTORY_ALL 145,737,410 04/02/2012 11:40:31
AR RA_CUST_TRX_LINE_SALESREPS_ALL 130,287,580 04/02/2012 14:03:54
AR AR_PAYMENT_SCHEDULES_ALL 108,652,480 04/02/2012 12:05:32
AR RA_CUSTOMER_TRX_ALL 55,540,740 04/02/2012 12:41:56
AR AR_CASH_RECEIPTS_ALL 53,182,340 04/02/2012 11:29:53
AR AR_DOC_SEQUENCE_AUDIT 52,865,150 04/02/2012 11:52:46
AR RA_MC_TRX_LINE_GL_DIST 17,317,730 04/02/2012 14:05:18
AR AR_MC_DISTRIBUTIONS_ALL 13,037,030 04/02/2012 11:53:35
AR AR_MC_RECEIVABLE_APPS 12,672,050 04/02/2012 11:53:57
AR AR_TRX_SUMMARY 12,457,560 04/02/2012 12:20:16
AR RA_CUST_RECEIPT_METHODS 11,105,750 04/02/2012 13:35:38
AR HZ_ORGANIZATION_PROFILES 10,271,640 04/02/2012 12:24:44
How do we upgrade AR tables with huge amounts of data (> 50 million rows)?
Hi,
Don't worry, you are not the only one; we have a customer whose AR_DISTRIBUTIONS_ALL table is 80 GB now, and I can't even do a select count(*) on that table.
We had to keep this much data for business requirements, but I wonder if this is a bug or a user mistake.
Because of this we are facing serious performance issues with AR reports and have raised SRs, but there is no resolution yet, and the engineer assigned to us has really not been helpful in fixing the issue.
Although we did not upgrade for this customer, we migrated from 11.5.9 to R12.1.1 by re-implementation; the growth of these tables all happened after the migration.
And I believe most of the time in your upgrade is going into building the indexes. You can ask Oracle whether they can edit the driver file to skip building the indexes and rebuild them after the upgrade. But again, that will also take time.
Another option for you is to "Archive and Purge the data" as per chapter 10 of the 11i Receivables User Guide.
http://docs.oracle.com/cd/B25284_01/current/acrobat/115arug.zip
Thanks
Edited by: EBSDBA on Feb 8, 2012 10:04 PM -
I am using an iPhone 3GS. I upgraded my iOS from 5.1.1 to 6.0; however, I am facing performance issues, and all applications, including Settings, shut down automatically. Is there any way I can use my phone better?
No, downgrading from any version of iOS to an earlier version is not supported.