Time consuming for BKPF table
Hi experts,
I have written an ALV report program for taking a debtor list.
I am using the BKPF, BSID and BSEG tables,
but it takes a lot of time to execute,
mainly while selecting from the BKPF table.
What is the alternative for this?
Regards,
Mani
Hi,
Do it like this:
Avoid INTO CORRESPONDING FIELDS; declare the internal table with only the required fields.
Use BUKRS and GJAHR in the WHERE clause so the primary index can be used - pass your select-options (S_BUKRS and S_GJAHR below) even if they are empty.
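For the first point, a minimal sketch of such a narrow declaration (the row type name is illustrative; it assumes BELNR and BUDAT are all the report needs downstream):
TYPES: BEGIN OF TY_BKPF,
         BELNR TYPE BKPF-BELNR,
         BUDAT TYPE BKPF-BUDAT,
       END OF TY_BKPF.
* Narrow work table instead of LIKE BKPF - less memory, no INTO CORRESPONDING
DATA IT_BKPF TYPE STANDARD TABLE OF TY_BKPF.
The select can then fill it directly: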
SELECT BELNR BUDAT FROM BKPF
INTO TABLE IT_BKPF
WHERE BUKRS IN S_BUKRS
AND GJAHR IN S_GJAHR
AND BUDAT IN DATE1
AND BLART IN ('AB', 'RV', 'DR', 'BR').
Regards,
Satish
Similar Messages
-
Hi all,
I created secondary indexes as per the thread given by Vikrant, but how do I use this index in the program?
Reply me.
Bye,
Jyotsna
Continue your thread here: error when selecting bkpf table
Thread locked. -
Difference in Time updated in BKPF and FAGLFLEXA tables
Hi
We are extracting BW reports based on the BKPF and FAGLFLEXA tables, on the
basis of date and time. The time stamps updated in these two tables
do not match; there is a difference of 1-2 hours between the time
updated in BKPF and in FAGLFLEXA. The time stamp in FAGLFLEXA
is 1-2 hours earlier than the time updated in BKPF for the same
document, which makes the BW reports incorrect.
I request your inputs on how to make the time updated in the FAGLFLEXA
table sync with the time updated in the BKPF table at the time of
posting the document. Are there any OSS notes that need to be applied
to make the time stamp in FAGLFLEXA match the time updated in BKPF?
We have also looked at the SAP standard settings for time zones, wherein
different settings are maintained for daylight saving. We would like
your guidance on whether we can modify the SAP standard settings, what
impact changing the time zone settings would have on the system, and
what the effect would be on documents already created if sync is
established between these two tables.
Requesting your inputs in this regard at the earliest, please.
Thanks in advance
Regards
Kamlesh
Hi Rajesh,
You can get that discussion here as well:
http://www.dbmonster.com/Uwe/Forum.aspx/oracle-server/8792/RAC-nodes-and-date-time-sync
What is interesting is that MEA0730 said
Well I asked Oracle! They said "exact" clock sync is not important.
The only thing that will happen is that if a node is more than 15
minutes out of sync it will get evicted. They did recommend running a
network time sync to keep them from getting too far out of sync.and DA Morgan said
I just ran a test with 10gR2 and the eject time-out is about 10 sec.But you will notice that the OP's DB servers have almost 2 1/2 minutes time difference.
I wonder, then, how far out of sync a node has to be before it actually gets evicted?
iinfi,
what is your Oracle version (4 digits)?
Regards
Edited by: user11150436 on Oct 21, 2010 4:33 PM
iinfi,
One more thing I can add: I have struggled with this on an OAS cluster install where the two nodes' time was out of sync and the opmn processes wouldn't start because of it. Exactly how far out of sync they were I can't tell; I think this was on 10.1.3.
Edited by: user11150436 on Oct 21, 2010 4:35 PM -
Time consuming problem in Self update rule
Hi all:
We have a time-consuming problem with a self-update rule. I have ODS ZOMS001, for which we created a self-update rule. We include this self-update rule in a process chain, and during a delta update it takes 20 to 25 minutes whether there are two records or 10,000 records. In the delta, this self-update rule reads all the records in the ODS.
EX: If I have 10,000 records during initialisation and 10 records in the delta update, the delta self-update rule processes all 10,010 records, but only updates the delta records' values.
We have to reduce the total time consumed by this self-update during delta.
Waiting for your inputs.
Your valuable reply would be much appreciated.
Rgds
MSK
I think retransporting is the only option available to you. You cannot modify anything in your production system.
If you have a chance to speak with the Basis people, ask them to open the system status to modifiable for a few minutes,
make the necessary changes in production, and bring it back to normal (this is not a best practice in all situations).
hope this helps.
Praveen -
Long duration time extraction BKPF table
I have access to SAP R/3 on a Oracle database.
For one object GL Open Items I need to extract all required data from 2 tables BSIS and BKPF. (=select all open items of required G/L Accounts)
Selection Criteria as follows:
step 1:Select BSIS records on MANDT, BUKRS and HKONT (from a list of GL Accounts).
step 2:Select BKPF records where:
- BSIS-MANDT=BKPF-MANDT
- BSIS-BUKRS=BKPF-BUKRS
- BSIS-GJAHR=BKPF-GJAHR
- BSIS-BELNR=BKPF-BELNR
I have used SQL statements to get the data out of SAP, but this leads to an enormously long run time, caused by the BKPF table. Therefore I need to improve the performance of the BKPF extraction.
Do you have a solution for how to speed up the BKPF data extraction (other tooling, specially written code, etc.)? SQL tooling is not a must!
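Since SQL tooling is not a must, one hedged alternative is to do the extraction inside ABAP and let the database match BSIS to BKPF in a single join over the BKPF primary key (BUKRS, BELNR, GJAHR). A minimal sketch, assuming a release with Open SQL joins; all names and select-options here are illustrative:
TYPES: BEGIN OF TY_OPEN_ITEM,
         BUKRS TYPE BSIS-BUKRS,
         HKONT TYPE BSIS-HKONT,
         GJAHR TYPE BSIS-GJAHR,
         BELNR TYPE BSIS-BELNR,
         BUDAT TYPE BKPF-BUDAT,
       END OF TY_OPEN_ITEM.
DATA IT_RESULT TYPE STANDARD TABLE OF TY_OPEN_ITEM.

* MANDT is handled implicitly by Open SQL; BKPF is accessed via its
* full primary key, so no separate per-document lookups are needed
SELECT A~BUKRS A~HKONT A~GJAHR A~BELNR B~BUDAT
  FROM BSIS AS A INNER JOIN BKPF AS B
    ON B~BUKRS = A~BUKRS
   AND B~GJAHR = A~GJAHR
   AND B~BELNR = A~BELNR
  INTO TABLE IT_RESULT
  WHERE A~BUKRS IN S_BUKRS
    AND A~HKONT IN S_HKONT.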
Thanks
Markus -
Creating sequences for all tables in the database at a time
Hi ,
I need to create sequences for all the tables in my database.
I can create them individually, using TOAD and SQL*Plus.
Can anyone give me code for creating the sequences dynamically, at one time, for all the tables?
It is urgent.
Regards.
I need to create sequences for the majority of the tables, those that have an ID column populated from a sequence.
"The majority" is not the same as all. So you probably want to drive your generation script off the ALL_TAB_COLUMNS view...
where column_name = 'ID'
You need to think about this carefully. You might want different CACHE sizes or different INCREMENT BY clauses for certain tables. You might even (whisper it) want a sequence to be shared by more than one table.
Code generation is a useful technique, but it is a rare application where one case fits all.
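As a hedged sketch of that generation approach - it assumes the current schema, one fixed CACHE size for everything, and that every generated name fits the identifier length limit; tailor per table as noted above:
BEGIN
  FOR t IN (SELECT table_name
              FROM all_tab_columns
             WHERE column_name = 'ID'
               AND owner = USER)
  LOOP
    -- One sequence per table that has an ID column
    EXECUTE IMMEDIATE
      'CREATE SEQUENCE seq_' || t.table_name || ' CACHE 20';
  END LOOP;
END;
/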
Cheers, APC
Blog : http://radiofreetooting.blogspot.com/ -
How much time it take to rebuild an index for a table with 20 millions rows
Hi all,
I need to rebuild the indexes of a table containing 20,000,000 rows (I don't know why the people who worked on this before never thought of rebuilding the indexes regularly - I asked, and apparently it has never been done). I am not an SQL developer nor a DBA, so I can't measure how long the rebuild will take; does anyone have an idea (approximately, of course)? The other question: is there a formula for working out how often to rebuild the indexes (I can, for example, retrieve how many rows are deleted or inserted daily)?
Thanks again
Taha
taha wrote:
That's why I am asking - because I don't know (and want to be sure which solution is best).
The table's columns are: 45 VARCHAR2, 5 TIMESTAMP, 30 NUMBER, no LOB columns; 15 indexes (5 of them unique), each using at most 4 columns.
15 indexes - 100,000 deletes: this could mean 1,500,000 block visits to maintain index leaf blocks as the table rows are deleted. If you're unlucky this could turn into 1,500,000 physical block read requests; if you're lucky, or the system is well engineered, this could be virtually no physical I/O. The difference in time could be huge. At any rate it is likely to be 1,500,000 redo entries at 250-300 bytes per entry, for a total of about 400MB of redo (so how large are your redo logs, and how many log switches are you going to cause?).
Yes, the table is used by an application, so updates and inserts can take place at any time.
For the deletion, there is a batch job which does a mass delete on the table (4 or 5 times each day).
You haven't answered the question - how long does it take to do a sample batch delete.
If you can enable SQL tracing, or take a before/after snapshot of v$sesstat or v$session_event for the session as it does the delete, then you can get some idea of where the time is going - for all you know it might be spending most of its time waiting for a lock to go away.
>
"How many leaf blocks are currently allocated to the index(es)?" - how can I answer this question? Maybe by checking the ALL_OBJECTS table?
If you keep your statistics up to date then dba_indexes is a good place, cross-checked with dba_segments, and you can use the dbms_space package for more detail. I have a code sample on my blog which allows you to compare the current size of your indexes with the size they would be if rebuilt at some specific percentage: http://jonathanlewis.wordpress.com/index-sizing/ (It's such good code that Oracle Corp. has copied it into MOS note 989186.1)
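For example, a quick hedged check along those lines (owner and table names are placeholders):
SELECT index_name, blevel, leaf_blocks, num_rows, last_analyzed
  FROM dba_indexes
 WHERE owner = 'APP_OWNER'
   AND table_name = 'BIG_TABLE';

-- Cross-check against the segment sizes actually allocated on disk
SELECT segment_name, blocks, bytes/1024/1024 AS mb
  FROM dba_segments
 WHERE owner = 'APP_OWNER'
   AND segment_type = 'INDEX';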
Regards
Jonathan Lewis -
Regarding performance tuning for BSEG & BKPF table data fetch
Hi Friends:
Please see the select queries below. They are really hurting the performance of my report. Please suggest steps to improve the report's performance. Points will be rewarded.
Thanks:
FORM GET_DATA .
*Selecting the Document number from BSEG table
SELECT BELNR BUKRS FROM BSEG INTO TABLE L_DOC_NO
WHERE BUKRS IN S_BUKRS
AND GJAHR = P_GJAHR
AND HKONT IN S_SAKNR.
IF SY-SUBRC <> 0.
MESSAGE ID 'ZTFI' TYPE 'E' NUMBER 006.
ENDIF.
CLEAR L_DOC_NO.
SORT L_DOC_NO BY BELNR.
*Selecting the Document Number Based on the selection-screen.
SELECT BELNR BUKRS BUDAT CPUDT BLART MONAT FROM BKPF INTO TABLE
L_BKPF
FOR ALL ENTRIES IN L_DOC_NO
WHERE BUKRS = L_DOC_NO-BUKRS AND
BELNR = L_DOC_NO-BELNR AND
GJAHR = P_GJAHR AND
BUDAT IN S_BUDAT AND
MONAT IN S_MONAT.
IF SY-SUBRC <> 0.
MESSAGE ID 'ZTFI' TYPE 'E' NUMBER 006.
ENDIF.
*Fetch the Line Items
SORT L_BKPF BY BELNR.
SELECT BELNR BUKRS BUZEI HKONT SHKZG WRBTR FROM BSEG INTO TABLE
L_BSEG
FOR ALL ENTRIES IN L_BKPF
WHERE BUKRS = L_BKPF-BUKRS
AND BELNR = L_BKPF-BELNR
AND GJAHR = P_GJAHR
AND BUZEI BETWEEN '001' AND '999'.
Hi,
Let me understand your code first.
The code below (the 2 selects) gets data from BSEG first and then gets data from the BKPF table.
"*Selecting the Document number from BSEG table
SELECT BELNR BUKRS FROM BSEG INTO TABLE L_DOC_NO
WHERE BUKRS IN S_BUKRS
AND GJAHR = P_GJAHR
AND HKONT IN S_SAKNR.
IF SY-SUBRC <> 0.
MESSAGE ID 'ZTFI' TYPE 'E' NUMBER 006.
ENDIF.
CLEAR L_DOC_NO.
SORT L_DOC_NO BY BELNR.
"*Selecting the Document Number Based on the selection-screen.
SELECT BELNR BUKRS BUDAT CPUDT BLART MONAT FROM BKPF INTO TABLE L_BKPF
FOR ALL ENTRIES IN L_DOC_NO
WHERE BUKRS = L_DOC_NO-BUKRS AND
BELNR = L_DOC_NO-BELNR AND
GJAHR = P_GJAHR AND
BUDAT IN S_BUDAT AND
MONAT IN S_MONAT.
IF SY-SUBRC <> 0.
MESSAGE ID 'ZTFI' TYPE 'E' NUMBER 006.
ENDIF.
As for the code below - can't you avoid it by taking all the fields required in your 1st select on the BSEG table? (See the sketch after the quoted code.)
*Fetch the Line Items
SORT L_BKPF BY BELNR.
SELECT BELNR BUKRS BUZEI HKONT SHKZG WRBTR FROM BSEG INTO TABLE
L_BSEG
FOR ALL ENTRIES IN L_BKPF
WHERE BUKRS = L_BKPF-BUKRS
AND BELNR = L_BKPF-BELNR
AND GJAHR = P_GJAHR
AND BUZEI BETWEEN '001' AND '999'.
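A hedged sketch of that idea, using the data names from the post (WA_BSEG and the filtering loop are my additions, not part of the original program): fetch every needed field in one pass over BSEG, run the BKPF select with FOR ALL ENTRIES IN L_BSEG as before, then drop the line items whose header did not survive the date/period filter:
* Single pass over BSEG with all the fields the report needs
SELECT BELNR BUKRS BUZEI HKONT SHKZG WRBTR FROM BSEG INTO TABLE L_BSEG
WHERE BUKRS IN S_BUKRS
AND GJAHR = P_GJAHR
AND HKONT IN S_SAKNR.

* After the BKPF select, keep only line items whose header passed
* the BUDAT/MONAT filter - no second trip to BSEG needed
DATA WA_BSEG LIKE LINE OF L_BSEG.
SORT L_BKPF BY BUKRS BELNR.
LOOP AT L_BSEG INTO WA_BSEG.
  READ TABLE L_BKPF WITH KEY BUKRS = WA_BSEG-BUKRS
                             BELNR = WA_BSEG-BELNR
                    BINARY SEARCH TRANSPORTING NO FIELDS.
  IF SY-SUBRC <> 0.
    DELETE L_BSEG.
  ENDIF.
ENDLOOP.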
Please check the below blog on "Performance of Nested Loops" by Rob Burbank which would be really helpful.
/people/rob.burbank/blog/2006/02/07/performance-of-nested-loops
Hope this helps. Rwd points if helpful.
Thanks,
Balaji -
How to populate date & time when user enter data for custom table in sm30
Can anyone tell me how to populate the system date & time when a user enters data for a custom table in SM30?
The requirement is:
I have a custom table, and using SM30 a user can enter data.
After saving the data I want to update the date & time in the table.
Please let me know where to write the code.
Thanks in advance
You have to write the code in event 01 in transaction SE54. Go to SE54, enter your Z table name and choose the menu 'Environment --> Events'. Press ENTER to go past the popup message. In the next screen, click on 'New Entries'. In the first column, enter 01, and in the next column give some name for your routine (say UPDATE_USER_DATE_TIME). Then click on the source code icon that appears in blue at the end of the row. In the code, you need logic like below.
FORM update_user_date_time.
DATA: f_index LIKE sy-tabix.
DATA: BEGIN OF l_total.
INCLUDE STRUCTURE zztable.
INCLUDE STRUCTURE vimtbflags.
DATA END OF l_total.
DATA: s_record TYPE zztable.
LOOP AT total INTO l_total.
* Only rows that were changed or newly created in this SM30 session
IF l_total-vim_action = aendern OR
l_total-vim_action = neuer_eintrag.
MOVE-CORRESPONDING l_total TO s_record.
* Stamp the current user, date and time
s_record-zz_user = sy-uname.
s_record-zz_date = sy-datum.
s_record-zz_time = sy-uzeit.
* Locate the corresponding row in the EXTRACT table to keep it in sync
READ TABLE extract WITH KEY l_total.
IF sy-subrc EQ 0.
f_index = sy-tabix.
ELSE.
CLEAR f_index.
ENDIF.
MOVE-CORRESPONDING s_record TO l_total.
MODIFY total FROM l_total.
CHECK f_index GT 0.
MODIFY extract INDEX f_index FROM l_total.
ENDIF.
ENDLOOP.
ENDFORM. " UPDATE_USER_DATE_TIME
Here ZZTABLE is the Z table and ZZ_USER, ZZ_DATE, and ZZ_TIME are the fields that are updated. -
Table.Join/Merge in Power Query takes extremly long time to process for big tables
Hi,
I tried to simply merge/inner join two big tables (one has 300,000+ rows after filtering, the other 30,000+ rows after filtering) in PQ. However, for this simple join operation, PQ took at least 10 minutes to load the preview (I killed the Query Editor after 10 minutes of processing).
Here's how I did the join: I first loaded the tables into the workbook, then did the filtering for each table, and finally used the Merge function to do the join based on a common field.
Did I do anything wrong here? Or is there any way to improve the load efficiency?
P.S. No custom SQL was used during the process. I was hoping the so-called "Query Folding" could help speed up the process, but it seems it didn't work here.
Thanks.
Regards,
Qilong
Hi!
You should import the source tables into Access. This can speed up PQ several times over.
How to reduce time for gather statistics for a table.
I have a table of size 520 GB.
One of its partitions is 38 GB,
and the total size of the related table's indexes is 412 GB.
Server/instance details.
==========
56 CPUs -> hyper-threading enabled
280 gb ram
35 gb sga
27 gb buffer cache
4.5 gb shared pool size
25 gb pga
undo size 90gb
temp size 150 gb
Details :
exec dbms_stats.gather_table_stats('OWNER','TAB_NAME',PARTNAME=>'PART_NAME',CASCADE=>FALSE,ESTIMATE_PERCENT=>10,DEGREE=>30,NO_INVALIDATE=>TRUE);
When I fire this at an idle time when there is no load, it still takes 28 minutes to complete.
Can anybody please tell me how we can reduce the stats gathering time?
Thanks in advance,
Tapas Karmakar
Oracle DBA.
Enable tracing to see where the time is going.
parallel 30 seems optimistic - unless you have a large number of discs to support the I/O ?
you haven't limited histogram collection, and most of the time spent on histograms may be wasted time - what histograms do you really need, and how many does Oracle analyse for and then discard?
Using a block sample may help slightly
You haven't limited the granularity of the stats collection to the partition - the default is partition plus table, so I think you're also doing a massive sample on the table after completing the partition. Is this what you want, or do you have an alternative strategy for generating table-level stats?
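As an illustration of those last two points, a hedged rewrite of the original call (same names as the post; whether SIZE 1 is right for you is an assumption - keep only the histograms you actually use):
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'OWNER',
    tabname          => 'TAB_NAME',
    partname         => 'PART_NAME',
    granularity      => 'PARTITION',              -- this partition only, no table-level re-sample
    method_opt       => 'FOR ALL COLUMNS SIZE 1', -- no histograms
    estimate_percent => 10,
    degree           => 30,
    cascade          => FALSE,
    no_invalidate    => TRUE);
END;
/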
Regards
Jonathan Lewis -
Program SAPLSBAL_DB taking long time for BALHDR table entries
Hi Guys,
I am running a Z program in both the Quality and Production systems which uploads data from the desktop.
In the Quality system the Z program uploads the data successfully, but in the Production system it takes a very long time and sometimes even times out.
As per the trace analysis, program SAPLSBAL_DB is taking a long time for the BALHDR table entries.
Can anybody provide me any suggestions?
Regards,
Shyamal.
These are QA screen shots, where there is no issue; but in CRP we are getting very long run times.
Regards,
Shyamal -
TimesTen Release 11.2.1.9.6 (64 bit Linux/x86_64)
Command> dssize;
PERM_ALLOCATED_SIZE: 51200000
PERM_IN_USE_SIZE: 45996153
PERM_IN_USE_HIGH_WATER: 50033464
TEMP_ALLOCATED_SIZE: 2457600
TEMP_IN_USE_SIZE: 19680
TEMP_IN_USE_HIGH_WATER: 26760
Is there any command/query/etc. which would allow me to understand which database objects (for example, tables) are consuming memory, and how much of it?
I tried to use the ttSize function, but it gives some senseless results - for example, for the biggest table, tokens, it produces the following output (that the table is 90 GB in size, which physically cannot be true):
Command> call ttsize('tokens',null,null);
< 90885669274.0000 >
1 row found.
Are you able to use the command line version of ttSize instead? This splits out how much space is being used by indexes (in the Temp section of the TT memory segment), which I think is being combined into one whole figure in the procedure version of ttSize you're using. For example:
ttSize -tbl ia my_ttdb
Rows = 4
Total in-line row bytes = 17524
Total = 17524
Command> create index i1 on ia(a);
ttSize -tbl ia my_ttdb;
Rows = 4
Total in-line row bytes = 17524
Indexes:
Range index JSPALMER.I1 adds 5618 bytes
Total index bytes = 5618
Total = 23142
Command> call ttsize ('ia',,);
< 23142.0000000000 >
1 row found.
In 11.2.2 we added the procedure ttComputeTabSizes, which populates system tables with detailed table size data and was designed as an alternative to ttSize. Unfortunately it still doesn't calculate index usage, and it isn't in 11.2.1.
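If you can get to 11.2.2, a hedged sketch of that newer route - the exact call signature and the result view name here are assumptions from memory of the 11.2.2 docs, so verify them in your release:
Command> call ttComputeTabSizes('tokens');
Command> SELECT * FROM SYS.ALL_TABLE_SIZES;
-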
Awkey for awtyp = 'BKPF' and awtyp = 'MKPF' in BKPF table
At my current client, there is the following situation in the BKPF table:
When the awtyp is 'BKPF' in the bkpf table, the awkey is number/company/year, as expected.
But when the awtyp is 'MKPF' in the bkpf table, the awkey is just number/year, with no intervening company code.
Is this standard practice for AWKEY in BKPF when AWTYP = 'MKPF'?
Or is this due to a configuration error that should be corrected?
Hi,
Good morning and greetings,
That is standard SAP. If AWTYP is 'BKPF', the object key is Document Number/Company Code/Year; in the case of 'MKPF' it is Document Number/Year, as it pertains to an MM document (e.g. a WA movement), and so the system does not place the company code in the key.
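As a hedged illustration of building those keys in ABAP when you need to read BKPF via AWKEY (the work areas are illustrative, and the field order follows the description above - verify against your system):
DATA: WA_BKPF TYPE BKPF,
      WA_MKPF TYPE MKPF,
      AWKEY   TYPE BKPF-AWKEY.

* FI document (AWTYP = 'BKPF'): document number + company code + year
CONCATENATE WA_BKPF-BELNR WA_BKPF-BUKRS WA_BKPF-GJAHR INTO AWKEY.

* Material document (AWTYP = 'MKPF'): material document number + year only
CONCATENATE WA_MKPF-MBLNR WA_MKPF-MJAHR INTO AWKEY.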
Please reward points if found useful
Thanking you
With kindest regards
Ramesh Padmanabhan -
Gather table stats takes time for big table
The table has millions of records and is partitioned. When I analyze the table using the following call it takes more than 5 hours, and the table has one index.
I tried with auto sample size and also by changing the estimate percentage value (20, 50, 70, etc.), but the time is almost the same.
exec dbms_stats.gather_table_stats(ownname=>'SYSADM',tabname=>'TEST',granularity =>'ALL',ESTIMATE_PERCENT=>100,cascade=>TRUE);
What should I do to reduce the analyze time for big tables? Can anyone help me?
Hello,
The behaviour of ESTIMATE_PERCENT may change from one release to another.
In some releases, when you specify a "too high" (> 25%, ...) ESTIMATE_PERCENT, you in fact collect the statistics over 100% of the rows, as in COMPUTE mode:
Using DBMS_STATS.GATHER_TABLE_STATS With ESTIMATE_PERCENT Parameter Samples All Rows [ID 223065.1]
For later releases, 10g or 11g, you have the possibility to use the following value:
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
In fact, you may use it even in 9.2, but in that release using a specific estimate value is recommended.
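For instance, a hedged version of a 10g/11g-style call using that value (owner and table names as in the post):
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'SYSADM',
    tabname          => 'TEST',
    granularity      => 'ALL',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -- let Oracle pick the sample
    cascade          => TRUE);
END;
/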
Moreover, starting with 10.1 it's possible to schedule the statistics collection by using DBMS_SCHEDULER and to specify a window so that the job doesn't run during production hours.
So, the answer may depend on the Oracle release and also on the application (SAP, PeopleSoft, ...).
Best regards,
Jean-Valentin