ME05 program RM06W003 background processing takes more time
Hi friends,
I am running ME05 (program: RM06W003) in the background to generate source list automatically based on the variants. But it takes too long to finish.
Does anybody know of any optimization or performance-tuning steps for running program RM06W003 in the background?
Thanks in advance for help.
Mourougane
Hi,
Search for any SAP Note that resolves this issue; in case you don't find one, raise a message to SAP for a solution.
Another approach: while the job is scheduled in the background, take an SQL trace with ST05 and a runtime analysis with SE30. Reviewing them will show you where the problem is.
Regards,
syesms
Similar Messages
-
MM: ME05 background execution takes more time
Hi friends,
I am running ME05 (program: RM06W003) in the background to generate source list automatically based on the variants. But it takes too long to finish.
Does anybody know of any optimization or performance-tuning steps for running program RM06W003 in the background?
Thanks in advance for help.
Mourougane
Edited by: Mourougane DJEARAMANE on Aug 18, 2009 4:38 PM
Hello Mourougane,
First you have to identify what the bottleneck is. You can do this with transactions ST03N or STAD. If you can rerun the job, STAD would be the best option.
After you have identified the bottleneck (database, ABAP, maybe system-related things) you can go into a deeper analysis.
STAD: http://help.sap.com/SAPHELP_NW04S/helpdata/EN/ec/af4ddc0a1a4639a037f35c4228362d/content.htm
Regards
Stefan -
Re: WorkFlow Back Ground Process Running More Time
Hi ,
I have a similar situation where Workflow Background Process is running for hours during peak business hours only for item type REQAPPRV.
Can you please suggest the action plan taken in your case.
Thanks,
user12558002 wrote:
Hi ,
I have a similar situation where Workflow Background Process is running for hours during peak business hours only for item type REQAPPRV.
Can you please suggest the action plan taken in your case.
Thanks,
Please post the details of the application release, database version and OS.
Was this working before? If yes, any changes been done recently?
Can you find any errors in the CM/Workflow/DB log files?
Please see the following docs.
Workflow Background Process Hangs on Deferred = Yes and OM Order Line [ID 817642.1]
Running "Workflow Background Process" During Day Time Hangs For 'OM Order Line' Parameter [ID 564504.1]
WF 2.x: Workflow Background Process Performance Troubleshooting Guide [ID 186361.1]
How to Recreate a Corrupted Workflow Background Queue WF_Deferred_Table_M [ID 1176723.1]
Workflow Background Process Takes Long Time to Run After Conversion To ATG Rup5 DB 10g [ID 469702.1]
11.5.10.4: Workflow Background Process Seems To Take Longer After RUP.4 [ID 560144.1]
Performance Degradation when the Workflow Background Process is Running [ID 743338.1]
How Often or Frequent Should You Run Workflow Background Process to Improve Performance for Deferred OEOL? [ID 1308607.1]
Thanks,
Hussein -
'BAPI_GOODSMVT_CREATE' takes more time for creating material document
Hi Experts,
I am using 'BAPI_GOODSMVT_CREATE' in my custom report, and it takes more time to create material documents.
Please let me know if there is any option to overcome this issue.
Thanks in advance
Regards,
Leo
Hi,
please check if some of following OSS notes are not valid for your problem:
[Note 838036 - AFS: Performance issues during GR with ref. to PO|https://service.sap.com/sap/support/notes/838036]
[Note 391142 - Performance: Goods receipt for inbound delivery|https://service.sap.com/sap/support/notes/391142]
[Note 1414418 - Goods receipt for customer returns: Various corrections|https://service.sap.com/sap/support/notes/1414418]
Another idea is not to commit after each call, but to commit in packages, e.g. after every 1000 BAPI calls.
Otherwise, I am afraid you cannot do much about the performance of a standard BAPI. Maybe there is some customer enhancement inside the BAPI that is taking too long, but that has to be analysed by you. To analyse performance, just execute your program via transaction SE30.
Regards
Adrian -
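Adrian's batching suggestion can be sketched as follows. This is an illustrative Python sketch, not SAP code: `call_goodsmvt_create` and `commit_work` are hypothetical stand-ins for calling BAPI_GOODSMVT_CREATE and BAPI_TRANSACTION_COMMIT over RFC.

```python
# Stand-ins for the real RFC calls (hypothetical; these are NOT the SAP API):
commits = []

def call_goodsmvt_create(doc):
    """Pretend to call BAPI_GOODSMVT_CREATE for one document."""
    return f"matdoc-{doc}"

def commit_work():
    """Pretend to call BAPI_TRANSACTION_COMMIT (COMMIT WORK)."""
    commits.append("COMMIT")

def post_movements(documents, batch_size=1000):
    """Post goods movements, committing once per batch instead of per call."""
    results = []
    pending = 0
    for doc in documents:
        results.append(call_goodsmvt_create(doc))
        pending += 1
        if pending >= batch_size:
            commit_work()  # one COMMIT WORK for the whole batch
            pending = 0
    if pending:  # flush the final partial batch
        commit_work()
    return results
```

With 2500 documents and a batch size of 1000, this issues three commits instead of 2500, which is where the time saving comes from.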
'BAPI_GOODSMVT_CREATE' takes more time for creating material document for the 1st time
Hi Experts,
I am doing goods movement using BAPI_GOODSMVT_CREATE in my custom code.
Then there is some functional configuration such that, material documents and TR and TO are getting created.
Now I need to get the TO and TR numbers from table LTAK, passing the material document number and year, which I got from the BAPI above.
The problem I am facing is very strange.
Only on the first run, I do not find the TR and TO values in LTAK. On subsequent runs I get entries in LTAK, but only if there is a wait time of 5 seconds after the BAPI call.
I found the thread "'BAPI_GOODSMVT_CREATE' takes more time for creating material document", which describes a similar issue, but with no solution or explanation.
Note 838036 says something similar, but it seems obsolete.
Kindly share your expertise and opinions.
Thanks,
Anil
Hi,
please check if some of following OSS notes are not valid for your problem:
[Note 838036 - AFS: Performance issues during GR with ref. to PO|https://service.sap.com/sap/support/notes/838036]
[Note 391142 - Performance: Goods receipt for inbound delivery|https://service.sap.com/sap/support/notes/391142]
[Note 1414418 - Goods receipt for customer returns: Various corrections|https://service.sap.com/sap/support/notes/1414418]
Another idea is not to commit after each call, but to commit in packages, e.g. after every 1000 BAPI calls.
Otherwise, I am afraid you cannot do much about the performance of a standard BAPI. Maybe there is some customer enhancement inside the BAPI that is taking too long, but that has to be analysed by you. To analyse performance, just execute your program via transaction SE30.
Regards
Adrian -
Hi All,
I have cloned KSB1 tcode to custom one as required by business.
The query below takes more time than expected.
Here V_DB_TABLE = COVP.
Values in Where clause are as follows
OBJNR in ( KSBB010000001224 BT KSBB012157221571 )
GJAHR in blank
VERSN in '000'
WRTTP in '04' and '11'
all others are blank
VT_VAR_COND = ( CPUDT BETWEEN '20091201' and '20091208' )
SELECT (VT_FIELDS) INTO CORRESPONDING FIELDS OF GS_COVP_EXT
FROM (V_DB_TABLE)
WHERE LEDNR = '00'
AND OBJNR IN LR_OBJNR
AND GJAHR IN GR_GJAHR
AND VERSN IN GR_VERSN
AND WRTTP IN GR_WRTTP
AND KSTAR IN LR_KSTAR
AND PERIO IN GR_PERIO
AND BUDAT IN GR_BUDAT
AND PAROB IN GR_PAROB
AND (VT_VAR_COND).
I checked the table for this condition; it has only 92 entries.
But when I execute the program it takes as long as 3 hours.
Could anyone help me on this?
> 1. Don't use SELECT/ENDSELECT; instead use the INTO TABLE addition.
> 2. Avoid using the CORRESPONDING addition. Create a type and reference it.
> If the select is dumping because of storage limitations, then use cursors.
You got three large NOs ... all three recommendations are wrong!
The SE16 test is going in the right direction, but what was actually filled? Nobody knows!
Select-options:
Did you ever trace the SE16 run? The generic statement has an IN-condition for every field!
Without the information about what was actually filled, nobody can say anything; there are at least 2**n possible combinations!
Use ST05 on the SE16 run and check the actual statement plus the explain plan!
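The reviewer's point about select-options can be illustrated outside ABAP: a generated statement that emits an IN-condition for every field, filled or not, gives the optimizer little to work with, whereas building the WHERE clause only from the filled ranges keeps the statement selective. A minimal sketch (Python, purely illustrative; the field names are taken from the SELECT above):

```python
def build_where(ranges):
    """Build a WHERE clause only from filled select-option ranges.

    `ranges` maps field name -> list of selected values; an empty list
    means the select-option was left blank and is skipped entirely,
    instead of emitting a no-op IN-condition for it.
    """
    parts = ["LEDNR = '00'"]  # fixed condition, as in the original SELECT
    for field, values in ranges.items():
        if values:
            in_list = ", ".join(f"'{v}'" for v in values)
            parts.append(f"{field} IN ({in_list})")
    return " AND ".join(parts)
```

For the values described in the post (GJAHR blank, VERSN '000', WRTTP '04'/'11'), only the filled fields make it into the statement.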
Calc takes more time than previous
Hi All,
I have a problem with a calc: it takes more time to execute than before. Please help!
I have included a high calc cache setting in the .cfg file.
FIX (&As, &Af, &C,&RM, @RELATIVE("Pr",0), @RELATIVE("MS",0), @RELATIVE("Pt",0), @RELATIVE("Rn",0),@RELATIVE("Ll",0))
CLEARDATA "RI";
/* 22 Comment */
FIX("100")
"RI" = @ROUND ((("RDL")/("SBE"->"RDL"->"TMS"->"TP"->"TR"->"AF"->"Boom")),8);
ENDFIX
FIX("200")
"RI" = @ROUND ((("RDL")/("ODE"->"RDL"->"TMS"->"T_P"->"TR"->"AF"->"Boom")),8);
ENDFIX
Appreciate your help.
Regards,
Mink.
Mink,
If the calculation script you are using is the same one that performed better before, and the data being processed is the same (I mean the data volume has not grown exceptionally), then there must be other reasons, such as server-side OS, processor, or memory issues. Consult your sysadmin; at least you'll then be sure there is nothing wrong on the systems side.
To fine-tune the calc, I think you could minimise the FIX statements, but that's not the current issue.
Sandeep Reddy Enti
HCC
http://analytiks.blogspot.com -
Oracle coherence first read/write operation take more time
I'm currently testing with the Oracle Coherence Java and C++ versions, and in both versions, when writing to any local, distributed, or near cache, the first read/write operation takes more time than the subsequent consecutive read/write operations. Is this because of setup happening inside the actual HashMap, or serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques for boosting Coherence cache performance.
In which case, why bother using Coherence? You're not really gaining anything, are you?
What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for c++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nano-second/micro-second level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching), can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirement of "1 microsecond for 100000 data collection" on a continuous basis when fully configured in a cluster.
Just my two cents.
Cheers,
Steve
NB. I don't work for Oracle, so maybe they have a different opinion. :) -
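Steve's point about first-operation cost can be worked around with an explicit warm-up before any measurement: pay the one-off connection, class-loading, and serializer setup cost outside the timed section. A minimal sketch, using a hypothetical `DummyCache` stand-in (this is not the Coherence API):

```python
import time

class DummyCache:
    """Hypothetical stand-in for a NamedCache client; first call is slow."""
    def __init__(self):
        self._connected = False
        self._data = {}

    def _ensure_connected(self):
        if not self._connected:
            time.sleep(0.05)  # simulate one-off connection/serializer setup
            self._connected = True

    def put(self, key, value):
        self._ensure_connected()
        self._data[key] = value

    def get(self, key):
        self._ensure_connected()
        return self._data.get(key)

def warm_up(cache, key="__warmup__"):
    """Pay the one-off setup cost before real measurements start."""
    cache.put(key, 0)
    cache.get(key)

cache = DummyCache()
warm_up(cache)                 # absorbs the slow first operation
t0 = time.perf_counter()
cache.put("k", 42)             # now measured operations are steady-state
value = cache.get("k")
elapsed = time.perf_counter() - t0
```

The same idea applies to benchmarking a real cluster: measure only after a warm-up pass, or the one-off setup cost dominates the numbers.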
Automatic DOP take more time to execute query
We upgraded our database to Oracle 11gR2. While testing the Automatic DOP feature with our existing query, it takes more time than it did with a manual PARALLEL hint.
Note: no constraints or indexes were created on the table, in order to keep performance while loading data (5000 records/sec).
Os : Sun Solaris 64bit
CPU = 8
RAM = 7456M
Default parameter settings:
parallel_degree_policy string MANUAL
parallel_degree_limit string CPU
parallel_threads_per_cpu integer 2
parallel_degree_limit string CPU
cpu_count integer 8
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 8
Query:
SELECT COUNT(*)
from (
SELECT
/*+ FIRST_ROWS(50), PARALLEL */
The query gets executed in 22 minutes; execution plan:
COUNT(*)
9600
Elapsed: 00:22:10.71
Execution Plan
Plan hash value: 3765539975
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 21 | 2164K (1)| 07:12:52 | | |
| 1 | SORT AGGREGATE | | 1 | 21 | | | | |
| 2 | PARTITION RANGE OR| | 89030 | 1825K| 2164K (1)| 07:12:52 |KEY(OR)|KEY(OR)|
|* 3 | TABLE ACCESS FULL| SUBSCRIBER_EVENT | 89030 | 1825K| 2164K (1)| 07:12:52 |KEY(OR)|KEY(OR)|
Automatic DOP Query: parameters set
alter session set PARALLEL_DEGREE_POLICY = limited;
alter session force parallel query;
Query:
SELECT COUNT(*)
from (
SELECT /*+ FIRST_ROWS(50), PARALLEL*/
This query takes more than 2 hours to execute.
COUNT(*)
9600
Elapsed: 02:07:48.81
Execution Plan
Plan hash value: 127536830
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart|Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 21 | 150K (1)| 00:30:01 | | | | | |
| 1 | SORT AGGREGATE | | 1 | 21 | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 21 | | | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 21 | | | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 89030 | 1825K| 150K (1)| 00:30:01 |KEY(OR)|KEY(OR)| Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| SUBSCRIBER_EVENT | 89030 | 1825K| 150K (1)| 00:30:01 |KEY(OR)|KEY(OR)| Q1,00 | PCWP | |
Note
- automatic DOP: Computed Degree of Parallelism is 16 because of degree limit
Can someone help us find out where we went wrong? Any pointer would be really helpful in resolving this issue.
Edited by: Sachin B on May 11, 2010 4:05 AM
Generated AWR report for ADOP
Foreground Wait Events DB/Inst: HDB/hdb Snaps: 158-161
-> s - second, ms - millisecond - 1000th of a second
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by wait time desc, waits desc (idle events last)
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % DB
Event Waits -outs Time (s) (ms) /txn time
direct path read 522,173 0 125,051 239 628.4 99.3
db file sequential read 663 0 156 235 0.8 .1
log file sync 165 0 117 712 0.2 .1
Disk file operations I/O 267 0 63 236 0.3 .1
db file scattered read 251 0 36 145 0.3 .0
control file sequential re 217 0 32 149 0.3 .0
library cache load lock 2 0 10 4797 0.0 .0
cursor: pin S wait on X 3 0 9 3149 0.0 .0
read by other session 5 0 2 429 0.0 .0
kfk: async disk IO 613,170 0 2 0 737.9 .0
sort segment request 1 100 1 1007 0.0 .0
os thread startup 16 0 1 43 0.0 .0
direct path write temp 1 0 1 527 0.0 .0
latch free 51 0 0 2 0.1 .0
kksfbc child completion 1 100 0 59 0.0 .0
latch: cache buffers chain 19 0 0 2 0.0 .0
latch: shared pool 36 0 0 1 0.0 .0
PX Deq: Slave Session Stat 21 0 0 1 0.0 .0
library cache: mutex X 45 0 0 1 0.1 .0
CSS initialization 2 0 0 6 0.0 .0
enq: KO - fast object chec 1 0 0 11 0.0 .0
buffer busy waits 3 0 0 1 0.0 .0
cursor: pin S 9 0 0 0 0.0 .0
CSS operation: action 2 0 0 1 0.0 .0
direct path write 1 0 0 2 0.0 .0
jobq slave wait 17,554 100 8,942 509 21.1
PX Deq: Execute Reply 4,060 95 7,870 1938 4.9
SQL*Net message from clien 96 0 5,756 59962 0.1
PX Deq: Execution Msg 618 56 712 1152 0.7
KSV master wait 11 0 0 2 0.0
PX Deq: Join ACK 16 0 0 1 0.0
PX Deq: Parse Reply 14 0 0 1 0.0
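As a quick sanity check on the table above, the "Avg wait (ms)" column is just total wait time divided by the number of waits. For the dominant 'direct path read' event:

```python
waits = 522_173         # direct path read: number of waits
total_wait_s = 125_051  # direct path read: total wait time in seconds

avg_wait_ms = total_wait_s / waits * 1000
print(round(avg_wait_ms))  # 239, matching the Avg wait (ms) column
```

An average of roughly a quarter of a second per direct path read suggests the I/O subsystem, rather than the execution plan, is the bottleneck here, which would explain why adding more parallel readers made the run slower, not faster.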
Background Wait Events DB/Inst: HDB/hdb Snaps: 158-161
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
control file sequential re 6,249 0 2,375 380 7.5 55.6
control file parallel writ 2,003 0 744 371 2.4 17.4
db file parallel write 1,604 0 503 313 1.9 11.8
log file parallel write 861 0 320 371 1.0 7.5
db file sequential read 363 0 151 415 0.4 3.5
db file scattered read 152 0 64 421 0.2 1.5
Disk file operations I/O 276 0 21 77 0.3 .5
os thread startup 316 0 15 48 0.4 .4
ADR block file read 24 0 11 450 0.0 .3
rdbms ipc reply 17 12 7 403 0.0 .2
Data file init write 6 0 6 1016 0.0 .1
direct path write 21 0 6 287 0.0 .1
log file sync 7 0 6 796 0.0 .1
ADR block file write 10 0 4 414 0.0 .1
enq: JS - queue lock 1 0 3 2535 0.0 .1
ASM file metadata operatio 1,801 0 2 1 2.2 .0
db file parallel read 30 0 1 40 0.0 .0
kfk: async disk IO 955 0 1 1 1.1 .0
db file single write 1 0 0 415 0.0 .0
reliable message 10 0 0 23 0.0 .0
latch: shared pool 75 0 0 2 0.1 .0
latch: call allocation 26 0 0 2 0.0 .0
CSS initialization 7 0 0 6 0.0 .0
asynch descriptor resize 352 100 0 0 0.4 .0
undo segment extension 2 100 0 5 0.0 .0
CSS operation: action 9 0 0 1 0.0 .0
CSS operation: query 42 0 0 0 0.1 .0
latch: parallel query allo 4 0 0 0 0.0 .0
rdbms ipc message 37,948 97 104,599 2756 45.7
DIAG idle wait 16,762 100 16,927 1010 20.2
ASM background timer 1,724 0 8,467 4912 2.1
shared server idle wait 282 100 8,465 30019 0.3
pmon timer 3,123 90 8,465 2711 3.8
wait for unread message on 8,381 100 8,465 1010 10.1
dispatcher timer 141 100 8,463 60019 0.2
Streams AQ: qmn coordinato 604 50 8,462 14010 0.7
Streams AQ: qmn slave idle 304 0 8,462 27836 0.4
smon timer 35 71 8,382 239496 0.0
Space Manager: slave idle 1,621 99 8,083 4986 2.0
PX Idle Wait 2,392 99 4,739 1981 2.9
class slave wait 46 0 623 13546 0.1
KSV master wait 2 0 0 27 0.0
SQL*Net message from clien 7 0 0 1 0.0
Wait Event Histogram DB/Inst: HDB/hdb Snaps: 158-161
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
-> % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
-> Ordered by Event (idle events last)
% of Waits
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
ADR block file read 24 100.0
ADR block file write 10 100.0
ADR file lock 12 100.0
ASM file metadata operatio 1812 99.0 .3 .4 .2 .1
CSS initialization 9 100.0
CSS operation: action 11 90.9 9.1
CSS operation: query 54 100.0
Data file init write 6 16.7 16.7 16.7 50.0
Disk file operations I/O 533 88.7 2.6 .6 1.5 .2 6.4
PX Deq: Signal ACK EXT 4 100.0
PX Deq: Signal ACK RSG 2 100.0
PX Deq: Slave Session Stat 21 42.9 28.6 28.6
SQL*Net break/reset to cli 6 100.0
SQL*Net message to client 102 100.0
SQL*Net more data to clien 4 100.0
asynch descriptor resize 527 100.0
buffer busy waits 4 75.0 25.0
control file parallel writ 2003 9.3 .5 .0 .1 90.0
control file sequential re 6466 10.6 .0 .0 .0 .1 .2 89.0
cursor: pin S 9 100.0
cursor: pin S wait on X 3 33.3 33.3 33.3
db file parallel read 30 6.7 30.0 63.3
db file parallel write 1604 7.4 .1 .6 16.5 75.5
db file scattered read 403 3.7 .2 2.5 13.6 14.9 3.5 61.5
db file sequential read 1017 12.3 .8 2.3 7.3 6.6 2.0 68.8
db file single write 1 100.0
direct path read 522.2 2.2 2.1 .1 .0 1.8 17.9 75.9
direct path write 22 4.5 4.5 90.9
direct path write temp 1 100.0
enq: JS - queue lock 1 100.0
enq: KO - fast object chec 1 100.0
enq: PS - contention 1 100.0
kfk: async disk IO 614.1 100.0 .0
kksfbc child completion 1 100.0
latch free 58 46.6 27.6 15.5 10.3
latch: cache buffers chain 19 36.8 10.5 52.6
latch: call allocation 26 76.9 11.5 7.7 3.8
latch: parallel query allo 4 100.0
latch: shared pool 111 44.1 28.8 27.0
library cache load lock 2 100.0
library cache: mutex X 45 84.4 8.9 4.4 2.2
log file parallel write 861 10.0 .1 .1 89.5 .2
log file sync 172 6.4 90.1 3.5
os thread startup 332 100.0
rdbms ipc reply 18 72.2 11.1 16.7
read by other session 5 100.0
reliable message 11 81.8 9.1 9.1
sort segment request 1 100.0
undo segment extension 2 50.0 50.0
ASM background timer 1724 .8 .6 .1 .6 97.9
DIAG idle wait 16.8K 100.0
KSV master wait 13 7.7 23.1 61.5 7.7
PX Deq: Execute Reply 4060 .4 .0 .0 .1 3.4 96.0
PX Deq: Execution Msg 617 34.7 1.5 2.4 1.5 1.5 .2 .8 57.5
PX Deq: Join ACK 16 93.8 6.3
PX Deq: Parse Reply 14 71.4 7.1 14.3 7.1
PX Idle Wait 2384 .0 .6 99.3
SQL*Net message from clien 103 82.5 1.0 1.9 1.0 13.6
Space Manager: slave idle 1621 .2 99.8
Streams AQ: qmn coordinato 604 50.0 50.0
Wait Event Histogram DB/Inst: HDB/hdb Snaps: 158-161
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
-> % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
-> Ordered by Event (idle events last)
Edited by: Sachin B on May 11, 2010 4:52 AM
MRP job takes more time (Duration / Sec)
Hello PP Guru’s,
I have below situation in production environment,
MRP job for plant – A (Takes more time – 14.650)
MRP job for plant – B (Takes less time – 4.512)
When I compare the variant attributes of plants A and B, the only difference I observe is the scheduling attribute: for plant A it is set to 2 (Lead Time Scheduling and Capacity Planning).
For plant B, scheduling 1 (Determination of Basic Dates for Planned Orders) is set.
So in my observation, this scheduling setting plays a major role in the MRP job taking more time in plant A.
I am in process of changing the variant attribute for plant – A from scheduling 2 to scheduling 1.
I wanted to know from experts whether if I change any impact / problem will happen in future for plant A or not.
Please let me know what all the hidden impacts are there if I change the scheduling in variant attribute.
I look forward to your valuable input to reduce time for my MRP related job.
Regards,
Kumar S
Hi Kumar,
There is no need to change the in-house production time; you just need to update the lot-size-dependent in-house production time in the Work Scheduling view of the material master. You can do that by scheduling the routing/recipe.
Transactions CA97 or CA97N can be used to update the in-house production time with the information from the routing.
If the business doesn't want capacity planning for planned orders, then you can change the scheduling setting from 2 to 1 (basic date scheduling).
Expert Caetano has already answered your query:
The reports listed below can be used to compare past MRP executions regarding runtime:
RMMDMONI: This report compares the runtime of MRP executions and also provides the total of planning elements (planned orders, purchase requisitions, etc.) changed, created, or deleted. It also shows which planning parameters were used and how much time MRP spent on each step (database read, BOM explosion, MRP calculation, scheduling, BAdIs, etc.). With this information it is possible to observe the relation between runtime and the number of elements changed/created/deleted, and also to see on which step MRP is spending more time.
RMMDPERF: This report shows the "material hit list", that is, which materials had the highest runtime during the MRP execution and on which step MRP spent more time. Knowing which materials have the highest runtime allows you to isolate the problem and reproduce it in MD03, where you can run an ABAP or SQL trace for a more detailed analysis.
Regards,
R.Brahmankar -
Hi Experts,
Users raising invoices using MIRO report that it takes more time. When I check through SM50 and ST03, I find that fetching data from table BSEG takes more time. Kindly let me know whether I need to increase the table parameter size, and if so, how can I do that?
Regards...
Venki
The famous BSEG table is a cluster table.
It holds the Accounting Document Segment. It is part of the table cluster RFBLG and lives in package FBAS (Financial Accounting 'Basis').
You can't read a cluster table exactly the way you read a transparent table.
You can use report RFPPWF05 to read it.
Note 435694: Display BSEG item by calling FB09D (modified FB09)
Other possibility: CALL DIALOG 'RF_ZEILEN_ANZEIGE', but since this is a dialog I don't think it would work here.
In any event, go to package (development class) FBAS to see the business objects, class library, and functions.
Given such criticality, you could also set the tablespace to 'autogrow'.
Regards
Sekhar -
Import SCA files in Development Tab of the Transport Studio take more time
Hi,
After Check-In files in the Transport Studio, the import of SCA files starts in the development Tab of the Transport Studio.
The import takes a lot of time. Why does this happen?
Am I missing any configuration? Please explain in detail.
Thanks in Advance,
Sathya
SC: sap.com_SAP-JEE:
SDM-deploy
Returncode : Not executed.
How to check the username, password and url for SDM?
Log file of Repository-import:
Info:Starting Step Repository-import at 2009-10-13 22:15:49.0484 +5:00
Info:Component:sap.com/SAP_JTECHS
Info:Version :SAP AG.20060119105400
Info:3. PR is of type TCSSoftwareComponent
Info:Component:sap.com/SAP_BUILDT
Info:Version :SAP AG.20060411165600
Info:2. PR is of type TCSSoftwareComponent
Info:Component:sap.com/SAP-JEE
Info:Version :SAP AG.20060119105300
Info:1. PR is of type TCSSoftwareComponent
Info:Step Repository-import ended with result 'not needed' at 2009-10-13 22:15:49.0500 +5:00
Log File of CBS-make :
The import failed.
Info:build process already running: waiting for another period of 30000 ms
Info:no changes on the CBS request queue (DM0_DEMObp1_D) after a waiting time of 14430000 ms
Fatal:The request queue is not processed by the CBS during the given time intervall => TCS cannot import the request because queue is not empty
Fatal:There seems to be a structural problem in the NWDI. Please look after the operational status of the CBS.
Fatal Exception:com.sap.cms.tcs.interfaces.exceptions.TCSCommunicationException: communication error: The request queue is not processed during the given time intervall. There seems to be a structural problem in the NWDI. Please look after the operational status of the CBS.:communication error: The request queue is not processed during the given time intervall. There seems to be a structural problem in the NWDI. Please look after the operational status of the CBS.
com.sap.cms.tcs.interfaces.exceptions.TCSCommunicationException: communication error: The request queue is not processed during the given time intervall. There seems to be a structural problem in the NWDI. Please look after the operational status of the CBS.
at com.sap.cms.tcs.client.CBSCommunicator.importRequest(CBSCommunicator.java:369)
at com.sap.cms.tcs.core.CbsMakeTask.processMake(CbsMakeTask.java:120)
at com.sap.cms.tcs.core.CbsMakeTask.process(CbsMakeTask.java:347)
at com.sap.cms.tcs.process.ProcessStep.processStep(ProcessStep.java:77)
at com.sap.cms.tcs.process.ProcessStarter.process(ProcessStarter.java:179)
at com.sap.cms.tcs.core.TCSManager.importPropagationRequests(TCSManager.java:376)
at com.sap.cms.pcs.transport.importazione.ImportManager.importazione(ImportManager.java:216)
at com.sap.cms.pcs.transport.importazione.ImportQueueHandler.execImport(ImportQueueHandler.java:585)
at com.sap.cms.pcs.transport.importazione.ImportQueueHandler.startImport(ImportQueueHandler.java:101)
at com.sap.cms.pcs.transport.proxy.CmsTransportProxyBean.startImport(CmsTransportProxyBean.java:583)
at com.sap.cms.pcs.transport.proxy.CmsTransportProxyBean.startImport(CmsTransportProxyBean.java:559)
at com.sap.cms.pcs.transport.proxy.LocalCmsTransportProxyLocalObjectImpl0.startImport(LocalCmsTransportProxyLocalObjectImpl0.java:1736)
at com.sap.cms.ui.wl.Custom1.importQueue(Custom1.java:1169)
at com.sap.cms.ui.wl.wdp.InternalCustom1.importQueue(InternalCustom1.java:2162)
at com.sap.cms.ui.wl.Worklist.onActionImportQueue(Worklist.java:880)
at com.sap.cms.ui.wl.wdp.InternalWorklist.wdInvokeEventHandler(InternalWorklist.java:2338)
at com.sap.tc.webdynpro.progmodel.generation.DelegatingView.invokeEventHandler(DelegatingView.java:87)
at com.sap.tc.webdynpro.progmodel.controller.Action.fire(Action.java:67)
at com.sap.tc.webdynpro.clientserver.window.WindowPhaseModel.doHandleActionEvent(WindowPhaseModel.java:422)
at com.sap.tc.webdynpro.clientserver.window.WindowPhaseModel.processRequest(WindowPhaseModel.java:133)
at com.sap.tc.webdynpro.clientserver.window.WebDynproWindow.processRequest(WebDynproWindow.java:344)
at com.sap.tc.webdynpro.clientserver.cal.AbstractClient.executeTasks(AbstractClient.java:143)
at com.sap.tc.webdynpro.clientserver.session.ApplicationSession.doProcessing(ApplicationSession.java:298)
at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessingStandalone(ClientSession.java:705)
at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessing(ClientSession.java:659)
at com.sap.tc.webdynpro.clientserver.session.ClientSession.doProcessing(ClientSession.java:227)
at com.sap.tc.webdynpro.clientserver.session.RequestManager.doProcessing(RequestManager.java:150)
at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doContent(DispatcherServlet.java:56)
at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doPost(DispatcherServlet.java:47)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:390)
at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:264)
at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:347)
at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:325)
at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:887)
at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:241)
at com.sap.engine.services.httpserver.server.Client.handle(Client.java:92)
at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:148)
at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
at java.security.AccessController.doPrivileged(Native Method)
at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
Info:Step CBS-make ended with result 'fatal error' ,stopping execution at 2009-10-14 02:16:28.0296 +5:00 -
When I turn off my iPhone and then want to turn it on to use it again, the startup process takes more than an hour. I regret buying the iPhone because of this problem, which I never had with Nokia. How can I solve this problem?
mostafa182 wrote:
... how I can solve this problem?
The Basic Troubleshooting Steps are:
Restart... Reset... Restore from Backup... Restore as New...
Restart / Reset
http://support.apple.com/kb/ht1430
Backing up, Updating and Restoring
http://support.apple.com/kb/HT1414
If you try all these steps and you still have issues... Then a Visit to an Apple Store or AASP (Authorized Apple Service Provider) is the Next Step...
Be sure to make an appointment first... -
Hi all
I want to fetch just twenty thousand records from a table, but my query takes too much time to fetch them. I've posted my working query below; could you correct it for me? Thanks in advance.
Query
select
b.Concatenated_account Account,
b.Account_description description,
SUM(case when(Bl.ACTUAL_FLAG='B') then
((NVL(Bl.PERIOD_NET_DR, 0)- NVL(Bl.PERIOD_NET_CR, 0)) + (NVL(Bl.PROJECT_TO_DATE_DR, 0)- NVL(Bl.PROJECT_TO_DATE_CR, 0)))end) "Budget_2011"
from
gl_balances Bl,
gl_code_combinations GCC,
psb_ws_line_balances_i b ,
gl_budget_versions bv,
gl_budgets_v gv
where
b.CODE_COMBINATION_ID=gcc.CODE_COMBINATION_ID and bl.CODE_COMBINATION_ID=gcc.CODE_COMBINATION_ID and
bl.budget_version_id =bv.BUDGET_VERSION_ID and gv.budget_version_id= bv.budget_version_id
and gv.latest_opened_year in (select latest_opened_year-3 from gl_budgets_v where latest_opened_year=:BUDGET_YEAR )
group by b.Concatenated_account, b.Account_description
Hi,
If this question is related to SQL, then please post it in the SQL forum.
Otherwise, provide more information on how this SQL is being used, and on whether you want to tune the SQL itself or the way it fetches the information from the DB and displays it in OAF.
Regards,
Sandeep M. -
Takes more time to start & shutdown the database
Hi All,
I have created a database in Oracle 9i by following the manual steps. Everything was created successfully, and I am able to start and shut down the database.
But the problem is that the STARTUP command takes a long time to start the database, and the same happens during SHUTDOWN. Can anyone help me?
the follwing are the pfile specifications:
db_name=practice
instance_name=practice
control_files= 'E:\practice\control\control1.ctl',
'D:\practice\control\control2.ctl'
db_block_size=2048
db_cache_size=20m
shared_pool_size=20m
background_dump_dest='E:\practice\bdump'
user_dump_dest='E:\practice\udump'
Thanks in Advance
> Everything was created successfully and am able to start the database and shutdown also.
Please restate the above.
> the problem is while giving the startup command it takes more time to start the database and the same during the shutdown
How have you compared? Could it be O/S resources, or installation of additional software? You have not mentioned the O/S and the complete version of your database.
You can review the following, although I am a bit unclear on the issue:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/instreco.htm#440322
Adith