Extraction job waits a long time
Hi,
I tried to extract 0ORGUNIT information from one of our test systems.
As can be read from the job log at the end, the job sits and waits for about 14 hours before actually starting data extraction. We're talking a mere 46,000 entries here. I'm trying to understand why the job waits so long.
Any ideas are welcome.
16:17:18 Job started
16:17:18 Step 001 started (program SBIE0001, variant &0000000100868, user name 972044)
16:17:19 DATASOURCE = 0ORGUNIT_ATTR
16:17:19 *************************************************************************
16:17:19 * Current values of selected profile parameter *
16:17:19 *************************************************************************
16:17:19 * abap/heap_area_nondia......... 2000683008 *
16:17:19 * abap/heap_area_total.......... 2000683008 *
16:17:19 * abap/heaplimit................ 40894464 *
16:17:19 * zcsa/installed_languages...... NEFD *
16:17:19 * zcsa/system_language.......... E *
16:17:19 * ztta/max_memreq_MB............ 64 *
16:17:19 * ztta/roll_area................ 10485760 *
16:17:19 * ztta/roll_extension........... 2000683008 *
16:17:19 *************************************************************************
05:54:05 BEGIN BW_BTE_CALL_BW204020_E 46.091
05:54:05 END BW_BTE_CALL_BW204020_E 46.091
05:54:05 BEGIN EXIT_SAPLRSAP_002 46.091
06:29:42 END EXIT_SAPLRSAP_002 46.091
06:29:42 Asynchronous sending of data package 000001 in task 0002 (1 parallel tasks)
06:30:33 tRFC: Data package = 000001, TID = 2DDE00246B9444ED2B370016, duration = 00:00:46, ARFCSTATE =
06:30:33 tRFC: Begin = 24.08.2006 06:29:47, End = 24.08.2006 06:30:33
06:30:33 Job finished
Gimmo,
Apart from my interactive session and the batch job, nothing happens on the system.
sm50 shows no activity other than mine.
Sufficient resources are available.
What I do see in sm12 (locks) is:
Cli User Time Mode Table Lock argument
040 972044 09:27:24 E EDIDC 0400000000005084008
Selected lock entries: 1
I suspect something's not right in the ALE arena.
Regards,
Eric
Similar Messages
-
Data extraction is taking a long time
Hi,
I am extracting data into an InfoCube from a data mart. It's a full upload, extracting almost 24 lakh records. Generally it should take less time, but it is taking more than 6 hours to upload. Data selection and scheduling are happening correctly, but the acknowledgement from the source InfoCube (i.e. the data mart) is taking more time.
BW statistics are not activated for this InfoCube.
Here is the job log for this data load.
01:32:26 'SAPGGB', TABNAME => '"/BI0/0P00000050"',
01:32:26 ESTIMATE_PERCENT => 10 , METHOD_OPT => 'FOR ALL
01:32:26 COLUMNS SIZE 75', DEGREE => 1 , GRANULARITY =>
01:32:26 'ALL', CASCADE => TRUE ); END;
01:32:27 SQL-END: 2009.09.02 01:32:27 00:00:01
06:35:44 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 10,101 records
06:35:44 Result of customer enhancement: 10,101 records
06:35:44 Asynchronous send of data package 1 in task 0002 (1 parallel tasks)
06:35:44 tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
06:35:44 tRFC: Start = 2009.09.02 01:30:14, End = 2009.09.02 01:30:14
06:35:45 Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks
06:35:46 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 10,101 records
06:35:46 Result of customer enhancement: 10,101 records
06:35:46 tRFC: Data Package = 0, TID = , Duration = 00:00:01, ARFCSTATE =
06:35:46 tRFC: Start = 2009.09.02 06:35:45, End = 2009.09.02 06:35:46
06:35:46 Asynchronous send of data package 2 in task 0004 (1 parallel tasks)
06:36:55 tRFC: Data Package = 1, TID = 0A1401543C764A9E124057C6, Duration = 00:0
06:36:55 tRFC: Start = 2009.09.02 06:35:45, End = 2009.09.02 06:36:55
06:36:55 Asynchronous transmission of info IDoc 4 in task 0005 (1 parallel tasks
06:36:56 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 10,101 records
06:36:56 Result of customer enhancement: 10,101 records
Please advise me on where I can check, and on the possible reasons it takes so long.
Thanks,
Kasi

Hello Kasi,
I am facing a similar issue with a long-running data load, but in my case the data source is 2LIS_13_VDITM.
The background job in the ERP system is long-running and is taking time in the step below.
00:56:56 Call customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 91.869 records
00:56:56 Result of customer enhancement: 104.153 records
00:56:56 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 91.869 records
02:13:59 Result of customer enhancement: 104.153 records
02:14:02 PSA=0 USING & STARTING SAPI SCHEDULER
02:14:02 Asynchronous send of data package 1 in task 0002 (1 parallel tasks)
02:14:05 IDOC: Info IDoc 2, IDoc No. 348602556, Duration 00:00:00
02:14:05 IDoc: Start = 16.02.2010 00:56:34, End = 16.02.2010 00:56:34
02:14:06 Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks)
Please note that this long-running issue is not occurring daily, and there were no recent changes to enhancement EXIT_SAPLRSAP_001 in CMOD.
Kindly let me know how to overcome this issue.
Thanks in advance... -
Background job running a long time
Hello All,
One of our customer's background jobs has been running for a long time, more than 11 hours, but usually this job completes within 4 hrs.
I checked the WP trace file; it shows "WP has reached abap/heaplimit=40000000".
This is the value I can see in RZ11. It is not possible to change the value now.
We are facing this problem frequently; please guide me in solving this issue.
Regards
Vinay.

Hi,
First of all, abap/heaplimit is the limit of memory usage by a WP.
The following is the documentation from help.sap.com:
The value of the parameter should be between 10000000 (10 MB) and 2000000000 (2GB), the recommended default setting is 40000000 (40 MB).
The objective is to have as few work process restarts as possible, without a swap space bottleneck occurring. The heap memory allocated by the work processes has to be released again.
As shown in the graphic, the value of abap/heaplimit should be smaller than abap/heap_area_dia or abap/heap_area_nondia, so that the dialog step that is running can still be executed. This prevents the work process from working against the operating system's swap space limit if there is a programmed termination of the work process.
So, do the above checks. Most importantly, check the memory utilization at the time your job runs, and check whether any other job was running at the same time.
Increasing the parameters is not a good idea. Only think about the parameters if you face this issue continuously; as you said, this problem is not happening too often.
Hope this is useful -
My procedure hangs and waits a long time
I have a procedure that runs only the following SQL statements. When I execute this procedure, it takes a long time, one day! How can I improve the speed?
for job in (select r.job_name,r.run_id from
applsys.fnd_sch_job_runs r
where r.start_date< add_months(sysdate,-8))
loop
delete from applsys.fnd_sch_job_run_details
where run_id = job.run_id;
delete from applsys.fnd_sch_job_runs r where r.job_name=job.job_name ;
commit;
end loop;
Your code...
for job in (select r.job_name,
r.run_id
from applsys.fnd_sch_job_runs r
where r.start_date < add_months( sysdate, -8 ) )
loop
delete
from applsys.fnd_sch_job_run_details
where run_id = job.run_id;
delete
from applsys.fnd_sch_job_runs r
where r.job_name = job.job_name;
commit;
end loop;

I will bet that there is an FK on fnd_sch_job_run_details that references fnd_sch_job_runs.
If that FK is not indexed, then (in addition to locking fnd_sch_job_run_details) you will perform an FTS on fnd_sch_job_run_details for each row deleted from fnd_sch_job_runs.
Avoid this by:
1. Make sure that if there is an FK on fnd_sch_job_run_details that references fnd_sch_job_runs, that it is indexed.
2. Replace the cursor loop with something like
delete
from applsys.fnd_sch_job_run_details
where run_id IN ( select r.run_id
from applsys.fnd_sch_job_runs r
where r.start_date < add_months( sysdate, -8 ) );
delete
from applsys.fnd_sch_job_runs r
where r.start_date < add_months( sysdate, -8 );
commit; -
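As a sketch of check 1 above: the dictionary query and index DDL below assume the FK column is run_id (as the delete statements suggest), and the index name is made up; verify the actual constraint before creating anything.

```sql
-- List FK constraints on the child table to find the FK column
-- (DBA_CONSTRAINTS / DBA_CONS_COLUMNS are standard Oracle dictionary views).
SELECT c.constraint_name, cc.column_name
  FROM dba_constraints c
  JOIN dba_cons_columns cc
    ON cc.owner = c.owner
   AND cc.constraint_name = c.constraint_name
 WHERE c.owner           = 'APPLSYS'
   AND c.table_name      = 'FND_SCH_JOB_RUN_DETAILS'
   AND c.constraint_type = 'R';   -- 'R' = referential (foreign key)

-- If the FK column (presumably RUN_ID) has no index, create one.
-- The index name below is illustrative.
CREATE INDEX applsys.fnd_sch_job_run_det_fk_i
    ON applsys.fnd_sch_job_run_details (run_id);
```

With that index in place, each parent delete becomes an index lookup on the child table instead of a full scan.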
Auto message restart job takes a long time to run
Dear all,
I have configured the auto message restart job in sdoe_bg_job_monitor,
but it takes a long time to run.
I executed the report and found out that it is fetching records from smmw_msg_hdr.
The table contains 67 lakh records.
It is taking a lot of time to get the data from the table.
Is there any report or tcode for clearing data from the table?
I need your valuable help to resolve the issue.
Regards
lakshman balanagu

Hi,
If you are using an Oracle database, you may need to run the table statistics report (RSOANARA) to update the index. The system admin should be able to do this.
Regards,
Vikas
Edited by: Vikas Lamba on Aug 3, 2010 1:20 PM -
Batch job taking longer than expected
Hi All,
We have a scheduled batch job which runs at 11:30 pm daily.
When users tested in the UAT environment, it took 56 hrs to complete. But now, when they run the same batch job in the production system, it takes more than 80 hrs.
FYI: the production server's RAM (approx. 40 GB) is more than the UAT server's RAM (4 GB).
Can anyone please help to explore this?
Thanks in advance.

Please post:
The exact version of Oracle (10gR2 is not a version, 10.2.0.5 is a version).
The platform and OS you are using.
Any differences in init.ora parameters (including double underscore parameters you see in create pfile from spfile).
Any differences in kernel parameters.
Any differences in hardware (including network). For that matter, what hardware, how is swap defined.
Any differences in how the data was originally loaded (for example, production data entered over time online, UAT imported).
Any differences in what else is running.
How and when you've collected statistics.
It's not even twice as long, so it could be a relatively obscure difference that is your bottleneck. You have more RAM, so it could be something like: you are CPU-bound because you are thrashing a larger SGA and not letting the CPU service I/O when it needs to. Statspack or AWR may give a clue about that, as can OS tools.
Remember, you can see what is happening on your system; we can't. So you have to tell us for us to help you. Cut and paste is more believable than you typing in stuff. Use code tags before and after any output you post. -
Background job running a long time in production system
Hi guys,
I have a custom report which normally runs in 20 minutes. I execute this report every day, but when I executed it yesterday it took 5 hours.
I did a performance trace; the SQL statements are in the acceptable range, and I don't see any major time consumption.
Since it's in the production system, we can't debug the active job.
Can you guys throw some light on this issue?
Note: there are many posts related to this already in the forum, but none helps to resolve my issue.
Regards
Girish

Giri,
You can use STAD (or the older SE30) to get a trace of the program if you have that access; it will be a great help in finding the problem.
If you can't for some reason, then the next time the job runs long (while in ACTIVE state), check SM50 regularly to see whether the process is hanging on a table fetch, and check in SM12 that none of the DB tables is getting frequent locks.
Also, if your job does any table updates, you should check whether any new index was created in the prod system for that table.
If you can't find anything, then temporarily get the necessary access in Prod from Basis and debug the job to find the exact issue.
Regards,
Diwakar -
Urgent: Compression job taking a long time
Does anybody know how much time compression of a cube should take for around 15,000 records?
For us, it is taking around 3-4 hrs.
How can we finish it earlier?
We have around 1,900 requests in the cube, and each request has around 10,000 records.
So if we go on like this, it will be very time-consuming, will decrease the performance of other loads, and will be very tedious.
Please give your suggestions.
Thanks in advance...

Hi Sonika,
Please find my answers against each of your questions:
Please check the following:
1. The availability of all background processes in SM50.
Ans: No; only one job is running.
2. Check ST04 -> detail analysis menu -> Oracle session; check whether there is any locked memory there.
Ans: no locked memory.
3. Check in SM12 whether your cube is locked.
Ans: not locked.
4. Check whether any backup is running in DB12 (if you are authorized).
Ans: no backup is running.
5. Check the table spaces in DB02.
Ans: which table space? I mean, which table name? -
Help please.
You need to be more specific. What system? How did you download? Anyway, if it takes that long, just start over; something is wrong. Or use a manual download and extract it with a packer tool like WinZip or WinRAR.
Direct Download Links for Adobe Software
Mylenium -
Extract Audio taking a long time!
I have two computers: MacBook Pro 2.33Ghz, and a 2.5 dual G5 Power PC.
A minute-long clip on the G5 extracts in under 10 seconds, but on the MacBook it takes about 2-3 minutes.
Why the difference?
(I am cutting this in iMovie 6... gave up on v. 8)

I want to complain as well! I know audio editing and movie editing are really big tasks from the computer's point of view, but still! It usually takes *6 minutes* to Extract Audio from a 5-second clip in iMovie. I'm using iMovie 6.0.3 running 10.5.2 (where do I get 6.0.4? I searched in Downloads from Apple) with 4 GB of RAM on a 2.2 GHz MacBook, with more than 50 GB of disk space available on a 4200 rpm 250 GB HD. This is while running a few other processes, but not all that much.
Is this just a structural bug in iMovie? I mean, I've rebuilt my whole work pattern to do other stuff while I do these extractions....! -
0CRM_SALES_ACT_1 takes a long time to extract data from CRM system
Hi gurus,
I am using the DataSource 0CRM_SALES_ACT_1 to extract activities data from the CRM side. However, it is taking too long to get any information there.
I applied SAP Note 829397 ("Activity extraction takes a long time: 0crm_sales_act_1"), but it did not solve my problem.
Does anybody knows something about that?
Thanks in advance,
Silvio Messias.

Hi Silvio,
I've experienced a similar problem with this extractor. I attempted to Initialize Delta with Data Transfer to no avail. The job ran for 12+ hours and stayed in "yellow" status (0 records extracted). The following steps worked for me:
1. Initialize Delta without Data Transfer
2. Run Delta Update
3. Run Full Update and Indicate Request as Repair Request
Worked like a champ, data load finished in less than 2 minutes.
Hopefully this will help.
Regards.
Jason -
Job is in scheduled state for a long time.
Hi All,
I am running a load which involves extraction from a source system. The load was taking longer than usual. I grew suspicious and checked the status of the job in the source system, but it was sitting in the scheduled state, and when I tried to change the job status I was not able to do so. I checked the availability of the BGD processes, and they were all in waiting status.
What can be done about this? Please suggest.

Hi, the issue you have is probably due to the fact that a lot of jobs are already running in the background in your R/3 system; unless they finish, your job will not get any background processes to move it from the scheduled state to the active state.
That's why BW loads are usually done at night or during off-business hours, when transactions or R/3 jobs are not running much.
Schedule your job at a time when background processes are available and you won't face this issue.
Contact the basis handling the R/3 system.
Regards,
RK -
Hi, I have 50 InfoObjects as part of my aggregates, and 10 of those InfoObjects have received changes in master data. So in my process chain, the attribute change run has been running for a long time. Can I kill the job and repeat it?
Hi,
I believe this would be your Prod system, so don't just cancel it; look at the job log first. If it is still processing, then don't kill it; wait for the change run to complete. But if you can see that nothing is happening and it is stuck for a long time, then you can go ahead and cancel it.
But please be sure, as these kinds of jobs can create problems if you cancel them mid-run.
Regards,
Arminder Singh -
Account-based COPA datasource taking a long time to extract data
Hi
We have created an account-based COPA datasource, but it is not extracting data in RSA3 even though the underlying tables have data in them.
If the COPA datasource is created using fields only from CE4 (segment) and not CE1 (line items), then it extracts data, but only after a very long time.
If the COPA datasource is created using fields from both CE4 (segment) and CE1 (line items), then it does not extract any records, and RSA3 gives a time-out error.
Also, the job scheduled from the BW side for extracting data runs for days but does not fetch any data, and gives no error either.
The COPA tables have a huge amount of data, so performance could be an issue. But we have also created the indexes on them, and still it is not helping.
Please suggest a solution to this...
Thanks
Gaurav

Hi Gaurav,
Check note 392635; it might be useful.
Regards
Jagadish
Symptom
The process of selecting the data source (line item, totals table or summarization level) by the extractor is unclear.
More Terms
Extraction, CO-PA, CE3XXXX, CE1XXXX, CE2XXXX, costing-based, account-based, profitability analysis, reporting, BW reporting, extractor, plug-in, COEP, performance, upload, delta method, full update, CO-PA extractor, read, datasource, summarization level, init, delta init
Cause and Prerequisites
At the time of the data request from BW, the extractor determines the data source that should be read. In this case, the data source to be used depends on the update mode (full initialization of the delta method or delta update), on the definition of the DataSources (line item characteristics (except for the REC_WAERS field) or calculated key figures), and on the existing summarization levels.
Solution
The extractor always tries to select the most favorable source, that is, the one with the lowest dataset. The following restrictions apply:
o Only the 'Full' update mode from summarization levels is supported during extraction from the account-based profitability analysis up to and including Release PI2001.1. Therefore, you can only ever load individual periods for a controlling area. You can also use the delta method as of Release PI2001.2. However, the delta process is only possible as of Release 4.0. The delta method must still be initialized from a summarization level. The following delta updates then read line items. In the InfoPackage, you must continue to select the controlling area as a mandatory field. You then no longer need to make a selection on individual periods. However, the period remains a mandatory field for the selection. If you do not want this, you can proceed as described in note 546238.
o To enable reading from a summarization level, all characteristics that are to be extracted with the DataSource must also be contained in this level (entry * in the KEDV maintenance transaction). In addition, the summarization level must have status 'ACTIVE' (this also applies to the search function in the maintenance transaction for CO-PA data sources, KEB0).
o For DataSources of the costing-based profitability analysis, data can only be read from a summarization level if no other characteristics of the line item were selected (the exception here is the 'record currency' (REC_WAERS) field, which is always selected).
o An extraction from the object level, that is, from the combination of tables CE3XXXX/CE4XXXX ('XXXX' is the name of the result area), is only performed for full updates if (as with summarization levels) no line item characteristics were selected. During the initialization of the delta method this is very difficult to do because of the requirements for a consistent dataset (see below).
o During initialization of the delta method and subsequent delta updates, the data needs to be read up to a defined time. There are two possible sources for the initialization of the delta method:
- Summarization levels manage the time of the last update/data reconstruction. If no line item characteristics were selected and a suitable, active summarization level (see above) exists, the DataSource 'inherits' the time information of the summarization level. However, time information can only be 'inherited' for the delta method of the old logic (time stamp administration in the profitability analysis). As of Plug-In Release PI2004.1 (Release 4.0 and higher), a new logic is available for the delta process (generic delta). For DataSources with the new logic (converted DataSources or DataSources recreated as of Plug-In Release PI2004.1), the line items that appear between the time stamp of the summarization level and the current time minus the security delta (usually 30 minutes) are also read after the suitable summarization level is read. The current time minus the security delta is set as the time stamp.
- The system reads line items if it cannot read from a summarization level. Since data can continue to be updated during the extraction, the object level is not a suitable source, because other updates can be made on profitability segments that were already updated. The system would have to recalculate these values by reading line items, which would result in a considerable extension of the extraction time.
In the case of delta updates, the system always reads from line items.
o During extraction from line items, the CE4XXXX object table is read as an additional table for the initialization of the delta method and full updates, so that possible realignments can be taken into account. In principle, the CE4XXXX object table is not read for delta updates. If a realignment is performed in the OLTP, no further delta updates are possible, as they would make the data inconsistent between OLTP and BW. In this case, a new initialization of the delta method is required.
o When the system reads data from the line items, make sure that the indexes from note 210219 for both the CE1XXXX (actual data) and CE2XXXX (planning data) line item tables have been created. Otherwise, you may encounter long-running selections. For archiving, appropriate indexes are delivered in the dictionary as of Release 4.5. These indexes are delivered with the SAP standard system but still have to be created on the database. -
Any of you interested in a job opportunity, a long-term project?
Are any of you interested in a job opportunity, a long-term contract or a permanent role in an ALC ES implementation? Contact me at [email protected]. This is for my direct client, and I know this is the perfect place for you guys, as there is a job waiting for you here. I can help you find the best fit; all I need is an email from you at my ID, and I will make sure to help you get onto the best project.
Start with:
URL url = getClass().getClassLoader().getResource(fileName)
Then observe that URL has an [openStream()|http://java.sun.com/javase/6/docs/api/java/net/URL.html#openStream()] method that will give you an InputStream. Since you are reading text, you might want to create an InputStreamReader from which you can do your reading, much like you used the FileReader before when you were working with a File.
Class also has [getResourceAsStream()|http://java.sun.com/javase/6/docs/api/java/lang/Class.html#getResourceAsStream(java.lang.String)], which combines the first two steps into one. I.e., use getResource() as you mentioned when you want to get a URL that some other class will use to read (e.g. an image), or use getResourceAsStream() when you are going to do the reading yourself.
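To make the getResourceAsStream() approach concrete, here is a minimal sketch; the class name ResourceRead and the resource name config.txt are illustrative, not from the original post:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ResourceRead {
    // Reads a classpath resource as text. Returns null if the resource
    // does not exist on the classpath.
    static String readResource(String fileName) throws IOException {
        InputStream in = ResourceRead.class.getClassLoader().getResourceAsStream(fileName);
        if (in == null) {
            return null; // resource not found on the classpath
        }
        StringBuilder sb = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = r.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // "config.txt" is a hypothetical resource name; package it inside
        // your jar (or on the classpath) rather than reading it as a File.
        String text = readResource("config.txt");
        System.out.println(text == null ? "resource not found" : text);
    }
}
```

The point is that the resource is resolved through the classloader, so it works whether the file sits in a directory on the classpath or inside the jar itself.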