Long Time for execution of select query
Hi,
I have a select query
select * from Table where Time1 and time 2;
The table has many more columns than rows, so the query is taking a long time to execute.
Is there any way we can reduce the time taken by the query?
Thanks
Jit
Message was edited by:
user637843
select * from Table where Time1 and time 2;
I doubt this query runs for a long time. More or less 1 millisecond: the time it takes Oracle to check the query syntax and return an error (the WHERE clause as written is not valid SQL).
Nicolas.
Similar Messages
-
We are running a report and it is taking a long time to execute. What steps
We are running a report and it is taking a long time to execute. What steps should we take to reduce the execution time?
Hi ,
Performance can be improved in many ways.
First, try to select based on the key fields if it is a very large table;
if not, create a secondary index for the selection.
Don't perform SELECTs inside a loop; use FOR ALL ENTRIES IN instead.
Try to perform many operations in one loop rather than running several loops over the same internal table.
All of the above and many more steps can be implemented to improve performance.
We would need to look at your code to see how it can be improved in your case.
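The advice above about avoiding SELECTs inside a loop can be sketched in Python with SQLite (the table and data here are made up for illustration): one batched query, read into a lookup dictionary, replaces N per-row queries, in the spirit of FOR ALL ENTRIES.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE materials (matnr TEXT PRIMARY KEY, descr TEXT)")
con.executemany("INSERT INTO materials VALUES (?, ?)",
                [("M1", "bolt"), ("M2", "nut"), ("M3", "washer")])

wanted = ["M1", "M3"]

# Anti-pattern: one SELECT per loop iteration (like SELECT SINGLE in a LOOP).
slow = [con.execute("SELECT descr FROM materials WHERE matnr = ?", (m,)).fetchone()[0]
        for m in wanted]

# Better: one batched SELECT (the spirit of FOR ALL ENTRIES), then dict lookups.
placeholders = ",".join("?" * len(wanted))
rows = con.execute(f"SELECT matnr, descr FROM materials WHERE matnr IN ({placeholders})",
                   wanted).fetchall()
lookup = dict(rows)
fast = [lookup[m] for m in wanted]

assert slow == fast == ["bolt", "washer"]
```

Both approaches return the same rows; the batched version issues one round trip to the database instead of one per key.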
Regards,
Vivek Shah -
How can I get the elapsed time for execution of a query for a session
Hi ,
How can I get the elapsed time for execution of a query for a session?
Example: I have a report based on a procedure; when the user executes it, it takes, say, 3 minutes to return rows.
Is there any way to capture the session info for this particular execution of the query, along with its elapsed execution time?
Thanks in advance.
Hi
You can use the dbms_utility.get_time function (it returns a BINARY_INTEGER value).
1/ Initialize your begin time:
v_beginTime := dbms_utility.get_time;
2/ Run your procedure...
3/ Get the end time with:
v_endTime := dbms_utility.get_time;
4/ Then calculate the elapsed time by difference:
v_elapsTime := v_endTime - v_beginTime;
This gives you the elapsed time in hundredths of a second...
Then you can format your result to print the time correctly.
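As a rough Python analogy of the four steps above (time.monotonic() standing in for dbms_utility.get_time, scaled to hundredths of a second; the names mirror the PL/SQL variables):

```python
import time

def get_time():
    """Stand-in for dbms_utility.get_time: a monotonic clock in 1/100 s."""
    return int(time.monotonic() * 100)

v_begin_time = get_time()                 # 1/ initialize your begin time
time.sleep(0.05)                          # 2/ run your procedure (placeholder)
v_end_time = get_time()                   # 3/ get the end time
v_elaps_time = v_end_time - v_begin_time  # 4/ elapsed time by difference, in 1/100 s
print(f"elapsed: {v_elaps_time / 100.0:.2f} seconds")
```

As in PL/SQL, the difference is in hundredths of a second, so divide by 100 for seconds.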
Hope it will help you.
AL -
Sharepoint Designer workflow takes long time for execution of action
Hi All ,
I have created a declarative workflow using SharePoint Designer 2010. It executes successfully, but takes a lot of time to do so.
Below are the details:
The workflow contains only one activity, "Assign Task to User", and the workflow starts automatically after a document is uploaded.
The workflow takes 10 minutes to create the task for the user, 10 minutes to claim the task, and 10 minutes to execute if any action (Approve or Reject) is taken on the task.
There are no errors related to the workflow in the log file or event log.
Options tried:
1. I tried the options suggested in this article (http://www.codeproject.com/Articles/251828/How-to-Improve-Workflow-Performance-in-SharePoint#_rating), but no luck.
2. Reduced the interval of the workflow timer job from 5 minutes to 1. Still no luck.
Any thoughts regarding this would be appreciated.
ragava_28
Hi Thuan,
I have similar issue posted here
http://social.msdn.microsoft.com/Forums/sharepoint/en-US/82410142-31bc-43a2-b8bb-782c99e082d3/designer-workflow-with-takes-time-to-execute?forum=sharepointcustomizationprevious
Regards,
SPGeek03 -
Long time for execution for scheduled CIF background jobs
Hi,
we have scheduled the CIF background job to run daily around 10:30 PST.
There is a large variation in the time required to execute this job.
On 26 Dec it took approximately 48,000 seconds, while the regular average is only 120 seconds.
Today, although more than 6,000 seconds have passed, the job is still in the ACTIVE stage.
Does anyone know the reason for such long delays in these jobs?
How can I reduce its execution time (in cases like this, roughly once a week)?
rgds/Jay
Hi Jay,
A few obvious things to look for:
1. Multiple CIF activation jobs running at the same time
2. Large change in the master data, eg new plant, new Material Masters, new customers, etc etc.
3. Conflicts with other non CIF programs that may be going after the same data
4. Communication degradation between the OLTP and SCM clients
Normally you refer such questions to someone on your Basis team, or perhaps your DBA. They can turn on tracing tools that can track the changes in your environment that may be contributing to the changes in run time.
Regards,
DB49 -
Every 3rd data package taking long time for execution
Hi Everyone
We are facing a strange situation. Our scenario involves doing a full load from DSO to CUBE.
Start routines are not very database intensive and care has been taken to write them in a optimized way.
But strangely every 3rd data package is taking exceptionally longer time than other data packages.
a) DTP is having 3 parallal processes.
b)time spent in extraction , rule, and updation is constant for every data package.
c) start routine time is larger for every 3rd data package and keeps increasing, e.g. 5 mins, 10 mins, 24 mins, 33 mins; it increases with each 3rd package.
I tried to analyze the data that was taking so much time but found no difference between the data in the normal and the long-running data packages (i.e. there was no logical difference in the data that would make the start routine behave like this).
I was wondering what the possible reasons for this could be; maybe some external system factors are responsible. Any help in this regard would be highly appreciated.
Hi Hemanth,
In your start routine, are you by any chance adding or multiplying the number of records in the source_package? Something like: copy the source package into an internal table, add records to the internal table, and then copy it back to the source package? If logic of this sort is in your start routine, you need to refresh your internal table; otherwise the internal table's records go on increasing with every data package, so the processing time increases as the load progresses. This is one common mistake I have seen. Please check your code for something like that and refresh the internal tables. See if this makes any difference.
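The growth pattern described above can be sketched in Python (hypothetical names; a list stands in for the ABAP internal table): without a refresh, each data package carries all prior rows, so the per-package work grows steadily.

```python
def process_packages_buggy(packages):
    """Work buffer is never cleared -> each package reprocesses all prior rows."""
    work = []   # plays the role of the internal table
    sizes = []
    for pkg in packages:
        work.extend(pkg)         # copy source package into internal table
        sizes.append(len(work))  # rows actually processed for this package
    return sizes

def process_packages_fixed(packages):
    """REFRESH the work buffer at the start of every package."""
    sizes = []
    for pkg in packages:
        work = []                # REFRESH: start from an empty table
        work.extend(pkg)
        sizes.append(len(work))
    return sizes

packages = [[0] * 5 for _ in range(4)]   # four packages of 5 rows each
print(process_packages_buggy(packages))  # [5, 10, 15, 20] - grows every package
print(process_packages_fixed(packages))  # [5, 5, 5, 5]    - constant
```

The buggy variant shows exactly the "keeps on increasing" behavior reported for every repeated package.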
Thanks and Regards
Subray Hegde -
ME2O taking more time for execution
It has sometimes been found that ME2O takes a very long time to execute. Please refer to the selection in the screenshot below.
If you provide the component number, the search time is a little less, but sometimes the execution still takes more than the expected time. This is because the SAP standard program searches all the deliveries, even those that are completed.
There is SAP Note 1815460 - "ME2O: Selection of delivery very slow". It helps improve the execution time a lot.
Regards,
Krishnendu.
Thanks for sharing this information,
-
You are running a report and it is taking a long time to execute.
You are running a report and it is taking a long time to execute. What steps will you take to reduce the execution time?
Please explain clearly.
Avoid loops inside loops.
Avoid select inside loops.
Select only the data that is required instead of using SELECT *.
Select the fields in the sequence in which they are present in the database, and also specify the fields in the WHERE clause in the same sequence.
When you are using FOR ALL ENTRIES in a SELECT statement, check that the internal table you are referring to is not initial.
Replace SELECT ... ENDSELECT with SELECT ... INTO TABLE.
Avoid SELECT SINGLE inside a loop; instead, select all the data before the loop and read that internal table inside the loop using BINARY SEARCH.
Sort the internal tables wherever necessary. -
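The last two tips above (select before the loop, sort, then read the internal table with a binary search) can be sketched in Python with illustrative data; `bisect` plays the role of READ ... BINARY SEARCH on a sorted table.

```python
import bisect

# SORT itab BY key: sort once, then every lookup is O(log n) instead of O(n).
table = sorted([(30, "C"), (10, "A"), (20, "B")])
keys = [k for k, _ in table]

def read_binary_search(key):
    """READ TABLE itab WITH KEY ... BINARY SEARCH, via bisect."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return table[i][1]
    return None  # corresponds to sy-subrc <> 0

assert read_binary_search(20) == "B"
assert read_binary_search(25) is None
```

Note that the binary search only returns correct results on a sorted table, which is why the sort must happen before the loop.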
Query taking long time for EXTRACTING the data more than 24 hours
Hi ,
The query is taking a long time (more than 24 hours) to extract the data. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please advise.
SQL> explain plan for
select a.account_id, round(a.account_balance,2) account_balance,
       nvl(ah.invoice_id,ah.adjustment_id) transaction_id,
       to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
       to_char(nvl(i.payment_due_date,
                   to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
       ah.current_balance-ah.previous_balance amount,
       decode(ah.invoice_id,null,'A','I') transaction_type
from account a, account_history ah, invoice i
where a.account_id=ah.account_id
and a.account_type_id=1000002
and round(a.account_balance,2) > 0
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.current_balance > ah.previous_balance
and ah.invoice_id=i.invoice_id(+)
and a.account_balance > 0
order by a.account_id, ah.effective_start_date desc;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
| 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
|* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
|* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
|* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
|* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
| 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
Predicate Information (identified by operation id):
2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
ROUND("A"."ACCOUNT_BALANCE",2)>0)
4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
22 rows selected.
Index Details:
SQL> select index_owner, index_name, column_name, table_name
     from dba_ind_columns
     where table_name in ('INVOICE', 'ACCOUNT', 'ACCOUNT_HISTORY')
     order by 4;
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
32 rows selected.
Regards,
Bathula
Oracle-DBA
I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
Also, you do not need two lines for these conditions:
and round(a.account_balance, 2) > 0
AND a.account_balance > 0
You can just use: and a.account_balance >= 0.005
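A quick sanity check of that rewrite, assuming Oracle's ROUND behavior (half away from zero) on two decimal places; the Python `decimal` module is used here to mimic NUMBER arithmetic, and the sample values are illustrative.

```python
from decimal import Decimal, ROUND_HALF_UP

def oracle_round(x, places=2):
    """Mimic Oracle ROUND (half away from zero) using exact decimal arithmetic."""
    q = Decimal(10) ** -places
    return Decimal(x).quantize(q, rounding=ROUND_HALF_UP)

def original_predicate(x):
    # round(a.account_balance, 2) > 0 AND a.account_balance > 0
    return oracle_round(x) > 0 and Decimal(x) > 0

def rewritten_predicate(x):
    # a.account_balance >= 0.005
    return Decimal(x) >= Decimal("0.005")

for s in ["0", "0.001", "0.004", "0.0049", "0.005", "0.01", "1.23", "-0.02"]:
    assert original_predicate(s) == rewritten_predicate(s), s
print("predicates agree on all samples")
```

The crossover point is 0.005: anything below it rounds to 0.00 and fails both forms, anything at or above it passes both.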
So the formatted query is:
select a.account_id,
round(a.account_balance, 2) account_balance,
nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
'DD-MON-YYYY') due_date,
ah.current_balance - ah.previous_balance amount,
decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
and a.account_type_id = 1000002
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id = i.invoice_id(+)
AND a.account_balance >= .005
order by a.account_id, ah.effective_start_date desc;
You will probably want to select:
1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
Try the query above after creating the following composite indexes. The order of the columns is important:
create index account_composite_i on account(account_type_id, account_balance, account_id);
create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);
All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
Try the query after creating these indexes.
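As an illustrative sketch of why such a composite index can satisfy a query without touching the table (SQLite here, not Oracle, and a trimmed-down version of the account table): when every column the query needs is in the index, the optimizer reports a covering-index access path.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account "
            "(account_id INTEGER, account_type_id INTEGER, account_balance REAL)")
con.execute("""CREATE INDEX account_composite_i
               ON account (account_type_id, account_balance, account_id)""")

# The query touches only indexed columns, so the plan never reads the base table.
plan = con.execute("""EXPLAIN QUERY PLAN
    SELECT account_id, account_balance
    FROM account
    WHERE account_type_id = 1000002 AND account_balance > 0""").fetchall()
detail = " ".join(row[3] for row in plan)
print(detail)  # e.g. SEARCH account USING COVERING INDEX account_composite_i (...)
```

The same idea underlies the Oracle suggestion above: leading columns serve the WHERE clause, trailing columns cover the SELECT list.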
A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;
alter session set hash_area_size = 2147483647; -
Which fields to validate/make mandatory in order to avoid a long program runtime?
Below is my selection screen. I would like to validate specific fields and make them mandatory as well.
Could you please suggest which fields I should validate/make mandatory in order to avoid a long runtime for the program?
*Program selections
SELECTION-SCREEN : BEGIN OF BLOCK b1 WITH FRAME TITLE text-h01.
SELECT-OPTIONS : o_as4loc FOR v_as4local,
o_ddlang FOR ddlanguage,
o_vkorg FOR vkorg MEMORY ID vko,
o_auart FOR auart MEMORY ID aat,
o_fdnam FOR fdnam,
o_spart FOR spart MEMORY ID spa,
o_vbeln FOR vbeln MEMORY ID aun,
o_posnr FOR posnr,
o_matnr FOR matnr MEMORY ID mat,
o_pltyp FOR pltyp MEMORY ID vpl,
o_j_3ar FOR j_3arqda,
o_kzr00 FOR kbetr, " -ZR00
o_kalsm FOR kalsmasd,
o_kmwst FOR kbetr, " -MWST
o_kbmen FOR kbmeng,
o_land1 FOR land1 MEMORY ID lnd.
SELECTION-SCREEN : END OF BLOCK b1.
Thanks in advance.
Hi,
Make key fields as well as fields on which there in an index in the database table mandatory.
Regards,
Abhijit G. Borkar -
Procedure is taking more time for execution
hi,
when I try to execute the procedure below, it takes a long time to run.
Can you please suggest possible ways to tune the query?
PROCEDURE sp_sel_cntr_ri_fact (
po_cntr_ri_fact_cursor OUT t_cursor
)
IS
BEGIN
OPEN po_cntr_ri_fact_cursor FOR
SELECT c_RI_fAt_id, c_RI_fAt_code, c_RI_fAt_nme,
case when exists (SELECT 'x' FROM A_CRF_PARAM_CALIB t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_EMPI_ERV_CALIB_DETAIL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_IC_CNTRY_IC_CRF_MPG_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_IC_CRF_CNTRYIDX_MPG_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.x_axis_c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.y_axis_c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_PAR_MARO_GAMMA_PRIME_CALIB t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM D_ANALYSIS_FAT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM D_CALIB_CNTRY_RI_FATOR t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_BUSI_PORT_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_CNTRY_LOSS_DIST_RSLT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_CNTRY_LOSS_RSLT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_CRF_BUS_PORTFOL_CRITERIA t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_CRF_CORR_RSLT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_HYPO_PORTF_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
else
'No'
end used_analysis_ind,
creation_date, datetime_stamp, user_id
FROM A_IC_CNTR_RI_FAT
ORDER BY c_RI_fAt_id_nme DESC;
END sp_sel_cntr_ri_fact;
[When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=1812597]
-
Taking Long Time To Show Printer Selection Window
Hi experts,
When users take a printout through SAP, it takes a long time to show the printer selection window (the local printer selection window).
present i am using sap ehp7 on sybase in windows.
Thanks in advance.
Hi,
Have you tried reinstalling the printer, with patch updates for the printer as well as for Windows? If possible, try to uninstall the printer and check again after reselecting it.
In addition, could you please share the SAP GUI version on the system and confirm whether you are getting this issue on every system (OS)?
Regards,
Gaurav -
Bex Reports takes long time for filtering
Hi,
We went live last December, and already our inventory cube contains some 15 million records and our sales cube 12 million records.
Is there a specific limit to the number of records? Filtering in the inventory or sales reports takes a very long time.
Is there an alternative, or should we delete some of the data from the cube?
Filtering on any value takes longer than running the query itself.
Pls help...
Regards,
viren.
Hi Viren,
A cube can perform well even at 100 million records with some performance tuning, so I am surprised it is taking so long for your cube with just 10-15 million records.
Do a performance analysis and check whether aggregates will be helpful or not.
Check the below link for how to do a performance analysis.
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/d9fd84ad-0701-0010-d9a5-ba726caa585d
Hope it helps.
Thx,
Soumya -
Hi Experts,
One of my data targets (the data comes from another ODS) is taking a long time to load. It normally takes under 10 minutes only, but today it has been running for the last 40 minutes...
In Status Tab it showing....
Job termination in source system
Diagnosis
The background job for data selection in the source system has been terminated. It is very likely that a short dump has been logged in the source system
Procedure
Read the job log in the source system. Additional information is displayed here.
To access the job log, use the monitor wizard (step-by-step analysis) or the menu path Environment -> Job Overview -> In Source System
Error correction:
Follow the instructions in the job log messages.
Can anyone please solve my problem.
Thanks in advance
David
Hi Experts,
Thanks for your answers. My load fails when the data goes from one ODS to another ODS. Find the job log below:
Job started
Step 001 started (program SBIE0001, variant &0000000007169, user ID RCREMOTE)
Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)
DATASOURCE = 8ZPP_OP3
Current Values for Selected Profile Parameters *
abap/heap_area_nondia......... 20006838008 *
abap/heap_area_total.......... 20006838008 *
abap/heaplimit................ 83886080 *
zcsa/installed_languages...... ED *
zcsa/system_language.......... E *
ztta/max_memreq_MB............ 2047 *
ztta/roll_area................ 5000000 *
ztta/roll_extension........... 4294967295 *
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 1 in task 0002 (1 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:31:55, End = 06.09.2010 01:31:55
Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 2 in task 0004 (2 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:32:00, End = 06.09.2010 01:32:00
Asynchronous transmission of info IDoc 4 in task 0005 (2 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 3 in task 0006 (3 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:32:04, End = 06.09.2010 01:32:04
Asynchronous transmission of info IDoc 5 in task 0007 (3 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 4 in task 0008 (4 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:32:08, End = 06.09.2010 01:32:08
Asynchronous transmission of info IDoc 6 in task 0009 (4 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 5 in task 0010 (5 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:32:11, End = 06.09.2010 01:32:11
Asynchronous transmission of info IDoc 7 in task 0011 (5 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Asynchronous send of data package 13 in task 0026 (6 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:01, ARFCSTATE =
tRFC: Start = 06.09.2010 01:32:44, End = 06.09.2010 01:32:45
tRFC: Data Package = 8, TID = 0AEB465C00AE4C847CEA0070, Duration = 00:00:17,
tRFC: Start = 06.09.2010 01:32:29, End = 06.09.2010 01:32:46
Asynchronous transmission of info IDoc 15 in task 0027 (5 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 14 in task 0028 (6 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:32:48, End = 06.09.2010 01:32:48
tRFC: Data Package = 9, TID = 0AEB465C00AE4C847CEF0071, Duration = 00:00:18,
tRFC: Start = 06.09.2010 01:32:33, End = 06.09.2010 01:32:51
Asynchronous transmission of info IDoc 16 in task 0029 (5 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 15 in task 0030 (6 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:32:52, End = 06.09.2010 01:32:52
Asynchronous transmission of info IDoc 17 in task 0031 (6 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 16 in task 0032 (7 parallel tasks)
tRFC: Data Package = 10, TID = 0AEB465C00684C847CF30070, Duration = 00:00:18,
tRFC: Start = 06.09.2010 01:32:37, End = 06.09.2010 01:32:55
tRFC: Data Package = 11, TID = 0AEB465C02E14C847CF70083, Duration = 00:00:17,
tRFC: Start = 06.09.2010 01:32:42, End = 06.09.2010 01:32:59
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:32:56, End = 06.09.2010 01:32:56
Asynchronous transmission of info IDoc 18 in task 0033 (5 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 17 in task 0034 (6 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:33:00, End = 06.09.2010 01:33:00
tRFC: Data Package = 12, TID = 0AEB465C00AE4C847CFB0072, Duration = 00:00:16,
tRFC: Start = 06.09.2010 01:32:46, End = 06.09.2010 01:33:02
Asynchronous transmission of info IDoc 19 in task 0035 (5 parallel tasks)
Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
Result of customer enhancement: 100,000 records
Asynchronous send of data package 18 in task 0036 (6 parallel tasks)
tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
tRFC: Start = 06.09.2010 01:33:04, End = 06.09.2010 01:33:04
Asynchronous transmission of info IDoc 20 in task 0037 (6 parallel tasks)
ABAP/4 processor: DBIF_RSQL_SQL_ERROR
Job cancelled
Thanks
David
Edited by: david Rathod on Sep 6, 2010 12:04 PM -
Impdp taking long time for only few MBs data...
Hi All,
I have a query about impdp. I have an expdp dump file of size 47 MB. When I restore this dump using impdp it takes a long time. The initial table data loads finish very fast, but the later ALTER FUNCTION/PROCEDURE/VIEW steps take a lot of time, almost 4 to 5 hours.
I have no idea why it is taking so long. Earlier I could see that one DB link had failed with a "TNS name could not be resolved" error, so I created the DB link as well before running impdp, but got the same result. Can anyone suggest what could cause it to take so long for only 47 MB of data?
Note: both the expdp and impdp database versions are 11.2.0.3.0. If I import the same dump file into 11.2.0.1.0, it finishes in a few minutes.
Thanks...
Also read:
Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [ID 453895.1]
DataPump Import (IMPDP) is Very Slow at Object/System/Role Grants, Default Roles [ID 1267951.1]