Query on long running client import process
Hi Gurus,
I have a few queries regarding the parameter PHYS_MEMSIZE. Let me briefly describe the SAP server configuration before I get into the actual problem.
We are running ECC 6.0 on Windows 2003 SP2 (64-bit); the database is SQL Server 2005, with 16 GB RAM and a 22 GB page file.
As per the Zero Administration Memory Management guide, the rule of thumb is SAP/DB = 70/30: set the parameter PHYS_MEMSIZE to approximately 70% of the installed main memory. Should I change the parameter as described in the guide? If so, what precautions should we take when changing memory parameters? Are there any major dependencies or known issues associated with this parameter?
The PHYS_MEMSIZE parameter is currently set to 512 MB.
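As a sanity check, the 70/30 rule of thumb above is simple arithmetic. A minimal sketch, assuming the 16 GB of installed RAM described in this post (the final value should still come from the Zero Administration guide and your own testing):

```python
# Illustrative only: compute PHYS_MEMSIZE as ~70% of installed RAM,
# per the SAP/DB = 70/30 rule of thumb quoted in this thread.
def recommended_phys_memsize(installed_ram_mb: int, sap_share: float = 0.70) -> int:
    """Return a PHYS_MEMSIZE value in MB as roughly 70% of installed RAM."""
    return int(installed_ram_mb * sap_share)

ram_mb = 16 * 1024                       # 16 GB installed RAM
print(recommended_phys_memsize(ram_mb))  # 11468 MB, i.e. roughly 11 GB
```

This matches the 11-12 GB figure discussed later in the thread.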
A few days ago we had to perform a client copy using the export/import method. The export went well; however, the import took almost 15 hours to complete. Any clues as to the possible reasons for a long-running client copy in a SQL Server environment? I suspect the cause is PHYS_MEMSIZE being configured at 512 MB, which appears very low.
Please share your ideas and suggestions if anyone has experienced this sort of issue; we are going to perform another client copy in the next 10 days, so I really need your input on this.
Thanks & Regards,
Vinod
Edited by: vinod kumar on Dec 5, 2009 9:24 AM
Hi Nagendra,
Thanks for your quick response.
Our production environment runs in an active/active cluster: one central instance and one dialog instance. The database size is 116 GB with one data file; the log file is 4.5 GB. Both are shared in the cluster.
As you suggest, I would need to modify PHYS_MEMSIZE to 11 or 12 GB (70% of physical RAM). What precautions should I take? I see there are many dependencies associated with this parameter, as per its documentation.
The standard values of the following parameters are calculated according to PHYS_MEMSIZE:
em/initial_size_MB = PHYS_MEMSIZE (extension by PHYS_MEMSIZE / 2)
rdisp/ROLL_SHM
rdisp/ROLL_MAXFS
rdisp/PG_SHM
rdisp/PG_MAXFS
Should I make the changes on both the central and the dialog instance? Please clarify. Also, are there any other parameters I should adjust to speed up the client copy process?
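The dependency note above can be put into numbers: per the quoted documentation, em/initial_size_MB defaults to PHYS_MEMSIZE, and Extended Memory is extended in steps of PHYS_MEMSIZE / 2. A purely illustrative sketch for a 16 GB host (not a tuning recommendation):

```python
# Illustrative arithmetic behind the quoted parameter documentation.
phys_memsize_mb = 11468                  # ~70% of 16 GB, as discussed above
em_initial_size_mb = phys_memsize_mb     # em/initial_size_MB = PHYS_MEMSIZE
em_extension_mb = phys_memsize_mb // 2   # "extension by PHYS_MEMSIZE / 2"

print(em_initial_size_mb, em_extension_mb)  # 11468 5734
```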
Many Thanks...
Thanks & Regards,
Vinod
Similar Messages
-
How to handle long-running alerts in process chains... Urgent!!!
Hi All,
I am trying to find ways of handling long-running alerts in process chains.
I need to send out mails if a process runs for longer than a threshold value.
I did check this post:
Re: email notification in process chain
I would appreciate it if anyone could forward the code from this post or suggest other ways of handling these long-running alerts.
My email id is [email protected]
Thanks and Regards
Pavana
Hi Ravi,
Thanks for your reply. I do know that I will need a custom program.
I need to send out mails when there is a failure and also when processes run longer than a threshold.
Please forward any code you have for such a custom program.
Expecting some help from the ever giving SDN forum.
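The threshold check described above is, at its core, a comparison of each process's elapsed runtime against a limit. A minimal sketch of that selection logic (plain Python, not ABAP; the actual mail sending and the real BW runtime tables are left out, and all process names are invented):

```python
from datetime import datetime, timedelta

def overdue_processes(processes, now, threshold):
    """processes: iterable of (name, start_time) pairs; return the names
    of processes that have been running longer than `threshold` at `now`."""
    return [name for name, started in processes if now - started > threshold]

# Toy data standing in for the process-chain runtime info:
now = datetime(2009, 12, 5, 12, 0)
procs = [
    ("LOAD_SALES", datetime(2009, 12, 5, 9, 0)),    # running 3 hours
    ("ROLLUP_CUBE", datetime(2009, 12, 5, 11, 45)), # running 15 minutes
]
late = overdue_processes(procs, now, timedelta(hours=1))
print(late)  # ['LOAD_SALES'] -> these would get the alert mail
```

In a real BW system this logic would live in a custom ABAP program scheduled alongside the process chain.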
Thanks and Regards
Pavana
Message was edited by:
pav ana -
Hello all
How can I speed up the deletion of a big productive client?
I use:
1) No archive log mode
2) Parallel processing
I have rerun the deletion 4 times. In total, about 500 million entries have been removed out of 1,200 billion.
There are very few tables left, but I think they are large tables.
Please,see part of the client deletion information.
Current Action:
Copy/Delete Tables
Process  Server  Table         Time
                 MVKE          15:46:36
00002            CDCLS         16:28:49
00004            BDCP2         16:25:52
00005            MARC          15:48:07
00007            CKMLPR        16:09:21
00008            MSTA          15:46:41
00009            FAGLFLEXA     15:36:21
00010            FAGL_SPLINFO  15:33:26
00011            MOFF          15:46:17
00014            MBEWH         16:30:05
00016            CDHDR         16:28:52
Statistics for this Run
- No. of tables: 65979 of 66248
- Number of exceptions: 3
- Deleted lines: 62747951
SM66:
SAPLSCCR Direct Read MSTA
SAPLSCCR Direct Read CDHDR
SAPLSCCR Direct Read CKMLPR
SAPLSCCR Direct Read MOFF
SAPLSCCR Direct Read MBEWH
SAPLSCCR Direct Read BDCP2
SAPLSCCR Direct Read MARC
SAPLSCCR Direct Read FAGLFLEXA
SAPLSCCR Direct Read CDCLS
SAPLSCCR Direct Read FAGL_SPLIN
SAPLSCCR Direct Read MVKE
Only direct reads, and nothing else.
Oh, truncate? Please tell me more about this...
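On the truncate remark above: client deletion removes rows WHERE MANDT = <client>, which is row-by-row work, whereas emptying a whole table is far cheaper, and is only safe when every row in the table belongs to the client being deleted. A small sqlite3 illustration of the two shapes of delete (table and column names are invented for the example; this is not the SAP implementation):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cdcls (mandt TEXT, doc_id INTEGER)")
con.executemany("INSERT INTO cdcls VALUES (?, ?)",
                [("100", i) for i in range(1000)] + [("200", i) for i in range(10)])

# Selective, row-by-row shape: only client 100 goes away.
con.execute("DELETE FROM cdcls WHERE mandt = '100'")
remaining = con.execute("SELECT COUNT(*) FROM cdcls").fetchone()[0]
print(remaining)  # 10 -> the rows of client 200 survive

# Truncate shape (only safe when the whole table belongs to one client):
con.execute("DELETE FROM cdcls")  # an unqualified delete can be optimized away
print(con.execute("SELECT COUNT(*) FROM cdcls").fetchone()[0])  # 0
```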
Long story. I just need a copy of the productive system without application data, only customizing.
I created a new client in the system and copied customizing from the PRD client (SCCL), but I also need to delete the productive client from the system. Only the client with customizing should remain. -
I am dealing with an issue that I believe I have boiled down to a Forms issue. One of my developers has a form that takes 40+ minutes to run a fairly complicated query. At first I believed it was a query or development issue; however, the same query runs from Toad or SQL*Plus in under a few seconds. I have even run the query from SQL*Plus on the Forms server with the same speedy performance. The only environment in which this query takes almost an hour is when it is run from her .FMX. I am so at a loss right now as to what I could do to fix this. Has anyone experienced something of this nature?
Additionally, the query returns zero results, and this is an expected outcome, so I don't believe it has to do with Toad buffering or SQL*Plus returning rows as they are fetched. Anyway, I'm at a loss, and any help whatsoever will be greatly appreciated.
To show what can go wrong, look at this simple example.
HR@> CREATE TABLE a (ID VARCHAR2(10) PRIMARY KEY);
Table created.
HR@>
HR@> insert into a select rownum from dual connect by rownum <= 1e6;
1000000 rows created.
HR@>
HR@> set timing on
HR@>
HR@> select * from a where id = 100;
ID
100
Elapsed: 00:00:00.34
HR@>
HR@> select * from a where id = '100';
ID
100
Elapsed: 00:00:00.00
HR@> explain plan for
2* select * from a where id = 100
HR@>
HR@> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2248738933
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 7 | 522 (12)| 00:00:07 |
|* 1 | TABLE ACCESS FULL| A | 1 | 7 | 522 (12)| 00:00:07 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - filter(TO_NUMBER("ID")=100)
Because of the implicit conversion (as the explain plan shows), select * from a where id = 100 takes longer than select * from a where id = '100'. -
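A language-agnostic way to see why the implicit conversion in the previous example hurts: an index behaves like a hash of the stored (string) values, and wrapping the column in TO_NUMBER() defeats it, degrading the lookup to a full scan. A Python analogy (illustrative only; the costs, not the results, are what differ):

```python
# "Table" of string IDs, plus an "index" keyed by the stored string values.
ids = [str(n) for n in range(1, 100_001)]
index = {v: v for v in ids}

# id = '100' -> the predicate matches the stored type: index lookup, O(1).
hit = index.get("100")

# id = 100 -> TO_NUMBER(id) = 100 must convert and test every row, O(n).
scan_hit = next(v for v in ids if int(v) == 100)

print(hit, scan_hit)  # 100 100 -> same answer either way; only the cost differs
```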
Error in client import process
Hi Team,
Yesterday we performed a client copy from the quality to the development server using the export/import method (using the SAP_ALL profile). We created two transport requests: one for client-specific data and another for client-specific texts.
The client-specific transport completed successfully, but the transport request for texts (QASKX00080) ended with return code 8. The error log gives a list of objects for which the original object cannot be replaced. The object list is below; please suggest what action needs to be taken.
Objects list:
R3TRFORMZ110_CHECKFAX original object cannot be replaced
R3TRFORMZ110_IN_AVIS original object cannot be replaced
R3TRFORMZ110_IN_CHECK original object cannot be replaced
R3TRFORMZ140_ACC_STAT_01 original object cannot be replaced
R3TRFORMZ140_ACC_STAT_02 original object cannot be replaced
R3TRFORMZ140_PAY_CONF_01 original object cannot be replaced
R3TRFORMZ140_PAY_CONF_02 original object cannot be replaced
R3TRFORMZ140_PAY_CONF_IO original object cannot be replaced
R3TRFORMZ140_STNDRD_LTTR original object cannot be replaced
R3TRFORMZ141_VENDOR_BACS original object cannot be replaced
R3TRFORMZ150_DUNN_01 original object cannot be replaced
R3TRFORMZ150_DUNN_02 original object cannot be replaced
R3TRFORMZ150_DUNN_03 original object cannot be replaced
R3TRFORMZ150_DUNN_04 original object cannot be replaced
R3TRFORMZ150_DUNN_05 original object cannot be replaced
R3TRFORMZ150_DUNN_06 original object cannot be replaced
R3TRFORMZ150_DUNN_07 original object cannot be replaced
R3TRFORMZ150_DUNN_08 original object cannot be replaced
R3TRFORMZ150_DUNN_09 original object cannot be replaced
R3TRFORMZVINVOICE01 original object cannot be replaced
R3TRFORMZ_AUDDIS_LETTER original object cannot be replaced
R3TRFORMZ_BACS_EMAIL_LET original object cannot be replaced
R3TRFORMZ_CONFVENDOREMAI original object cannot be replaced
R3TRFORMZ_CUST_AGREEMENT original object cannot be replaced
R3TRFORMZ_CUST_VATNO_REQ original object cannot be replaced
R3TRFORMZ_DD_BULKCHNGLTR original object cannot be replaced
R3TRFORMZ_DD_REQUEST_NEW original object cannot be replaced
R3TRFORMZ_DD_SCHEDULE original object cannot be replaced
R3TRFORMZ_ECP_CHEQUE original object cannot be replaced
R3TRFORMZ_ECP_INVOICE original object cannot be replaced
R3TRFORMZ_INVOICE_LIST original object cannot be replaced
R3TRFORMZ_INVOICE_LISTS original object cannot be replaced
R3TRFORMZ_MEDRUCK_CONF original object cannot be replaced
R3TRFORMZ_MEDRUCK_EVAL original object cannot be replaced
R3TRFORMZ_MEDRUCK_PO original object cannot be replaced
R3TRFORMZ_MEDRUCK_RFQ original object cannot be replaced
R3TRFORMZ_MEDRUCK_TENDER original object cannot be replaced
R3TRFORMZ_MEDRUCK_VATA original object cannot be replaced
R3TRFORMZ_MEDRUCK_VATB original object cannot be replaced
R3TRFORMZ_REJ_DD_LETTER original object cannot be replaced
R3TRFORMZ_RVINVOICE01 original object cannot be replaced
R3TRFORMZ_RVINVOICE02 original object cannot be replaced
Forms:
Object FORM Z110_CHECKFAX has not yet been imported successfully
Object FORM Z110_IN_AVIS has not yet been imported successfully
Object FORM Z110_IN_CHECK has not yet been imported successfully
Object FORM Z140_ACC_STAT_01 has not yet been imported successfully
Object FORM Z140_ACC_STAT_02 has not yet been imported successfully
Object FORM Z140_PAY_CONF_01 has not yet been imported successfully
Object FORM Z140_PAY_CONF_02 has not yet been imported successfully
Object FORM Z140_PAY_CONF_IO has not yet been imported successfully
Object FORM Z140_STNDRD_LTTR has not yet been imported successfully
Object FORM Z141_VENDOR_BACS has not yet been imported successfully
Object FORM Z150_DUNN_01 has not yet been imported successfully
Object FORM Z150_DUNN_02 has not yet been imported successfully
Object FORM Z150_DUNN_03 has not yet been imported successfully
Object FORM Z150_DUNN_04 has not yet been imported successfully
Object FORM Z150_DUNN_05 has not yet been imported successfully
Object FORM Z150_DUNN_06 has not yet been imported successfully
Object FORM Z150_DUNN_07 has not yet been imported successfully
Object FORM Z150_DUNN_08 has not yet been imported successfully
Object FORM Z150_DUNN_09 has not yet been imported successfully
Object FORM ZVINVOICE01 has not yet been imported successfully
Object FORM Z_AUDDIS_LETTER has not yet been imported successfully
Object FORM Z_BACS_EMAIL_LET has not yet been imported successfully
Object FORM Z_CONFVENDOREMAI has not yet been imported successfully
Object FORM Z_CUST_AGREEMENT has not yet been imported successfully
Object FORM Z_CUST_VATNO_REQ has not yet been imported successfully
Object FORM Z_DD_BULKCHNGLTR has not yet been imported successfully
Object FORM Z_DD_REQUEST_NEW has not yet been imported successfully
Object FORM Z_DD_SCHEDULE has not yet been imported successfully
Object FORM Z_ECP_CHEQUE has not yet been imported successfully
Object FORM Z_ECP_INVOICE has not yet been imported successfully
Object FORM Z_INVOICE_LIST has not yet been imported successfully
Object FORM Z_INVOICE_LISTS has not yet been imported successfully
Object FORM Z_MEDRUCK_CONF has not yet been imported successfully
Object FORM Z_MEDRUCK_EVAL has not yet been imported successfully
Object FORM Z_MEDRUCK_PO has not yet been imported successfully
Object FORM Z_MEDRUCK_RFQ has not yet been imported successfully
Object FORM Z_MEDRUCK_TENDER has not yet been imported successfully
Thanks & Regards,
Vinod
Hi Mohit,
I checked the transport log in transaction STMS. There I could not find the reason why these objects failed to overwrite the original objects. As Sven and Markus said, and SAP help says the same: if you still want to import the object, repeat the import with Unconditional Mode 2.
Recently we performed a client copy from production to quality and the process went through without any errors. Yesterday we did a client copy from quality to development. We are curious: these objects were copied back from PRD to QAS without generating any problems, so why were these errors generated this time round when we copied the same objects from QAS to DEV?
The objects that caused the errors are all SAPscript forms and don't appear to have been copied back to DEV from QAS; all these objects are only ever changed on DEV and then transported to QAS and PRD, so they should already be identical to each other. We have carried out spot checks on a sample of the listed objects and can find no differences.
Thanks & Regards,
vinod -
Reg: Long-running BTC work process
Hi Expert,
Yesterday and today I checked SAP MMC; one background work process is still running (48 hours). How do I analyse why it has been running for such a long time? Please explain. In transaction SM50 the same BTC work process is also still shown as running.
BTC 3196 RUN YES 48:11:32 156101 SAPLTHFB, SAPLSKTM, SAPLOLEA, SAPMSSYD 001 BASIS
Please give a solution; I will award points.
Thanks,
Selva
See if this is helpful:
http://forums.sdn.sap.com/thread.jspa?threadID=1406031 -
I realise that I may need to take the details to a more specialised list, but to start with I thought I would ask a more general community.
I have a retail POS application that inserts transaction rows into a staging table. I then need an internal database process to import these transactions via some sort of PL/SQL or Java stored-procedure process. My question is: what would be the best way of having such a process run in the background (similar to DBMS_JOB), polling or being woken up by DBMS_ALERT to process a new row when it arrives?
Thanks for any thoughts
I'm not sure why you need a staging table at all. Why not have the POS application call a stored procedure that does whatever processing is necessary?
Assuming you do need a staging table, whether to use DBMS_JOB or DBMS_ALERT really depends on the requirements. DBMS_JOB would be ideal if you want to periodically process rows in batches, DBMS_ALERT if you want to process rows one at a time (in which case the whole staging table idea seems pointless) or based on some state being reached in the POS application (i.e. closing out the register).
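The DBMS_JOB-style option above can be sketched as a periodic polling pass over the staging table. The sketch below uses Python with sqlite3 purely for illustration (the table and column names, pos_staging and processed, are invented); in the real system this would be a PL/SQL procedure scheduled via DBMS_JOB:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pos_staging "
            "(id INTEGER PRIMARY KEY, amount REAL, processed INTEGER DEFAULT 0)")

def poll_once(con):
    """One polling pass: fetch unprocessed rows, handle them, mark them done."""
    rows = con.execute(
        "SELECT id, amount FROM pos_staging WHERE processed = 0").fetchall()
    for row_id, amount in rows:
        # ... import the transaction into the real tables here ...
        con.execute("UPDATE pos_staging SET processed = 1 WHERE id = ?", (row_id,))
    return len(rows)

con.execute("INSERT INTO pos_staging (amount) VALUES (9.99), (4.50)")
first = poll_once(con)
second = poll_once(con)
print(first, second)  # 2 0 -> two rows drained on the first pass, none on the next
```

DBMS_ALERT would replace the fixed polling interval with a wake-up signal, at the cost of the batching behaviour.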
Justin -
Long-running dialog work process in production system
Dear Friends,
In our production system one dialog work process has been running for more than 5 days. I cannot terminate or cancel the process.
I have tried all the ways to cancel the work process. I restarted the system (offline backup), but the work process still shows running status...
Kindly advice....
Thanks & Regards,
Sundar.C
In our production system one dialog work process has been running for more than 5 days. I cannot terminate or cancel the process.
I have tried all the ways to cancel the work process. I restarted the system (offline backup), but the work process still shows running status...
The maximum runtime of a dialog work process is controlled by the parameter rdisp/max_wprun_time; I am not sure how it is possible to have a dialog work process running for so many days. Secondly, when you restart the system, new PIDs are allocated, so it will not be the same dialog work process. Check SM50 and the trace information for more details and clues.
Upgrading Stellent 7.5 to OCS 10gR3: import process failing, help needed
Hi,
I am upgrading Stellent 7.5 to Oracle Content Server 10gR3. Here is what I have done:
1. Migrated all the configuration from Stellent to 10gR3
2. Migrated the Folders from Stellent to 10gR3
3. Migrated the content by creating an Archive and then importing the Archive in 10gR3.
I am seeing a lot of errors in the log file. The following are the errors I see:
1.
Could not send mail message from (null) with subject line: Content Release Notification. Could not get I/O for connection to: hpmail.rtp.ppdi.com java.net.ConnectException: Connection timed out
2.
Import error for archive 'ProductionContent' in collection 'prod_idc': Invalid Metadata for 'ID_000025'. Virtual folder does not exist.
3.
Import error for archive 'ProductionContent' in collection 'prod_idc': Content item 'ID_004118' was not successfully checked in. The primary file does not exist.
4.
Import error for archive 'ProductionContent' in collection 'prod_idc': Content item 'ID_004213' was not successfully checked in. IOException (System Error: /u01/app/oracle/prod/ucm/server/archives/productioncontent/09-dec-21_23.29.44_396/4/vault/dmc_unblinded_documents/4227 (No such file or directory)) java.io.FileNotFoundException: /u01/app/oracle/prod/ucm/server/archives/productioncontent/09-dec-21_23.29.44_396/4/vault/dmc_unblinded_documents/4227
5.
Import error for archive 'ProductionContent' in collection 'prod_idc': Content item 'ID_031414' with revision label '2' was not successfully checked in. The release date (11/4/08 9:12 AM) of the new revision is not later than the release date (11/4/08 9:12 AM) of the latest revision in the system.
6.
Import error for archive 'ProductionContent' in collection 'prod_idc': Invalid Metadata for 'ID_033551'. Item with name '07-0040_IC_Olive-View_UCLA_ERI_Cellulitis_2008-08-26.pdf' already exists in folder '/Contribution Folders/2007/07-0040/07-0040Site_Specific_Documents/07-0040Olive_View_UCLA_Medical_Center/07-0040Archive/07-0040Essential_Documents_ARC/07-0040Informed_Consent_ARC/'.
7.
Import error for archive 'ProductionContent' in collection 'prod_idc': Aborting. Too many errors.
QUESTIONS:
Is there a way to keep the import process running even if errors occur? It looks like the import process stops in the middle when there are too many errors.
How do I find the total number of folders and documents? I want to run the same query on Stellent 7.5 and on 10gR3 and compare the results, just to find out how much content was imported.
How do I run the import process over again? Half of the content was imported and the import process failed in the middle; when running it again, what settings do I need to provide to make sure no duplicates get created, etc.?
Any help is really appreciated.
Thanks
Hi
There are a couple of ways to get around the issues you are facing so that the import process is not interrupted. They are as follows:
1. Use the ArchiveReplicationException component. This will keep the import process running and make a log of the failed items, which can be used for assessing the success of the import and what needs to be redone. I would suggest this as the option for your case.
2. Set the archiver exception config variable to 9999 so that the archive process stops only after hitting the limit of 9999 errors.
I would suggest going for option 1, as that is a much more foolproof and methodical way of knowing which items failed during import.
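The ArchiveReplicationException approach recommended above follows a general batch-import pattern: attempt every item, collect the failures in a log, and report at the end instead of aborting on the first error. A minimal sketch of that pattern (the item names mirror the log excerpts above but are otherwise invented; this is not UCM code):

```python
def import_all(items, import_one):
    """Try every item; return (succeeded, failed) instead of aborting."""
    succeeded, failed = [], []
    for item in items:
        try:
            import_one(item)
            succeeded.append(item)
        except Exception as exc:
            failed.append((item, str(exc)))  # log the failure and continue
    return succeeded, failed

def import_one(item):
    # Stand-in for a single check-in; fails the way the log above does.
    if item.startswith("bad"):
        raise ValueError("primary file does not exist")

ok, bad = import_all(["ID_000001", "bad_ID_004118", "ID_000002"], import_one)
print(len(ok), len(bad))  # 2 1 -> two imported, one logged for rework
```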
Thanks
Srinath -
Dispatcher not started after client import
hi all,
Here I imported one client; the client import process is running, but we have other clients in that system and users are unable to log in to those other clients.
So I checked at OS level (dpmon) and then tried stopsap; it stopped, but afterwards it would not start. The error messages shown are:
oscollector is already running, and
trying to start sid database .....
log file: /home/sidadm/startdb.log
Please, what can I do to start my server?
Hi Srineevas,
Below is my /usr/sap/sid/sys/exe/run/startdb:
#!/bin/sh
# @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ind/ind/startdbora#4 $
#
# NAME:      startdb
# PURPOSE:   startup an ORACLE database
# PARAMETER: none
# Environment:
#   ORACLE_HOME must be set
#   ORACLE_SID  must be set
# Return codes:
#   0 ok. DB may have already been running.
#   1 erroneous license check
#   2 DB not startable: inconsistency?
#   3 environment check failed
#   4 orasrv error / starting of listener error
#   5 script was killed
# USAGE: startdb [ORACLE_HOME] [startRemoteService]
# DESCRIPTION:
#   - initialize
#   - start orasrv / now: lsnrctl start
#   - check database state (connect)
#   - startup database, change archivelog mode
#     (if this procedure is changed, e.g. Test System,
#     there is no guarantee of hotline support if the system crashes)
#   - check database state (connect)
#   - check archive log state
#   - licence check
#   DGUX and Sequent exceptions:
#     R3trans is not available on the DG standalone DB server,
#     therefore the Oracle processes are checked
# History:
#   2001-03-01 rv  replace SAPSYSTEMNAME with DB_SID where appropriate, due to SDMS
#   2000-11-14 rv  include begin/end messages from startsap
#   1996-07-25 kgd SQLNETV2 and SQLNETV1
#   1996-07-11 ar  Sequent exceptions
#   1996-06-11 kgd SQLNET*V2 adaptions
#   1994-12-07 mau warning "Transport system not init." not only to
#                  logfile but also to stdout (Project: 22C 241)
#   1995-07-25 dg  included check that caller is admuser
#====================================================================
# FUNCTION: check_db_running
# PURPOSE:  check whether db is running, without R3trans
#           returncode  = 0   db available
#           returncode != 0   db error
check_db_running()
{
    # In order to find out if the database is available,
    # check whether the Oracle processes are running or not.
    echo "" >> $LOG
    echo `date` >> $LOG
    echo "check if Oracle processes are running" >> $LOG
    $PS | grep ora_[a-z][a-z][a-z][a-z]_${ORACLE_SID} > /dev/null 2>&1
    if test $? -eq 0
    then
        echo 'Database already running' >> $LOG
        returncode=0;
    else
        echo 'database not available' >> $LOG
        # Now check if sgadef exists:
        # if a database connect fails but sgadef exists,
        # it is possible that there is a severe database problem;
        # otherwise continue with the startup procedure.
        if test -f $ORACLE_HOME/dbs/sgadef${ORACLE_SID}.dbf -o \
                -f $ORACLE_HOME/dbs/sgadef${ORACLE_SID}.ora
        then
            echo "*** ERROR:Database possibly left running when system" >> $LOG
            echo "    went down(system crash?)." >> $LOG
            echo "    Notify Database Administrator." >> $LOG
            returncode=$error_dbnotavail
        else
            echo 'There are no Oracle processes running - ' >> $LOG
            echo 'Database is probably already stopped.' >> $LOG
            returncode=$error_dbnotavail
        fi
    fi
}
In Forms, how to cancel a long-running process or query in 10g
We have an application hosted on 10g AS. Some forms do a lot of processing, and sometimes the user wants to cancel the processing partway through, maybe because he wants to change some value and re-fire the processing.
Based on a search of the net, the 'Esc' key was used in earlier versions of Forms for the 'user requested cancel' operation. How can the same be done in 10g? Do we have to do anything in fmrweb.res for this? Is there some setting in Forms or in AS for this functionality?
Does it matter whether JInitiator or the JPI is used for running the application?
Edited by: suresh_mathew on May 21, 2013 1:36 AM
Hi,
Exit can be used to cancel query mode, i.e. if you go into query mode, Exit cancels it. But suppose you went into query mode and fired a query that will take some time to fetch; how can I abort it?
In earlier versions of Forms there was a 'Cancel' facility which, when triggered, fired the error message 'ORA-01013: user requested cancel of current operation'.
With this facility you can abort any query that is executing or any long-running process that Forms is currently performing.
fmrweb.res would have an entry like:
27 : 0 : "Esc" : 1001 : "Cancel"
The above entry I picked from OPN:
Java Function Numbers And Key Mappings For Forms Deployed Over Web [ID 66534.1]
Unfortunately this is not working for us even after I put it in fmrweb.res on the 10g AS.
Basically I want the ability to abort/cancel a long-running process, be it a query execution or a standard process triggered in the form.
Any advice or help is highly appreciated.
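For reference, each fmrweb.res entry maps a Java key code and modifier to a Forms function number and a display name. A small sketch of how the quoted line breaks down into fields (the parser is illustrative, not Oracle's):

```python
def parse_fmrweb_entry(line):
    """Split one fmrweb.res line of the form
    <java key> : <modifier> : "<key name>" : <forms function> : "<description>"."""
    key, modifier, name, func, desc = [f.strip() for f in line.split(":")]
    return {"java_key": int(key), "modifier": int(modifier),
            "key_name": name.strip('"'), "forms_function": int(func),
            "description": desc.strip('"')}

entry = parse_fmrweb_entry('27 : 0 : "Esc" : 1001 : "Cancel"')
print(entry["java_key"], entry["forms_function"], entry["description"])  # 27 1001 Cancel
```

So the quoted line binds Java key code 27 (Esc, no modifier) to Forms function 1001 ("Cancel").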
Suresh -
Long running process/query
One of my after submit processes involves a lot of processing, temp tables, sorting, aggregating, etc.
When this process is invoked from a page, is there a way to put up a generic 'Processing...' page that auto refreshes and when my back-end process is done, takes me to my results page?
Thanks
Let me see if I understand you...
So, the first page (that would have invoked the long running process) branches to the "processing page". That page has a onload that does doSubmit(). The after submit process on this processing page calls my long running process via dbms_job. When the long running process ends, the branch on the processing page takes me back to the first page (or whereever).
But then as far as the browser is concerned, the processing page never "finishes loading". IE's progress bar would creep ahead sloooowly and it would appear as if the page is loading for a looong time!
Wouldn't it be cleaner to use the META refresh tag as John was suggesting?
Also, using either of your approaches, how would I allow the user to "Cancel" the long running process and go back to the first page?
Could you please put up a quick sample of this in action on htmldb.oracle.com?
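The META-refresh approach discussed above boils down to the 'Processing...' page emitting a refresh tag that either reloads itself or redirects to the results page once the submitted job is done. A sketch of just that branching (the job-status lookup and the page URL are stubbed/invented for illustration):

```python
def processing_page(job_done: bool, results_url="f?p=100:2", poll_seconds=3):
    """Return the META tag the 'Processing...' page should emit."""
    if job_done:
        # Job finished: redirect immediately to the results page.
        return f'<meta http-equiv="refresh" content="0;url={results_url}">'
    # Job still running: reload this same page after poll_seconds.
    return f'<meta http-equiv="refresh" content="{poll_seconds}">'

print(processing_page(job_done=False))  # keeps polling every 3 seconds
print(processing_page(job_done=True))   # jumps to the results page
```

A "Cancel" button on the same page could simply remove the pending job and branch back to the first page.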
Thanks -
Long-running query (including the steps given by Randolf)
Hi,
I have done my best to follow Randolf's instructions word for word and hope to get a solution for my problem soon. Some time back I posted a thread on this problem, then got busy with other stuff and was not able to follow it up. Here I am again with the same issue.
Here is the link to my previous post:
long running query in database 10g
Here is the background of my requirement.
I am working on Oracle Forms 10g, which uses the package given below. We want to display client information with order counts based on different statuses: pending, error, back order, expedited and std shipping.
Output will look something like:
client name  pending  error  backorder  expedited  std shipping
ABC          24       0      674        6789       78900
XYZ          35       673    5700       0          798274
There are 40 clients in total. The long-running queries are expedited and std shipping.
When I run the package from Oracle Forms Developer it takes 3 minutes, but when I run the same query in our application using Forms (which uses Oracle Application Server) it takes around 1 hour, which is completely unacceptable. The user wants it done in less than 1 minute.
I have tried combining the pending, error and backorder queries together, but as far as I know that will not work in Oracle Forms, as we need a placeholder for each status.
Please don't think this is a Forms-related question; it is a performance problem.
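On the idea of combining the queries: the five per-status counts in post_query can be collapsed into a single pass with conditional aggregation, and the form's five placeholders can then be filled from the one fetched row. A simplified sketch against sqlite (toy table; the real queries also join shipment_type_methods and filter parent/child orders):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (client TEXT, status TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("ABC", "P"), ("ABC", "P"), ("ABC", "E"), ("ABC", "B")])

# One scan produces all the per-status counts at once; each column of the
# fetched row can be assigned to its own form item (placeholder).
row = con.execute("""
    SELECT COUNT(CASE WHEN status = 'P' THEN 1 END) AS pending,
           COUNT(CASE WHEN status = 'E' THEN 1 END) AS error,
           COUNT(CASE WHEN status = 'B' THEN 1 END) AS back_order
    FROM orders
    WHERE client = ?""", ("ABC",)).fetchone()
print(row)  # (2, 1, 1) -> one table scan instead of three separate COUNT queries
```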
PACKAGE BODY ORDER_COUNT_PKG IS
PROCEDURE post_query IS
BEGIN
BEGIN
SELECT count(*)
INTO :ORDER_STATUS.PENDING
FROM orders o
WHERE o.status = 'P'
AND (parent_order_id is null
OR (order_type='G'
AND parent_order_id=original_order_number))
AND o.client = :ORDER_STATUS.CLIENT_NUMBER;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
BEGIN
SELECT count(*)
INTO :ORDER_STATUS.ERROR
FROM orders o
WHERE o.status = 'E'
AND (parent_order_id is null
OR (order_type='G'
AND parent_order_id=original_order_number))
AND o.client = :ORDER_STATUS.CLIENT_NUMBER;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
BEGIN
SELECT count(*)
INTO :ORDER_STATUS.BACK_ORDER
FROM orders o
WHERE o.status = 'B'
AND (parent_order_id is null
OR (order_type='G'
AND parent_order_id=original_order_number))
AND o.client = :ORDER_STATUS.CLIENT_NUMBER;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
BEGIN
SELECT count(*)
INTO :ORDER_STATUS.EXPEDITE
FROM orders o,shipment_type_methods stm
WHERE o.status in ('A','U')
AND (o.parent_order_id is null
OR (o.order_type = 'G'
AND o.parent_order_id = o.original_order_number))
AND o.client = stm.client
AND o.shipment_class_code = stm.shipment_class_code
AND (nvl(o.priority,'1') = '2'
OR stm.surcharge_amount <> 0)
AND o.client = :ORDER_STATUS.CLIENT_NUMBER
GROUP BY o.client;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
BEGIN
SELECT count(*)
INTO :ORDER_STATUS.STD_SHIP
FROM orders o,shipment_type_methods stm
WHERE o.status in ('A','U')
AND (o.parent_order_id is null
OR (o.order_type = 'G'
AND o.parent_order_id = o.original_order_number))
AND nvl(o.priority,'1') <> '2'
AND o.client = stm.client
AND o.shipment_class_code = stm.shipment_class_code
AND stm.surcharge_amount = 0
AND o.client = :ORDER_STATUS.CLIENT_NUMBER
GROUP BY o.client;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
END post_query;
END ORDER_COUNT_PKG;
One of the queries that is taking a long time is:
SELECT count(*)
FROM orders o,shipment_type_methods stm
WHERE o.status in ('A','U')
AND (o.parent_order_id is null
OR (o.order_type = 'G'
AND o.parent_order_id = o.original_order_number))
AND nvl(o.priority,'1') <> '2'
AND o.client = stm.client
AND o.shipment_class_code = stm.shipment_class_code
AND stm.surcharge_amount = 0
AND o.client = :CLIENT_NUMBER
GROUP BY o.client
The version of the database is 10.2.0.4.
SQL> alter session force parallel dml;
These are the parameters relevant to the optimizer:
SQL> show parameter user_dump_dest
NAME TYPE VALUE
user_dump_dest string /u01/app/oracle/admin/mcgemqa/udump
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
Here is the output of EXPLAIN PLAN:
SQL> explain plan for
2 SELECT count(*)
3 FROM orders o,shipment_type_methods stm
4 WHERE o.status in ('A','U')
5 AND (o.parent_order_id is null
6 OR (o.order_type = 'G'
7 AND o.parent_order_id = o.original_order_number))
8 AND nvl(o.priority,'1') <> '2'
9 AND o.client = stm.client
10 AND o.shipment_class_code = stm.shipment_class_code
11 AND stm.surcharge_amount = 0
12 AND o.client = :CLIENT_NUMBER
13 GROUP BY o.client
14 /
Explained.
Elapsed: 00:00:00.12
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 559278019
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 35 | 46764 (3)| 00:09:22 |
| 1 | SORT GROUP BY NOSORT | | 1 | 35 | 46764 (3)| 00:09:22 |
|* 2 | TABLE ACCESS BY INDEX ROWID | ORDERS | 175K| 3431K| 25979 (3)| 00:05:12 |
| 3 | NESTED LOOPS | | 25300 | 864K| 46764 (3)| 00:09:22 |
|* 4 | TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS | 1 | 15 | 2 (0)| 00:00
|* 5 | INDEX RANGE SCAN | U_SHIPMENT_TYPE_METHODS | 2 | | 1 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | ORDERS_ORDER_DATE | 176K| | 2371 (8)| 00:00:29 |
Predicate Information (identified by operation id):
2 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
"O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2
AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE")
4 - filter("STM"."SURCHARGE_AMOUNT"=0)
5 - access("STM"."CLIENT"=:CLIENT_NUMBER)
6 - access("O"."CLIENT"=:CLIENT_NUMBER)
filter("O"."STATUS"='A' OR "O"."STATUS"='U')
24 rows selected.
Elapsed: 00:00:00.86
SQL> rollback;
Rollback complete.
Elapsed: 00:00:00.07
Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
SQL> SELECT count(*)
2 FROM orders o,shipment_type_methods stm
3 WHERE o.status in ('A','U')
4 AND (o.parent_order_id is null
5 OR (o.order_type = 'G'
6 AND o.parent_order_id = o.original_order_number))
7 AND nvl(o.priority,'1') <> '2'
8 AND o.client = stm.client
9 AND o.shipment_class_code = stm.shipment_class_code
10 AND stm.surcharge_amount = 0
11 AND o.client = :CLIENT_NUMBER
12 GROUP BY o.client
13 /
Elapsed: 00:00:03.09
Execution Plan
Plan hash value: 559278019
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 35 | 46764 (3)| 00:09:22 |
| 1 | SORT GROUP BY NOSORT | | 1 | 35 | 46764 (3)| 00:09:22 |
|* 2 | TABLE ACCESS BY INDEX ROWID | ORDERS | 175K| 3431K| 25979 (3)| 00:05:12 |
| 3 | NESTED LOOPS | | 25300 | 864K| 46764 (3)| 00:09:22 |
|* 4 | TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS | 1 | 15 | 2 (0)| 00:00
|* 5 | INDEX RANGE SCAN | U_SHIPMENT_TYPE_METHODS | 2 | | 1 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | ORDERS_ORDER_DATE | 176K| | 2371 (8)| 00:00:29 |
Predicate Information (identified by operation id):
2 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
"O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2'
AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE")
4 - filter("STM"."SURCHARGE_AMOUNT"=0)
5 - access("STM"."CLIENT"=:CLIENT_NUMBER)
6 - access("O"."CLIENT"=:CLIENT_NUMBER)
filter("O"."STATUS"='A' OR "O"."STATUS"='U')
Statistics
55 recursive calls
0 db block gets
7045 consistent gets
0 physical reads
0 redo size
206 bytes sent via SQL*Net to client
238 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>

The TKPROF output for this statement looks like the following:
SELECT count(*)
FROM orders o,shipment_type_methods stm
WHERE o.status in ('A','U')
AND (o.parent_order_id is null
OR (o.order_type = 'G'
AND o.parent_order_id = o.original_order_number))
AND nvl(o.priority,'1') <> '2'
AND o.client = stm.client
AND o.shipment_class_code = stm.shipment_class_code
AND stm.surcharge_amount = 0
AND o.client = :CLIENT_NUMBER
GROUP BY o.client
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.04 0.04 0 0 0 0
Fetch 2 2.96 2.91 0 7039 0 1
total 4 3.01 2.95 0 7039 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 95
Rows Row Source Operation
1 SORT GROUP BY NOSORT (cr=7039 pr=0 pw=0 time=2913701 us)
91 TABLE ACCESS BY INDEX ROWID ORDERS (cr=7039 pr=0 pw=0 time=261997906 us)
93 NESTED LOOPS (cr=6976 pr=0 pw=0 time=20740 us)
1 TABLE ACCESS BY INDEX ROWID SHIPMENT_TYPE_METHODS (cr=2 pr=0 pw=0 time=208 us)
3 INDEX RANGE SCAN U_SHIPMENT_TYPE_METHODS (cr=1 pr=0 pw=0 time=88 us)(object id 81957)
91 INDEX RANGE SCAN ORDERS_ORDER_DATE (cr=6974 pr=0 pw=0 time=70 us)(object id 81547)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.02 0.02
********************************************************************************

The DBMS_XPLAN.DISPLAY_CURSOR output:
SQL> variable CLIENT_NUMBER varchar2(20)
SQL> exec :CLIENT_NUMBER := '14'
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.06
SQL> SELECT /*+ gather_plan_statistics */ count(*)
2 FROM orders o,shipment_type_methods stm
3 WHERE o.status in ('A','U')
4 AND (o.parent_order_id is null
5 OR (o.order_type = 'G'
6 AND o.parent_order_id = o.original_order_number))
7 AND nvl(o.priority,'1') <> '2'
8 AND o.client = stm.client
9 AND o.shipment_class_code = stm.shipment_class_code
10 AND stm.surcharge_amount = 0
11 AND o.client = :CLIENT_NUMBER
12 GROUP BY o.client
13 /
COUNT(*)
91
Elapsed: 00:00:02.85
SQL> set termout on
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
SQL_ID 4nfj368y8w6a3, child number 0
SELECT /*+ gather_plan_statistics */ count(*) FROM orders o,shipment_type_methods stm WHERE
o.status in ('A','U') AND (o.parent_order_id is null OR (o.order_type = 'G'
AND o.parent_order_id = o.original_order_number)) AND nvl(o.priority,'1') <> '2' AND
o.client = stm.client AND o.shipment_class_code = stm.shipment_class_code AND
stm.surcharge_amount = 0 AND o.client = :CLIENT_NUMBER GROUP BY o.client
Plan hash value: 559278019
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 1 | SORT GROUP BY NOSORT | | 1 | 1 | 1 |00:00:02.63 | 7039 |
|* 2 | TABLE ACCESS BY INDEX ROWID | ORDERS | 1 | 175K| 91 |00:03:56.87 | 7039 |
| 3 | NESTED LOOPS | | 1 | 25300 | 93 |00:00:00.02 | 6976 |
|* 4 | TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS | 1 | 1 | 1 |00:00:00.01 | 2 |
|* 5 | INDEX RANGE SCAN | U_SHIPMENT_TYPE_METHODS | 1 | 2 | 3 |00:00:00.01 | 1 |
|* 6 | INDEX RANGE SCAN | ORDERS_ORDER_DATE | 1 | 176K| 91 |00:00:00.01 | 6974 |
Predicate Information (identified by operation id):
2 - filter((("O"."PARENT_ORDER_ID" IS NULL OR ("O"."ORDER_TYPE"='G' AND
"O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER"))) AND NVL("O"."PRIORITY",'1')<>'2' AND
"O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE"))
4 - filter("STM"."SURCHARGE_AMOUNT"=0)
5 - access("STM"."CLIENT"=:CLIENT_NUMBER)
6 - access("O"."CLIENT"=:CLIENT_NUMBER)
filter(("O"."STATUS"='A' OR "O"."STATUS"='U'))
32 rows selected.
Elapsed: 00:00:01.30
SQL>

I'm looking forward to suggestions on how to improve the performance of this statement.
Thanks
Sandy

Please find the explain plan with no hint:
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 559278019
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 35 | 46764 (3)| 00:09:22 |
| 1 | SORT GROUP BY NOSORT | | 1 | 35 | 46764 (3)| 00:09:22 |
|* 2 | TABLE ACCESS BY INDEX ROWID | ORDERS | 175K| 3431K| 25979 (3)| 00:05:12 |
| 3 | NESTED LOOPS | | 25300 | 864K| 46764 (3)| 00:09:22 |
|* 4 | TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS | 1 | 15 | 2 (0)| 00:00
|* 5 | INDEX RANGE SCAN | U_SHIPMENT_TYPE_METHODS | 2 | | 1 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | ORDERS_ORDER_DATE | 176K| | 2371 (8)| 00:00:29 |
Predicate Information (identified by operation id):
2 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
"O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2'
AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE")
4 - filter("STM"."SURCHARGE_AMOUNT"=0)
5 - access("STM"."CLIENT"=:CLIENT_NUMBER)
6 - access("O"."CLIENT"=:CLIENT_NUMBER)
filter("O"."STATUS"='A' OR "O"."STATUS"='U')
24 rows selected.
Elapsed: 00:00:00.86

Explain plan with the PARALLEL hint:
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 559278019
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 35 | 46764 (3)| 00:09:22 |
| 1 | SORT GROUP BY NOSORT | | 1 | 35 | 46764 (3)| 00:09:22 |
|* 2 | TABLE ACCESS BY INDEX ROWID | ORDERS | 175K| 3431K| 25979 (3)| 00:05:12 |
| 3 | NESTED LOOPS | | 25300 | 864K| 46764 (3)| 00:09:22 |
|* 4 | TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS | 1 | 15 | 2 (0)| 00:00
|* 5 | INDEX RANGE SCAN | U_SHIPMENT_TYPE_METHODS | 2 | | 1 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | ORDERS_ORDER_DATE | 176K| | 2371 (8)| 00:00:29 |
Predicate Information (identified by operation id):
2 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
"O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2'
AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE")
4 - filter("STM"."SURCHARGE_AMOUNT"=0)
5 - access("STM"."CLIENT"='14')
6 - access("O"."CLIENT"='14')
filter("O"."STATUS"='A' OR "O"."STATUS"='U')
24 rows selected.
Elapsed: 00:00:08.92

Explain plan with the USE_HASH hint:
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1465232248
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 35 | 46786 (3)| 00:09:22 |
| 1 | SORT GROUP BY NOSORT | | 1 | 35 | 46786 (3)| 00:09:22 |
|* 2 | HASH JOIN | | 25300 | 864K| 46786 (3)| 00:09:22 |
|* 3 | TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS | 1 | 15 | 2 (0)| 00:00:0
|* 4 | INDEX RANGE SCAN | U_SHIPMENT_TYPE_METHODS | 2 | | 1 (0)| 00:00:01 |
|* 5 | TABLE ACCESS BY INDEX ROWID| ORDERS | 175K| 3431K| 46763 (3)| 00:09:22 |
|* 6 | INDEX RANGE SCAN | ORDERS_ORDER_DATE | 176K| | 4268 (8)| 00:00:52 |
Predicate Information (identified by operation id):
2 - access("O"."CLIENT"="STM"."CLIENT" AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE")
3 - filter("STM"."SURCHARGE_AMOUNT"=0)
4 - access("STM"."CLIENT"='14')
5 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
"O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2')
6 - access("O"."CLIENT"='14')
filter("O"."STATUS"='A' OR "O"."STATUS"='U')
25 rows selected.
Elapsed: 00:00:01.09
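Given the runtime statistics in the DISPLAY_CURSOR output (91 actual rows on ORDERS versus an estimate of 175K, and roughly 6,974 buffer gets spent in the ORDERS_ORDER_DATE range scan, whose only access predicate is CLIENT), one avenue worth testing is an index that turns the STATUS filter and the join column into access predicates. This is only a sketch under assumptions: the index name and column order are mine, it must be validated on a test system, and an extra index adds DML overhead.

```sql
-- Hypothetical index: name and column order are assumptions, not part of
-- the original schema. CLIENT and STATUS are the equality/IN predicates,
-- and SHIPMENT_CLASS_CODE lets the join to SHIPMENT_TYPE_METHODS be
-- resolved from the index, so far fewer ORDERS rows must be visited.
CREATE INDEX orders_client_status_scc
    ON orders (client, status, shipment_class_code);
```

After creating it, re-run the EXPLAIN PLAN and the ALLSTATS LAST report and compare the Buffers and A-Rows figures. If the optimizer still prefers ORDERS_ORDER_DATE, the 175K-versus-91 cardinality misestimate on CLIENT suggests the column statistics (possibly a histogram on CLIENT) are worth a look as well.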
SQL>

Thanks
Sandy -
Checking for status of a long-running BPEL process
Hi experts,
I have a BPEL process for originating customers' loans. It involves various steps and usually takes around 15-20 minutes to complete per instance. I would like to implement a user interface that can check the process's progress and display it to the user.
I have thought of using an asynchronous process that updates a global variable with the status of each step inside the process, and a poller that invokes another operation of this process (such as a "checkStatus" operation) to retrieve this variable and display its value to the users. This could be achieved with an "OnEvent" activity waiting for the "checkStatus" operation: it would run in parallel with the main process, have access to the global variable, and reply with this variable immediately to the caller.
It sounds fine in theory, but invoking a web service operation just to poll for status is very heavyweight and may hurt performance under high load.
I am just wondering if there is another solution to this problem, as I believe status checking is a very common requirement for long-running BPEL processes. What might be the best practice for implementing this?
I look forward to your responses.
Thanks and regards,
Edited by: Nghia Pham on Nov 5, 2012 4:48 PM

Hi.
I apologize for the slow reply. I am just back from overseas and did not have a chance to visit the forum!
Thank you a lot for your responses!
BAM looks like the suggested out-of-the-box solution, but it is tied too closely to the Oracle API and is hard to customise in the way you want to present the status to end users. If you want to display the process status from BAM in a web application interface, ADF is the only option (please correct me if I am wrong). I would prefer a stand-alone solution, free of proprietary APIs, so that we can build a screen that is technology-independent, since the ADF UI is not our only supported view technology. For example, a PHP web client should be able to interpret the BPEL process status.
I will really appreciate any suggested solution.
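One technology-independent pattern that matches what is described above: have the process record each step's status in a shared store that any client can query directly. In practice that store would be a database table written via a DB adapter or Java embedding, so a PHP page (or ADF, or anything else) reads it with a plain SQL query or a thin REST endpoint instead of invoking a BPEL operation per poll. The sketch below only illustrates the pattern; the class and method names are mine, not Oracle API, and an in-memory map stands in for the table so the sketch stays self-contained.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative status store for long-running process instances.
 * In a real deployment the map would be a database table keyed by
 * instance id, written by the process and read by any front end.
 */
public class StatusStore {
    private final Map<String, String> statusByInstance = new ConcurrentHashMap<>();

    /** Called by the process after each step completes. */
    public void markStep(String instanceId, String step) {
        statusByInstance.put(instanceId, step);
    }

    /** Called by any polling client; far cheaper than a web service call. */
    public String currentStep(String instanceId) {
        return statusByInstance.getOrDefault(instanceId, "NOT_STARTED");
    }

    public static void main(String[] args) {
        StatusStore store = new StatusStore();
        store.markStep("loan-42", "CREDIT_CHECK");
        System.out.println(store.currentStep("loan-42"));   // CREDIT_CHECK
        System.out.println(store.currentStep("loan-99"));   // NOT_STARTED
    }
}
```

Because the clients poll the store rather than the engine, the polling frequency no longer loads the BPEL runtime, and the status representation is whatever you choose to write, not the engine's audit model.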
Thanks and regards, -
MBeanServerConnection becomes invalid after the MBeanServer client runs for a long time
I have developed a Java application that acts as a client of the MBeanServer and uses it to query system information from the Web AS. After running for a long time (about one day), the MBeanServerConnection becomes broken and invalid: when queryMBean is called, it throws SecurityExceptions along the lines of "User not authorized".
I use the remote mode to get the MBeanServerConnection, as in the following source code:
Properties connectionProperties = new Properties();
connectionProperties.setProperty(Context.INITIAL_CONTEXT_FACTORY,
        "com.sap.engine.services.jndi.InitialContextFactoryImpl");
connectionProperties.setProperty(Context.PROVIDER_URL, "<host-name>:<p4-port>");
connectionProperties.setProperty(Context.SECURITY_PRINCIPAL, "<user-name>");
connectionProperties.setProperty(Context.SECURITY_CREDENTIALS, "<password>");

// create the MBeanServerConnection
MBeanServerConnection mbsc = JmxConnectionFactory.getMBeanServerConnection(
        JmxConnectionFactory.PROTOCOL_ENGINE_P4, connectionProperties);
I also tried to get the MBeanServer using ctx.lookup("jmx") in a servlet, but after a long run (about one day) the exception is still thrown.
How can I solve this problem? Is it a bug in the Web AS?
Thanks, and I hope somebody can reply.

Hi,
I suppose what happens is that the security session expires. The default session expiration period is 27 hours.
To check whether this is the case, you can try increasing the SessionExpirationPeriod property of the security service.
However, even if this turns out to be the problem, raising the expiration period is not a real resolution, as you will be keeping a large number of sessions alive.
I hope this helps.
Greetings, Myriana
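If raising SessionExpirationPeriod is undesirable, another option is to treat the connection as expendable: catch the SecurityException and rebuild the MBeanServerConnection before retrying. The sketch below only shows the pattern; the class name is mine, the Supplier stands in for the JmxConnectionFactory.getMBeanServerConnection(...) call from the question, and the demo runs against a local in-process MBeanServer rather than a Web AS.

```java
import java.util.Set;
import java.util.function.Supplier;
import javax.management.MBeanServer;
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

/**
 * Sketch of a reconnect-on-expiry wrapper: instead of keeping one
 * long-lived session alive, rebuild the connection when the security
 * session has expired and the server starts rejecting calls.
 */
public class ReconnectingJmx {
    private final Supplier<MBeanServerConnection> factory;
    private MBeanServerConnection conn;

    public ReconnectingJmx(Supplier<MBeanServerConnection> factory) {
        this.factory = factory;
        this.conn = factory.get();
    }

    /** Query MBean names, reconnecting once if the session has expired. */
    public Set<ObjectName> queryNames(ObjectName pattern) throws Exception {
        try {
            return conn.queryNames(pattern, null);
        } catch (SecurityException expired) {
            conn = factory.get();   // session expired: open a fresh connection
            return conn.queryNames(pattern, null);
        }
    }

    public static void main(String[] args) throws Exception {
        // Demo against a local in-process MBeanServer (no Web AS needed);
        // MBeanServer extends MBeanServerConnection, so it fits the Supplier.
        MBeanServer local = MBeanServerFactory.createMBeanServer();
        ReconnectingJmx jmx = new ReconnectingJmx(() -> local);
        System.out.println("MBeans visible: " + jmx.queryNames(new ObjectName("*:*")).size());
    }
}
```

The factory Supplier should perform the full lookup (JNDI properties plus getMBeanServerConnection), so each reconnect authenticates afresh and stale sessions are simply abandoned rather than accumulated.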