No bucket consumption for resource , operation, activity
Hi,
I know I am missing something really basic, but I cannot put my finger on what it is.
Issue :
I am trying to CIF SNP PDS across from an ECC 6.0 to an SCM 7.0 system. Two PDS are failing with the error message "No bucket consumption for resource , operation, activity".
Things that i have checked :
1) The resource has a valid definition – using the APO resource button in the capacity definition, a bucket capacity has been created.
2) The work center has been CIFed across and the resource created. I am able to see the bucket capacity in the bucket capacity details in the SNP bucket capacity view of the resource.
3) Checked the control key, and it is a valid one for capacity and scheduling. Also checked the formulas in the work center; they generate valid values for setup and production time.
4) Valid values entered for setup and maintenance in the routing.
Is there something else that I need to check? What am I missing?
Regards,
Karthik
Edited by: Karbalas on May 6, 2011 4:18 PM
Thanks DB49.
It was a schoolboy error. I was CIFing the resource across as multi-mixed using the APO resource button in the R/3 capacity definition, but I forgot to mark the "Several operations" checkbox, and the individual capacity was set to 1. This caused the issue in the resource definition in APO.
Regards,
karthik
Edited by: Karbalas on May 7, 2011 8:21 PM
Similar Messages
-
Why do I see this message? (iTunes could not connect to the iTunes store. You do not have enough access privileges for this operation. Make sure your connection is active and try again.)
Hey there Hannuj,
It sounds like you are unable to access the iTunes Store in the iTunes application. Based on the error message I would try the steps in the article here named:
iTunes for Windows: iTunes Store connection troubleshooting
http://support.apple.com/kb/HT1527
Remove pop-up blockers
Some pop-up or ad-blocking programs may interfere with the ability of iTunes to connect to the iTunes Store. Removing them in many cases will resolve the issue.
Flush DNS Setting in Windows
In some cases, the DNS information your computer uses to connect to the Internet needs to be reset. Follow these instructions to flush your Windows DNS information:
Windows XP
On the Start menu, click Run.
In the Open field type cmd and click OK.
In the resulting window, type ipconfig /flushdns and press Return on the keyboard.
You should see a message that the DNS Resolver Cache has been successfully flushed.
Windows Vista and Windows 7
On the Start menu, point to All Programs > Accessories, then right-click Command Prompt and choose Run as Administrator from the shortcut menu. If Windows needs your permission to continue, click Continue.
In the resulting window, type ipconfig /flushdns and press Return on the keyboard.
You should see a message that the DNS Resolver Cache has been successfully flushed.
Note: If, in the command prompt, you see this message: "The requested operation requires elevation", close the command prompt and repeat steps 1 and 2 above to be sure that Administrator privileges are used to access the Command Prompt.
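The flush steps for all three Windows versions come down to the same single command; a minimal sketch of the logic (the non-Windows `resolvectl` fallback is an assumption of mine, not part of the article):

```python
import subprocess
import sys

def flush_dns_command(platform: str = sys.platform) -> list[str]:
    """Return the DNS-flush command for the given platform.

    On Windows this is `ipconfig /flushdns`; the command is identical on
    XP, Vista, and 7 -- only the need for an elevated prompt differs.
    """
    if platform.startswith("win"):
        return ["ipconfig", "/flushdns"]
    # Assumption: on Linux with systemd-resolved this is the rough equivalent.
    return ["resolvectl", "flush-caches"]

def flush_dns() -> int:
    """Run the flush command; a non-zero exit code on Windows usually means
    the prompt was not elevated ("requires elevation")."""
    return subprocess.call(flush_dns_command())

if __name__ == "__main__":
    print(" ".join(flush_dns_command("win32")))  # ipconfig /flushdns
```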
See the following articles for additional troubleshooting information on connecting to the iTunes Store:
Can't connect to the iTunes Store.
iTunes for Windows: Network Connectivity Tests
Thank you for using Apple Support Communities.
Regards,
Sterling -
Datasource for QM Inspection Lot - Operation/Activity
Dear experts,
We need to develop a report in BW on QM results recorded for inspection lots, including the Operation/Activity number.
The field in R/3 is VORNR in table PLPO (the inspection plan table), but this field is not available in any other QM tables.
We are not able to find any DataSource where this field is available.
Please help me out.
Thanks & regards,
DK Maroo
Check t-code SE11.
In data type field enter "VORNR".
Click the where-used list button (Ctrl+Shift+F3), third from the left at the top.
You will get a pop-up named "Where-Used List Data Element". Here select the "Table Fields" checkbox and click Execute.
You will get a list of the tables where this field is available.
Hope this helps.
Thanks!!! -
Resource assignment for two operation
People!!!
Please help
There are two operations:
Check book duration 2 days
Read book duration 2 days
For the 1st operation I would like to assign two Inst technicians
For the 2nd operation I would like to assign two Inst technicians
12 hours is the working day for both operations, but the two technicians work 6 hours on operation 1 and 6 hours on operation 2 on the second day of work
How do I assign them properly? The system currently calculates the total hours as 96, but it must be 72
Your prompt feedback is highly desirable))))
How many hours per day are working time in your schedule? It sounds like you have a 24-hour schedule set up.
If I have a 12 hour per day schedule (8 am to 8 pm) Sunday through Saturday I show this:
Check book - duration = 1.5 days starts Jan 25 at 8:00 am ends Jan 26 at 2:00 pm
Read book - duration = 1.5 days start Jan 26 at 2:00 pm ends Jan 27 at 8:00 pm
Both resources are assigned to the task for 18 hours of work.
In total there are 24 hours of work on Jan 25 (all on Check Book), 24 hours on Jan 26 (12 hours on Check Book and 12 hours on Read Book), and 24 hours on Jan 27 (all on Read Book).
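The totals in this example can be checked with quick arithmetic; a minimal sketch using the numbers from the reply above (12-hour days, two resources per task, two 1.5-day tasks):

```python
HOURS_PER_DAY = 12          # 8 am to 8 pm schedule
RESOURCES_PER_TASK = 2      # two technicians on each operation
TASKS = ["Check book", "Read book"]
DURATION_DAYS = 1.5         # each task, scheduled back to back

def total_work_hours() -> float:
    # Work = duration x hours/day x assigned resources, summed over tasks.
    per_task = DURATION_DAYS * HOURS_PER_DAY * RESOURCES_PER_TASK
    return per_task * len(TASKS)

print(total_work_hours())  # 72.0, matching the expected total
```

On a 24-hour schedule the same durations would instead yield 96 hours, which matches the symptom described in the question.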
Tasks are auto-scheduled. -
Problems with Var. Bucket Consumpt. field in PPMs
Hi experts,
I'm trying to manually create a PPM (Production Process Model) in APO (to use it with SNP); however, the value entered in the
Bucket Consumption (Variable) field disappears when I activate the PPM. Could someone tell me what I am doing wrong?
First, I created a resource (using the fields of the Bucket tab) with the following data:
Resource
Resource: R
Category (Cat): P - Production
Location: L
Time Zone: BRAZIL
Bucket Dimensn: Mass
Factory Calendar: BR
Bucket Capacity: 96000
Unit: KT - Kilotonne
Number of Periods: 365
Period Type: Day
Days -: 1095
Days +: 1095
i.e., a resource with a capacity of 96000 kilotonnes per year.
The PPM I'm trying to create is the following:
PPM's General Data
Use of a Plan: S - PPM for Supply Network Planning (SNP)
Variable (Single Level Costs): 980,00
PPM's Operations and Activities
Operation: 0010:Produce
Activity: 10:Produce
Components of Activity
Product I/O Indicator Consump. Type From Date To Date Unit Var. Consumptn
A O E 01.01.1970 31.12.9999 KT 1000
B I S 01.01.1970 31.12.9999 KT 560
C I S 01.01.1970 31.12.9999 KT 180
D I S 01.01.1970 31.12.9999 KT 260
Mode of Activity
Mode Primary Resrce Location Unit Fix. dur.
10 R L D 4
Resource Mode
Resource Location Unit Var. Bucket Consumpt.
R L KT 1000 (here is the problem)
PPM's Model
Prod. Process Model Output Product Date From Date To Planning Locatn Discretization Maximum Lot Size
PPM_1 A 01.01.1970 31.12.9999 L x 9999999
i.e., a PPM to produce 1000 KT of A in 4 days that consumes 1000 KT of capacity on resource R.
The value of 1000 KT in the Var. Bucket Consumpt. field is not allowed; I could only enter values between 0 and 9.
Thanks in advance for help.
Greetings,
Francisco Fonseca.
Edited by: Francisco Fonseca on Jun 26, 2009 1:56 PM
Edited by: Francisco Fonseca on Jun 26, 2009 1:59 PM
Hi,
It is good practice, when you make a PPM manually, to keep the fixed duration as '1 Day' (in your mode of activity).
One thing I noticed is that you made your resource a bucket resource, which is correct,
but the number you entered there (Bucket Capacity: 96000) is not right. What is the unit? It should be in hours (e.g. 8 or 9).
That means that in one day the resource is capable of producing goods for 8 or 9 hours. So make the change there.
Now in your Mode of activity make fix duration 1 Day.
Components of Activity
Product I/O Indicator Consump. Type From Date To Date Unit Var. Consumptn
A O E 01.01.1970 31.12.9999 KT 1000
B I S 01.01.1970 31.12.9999 KT 560
C I S 01.01.1970 31.12.9999 KT 180
D I S 01.01.1970 31.12.9999 KT 260
Mode of Activity
Mode Primary Resrce Location Unit Fix. dur.
10 R L D 1
Now in resource: put following value
Resource Mode
Resource Location Unit Var. Bucket Consumpt.
R L H 1
You will get an output of 8000 or 9000 (depending on your resource capacity per day, i.e. 8 or 9 hours),
and if you put the variable bucket consumption as 8 or 9 hours, then you will get 1000 KT per day; in that case put the variable material/product consumption as 250 KT,
so you get 1000 KT over four days. The variable resource consumption is responsible for the variable output (variable consumption of FG).
Please revert if there is still any issue.
Thanks,
Satyajit
Edited by: Satyajit Patra on Jul 2, 2009 11:14 AM -
PPM error with bucket consumption
Hi,
I am having a bit of trouble with the PP/DS PPMs. Every time I regenerate the PPM for a specific product-location and run the PPM Check Plan, I get the errors "You cannot use unit for bucket consumption of res..." and "Assign a value to at least one bucket consumption...", so what I am doing to temporarily solve it is to manually fill in the Bucket Consumption field for every resource of every operation contained in the PPM.
But this is very time-consuming and I would like to find another way to do this. Is there a special configuration on the R/3 side?
Thank you in advance,
Fernando
Hi Fernando,
If you are creating a PPM directly in APO, then you need to follow these steps and maintain the UoM for variable bucket consumption. This is required to derive the resource capacity consumption from the order quantity of the planned/production order for that specific product.
But if you don't want to maintain this field manually every time for all/new codes, then you can have a setup where you define the PPM-related data in R/3, i.e. BOM, routing data, and production version, which, when CIFed to APO, will automatically create the PP/DS PPM with all the consumption parameters you set up in the routing. Then you can also automate the process to generate an SNP PPM with reference to this PP/DS PPM.
Hope this helps to clarify..
Regards,
Digambar -
Rough cut planning in SOP for resource levelling
Hello,
I am trying rough cut planning in SOP.
I have created a rough-cut planning profile for a product group using MC35.
A task list exists for the product group.
In the task list, a work center is maintained, and the same work center is given while creating the rough-cut profile.
This product group contains some part numbers.
When I try Views -> Capacity situation -> Rough-cut planning -> Show,
the system gives the error:
No resource load found.
Diagnosis given by the system is as follows:
This situation may be caused by one or more of the following:
No PP task list (rough-cut planning profile, rate routing, or routing) corresponding to the Customizing selection criteria has been defined for this material/product group at this scheduling level.
Resources planning has not been configured appropriately in Customizing.
The information structure which you are planning has not been configured appropriately in Customizing.
System Response
No capacity load could be determined.
Procedure
1. Check that a PP task list exists for this material/product group.
2. Check that a lot size range has been maintained in the PP task list.
3. Have your systems administrator check and, if necessary, change the resources planning settings in Customizing for Sales & Operations Planning (the steps "Scheduling levels" and "Routing selection").
4. Have your systems administrator check that capacity planning has been defined for this information structure in Customizing.
Please let me know how to maintain the resource planning settings for scheduling level and routing selection.
Waiting for your reply.
Regards,
Ravindra Deokule
Hi Ravindra,
You can use t.code mc84 to create a product group.
Please follow the steps below to do SOP with a product group.
1. Create a Product group in tcode MC84.
Enter a percentage for the materials, say X and Y, to be produced: X 40% and Y 60%. The total quantity will be split in that ratio.
2. Create Production Plan in tcode MC81.
Enter the sales plan qty.
Then go to the Edit menu and choose "Create production plan synchronous to sales".
3. Create Rough cut Planning Profile in tcode MC35.
4. Enter the Status ,Usage & lot size.
5. Choose the Resources tab, then choose work center, enter the name of the work center you consider the bottleneck, and enter the unit of measure as 'min'.
6. Use MC82,
Choose the inactive version and select your version,
then go to menu -> Views -> Capacity situation -> Show.
7. Check for overloads and adjust your quantity accordingly. Once the load becomes 100%, save.
8. Change the inactive version to Active version in tcode MC78.
i.e. in the version field enter the inactive version, and in the target version enter 'A00'.
9. In MC82 choose the active version; you can see that the plan is activated now.
10. In the menu choose Extras and transfer to demand management.
11. You can see the requirement in the MD62 transaction.
12. Then run MRP.
Also, regarding the unit of measure in t-code MC35:
If you select your resource as 'work center', choose min or hour, and enter 1 (for example) in the first field against the work center, this means it takes 1 min or 1 hour to produce 1 pc (base quantity) at that particular (bottleneck) work center.
Regarding the capacity calculation in MC87, follow the example above: enter 1 min as the time and 1 as the base quantity in MC35. Then check MC87; you will get the load in %. That is, if your work center is permitted to work 8 hours/day and you pass a production plan of 480, your load will be exactly 100%.
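The MC87 load example above works out as follows (a hedged sketch of the arithmetic only, not a transaction call):

```python
def load_percent(plan_qty: int, minutes_per_unit: float,
                 hours_per_day: float) -> float:
    """Capacity load: required minutes over available minutes, as a percent."""
    required = plan_qty * minutes_per_unit
    available = hours_per_day * 60
    return required / available * 100

# 480 pieces at 1 min each on a work center open 8 hours/day (480 min):
print(load_percent(480, 1, 8))  # 100.0
```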
Regards,
Senthilkumar -
Custom Adapter for Inbound Operations never invoked
Hi,
I created a custom J2CA Resource Adapter for Inbound operations. I implemented javax.resource.spi.ResourceAdapter interface with all required methods - start(BootstrapContext), endpointActivation(MessageEndpointFactory, ActivationSpec), etc.
I created the appropriate WSDL file according to the Adapter Development Cookbook. Below is an excerpt from it:
<definitions targetNamespace="urn:Adapter" .......>
<message name="Notification">
<part name="Notification_Part" element="ns1:Svc"/>
</message>
<message name="Header">
<part name="Header_Part" element="ServiceHeader"/>
</message>
<portType name="EventsQueue_ptt">
<operation name="Dequeue">
<input message="tns:Notification"/>
</operation>
</portType>
<binding name="NotificationService" type="tns:EventsQueue_ptt">
<jca:binding/>
<operation name="Dequeue">
<jca:operation ActivationSpec="acme.ActivationSpec"/>
<input>
<jca:header message="tns:Header" part="Header_Part"/>
</input>
</operation>
</binding>
<service name="EventsQueue">
<port name="EventsQueue_pt" binding="tns:NotificationService">
<jca:address ResourceAdapter="acme.AdapterClass"/>
</port>
</service>
</definitions>
I created a BPEL process with a partner link based on this WSDL and linked a Receive activity to it. According to the Adapter Concepts Guide, here is what should happen when the BPEL process is deployed and started:
- The ResourceAdapter class name and the ActivationSpec parameter are captured in the WSDL extension section of the J2CA inbound interaction WSDL during design time and made available to BPEL Process Manager and the Adapter Framework during run time.
- An instance of the J2CA 1.5 ResourceAdapter class is created and the Start method of the J2CA ResourceAdapter class is called.
- Each inbound interaction operation referenced by the BPEL Process Manager process results in invoking the endpointActivation method of the J2CA 1.5 ResourceAdapter instance. The Adapter Framework creates the ActivationSpec class (a JavaBean) based on the ActivationSpec details present in the WSDL extension section of the J2CA inbound interaction and activates the endpoint of the J2CA 1.5 resource adapter.
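The expected call sequence can be mocked to show what a healthy activation looks like; this is a language-neutral illustration (in Python for brevity, not the actual J2CA API), with method names mirroring the guide's description:

```python
class MockResourceAdapter:
    """Mock tracing the J2CA 1.5 lifecycle calls the Adapter Framework is
    expected to make. If a real adapter's log shows neither entry, the
    framework never instantiated the adapter at all (e.g. a wrong class
    name in jca:address, or the RAR not deployed to the container)."""

    def __init__(self):
        self.calls = []

    def start(self, bootstrap_context):
        self.calls.append("start")

    def endpointActivation(self, endpoint_factory, activation_spec):
        self.calls.append("endpointActivation")

# What the container should do at deploy/activation time, in order:
ra = MockResourceAdapter()
ra.start(bootstrap_context=None)
ra.endpointActivation(endpoint_factory=None,
                      activation_spec="acme.ActivationSpec")
print(ra.calls)  # ['start', 'endpointActivation']
```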
The problem is that start(BootstrapContext) and endpointActivation(MessageEndpointFactory, ActivationSpec) never seem to be invoked; the adapter is never initialized, and at some point the Receive activity of the BPEL process times out.
I put some debug System.out.println statements in these methods, but nothing appeared in the server log.
I'm using Oracle SOA Suite 10.1.3.1.
I will be grateful for any suggestions.
Regards,
Simeon
Message was edited by:
skirov
user9116351 wrote:
Hi,
I have written a custom connector for CRUD operations on a table in an Oracle DB. The custom code uses oracle.jdbc.OracleDriver, which is present in ojdbc14.jar. I have placed this jar in the ThirdParty folder of the OIM installation, but I am still getting an SQLException while connecting to the DB, as my custom code is unable to find the required class. Can someone point me to documentation that will help me get past this?
Thanks,
Saiesh
Which version of OIM are you on? Also, you don't need to copy ojdbc14.jar, as it already exists in OIM. -
Hi All;
I am getting the below messages on the CVP Call Server version 8, and the Call Server is actually out of service. Any advice?
The CVP PG is up and activated, the CVP Call Server is registered on the gatekeeper (I saw this on the gatekeeper), and the VXML Server is up. But when I try to browse the statistics page using the Operations Console, it gives a message that it is not able to reach the server, and its status in the Operations Console is down. Below are the messages I see:
At CVP Call Server:
14:26:41 Trace: INFO: H323CallMgr::sendRAI: Successfully sent RAI for resource unavailability
14:27:26 Trace: INFO: H323CallMgr::sendRAI: Successfully sent RAI for resource unavailability
Unable to retrieve statistics for Unified CVP Call Server with IP Address: 10.180.22.137 and Hostname: vivadrcvp at this time.
14:26:41 Trace: INFO: H323CallMgr::sendRAI: Successfully sent RAI for resource unavailability
14:27:26 Trace: INFO: H323CallMgr::sendRAI: Successfully sent RAI for resource unavailability
At the Operational Console:
Unable to retrieve statistics for Unified CVP Call Server with IP Address: 10.180.22.137 and Hostname: vivadrcvp at this time.
What could be the reason? Is it the license?
How can I know if the license is not valid?
Regards
Bilal
Dear Geoff,
I am facing the same thing: in the OAMP it is shown as down, and I am not able to get any statistics for this CVP Call Server.
Actually, PG01, PG02 and PG03 are enabled in the router registry (PG01 is the CUCM PG, PG02 the CVP Call Server PG, and PG03 the Media Routing PG).
It was working before and I was receiving calls on it, but suddenly this happened.
Actually, an upgrade happened from version 7 to version 8, and we imported new licenses for the VXML Server but did not import new licenses for the CVP Call Server. Could this be a license issue, since we have to import a new license to move from version 7 to version 8?
Thanks in advance for the help.
Regards
Bilal -
Problem in Wfetch client for Update operations
Hi,
I am using the WFetch client for the Update operation of an RFC Gateway consumption model, but it always gives this error: 'HTTP/1.0 400 Bad Request\r\n'
I have passed only those fields that are exposed in the GW data model. Is there anything else that needs to be taken care of?
The Read and Query operations execute successfully, though.
Thanks,
Shubhada
Hi Shubhada,
Just recheck two things in WFetch with the below details.
1. Check the path. It should be in the below format for the Update operation:
Verb: PUT
Path: /sap/opu/sdata/sap/<CONSUM_MODEL>/<data_model>Collection(value=' ',scheme_id='<DATA_MODEL>',scheme_agency_id=' ')?sap-client=< >&$format=xml
You can take the above format from the Read operation, which you already executed successfully.
2. Check the XML format on the right-hand side in WFetch; it should be a header and body with the details below.
XML format:
x-requested-with: XMLHttpRequest\r\n
\r\n
<?xml version="1.0" encoding="utf-8" standalone="yes"?>\r\n
<entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">\r\n
<content type="application/xml">\r\n
<m:properties>\r\n
<d:value> </d:value> \r\n
<d:scheme_id> </d:scheme_id> \r\n
<d:scheme_agency_id> </d:scheme_agency_id> \r\n
Just copy this from the Read operation which you already executed successfully, and update the fields as required.
</m:properties>\r\n
</content>\r\n
</entry>\r\n
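As an illustration, the Atom entry body above can be generated rather than hand-typed; a minimal sketch using Python's standard library (the property names mirror the excerpt; real values should come from your successful Read response):

```python
from xml.etree import ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
D = "http://schemas.microsoft.com/ado/2007/08/dataservices"
M = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"

def build_update_entry(props: dict) -> bytes:
    """Build the Atom <entry> body for the Gateway Update (PUT) call."""
    ET.register_namespace("", ATOM)
    ET.register_namespace("d", D)
    ET.register_namespace("m", M)
    entry = ET.Element(f"{{{ATOM}}}entry")
    content = ET.SubElement(entry, f"{{{ATOM}}}content",
                            {"type": "application/xml"})
    properties = ET.SubElement(content, f"{{{M}}}properties")
    for name, value in props.items():
        ET.SubElement(properties, f"{{{D}}}{name}").text = value
    return ET.tostring(entry, xml_declaration=True, encoding="utf-8")

body = build_update_entry({"value": "X", "scheme_id": "MODEL",
                           "scheme_agency_id": "AG"})
print(body.decode())
```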
Hope the above information helps.
Thanks & Regards,
Mahesh Devershetty -
Hi all,
I am getting the below error whenever I execute the select query below.
Sometimes it shows "deadlock detected while waiting for resource" and terminates;
sometimes it executes and gives a result,
but every time it writes an alert to the alert log.
Please suggest how to resolve the issue.
Thanks in advance
Env: Linux / Oracle 11.2.0.3.3
Error from alert log:
Errors in file /u01/oracle/oracle/diag/rdbms/bdrdb/bdrdb/trace/bdrdb_p017_6076.trc:
ORA-00060: deadlock detected while waiting for resource
ORA-10387: parallel query server interrupt (normal)
Trace file info... bdrdb_p017_6076.trc:
Trace file /u01/oracle/oracle/diag/rdbms/bdrdb/bdrdb/trace/bdrdb_p017_6076.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/oracle/oracle/product/11.2.0/dbhome_1
System name: Linux
Node name: bdrdb.cteplindia.com
Release: 2.6.18-308.el5PAE
Version: #1 SMP Fri Jan 27 17:40:09 EST 2012
Machine: i686
Instance name: bdrdb
Redo thread mounted by this instance: 1
Oracle process number: 92
Unix process pid: 6076, image: [email protected] (P017)
*** 2013-11-04 23:18:57.915
*** SESSION ID:(423.59970) 2013-11-04 23:18:57.915
*** CLIENT ID:() 2013-11-04 23:18:57.915
*** SERVICE NAME:(bdrdb) 2013-11-04 23:18:57.915
*** MODULE NAME:() 2013-11-04 23:18:57.915
*** ACTION NAME:() 2013-11-04 23:18:57.915
*** 2013-11-04 23:18:57.915
DEADLOCK DETECTED ( ORA-00060 )
[Transaction Deadlock]
Deadlock graph:
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
PS-00000001-00000011 92 423 S 33 128 S X
BF-2ed08c01-00000000 33 128 S 92 423 S X
session 423: DID 0001-005C-00081126 session 128: DID 0001-0021-00067D23
session 128: DID 0001-0021-00067D23 session 423: DID 0001-005C-00081126
Rows waited on:
Session 423: no row
Session 128: obj - rowid = 00021DC1 - AAAh3BAAVAAAQL/AAA
(dictionary objn - 138689, file - 21, block - 66303, slot - 0)
----- Information for the OTHER waiting sessions -----
Session 128:
sid: 128 ser: 46176 audsid: 1836857 user: 102/DBLOCAL
flags: (0x8000041) USR/- flags_idl: (0x1) BSY/-/-/-/-/-
flags2: (0x40009) -/-/INC
pid: 33 O/S info: user: oracle, term: UNKNOWN, ospid: 31611
image: [email protected]
client details:
O/S info: user: masked, term: masked, ospid: 5924:568
machine: masked program: Toad.exe
application name: TOAD background query session, hash value=526966934
current SQL:
SELECT DISTINCT B_FP_TEST.TEST_ID
FROM B_FP_TEST,
B_USER_INFO,
J_FP_INVESTIGATOR,
L_TEST_STATUS,
L_ATMS_TEST_TYPE,
j_op_test_anml
WHERE B_FP_TEST.TEST_ID = J_FP_INVESTIGATOR.TEST_ID
AND B_FP_TEST.TEST_TYPE_ID = L_ATMS_TEST_TYPE.ATMS_TEST_TYPE_ID
AND B_USER_INFO.B_USER_INFO_ID = J_FP_INVESTIGATOR.INVESTIGATOR_ID
AND B_FP_TEST.STATUS_ID = L_TEST_STATUS.STATUS_ID
AND B_FP_TEST.IS_DELETED = :"SYS_B_00"
AND B_FP_TEST.TEST_NUM NOT IN (:"SYS_B_01", :"SYS_B_02", :"SYS_B_03")
AND L_ATMS_TEST_TYPE.IS_DELETED = :"SYS_B_04"
AND J_FP_INVESTIGATOR.is_pi = :"SYS_B_05"
AND L_TEST_STATUS.STATUS IN (:"SYS_B_06", :"SYS_B_07", :"SYS_B_08")
AND j_op_test_anml.test_id = B_FP_TEST.TEST_ID
----- End of information for the OTHER waiting sessions -----
*** 2013-11-04 23:18:57.916
dbkedDefDump(): Starting a non-incident diagnostic dump (flags=0x0, level=3, mask=0x0)
----- Error Stack Dump -----
ORA-00060: deadlock detected while waiting for resource
ORA-10387: parallel query server interrupt (normal)
----- SQL Statement (None) -----
Current SQL information unavailable - no cursor.
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
More......
Query:
SELECT DISTINCT B_FP_TEST.TEST_ID
FROM B_FP_TEST,
B_USER_INFO,
J_FP_INVESTIGATOR,
L_TEST_STATUS,
L_ATMS_TEST_TYPE,
j_op_test_anml
WHERE B_FP_TEST.TEST_ID = J_FP_INVESTIGATOR.TEST_ID
AND B_FP_TEST.TEST_TYPE_ID = L_ATMS_TEST_TYPE.ATMS_TEST_TYPE_ID
AND B_USER_INFO.B_USER_INFO_ID = J_FP_INVESTIGATOR.INVESTIGATOR_ID
AND B_FP_TEST.STATUS_ID = L_TEST_STATUS.STATUS_ID
AND B_FP_TEST.IS_DELETED = 0
AND B_FP_TEST.TEST_NUM NOT IN (1, 2, 99)
AND L_ATMS_TEST_TYPE.IS_DELETED = 0
AND J_FP_INVESTIGATOR.is_pi = 1
AND L_TEST_STATUS.STATUS IN ('Scheduled', 'In-Progress', 'Completed')
AND j_op_test_anml.test_id = B_FP_TEST.TEST_ID
AND ( (j_op_test_anml.end_date BETWEEN TO_DATE ('28-Oct-2013') - 1
AND TO_DATE ('04-Nov-2013') + 1)
OR (j_op_test_anml.start_date BETWEEN TO_DATE ('28-Oct-2013') - 1
AND TO_DATE ('04-Nov-2013') + 1)
OR (TO_DATE ('28-Oct-2013') BETWEEN j_op_test_anml.start_date
AND j_op_test_anml.end_date)
OR (TO_DATE ('04-Nov-2013') BETWEEN j_op_test_anml.start_date
AND j_op_test_anml.end_date))
AND L_ATMS_TEST_TYPE.IS_DELETED = 0
AND B_FP_TEST.DATASOURCE_ID = 9
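The four OR-ed date conditions in the WHERE clause are an interval-overlap test; a sketch verifying that they reduce to a single comparison (the ±1-day padding on the BETWEENs is dropped for clarity, and this is not a rewrite of the production SQL):

```python
from datetime import date, timedelta

def overlaps_query_style(s, e, lo, hi):
    """The four OR conditions as written in the SQL above."""
    return (lo <= e <= hi) or (lo <= s <= hi) or (s <= lo <= e) or (s <= hi <= e)

def overlaps_simple(s, e, lo, hi):
    """Equivalent single test: two ranges overlap iff each one
    starts no later than the other ends."""
    return s <= hi and e >= lo

lo, hi = date(2013, 10, 28), date(2013, 11, 4)
base = date(2013, 10, 1)
# Exhaustive check over a window of candidate (start, end) ranges:
for i in range(60):
    for j in range(i, 60):
        s, e = base + timedelta(days=i), base + timedelta(days=j)
        assert overlaps_query_style(s, e, lo, hi) == overlaps_simple(s, e, lo, hi)
print("equivalent on all tested ranges")
```

Rewriting the predicate as the single overlap test would not change the results, though whether it changes the plan (and the deadlock behaviour under parallel query) would need to be verified on the actual system.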
Query execution plan:
Plan hash value: 3398228788
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1501 | 102K| 1929 (1)| 00:00:24 | | | | | |
| 1 | HASH UNIQUE | | 1501 | 102K| 1929 (1)| 00:00:24 | | | | | |
| 2 | CONCATENATION | | | | | | | | | | |
| 3 | PX COORDINATOR | | | | | | | | | | |
| 4 | PX SEND QC (RANDOM) | :TQ30005 | 241 | 16870 | 800 (1)| 00:00:10 | | | Q3,05 | P->S | QC (RAND) |
|* 5 | HASH JOIN | | 241 | 16870 | 800 (1)| 00:00:10 | | | Q3,05 | PCWP | |
| 6 | PX RECEIVE | | 246 | 15990 | 797 (1)| 00:00:10 | | | Q3,05 | PCWP | |
| 7 | PX SEND HASH | :TQ30004 | 246 | 15990 | 797 (1)| 00:00:10 | | | Q3,04 | P->P | HASH |
|* 8 | HASH JOIN | | 246 | 15990 | 797 (1)| 00:00:10 | | | Q3,04 | PCWP | |
| 9 | PX RECEIVE | | 573 | 29223 | 793 (1)| 00:00:10 | | | Q3,04 | PCWP | |
| 10 | PX SEND HASH | :TQ30003 | 573 | 29223 | 793 (1)| 00:00:10 | | | Q3,03 | P->P | HASH |
|* 11 | HASH JOIN | | 573 | 29223 | 793 (1)| 00:00:10 | | | Q3,03 | PCWP | |
| 12 | BUFFER SORT | | | | | | | | Q3,03 | PCWC | |
| 13 | PX RECEIVE | | | | | | | | Q3,03 | PCWP | |
| 14 | PX SEND BROADCAST | :TQ30000 | | | | | | | | S->P | BROADCAST |
| 15 | NESTED LOOPS | | | | | | | | | | |
| 16 | NESTED LOOPS | | 485 | 20855 | 781 (0)| 00:00:10 | | | | | |
| 17 | TABLE ACCESS BY GLOBAL INDEX ROWID| J_OP_TEST_ANML | 485 | 10185 | 296 (0)| 00:00:04 | ROWID | ROWID | | | |
|* 18 | INDEX RANGE SCAN | IDX$$_2D190001 | 485 | | 4 (0)| 00:00:01 | | | | | |
|* 19 | INDEX UNIQUE SCAN | FT_TEST_ID_PK | 1 | | 0 (0)| 00:00:01 | | | | | |
|* 20 | TABLE ACCESS BY GLOBAL INDEX ROWID | B_FP_TEST | 1 | 22 | 1 (0)| 00:00:01 | ROWID | ROWID | | | |
| 21 | PX BLOCK ITERATOR | | 70382 | 549K| 11 (0)| 00:00:01 | | | Q3,03 | PCWC | |
|* 22 | TABLE ACCESS FULL | J_FP_INVESTIGATOR | 70382 | 549K| 11 (0)| 00:00:01 | | | Q3,03 | PCWP | |
| 23 | BUFFER SORT | | | | | | | | Q3,04 | PCWC | |
| 24 | PX RECEIVE | | 3 | 42 | 3 (0)| 00:00:01 | | | Q3,04 | PCWP | |
| 25 | PX SEND HASH | :TQ30001 | 3 | 42 | 3 (0)| 00:00:01 | | | | S->P | HASH |
|* 26 | TABLE ACCESS FULL | L_TEST_STATUS | 3 | 42 | 3 (0)| 00:00:01 | | | | | |
| 27 | BUFFER SORT | | | | | | | | Q3,05 | PCWC | |
| 28 | PX RECEIVE | | 30 | 150 | 3 (0)| 00:00:01 | | | Q3,05 | PCWP | |
| 29 | PX SEND HASH | :TQ30002 | 30 | 150 | 3 (0)| 00:00:01 | | | | S->P | HASH |
|* 30 | TABLE ACCESS FULL | L_ATMS_TEST_TYPE | 30 | 150 | 3 (0)| 00:00:01 | | | | | |
| 31 | NESTED LOOPS | | | | | | | | | | |
| 32 | NESTED LOOPS | | 3 | 210 | 329 (1)| 00:00:04 | | | | | |
| 33 | NESTED LOOPS | | 3 | 195 | 329 (1)| 00:00:04 | | | | | |
|* 34 | HASH JOIN | | 2 | 114 | 325 (1)| 00:00:04 | | | | | |
| 35 | NESTED LOOPS | | | | | | | | | | |
| 36 | NESTED LOOPS | | 6 | 258 | 322 (1)| 00:00:04 | | | | | |
| 37 | PARTITION RANGE SINGLE | | 6 | 126 | 316 (1)| 00:00:04 | 7 | 7 | | | |
|* 38 | TABLE ACCESS FULL | J_OP_TEST_ANML | 6 | 126 | 316 (1)| 00:00:04 | 7 | 7 | | | |
|* 39 | INDEX UNIQUE SCAN | FT_TEST_ID_PK | 1 | | 0 (0)| 00:00:01 | | | | | |
|* 40 | TABLE ACCESS BY GLOBAL INDEX ROWID | B_FP_TEST | 1 | 22 | 1 (0)| 00:00:01 | ROWID | ROWID | | | |
|* 41 | TABLE ACCESS FULL | L_TEST_STATUS | 3 | 42 | 3 (0)| 00:00:01 | | | | | |
|* 42 | TABLE ACCESS BY INDEX ROWID | J_FP_INVESTIGATOR | 1 | 8 | 2 (0)| 00:00:01 | | | | | |
|* 43 | INDEX RANGE SCAN | FI_TEST_ID_PK | 1 | | 1 (0)| 00:00:01 | | | | | |
|* 44 | INDEX UNIQUE SCAN | L_ATMS_TEST_TYPE_PK | 1 | | 0 (0)| 00:00:01 | | | | | |
|* 45 | TABLE ACCESS BY INDEX ROWID | L_ATMS_TEST_TYPE | 1 | 5 | 1 (0)| 00:00:01 | | | | | |
| 46 | PX COORDINATOR | | | | | | | | | | |
| 47 | PX SEND QC (RANDOM) | :TQ20003 | | | | | | | Q2,03 | P->S | QC (RAND) |
| 48 | NESTED LOOPS | | | | | | | | Q2,03 | PCWP | |
| 49 | NESTED LOOPS | | 33 | 2310 | 399 (2)| 00:00:05 | | | Q2,03 | PCWP | |
|* 50 | HASH JOIN | | 33 | 2145 | 397 (2)| 00:00:05 | | | Q2,03 | PCWP | |
| 51 | PX RECEIVE | | 78 | 3978 | 393 (1)| 00:00:05 | | | Q2,03 | PCWP | |
| 52 | PX SEND HASH | :TQ20002 | 78 | 3978 | 393 (1)| 00:00:05 | | | Q2,02 | P->P | HASH |
|* 53 | HASH JOIN | | 78 | 3978 | 393 (1)| 00:00:05 | | | Q2,02 | PCWP | |
| 54 | BUFFER SORT | | | | | | | | Q2,02 | PCWC | |
| 55 | PX RECEIVE | | | | | | | | Q2,02 | PCWP | |
| 56 | PX SEND BROADCAST | :TQ20000 | | | | | | | | S->P | BROADCAST |
| 57 | NESTED LOOPS | | | | | | | | | | |
| 58 | NESTED LOOPS | | 66 | 2838 | 382 (1)| 00:00:05 | | | | | |
| 59 | PARTITION RANGE SINGLE | | 66 | 1386 | 316 (1)| 00:00:04 | 7 | 7 | | | |
|* 60 | TABLE ACCESS FULL | J_OP_TEST_ANML | 66 | 1386 | 316 (1)| 00:00:04 | 7 | 7 | | | |
|* 61 | INDEX UNIQUE SCAN | FT_TEST_ID_PK | 1 | | 0 (0)| 00:00:01 | | | | | |
|* 62 | TABLE ACCESS BY GLOBAL INDEX ROWID | B_FP_TEST | 1 | 22 | 1 (0)| 00:00:01 | ROWID | ROWID | | | |
| 63 | PX BLOCK ITERATOR | | 70382 | 549K| 11 (0)| 00:00:01 | | | Q2,02 | PCWC | |
|* 64 | TABLE ACCESS FULL | J_FP_INVESTIGATOR | 70382 | 549K| 11 (0)| 00:00:01 | | | Q2,02 | PCWP | |
| 65 | BUFFER SORT | | | | | | | | Q2,03 | PCWC | |
| 66 | PX RECEIVE | | 3 | 42 | 3 (0)| 00:00:01 | | | Q2,03 | PCWP | |
| 67 | PX SEND HASH | :TQ20001 | 3 | 42 | 3 (0)| 00:00:01 | | | | S->P | HASH |
|* 68 | TABLE ACCESS FULL | L_TEST_STATUS | 3 | 42 | 3 (0)| 00:00:01 | | | | | |
|* 69 | INDEX UNIQUE SCAN | L_ATMS_TEST_TYPE_PK | 1 | | 0 (0)| 00:00:01 | | | Q2,03 | PCWP | |
|* 70 | TABLE ACCESS BY INDEX ROWID | L_ATMS_TEST_TYPE | 1 | 5 | 1 (0)| 00:00:01 | | | Q2,03 | PCWP | |
| 71 | PX COORDINATOR | | | | | | | | | | |
| 72 | PX SEND QC (RANDOM) | :TQ10003 | | | | | | | Q1,03 | P->S | QC (RAND) |
| 73 | NESTED LOOPS | | | | | | | | Q1,03 | PCWP | |
| 74 | NESTED LOOPS | | 33 | 2310 | 399 (2)| 00:00:05 | | | Q1,03 | PCWP | |
|* 75 | HASH JOIN | | 34 | 2210 | 397 (2)| 00:00:05 | | | Q1,03 | PCWP | |
| 76 | PX RECEIVE | | 78 | 3978 | 393 (1)| 00:00:05 | | | Q1,03 | PCWP | |
| 77 | PX SEND HASH | :TQ10002 | 78 | 3978 | 393 (1)| 00:00:05 | | | Q1,02 | P->P | HASH |
|* 78 | HASH JOIN | | 78 | 3978 | 393 (1)| 00:00:05 | | | Q1,02 | PCWP | |
| 79 | BUFFER SORT | | | | | | | | Q1,02 | PCWC | |
| 80 | PX RECEIVE | | | | | | | | Q1,02 | PCWP | |
| 81 | PX SEND BROADCAST | :TQ10000 | | | | | | | | S->P | BROADCAST |
| 82 | NESTED LOOPS | | | | | | | | | | |
| 83 | NESTED LOOPS | | 66 | 2838 | 382 (1)| 00:00:05 | | | | | |
| 84 | PARTITION RANGE SINGLE | | 66 | 1386 | 316 (1)| 00:00:04 | 7 | 7 | | | |
|* 85 | TABLE ACCESS FULL | J_OP_TEST_ANML | 66 | 1386 | 316 (1)| 00:00:04 | 7 | 7 | | | |
|* 86 | INDEX UNIQUE SCAN | FT_TEST_ID_PK | 1 | | 0 (0)| 00:00:01 | | | | | |
|* 87 | TABLE ACCESS BY GLOBAL INDEX ROWID | B_FP_TEST | 1 | 22 | 1 (0)| 00:00:01 | ROWID | ROWID | | | |
| 88 | PX BLOCK ITERATOR | | 70382 | 549K| 11 (0)| 00:00:01 | | | Q1,02 | PCWC | |
|* 89 | TABLE ACCESS FULL | J_FP_INVESTIGATOR | 70382 | 549K| 11 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 90 | BUFFER SORT | | | | | | | | Q1,03 | PCWC | |
| 91 | PX RECEIVE | | 3 | 42 | 3 (0)| 00:00:01 | | | Q1,03 | PCWP | |
| 92 | PX SEND HASH | :TQ10001 | 3 | 42 | 3 (0)| 00:00:01 | | | | S->P | HASH |
|* 93 | TABLE ACCESS FULL | L_TEST_STATUS | 3 | 42 | 3 (0)| 00:00:01 | | | | | |
|* 94 | INDEX UNIQUE SCAN | L_ATMS_TEST_TYPE_PK | 1 | | 0 (0)| 00:00:01 | | | Q1,03 | PCWP | |
|* 95 | TABLE ACCESS BY INDEX ROWID | L_ATMS_TEST_TYPE | 1 | 5 | 1 (0)| 00:00:01 | | | Q1,03 | PCWP | |
Predicate Information (identified by operation id):
5 - access("B_FP_TEST"."TEST_TYPE_ID"="L_ATMS_TEST_TYPE"."ATMS_TEST_TYPE_ID")
8 - access("B_FP_TEST"."STATUS_ID"="L_TEST_STATUS"."STATUS_ID")
11 - access("B_FP_TEST"."TEST_ID"="J_FP_INVESTIGATOR"."TEST_ID")
18 - access("J_OP_TEST_ANML"."START_DATE">=TO_DATE(' 2013-10-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "J_OP_TEST_ANML"."START_DATE"<=TO_DATE(' 2013-11-05
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
19 - access("J_OP_TEST_ANML"."TEST_ID"="B_FP_TEST"."TEST_ID")
20 - filter("B_FP_TEST"."DATASOURCE_ID"=9 AND "B_FP_TEST"."IS_DELETED"=0 AND "B_FP_TEST"."TEST_NUM"<>1 AND "B_FP_TEST"."TEST_NUM"<>2 AND
"B_FP_TEST"."TEST_NUM"<>99)
22 - filter("J_FP_INVESTIGATOR"."IS_PI"=1)
26 - filter("L_TEST_STATUS"."STATUS"='Completed' OR "L_TEST_STATUS"."STATUS"='In-Progress' OR "L_TEST_STATUS"."STATUS"='Scheduled')
30 - filter("L_ATMS_TEST_TYPE"."IS_DELETED"=0)
34 - access("B_FP_TEST"."STATUS_ID"="L_TEST_STATUS"."STATUS_ID")
38 - filter("J_OP_TEST_ANML"."END_DATE">=TO_DATE(' 2013-10-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "J_OP_TEST_ANML"."END_DATE"<=TO_DATE(' 2013-11-05
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND (LNNVL("J_OP_TEST_ANML"."START_DATE">=TO_DATE(' 2013-10-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR
LNNVL("J_OP_TEST_ANML"."START_DATE"<=TO_DATE(' 2013-11-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
39 - access("J_OP_TEST_ANML"."TEST_ID"="B_FP_TEST"."TEST_ID")
40 - filter("B_FP_TEST"."DATASOURCE_ID"=9 AND "B_FP_TEST"."IS_DELETED"=0 AND "B_FP_TEST"."TEST_NUM"<>1 AND "B_FP_TEST"."TEST_NUM"<>2 AND
"B_FP_TEST"."TEST_NUM"<>99)
41 - filter("L_TEST_STATUS"."STATUS"='Completed' OR "L_TEST_STATUS"."STATUS"='In-Progress' OR "L_TEST_STATUS"."STATUS"='Scheduled')
42 - filter("J_FP_INVESTIGATOR"."IS_PI"=1)
43 - access("B_FP_TEST"."TEST_ID"="J_FP_INVESTIGATOR"."TEST_ID")
44 - access("B_FP_TEST"."TEST_TYPE_ID"="L_ATMS_TEST_TYPE"."ATMS_TEST_TYPE_ID")
45 - filter("L_ATMS_TEST_TYPE"."IS_DELETED"=0)
50 - access("B_FP_TEST"."STATUS_ID"="L_TEST_STATUS"."STATUS_ID")
53 - access("B_FP_TEST"."TEST_ID"="J_FP_INVESTIGATOR"."TEST_ID")
60 - filter("J_OP_TEST_ANML"."END_DATE">=TO_DATE(' 2013-11-04 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "J_OP_TEST_ANML"."START_DATE"<=TO_DATE(' 2013-11-04
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND (LNNVL("J_OP_TEST_ANML"."END_DATE">=TO_DATE(' 2013-10-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR
LNNVL("J_OP_TEST_ANML"."END_DATE"<=TO_DATE(' 2013-11-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("J_OP_TEST_ANML"."START_DATE">=TO_DATE(' 2013-10-27
00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("J_OP_TEST_ANML"."START_DATE"<=TO_DATE(' 2013-11-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
61 - access("J_OP_TEST_ANML"."TEST_ID"="B_FP_TEST"."TEST_ID")
62 - filter("B_FP_TEST"."DATASOURCE_ID"=9 AND "B_FP_TEST"."IS_DELETED"=0 AND "B_FP_TEST"."TEST_NUM"<>1 AND "B_FP_TEST"."TEST_NUM"<>2 AND
"B_FP_TEST"."TEST_NUM"<>99)
64 - filter("J_FP_INVESTIGATOR"."IS_PI"=1)
68 - filter("L_TEST_STATUS"."STATUS"='Completed' OR "L_TEST_STATUS"."STATUS"='In-Progress' OR "L_TEST_STATUS"."STATUS"='Scheduled')
69 - access("B_FP_TEST"."TEST_TYPE_ID"="L_ATMS_TEST_TYPE"."ATMS_TEST_TYPE_ID")
70 - filter("L_ATMS_TEST_TYPE"."IS_DELETED"=0)
75 - access("B_FP_TEST"."STATUS_ID"="L_TEST_STATUS"."STATUS_ID")
78 - access("B_FP_TEST"."TEST_ID"="J_FP_INVESTIGATOR"."TEST_ID")
85 - filter("J_OP_TEST_ANML"."END_DATE">=TO_DATE(' 2013-10-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "J_OP_TEST_ANML"."START_DATE"<=TO_DATE(' 2013-10-28
00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND (LNNVL("J_OP_TEST_ANML"."END_DATE">=TO_DATE(' 2013-11-04 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR
LNNVL("J_OP_TEST_ANML"."START_DATE"<=TO_DATE(' 2013-11-04 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("J_OP_TEST_ANML"."END_DATE">=TO_DATE(' 2013-10-27
00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("J_OP_TEST_ANML"."END_DATE"<=TO_DATE(' 2013-11-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
(LNNVL("J_OP_TEST_ANML"."START_DATE">=TO_DATE(' 2013-10-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("J_OP_TEST_ANML"."START_DATE"<=TO_DATE(' 2013-11-05
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
86 - access("J_OP_TEST_ANML"."TEST_ID"="B_FP_TEST"."TEST_ID")
87 - filter("B_FP_TEST"."DATASOURCE_ID"=9 AND "B_FP_TEST"."IS_DELETED"=0 AND "B_FP_TEST"."TEST_NUM"<>1 AND "B_FP_TEST"."TEST_NUM"<>2 AND
"B_FP_TEST"."TEST_NUM"<>99)
89 - filter("J_FP_INVESTIGATOR"."IS_PI"=1)
93 - filter("L_TEST_STATUS"."STATUS"='Completed' OR "L_TEST_STATUS"."STATUS"='In-Progress' OR "L_TEST_STATUS"."STATUS"='Scheduled')
94 - access("B_FP_TEST"."TEST_TYPE_ID"="L_ATMS_TEST_TYPE"."ATMS_TEST_TYPE_ID")
95 - filter("L_ATMS_TEST_TYPE"."IS_DELETED"=0)
Excellent piece of follow-up on my first suggestion.
I nearly made a comment about how the plan doesn't show Bloom filter pruning either - and then I realised why not. The plan you've shown us comes from Explain Plan with literal values present; the trace file shows bind variables with names that are generated when cursor_sharing is set to force or similar - so the run-time plan and the plan from explain plan are almost guaranteed to be different.
Oracle support will need you to supply the plan you get from actually running the query and then calling dbms_xplan.display_cursor() - dbms_xplan in 10g | Oracle Scratchpad. If you do this I think you'll find that the pstart/pstop columns contain entries like :BF0000, and you may even find operations like PX JOIN FILTER CREATE / PX JOIN FILTER USE.
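A minimal sketch of that call, run in the same session immediately after executing the real query (the format options shown are illustrative choices, not the only valid ones):

```sql
-- Run the actual query first (not EXPLAIN PLAN), then ask for the
-- run-time plan of the last statement executed in this session:
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL +PARTITION'));

-- Or pull the run-time plan for a specific statement from the shared pool,
-- identified by its sql_id and child cursor number:
select * from table(dbms_xplan.display_cursor('&sql_id', 0, 'TYPICAL +PARTITION'));
```

This is the plan in which the Pstart/Pstop columns would show Bloom-filter entries such as :BF0000 if run-time pruning occurred.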
A couple of generic notes:
If a query does sufficient work to merit parallel execution, then it's usually better to supply the best possible information to the optimizer, which means using literals rather than bind variables. You could try executing the query with the hint /*+ cursor_sharing_exact */ to stop Oracle from turning your literals into binds; it might be the presence of bind variables that's making the optimizer choose a path that has to include bloom filter pruning in your case.
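As a sketch of where that hint goes (the column and predicate list here is only illustrative, borrowed from the table names visible in the plan above), the hint sits in the top-level select and keeps the literals visible to the optimizer even when cursor_sharing = FORCE is set system-wide:

```sql
select /*+ cursor_sharing_exact */
       t.test_id
from   b_fp_test t
where  t.datasource_id = 9        -- literals stay literals:
and    t.is_deleted    = 0        -- not replaced by :SYS_B_n binds
and    t.test_num not in (1, 2, 99);
```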
Where you have the to_date() call you've used a four-digit year - which is a very good thing and helps the optimizer - but it's also a good idea to include an explicit format string: with a four-digit year this probably won't make any difference, but it avoids any risk of ambiguity for the optimizer.
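For example, a sketch of the date predicate with an explicit format mask, mirroring the values already visible in the predicate section:

```sql
-- Explicit mask: no dependence on the session's NLS_DATE_FORMAT setting
where start_date >= to_date('2013-10-27 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
  and start_date <= to_date('2013-11-05 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
```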
I made a comment about the P->S stage and bottlenecks - I spent a couple more minutes looking at the plan, and I see the optimizer has used concatenation: in effect it has run three query blocks one after the other and fed the results to the query co-ordinator. In this case the P->S would make no difference to the end-user response time; there's always a final P->S to the coordinator, you just happen to have three of them.
Regards
Jonathan Lewis -
I encountered this problem on our SQL2012 and I have tried different scenarios (see below) to no avail. I have decided to give up and check if someone here has encountered this and resolved it.
One thing I know: it's not a memory issue. Both servers we're using have lots of memory to spare, and we monitor the memory as the replication goes through its steps.
I hope someone can help me on this. Thanks!
The Error:
The merge process could not allocate memory for an operation; your system may be running low on virtual memory. Restart the Merge Agent.
Our Scenario
We're using SQL Server 2012 SP1. All subscriptions are pull based.
We're using direct Merge Replication (not FTP or web sync)
We already have 10 active replications with larger databases. Only 1 has this issue.
Database size is less than 5 GB
Rebuilding the publisher is not an option.
What have I tried?
There is no memory problem --- we have lots to spare
I have tried re-initialization of the database. Same problem.
I tried deleting the database and reinitializing it. Same problem
New snapshot. Same problem.
I tried changing the subscriber server but still same issue.
MCP, MCSD, MCDBA (ANXA)
Here is the result of sp_configure on our subscriber. We're doing a pull on the server with the issue.
name                    minimum  maximum     config_value  run_value
max server memory (MB)  128      2147483647  2147483647    2147483647
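For reference, the output above can be reproduced on the subscriber with sp_configure (T-SQL sketch; 'show advanced options' must be enabled before this setting is listed):

```sql
-- Enable display of advanced options, then read the memory cap
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
```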
In addition, I made a comparison between the working servers and the one with the issue - there seems to be a difference in the service pack: the publisher has none but the subscriber is on SP1. Still strange, though, as only one database is affected.
MCP, MCSD, MCDBA (ANXA) -
PS2013 - When creating a new instance of Project Server hangs in 'Waiting for resources' status
Hi,
I have one instance of Project Server 2013 fully operational and I tried to duplicate the instance to make some tests. As I know about the issue of using the same Project Server content database in the same server farm, I used the PowerShell backup/restore/dismount/mount of the content database to change the site IDs and avoid index duplication. The Project Server database was a regular SQL backup and restore into another database.
I created a new Web App in the port 90 as show below.
Then I included the Project Server content database as a separate DB from SharePoint for this new SharePoint-acme90 and I tried to create the new instance. The creation hung in "Waiting for Resources" status.
To make another check, excluding the reuse of the SharePoint-80, I tried to create another instance both in the SharePoint-80 (where the working instance is) and in the SharePoint-90, everything default, and again they all hung in "Waiting for Resources".
If I try to create the instance using PowerShell I get the following error:
PS C:\Users\epm_setup> Mount-SPProjectWebInstance -DatabaseName Test_EPM -SiteCollection http://acme02/epm -Lcid 1046
Mount-SPProjectWebInstance : Cannot find an SPSite object with Id or Url: http://acme02/epm.
At line:1 char:1
+ Mount-SPProjectWebInstance -DatabaseName Test_EPM -SiteCollection http://acme02/ ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (Microsoft.Offic...ountPwaInstance:PSCmdletMountPwaInstance) [Mount-SPProjectWebInstance], SPCmdletPipeBindException
    + FullyQualifiedErrorId : Microsoft.Office.Project.Server.Cmdlet.PSCmdletMountPwaInstance
All SharePoint and Project Server services are running, all App Pools and sites are started at the IIS. I could not find a hanging timer job.
I cannot stop the hung process or dismount the instances using PowerShell, since no instance was created.
How should I solve the hanging status of the instance creation? As they are in Hyper-V, I can go back using a snapshot.
Thank you.
Best regards, Ricardo Segawa - Segawas Projetos / Microsoft Partner
Hi Eric,
Thank you for your interest in this case.
I checked for running and crashed PWA jobs and deleted all of them just after restoring the snapshot, then tried to create the new instance in the new web app on port 90 (besides the existing and working instance on port 80), but again it hung in "Waiting for Resources". There is no timer job hanging, and no error in the event viewer or in the log. So the error occurs well before working with cloned DBs.
Answering your question, I am working all on 2013. My intention is to back up one instance on port 80 and copy it to the instance on port 90, changing of course the URL and the index of the SharePoint content DB. The process I used was:
Create a new web app on port 90, creating a new SharePoint_Content_2 on a http://server:90 site.
Created the top level site called Portal using the Team Site template.
Create a new content db for new instance of Project Server named EPM_Content_2 using Central Admin.
Backup content db from port 80 instance of Project Server and restore to this EPM_content_2 using PowerShell cmd.
Dismounted and mounted this Project Server content db to create new index for existing sites to avoid index conflicts.
Backup the Project Server DB from port 80 using SQL backup and restored as ProjectWebApp2 db for port 90 instance.
Tried to create a new instance of Project Server at http://server:90/pwa on the port 90 web app, using the ProjectWebApp2 db and the same app pool as the other instance. But as in the previous case, it hung in "Waiting for Resources".
Best regards, Ricardo Segawa - Segawas Projetos / Microsoft Partner -
Instructions for Downloading and Activating the JCOP Tools
In response to my question:
I refer to JCOP Tools available on Eclipse.
Is the plug-in free for download?
If it is not, kindly provide a proper hyperlink from which those interested can place their order.
Listed below is the reply from IBM. Can anyone who has successfully obtained the activation code this way confirm whether you need to pay the postage?
Cheers!!!!
There is no charge for the JCOP Tools. The tools are provided as-is with no warranty or support. Instructions for downloading and activating the tools follow.
Please note that because our Java Card operating system is now available for more than one silicon vendor, IBM will no longer act as a distributor for sample cards. If you require a sample card, you must now approach the silicon vendors directly. Our current silicon partners are Philips Semiconductor, Samsung and Sharp. In addition you may be able to obtain JCOP sample cards from suppliers on this list http://www.zurich.ibm.com/jcop/order/cards.html
JCOP Tools are subject to US Government Export Controls, and therefore each install has to be activated individually to ensure compliance. Please follow the instructions below for each copy of the tools that need to be activated.
Prerequisites:
1. If you do not already have a Java Runtime Environment (JRE) installed, download and install a JRE or JDK. You can do so from this website (http://java.sun.com/j2se/1.4.2/download.html). Please note that we recommend the use of JRE version 1.4.2.
2. If you do not already have the open source software development environment Eclipse installed, download and install Eclipse (http://www.eclipse.org/), You can do so from this website(http://www.eclipse.org/downloads/index.php). JCOP Tools require Eclipse 3.1
To download the tools and start install:
1. Download the current Update Site image from here (http://www.zurich.ibm.com/jcop/download/eclipse/image/tools.zip)
2. Unzip the downloaded file to a location of your choice
3. Start the Eclipse IDE
4. From the menu bar click on Help > Software Updates > Find and Install
5. In the Install/Update Dialog select Search for new features to install
6. Click Next
7. Click on New Archived Site... and browse to the location chosen in step 2
8. Select the file tools.zip
9. Click Open then OK then Finish
10. Eclipse Update Manager will start to install the plug-in, continue with the install as needed.
For an activation code please send me ([email protected]) the following information:
1. Your full postal address - the serial number will be sent via International Courier for US Export control reasons.
2. Your contact telephone number
3. The serial number of your JCOP tools install
4. Your planned usage/reason for needing the JCOP tools
5. If a student, a copy (fax or digital photo) of your student ID
For the serial number (item 3 above):
1. Ensure you have downloaded and installed the JCOP Tools
2. Start the Eclipse IDE
3. From the menu bar click on File > New > Project
4. In the New Project Dialog expand the Java Card folder
5. Select Java Card Project
6. Click Next
7. You should now see the JCOP Licensing Wizard
8. Click Next
9. Select Verify an Activation Code
10. Click Next
11. The Serial Number should appear on the next page, above the Activation Code entry fields.
12. Once you have that number click on Cancel then Abort (Note: the Java Card project choice will be disabled until the next time you restart Eclipse)
For those who are interested in using the JCOP simulator for a start, I have checked that the plug-in and feature files (in tools.zip) for Eclipse are still available at the IBM site; please download and install them yourself.
I still keep a copy of user guide for JCOP tools (version 3.1.1a) and the contents are still relevant to Version 3.1.1b simulator in tools.zip.
If you are interested to have a copy of such document, kindly drop me an email to [email protected] -
How to analyse actual cost for each operation in Process/Production Order?
Hi all,
In SAP, I know that we can analyze the planned cost for an operation in a Process Order / Production Order.
However, I cannot analyze the actual cost for an operation.
I know the account assignment object when I post a goods issue for material is the ORDER, not the OPERATION.
But users want to analyze the actual cost for each operation.
Can SAP do this analysis?
If yes, what should I do in SAP to get these reports?
Thanks in advance,
Emily Nguyen
Hi Emily,
as I see from one other message, you're using the material ledger. Then your expectation is apparently also to see the actual costs of materials in your production order by operation.
This is not supported. Actual material costs are handed up in material ledger multi-level settlement from one material to the next, bypassing the production orders.
That means the production order will always contain the material costs valuated at plan price, not at actual price.
For activity prices that might be better, when you revalue the orders at actual activity prices with CON2. But even there I am not sure that it will always be assigned correctly to the operations.
There is currently an enhancement in development, operation-level costing. But that will affect the planned cost side more than the actuals. It might be interesting to learn more about your requirements and the use case behind them.
best regards,
Udo