Programming in a Clustered Server Environment
Sorry if this is not the right group, but I'm not sure of the best one; if you know, please point out where this message should have been posted.
We have a clustered WebSphere server running our J2EE apps. Everything is OK until you want to run scheduled jobs (in this case, using Quartz). The catch is that I only want one of the servers in the cluster to run the process, and at the moment, when the scheduled time arrives, they both start. This runs an EJB service that can take several minutes to finish. Having two running at the same time on the same DB isn't the best... in fact, as you can imagine, it's really, really bad.
What is the best way to synchronize code between different JVMs on different servers (by the way, these clustered servers don't share disks)?
The two (not so good) solutions that I have come up with are:
1) The application knows the names of both servers and at startup asks "am I running on server X?"; only if true will it call the EJB service. Yuk... (this is what I have done in the past)
2) Have the EJB run its transactions as bean-managed transactions so that they can use a special DB table to share a synchronization flag. I don't like this because of the BMT, and because using a table for this purpose doesn't seem right.
How does/would everyone else handle this type of problem?
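For reference, Quartz's JDBC-backed job store has a clustered mode in which the instances coordinate through shared database tables and take a row lock before firing, so exactly one node runs each trigger. A minimal quartz.properties sketch of that mode (property names per the Quartz documentation; the data source name myDS is a placeholder):

```properties
# Store jobs in the shared database rather than RAM
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.tablePrefix = QRTZ_

# Clustering: instances sharing these tables elect one to fire each trigger
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000

# Each node needs a unique instance id; AUTO generates one
org.quartz.scheduler.instanceId = AUTO
```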
Thanks Heaps,
Owen.
I am not familiar with anything called Quartz, but I think this issue should be handled by the task scheduler itself.
Where I work, the task scheduler we use (an in-house developed one) takes the following approach:
Once a task is posted it is in the "posted" state, and once a batch server (that's what we call the service that executes it) picks a task up, it changes the state to "executing". Once the execution is complete, it changes the state to "ready". If an exception occurs, it aborts the operation and sets the state to "error".
A batch server can only pick up tasks in the "posted" state, so two services will not pick up the same task.
By the way, tasks in the "error" state can be reset to the "posted" state by the user.
You probably need a solution like this. Either you have to develop one, or find one that considers the existence of multiple execution services.
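The approach above only works if the posted-to-executing transition is atomic (in a database that would be something like UPDATE tasks SET state='EXECUTING' WHERE id=? AND state='POSTED', checking the updated row count). A minimal in-process sketch of the same compare-and-set idea, with a map standing in for the task table (all names hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simulates the task-state table; in production this would be a DB row
// claimed with an atomic conditional UPDATE instead of an in-memory map.
class TaskTable {
    private static final Map<String, String> STATES = new ConcurrentHashMap<>();

    static void post(String taskId) {
        STATES.put(taskId, "POSTED");
    }

    // Atomic compare-and-set: succeeds for exactly one caller,
    // so two batch servers can never claim the same task.
    static boolean claim(String taskId) {
        return STATES.replace(taskId, "POSTED", "EXECUTING");
    }

    static void finish(String taskId, boolean ok) {
        STATES.put(taskId, ok ? "READY" : "ERROR");
    }

    static String stateOf(String taskId) {
        return STATES.get(taskId);
    }
}
```

Whichever server loses the claim simply skips the task; the user-driven reset from "error" back to "posted" is just another unconditional state write.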
Similar Messages
-
JCO with poolmanager in a clustered server environment
Hi experts:
The issue I am having is the following: we are creating connection pools for each user that logs in. In other words, a connection pool would be created for both userids XXXX and YYYY if both logged into the system. As per best practices, all connections are being managed by one PoolManager. We release connections as we go.
This works fine for a single server. However, now that we are moving to the cluster, we are trying to figure out the best way to implement it. The best option seems to be to implement the PoolManager as a cluster singleton and then do a JNDI lookup every time we want access; that way, if a managed server goes up or down, we don't have to worry about it. Every other option we could think of was sketchy, like trying to create the connection pool on each managed server in our code, since servers could (in theory) be added or removed at any time (including after the connection pools were established for a user), and as soon as a server is added to the cluster, the load balancers would begin routing requests to it.
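The per-user pool bookkeeping behind such a singleton PoolManager can be sketched in plain Java like this (an illustration only: the real JCO pool types are not shown, and all names here are hypothetical; in the cluster this one instance would be registered as the cluster singleton and found via JNDI):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Enum singleton: exactly one PoolManager instance per JVM.
enum PoolManager {
    INSTANCE;

    // One pool per logged-in user id, created lazily and exactly once
    private final Map<String, Pool> pools = new ConcurrentHashMap<>();

    Pool poolFor(String userId) {
        return pools.computeIfAbsent(userId, Pool::new);
    }

    void release(String userId) {
        pools.remove(userId);
    }

    static final class Pool {
        private final String userId;
        Pool(String userId) { this.userId = userId; }
        String owner() { return userId; }
    }
}
```

The point of routing every access through the singleton is that two concurrent logins for the same user id still end up sharing one pool, which computeIfAbsent guarantees.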
Long story short, as much as JCO has been used, we can't really find many examples of people using the PoolManager as a singleton.
So can this be done? And if so, are there any tricks to getting it accomplished?
thanks,
chris. -
Image expired error for CFChart on clustered server
Hello,
I noticed the following article that responds to a problem
we're having where our CFCharts occasionally aren't displaying due
to our clustered server environment forgetting where the chart was
created:
http://www.adobeauthorizations.com/cfusion/knowledgebase/index.cfm?id=tn_19328
Currently we're running CF 6.1. Does anyone know if an
upgrade to CF7 will fix this issue?
If not, can we reinstall CF with our CFIDE directory located
underneath the load balancing system? How would that work?
I know we can use CFFile to write the chart to another
directory and then display from there, but for now that solution
seems to be out, since we don't have write privileges to our
production server.
Anyway, I'm hoping that enabling sticky sessions on our web server, or an upgrade to CF7, will fix it.
Please, any comments or suggestions.
Thanks,
Peter
Hi Kalshetty,
Please check the following link for this error, it applies to CUPS 8.x
https://supportforums.cisco.com/document/109296/error-non-defined-ims-exception-cups-8x
HTH
Manish -
Workflow Custom Activity deploy in multi server environment
I have been working on a project that involves developing a custom workflow activity for SharePoint 2013. I am developing it in a single server environment working with http.
My problem occurs when deploying to multi-server environment with https (WFE, APP). My question is how to deploy my solution to the new environment.
The steps:
Create a project - c# activity library
Add a workflow activity, add .xml
Deploy the .dll and .xml of project to:
"C:\Program Files\Workflow Manager\1.0\Workflow\Artifacts" and "C:\Program Files\Workflow Manager\1.0\Workflow\WFWebRoot\bin"
net stop "Workflow Manager Backend"
net start "Workflow Manager Backend"
Deploy .DLL to GAC
- Created MSI using install shield in VS2010 to add .DLL to GAC
- Verify .DLL in GAC by going to c:\windows\assembly and %windir%\Microsoft.NET\assembly
iisreset
Deploy WSP to SharePoint, activate feature, open SharePoint Designer 2013 and choose the custom action that now appears when creating a 2013 workflow
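Collected in one place, the deployment steps above amount to something like the following batch sketch (MyActivity.dll / MyActivity.xml are placeholder names; the gacutil line assumes the Windows SDK tool is available, whereas the original used an InstallShield MSI):

```bat
rem Copy the activity assembly and its .xml next to Workflow Manager
copy MyActivity.dll "C:\Program Files\Workflow Manager\1.0\Workflow\Artifacts"
copy MyActivity.xml "C:\Program Files\Workflow Manager\1.0\Workflow\Artifacts"
copy MyActivity.dll "C:\Program Files\Workflow Manager\1.0\Workflow\WFWebRoot\bin"

rem Restart the Workflow Manager backend so it picks up the new activity
net stop "Workflow Manager Backend"
net start "Workflow Manager Backend"

rem Install the assembly into the GAC and restart IIS
gacutil /i MyActivity.dll
iisreset
```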
To recap, we have Workflow Manager on the APP server and the workflow client on the WFE. We deployed the .DLL and .XML to Workflow Manager (APP) only. The .DLL is deployed to the GAC on both the WFE and the APP. We are able to see and create the activity in Designer 2013, and we deploy the workflow to a normal SharePoint list. When we run the workflow we do not get any errors in the ULS logs, event viewer, or Workflow Manager debug logs (also in the event viewer). The site is not created, though. We believe the issue is that the custom C# (.DLL) is not being run.
This all works fine and dandy in my single-server environment; the workflow works like a charm. How can we troubleshoot the issue if we are not finding any errors?
Is there a step that we missed or some other place we need to look for logs? Would the ULS logs show the site creation or show running our custom code? Currently it does not show anything when we run the workflow.
Let me know if this is unclear or if anyone needs more information. Thanks
Hi,
Here is a workaround for your reference:
We can develop a custom WCF service instead of the custom activity in SharePoint, and then use the service from the workflow. It uses a separate dedicated server for the workflow, without any reference to SharePoint DLLs from inside the workflow.
Here is a similar thread for your reference:
https://social.technet.microsoft.com/Forums/systemcenter/en-US/d462ca07-9861-4133-948a-fc9771306cb1/custom-workflow-how-to-go-from-single-server-to-multiple?forum=sharepointdevelopment
Thanks,
Dennis Guo
TechNet Community Support
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
[email protected]
-
Just-In-Time (JIT) Error in Windows 2003 Server Environment
Hi,
I am using the EBS R12.1.1 environment for learning purpose. I am running EBS R12.1.1 on Windows 2003 Server environment on my personal laptop.
I am getting a JIT error on the FNDLIBR.exe file. The error opens a window and asks me to choose a debugger; by default the MS Visual Studio 2005 debugger is available. The error is raised at random intervals: sometimes every minute, sometimes every 2 to 5 minutes, etc. How can I overcome this error?
Thanks & Regards,
Waqas Hassan
Thanks Mr. Hussein,
I think that for the learning process I do not need the NLS patch. So I took a fresh copy of EBS R12.1.1 in VMware on Windows 2003 Server, and now I am using it without the NLS patch.
Actually I am working on the HRMS setup. In a fresh EBS environment its legislative data needs to be installed, so I installed it and applied the hrglobal.drv patch according to the document "*Instructions for Running DataInstall/hrglobal.drv on R12 [ID 414434.1]*", but I skipped only the 5th and 6th steps because they are about the NLS patch.
I ran the 7th step, "*Generate Payroll Dynamic Database Item Translations*", and it completed successfully, but when I run the 8th step, "*Bulk Compile Formulas*", it gives an error. The output is below:
FastFormula: Version : 12.0.0
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
BULKCOMPILE module: Bulk Compile Formulas
Current system time is 15-APR-2012 00:51:16
Starting verify: 109 formula instances to process.
Accrual:PTO_HD_ANNIVERSARY_BALANCE:0001/01/01 verified OK
Accrual:PTO_PAYROLL_BALANCE_CALCULATION:0001/01/01 verified OK
Accrual:PTO_PAYROLL_CALCULATION:0001/01/01 verified OK
Accrual:PTO_ROLLING_ACCRUAL:0001/01/01 verified OK
Accrual:PTO_SIMPLE_BALANCE_MULTIPLIER:0001/01/01 verified OK
Accrual:PTO_SIMPLE_MULTIPLIER:0001/01/01 verified OK
Accrual Carryover:PTO_HD_ANNIVERSARY_CARRYOVER:0001/01/01 verified OK
Accrual Carryover:PTO_PAYROLL_CARRYOVER:0001/01/01 verified OK
Accrual Carryover:PTO_ROLLING_CARRYOVER:0001/01/01 verified OK
Accrual Carryover:PTO_SIMPLE_CARRYOVER:0001/01/01 verified OK
Accrual Ineligibility:PTO_INELIGIBILITY_CALCULATION:0001/01/01 verified OK
Accrual Subformula:PTO_HD_ANNIVERSARY_PERIOD_ACCRUAL:0001/01/01 verified OK
Accrual Subformula:PTO_PAYROLL_PERIOD_ACCRUAL:0001/01/01 verified OK
Accrual Subformula:PTO_ROLLING_PERIOD_ACCRUAL:0001/01/01 verified OK
Accrual Subformula:PTO_SIMPLE_PERIOD_ACCRUAL:0001/01/01 verified OK
Appraisal Competency Line Scoring:PERF_X_PROF:0001/01/01 verified OK
Appraisal Competency Line Scoring:PERF_X_WEIGHTING:0001/01/01 verified OK
Appraisal Competency Line Scoring:PROF_X_WEIGHTING:0001/01/01 verified OK
Appraisal Objective Line Scoring:PERF:0001/01/01 verified OK
Appraisal Objective Line Scoring:PERF_X_WEIGHTING:0001/01/01 verified OK
Appraisal Total Scoring:AVG_COMP_AND_OBJ:0001/01/01 verified OK
Appraisal Total Scoring:SUM_COMP_AND_OBJ:0001/01/01 verified OK
CAGR:HR_CAGR_PYS_TEMPLATE:1951/01/01 verified OK
CAGR:HR_CAGR_TEMPLATE:1951/01/01 verified OK
Element Skip:SA_ONCE_EACH_PERIOD:0001/01/01 verified OK
APP-FF-34004: Element Skip:SA_ONCE_EACH_YEAR:0001/01/01 FAILED
APP-FF-33005: The local variable GOSI_REFERENCE_EARNINGS_ASG_YTD was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
Element Skip:PTO_ORACLE_SKIP_RULE:0001/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_CONSOL_SET:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_ELEMENT_NAME:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_ELEMENT_SET:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_EXTRACT_ENDDT:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_EXTRACT_NAME:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_EXTRACT_STARTDT:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_LOCATION:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_ORG_NAME:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_PAYROLL_NAME:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_PERSON_TYPE:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_REPORT_OPTION:1900/01/01 verified OK
Extract Header/Trailer Data Element:PAY_GLOBAL_PEXT_SELECTION_CRITERIA:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_CHK_ASG_ACTIONS:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_FID_EMP_STATUS:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_ACTUAL_SALARY:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_ANN_COMP:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_BAL_VAL:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_DDF_DF_VALUE:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_ELE_ETRY_VALUE:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_EMP_CATEGORY:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_EMP_STATUS:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_NORMAL_HOURS:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_PAY_VALUE:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_PLAN_CONTR_VALUE:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_SIT_VALUE:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_GET_TERMI_DATE:1900/01/01 verified OK
Extract Person Data Element:PAY_GLOBAL_PEXT_PAYROLL_DATE:1900/01/01 verified OK
Extract Person Inclusion:PAY_GLOBAL_PEXT_CRITERIA_FP:1900/01/01 verified OK
Extract Post Process:PAY_GLOBAL_PEXT_POST_PROCESS:1900/01/01 verified OK
Net to Gross:DEFAULT_GROSSUP:0001/01/01 verified OK
OTL Approvals:HXC_OVERRIDE_APPROVER_WF_PERSON:0001/01/01 verified OK
OTL Time Entry Rules:HXC_ASG_STD_HRS_COMPARISON:0001/01/01 verified OK
OTL Time Entry Rules:HXC_APPROVAL_ASG_STATUS:0001/01/01 verified OK
OTL Time Entry Rules:HXC_APPROVAL_MAXIMUM:0001/01/01 verified OK
OTL Time Entry Rules:HXC_CLA_CHANGE_FORMULA:1901/01/01 verified OK
OTL Time Entry Rules:HXC_CLA_LATE_FORMULA:1901/01/01 verified OK
OTL Time Entry Rules:HXC_ELP:0001/01/01 verified OK
OTL Time Entry Rules:HXC_FIELD_COMBO_EXCLUSIVE:0001/01/01 verified OK
OTL Time Entry Rules:HXC_FIELD_COMBO_INCLUSIVE:0001/01/01 verified OK
OTL Time Entry Rules:HXC_PERIOD_MAXIMUM:0001/01/01 verified OK
OTL Time Entry Rules:HXC_PTO_ACCRUAL_COMPARISON:0001/01/01 verified OK
OTL Time Entry Rules:HXC_TIME_CATEGORY_COMPARISON:0001/01/01 verified OK
Oracle Payroll:NI_VALIDATION:0001/01/01 verified OK
APP-FF-34004: Oracle Payroll:SA_GOSI_CALCULATION:0001/01/01 FAILED
APP-FF-33005: The local variable GOSI_REFERENCE_EARNINGS_ASG_YTD was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
APP-FF-34004: Oracle Payroll:SA_GOSI_CALCULATION:2006/01/01 FAILED
APP-FF-33005: The local variable GOSI_REFERENCE_EARNINGS_ASG_YTD was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
Oracle Payroll:US_EVS:0001/01/01 verified OK
Oracle Payroll:CALC_GROSSUP_PAY_VALUE:0001/01/01 verified OK
Oracle Payroll:HR_CWK_MOVE_TYPE_TEMPLATE:0001/01/01 verified OK
Oracle Payroll:HR_MOVE_TYPE_TEMPLATE:0001/01/01 verified OK
Oracle Payroll:HR_PERSON_TYPE_TEMPLATE:0001/01/01 verified OK
Oracle Payroll:PTO_TAGGING_FORMULA:0001/01/01 verified OK
Payment:SA_EFT_BODY:0001/01/01 verified OK
APP-FF-34004: Payment:SA_EFT_BODY_CUSTOMER:0001/01/01 FAILED
APP-FF-33005: The local variable ORG_SA_BANK_NAME was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
Payment:SA_EFT_FOOTER:0001/01/01 verified OK
APP-FF-34004: Payment:SA_EFT_FOOTER_CUSTOMER:0001/01/01 FAILED
APP-FF-33005: The local variable ORG_SA_ACCOUNT_NUMBER was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
Payment:SA_EFT_HEADER:0001/01/01 verified OK
APP-FF-34004: Payment:SA_EFT_HEADER_CUSTOMER:0001/01/01 FAILED
APP-FF-33005: The local variable ORG_SA_ACCOUNT_NUMBER was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
Payroll Run Proration:SA_USER_PRORATION_FORMULA:0001/01/01 verified OK
People Management Message:US_EMPLOYEE_TRANSFER:0001/01/01 verified OK
People Management Message:US_NEW_STARTER:0001/01/01 verified OK
People Management Message:US_PAYROLL_INT_HIRE:0001/01/01 verified OK
People Management Message:US_PAYROLL_NEW_STARTER:0001/01/01 verified OK
People Management Message:US_RELOCATION_NOTIFICATION:0001/01/01 verified OK
People Management Message:QH_ASSIGNMENT_NAME:0001/01/01 verified OK
Promotion:PROMOTION_TEMPLATE:0001/01/01 verified OK
APP-FF-34004: QuickPaint:US_EMPLOYEE_TRANSFER:1000/01/01 FAILED
APP-FF-33005: The local variable SCL_ASG_US_TAX_UNIT was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
APP-FF-34004: QuickPaint:US_NEW_STARTER:1000/01/01 FAILED
APP-FF-33005: The local variable SCL_ASG_US_TAX_UNIT was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
APP-FF-34004: QuickPaint:US_PAYROLL_INT_HIRE:1000/01/01 FAILED
APP-FF-33005: The local variable SCL_ASG_US_TAX_UNIT was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
APP-FF-34004: QuickPaint:US_PAYROLL_NEW_STARTER:1000/01/01 FAILED
APP-FF-33005: The local variable SCL_ASG_US_TAX_UNIT was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
APP-FF-34004: QuickPaint:US_RELOCATION_NOTIFICATION:1000/01/01 FAILED
APP-FF-33005: The local variable SCL_ASG_US_TAX_UNIT was used before being initialized
Cause: The variable named in the error message is being used before any value has been assigned to it, so it has no meaningful value.
Action: Please ensure variables have been assigned to before using them.
QuickPaint:EXAMPLE_BIS_OT_BAND1:0001/01/01 verified OK
QuickPaint:QH_ASSIGNMENT_NAME:0001/01/01 verified OK
QuickPaint:TEMPLATE_ABSENCE_DURATION:0001/01/01 verified OK
QuickPaint:TEMPLATE_BIS_DAYS_TO_HOURS:0001/01/01 verified OK
QuickPaint:TEMPLATE_BIS_TRAINING_CONVERT_DURATION:0001/01/01 verified OK
QuickPaint:TEMPLATE_FTE:0001/01/01 verified OK
QuickPaint:TEMPLATE_HEAD:0001/01/01 verified OK
Template Information:PER_QH_JOB_EI:0001/01/01 verified OK
Template Information:PER_QH_ORG_EI:0001/01/01 verified OK
Template Information:PER_QH_POSITION_EI:0001/01/01 verified OK
Template Information:PER_QH_SUPERVISOR_CONTACT_EI:0001/01/01 verified OK
User Table Validation:CHECK_RATE_TYPE:0001/01/01 verified OK
The formula wrapper package was created successfully.
Executing request completion options...
Finished executing request completion options.
Concurrent program returned no reason for failure.
Exceptions posted by this request:
Concurrent Request for "Bulk Compile Formulas" has completed with error.
Concurrent request completed
Current system time is 15-APR-2012 00:51:23
Please help me out.
Thanks & Best Regards,
Waqas Hassan -
Kernel Upgrade on clustered server
Hi Gurus,
Please let me know if there is any difference between the process for a kernel upgrade on a standalone server and on a clustered server.
This has to be done on our production server, so if anyone has the steps, please list them.
Thanks & Regards
Shrenik
Hi,
In a clustered environment the kernel upgrade is the same as a normal kernel upgrade, because both nodes (active & passive) access the same data, which normally resides on external storage.
Regards,
sam -
How does a clustered weblogic environment handle orders, balance-wise?
Hi,
In a 2-managed-server clustered WebLogic environment with OSM 7.0.3 and an external load balancer balancing the incoming traffic, we have noticed from the managed servers' logs that whichever server is scanning for orders:
####<Oct 16, 2012 2:54:33 PM EEST> <Info> <oms> <> <osm_ms01> <Timer-9> <oms-internal> <> <fab6ae59fd53672b:704b5627:13a64686216:-8000-0000000000000010> <1350388473505> <BEA-000000> <cluster.ClusteredHandlerFactory: Querying for high activity orders across the cluster>
is the server that will serve a new order.
Is there a way to achieve a perfect load balance? In a test case of 200 orders, all orders were processed by one node, the one that scans for new orders.
We configured the external load balancer to split the traffic... but nothing!! Is there an internal mechanism that gathers all orders that are sent to multiple servers and executes them on the server that is currently scanning for orders?
Is there any manual or Oracle Support document/note on how it is decided, in a multi-server clustered environment, which server will execute orders?
Thx in advance!
Hi Alexandros,
Here's some general information on load balancing:
1. With OSM order affinity, the managed server instance that receives the order (precisely, creates the OSM order) has sole ownership of the order. Other than specific circumstances, the ownership is not transferred, and thus processing of that order stays with that instance till completion.
2. The OSM web service (createOrder API) has no internal load-balancing mechanism if HTTP is used as the transport. So if you only send orders to one managed server, that instance will create and thus own all those orders. In contrast, if you use JMS as the transport, it is load-balanced by the JMS distributed destination (provided you are not sending instead to member queues of the distributed destination).
Now, assuming you are using HTTP, you need to ensure that the Load Balancer is really round-robining on the 2 managed servers among HTTP messages of order submissions. Monitor your TCP pipes to verify.
A problem we've seen: if you are using SoapUI with pre-emptive authentication disabled, the SOAP request without pre-emptive authentication will be rejected, causing a re-send. Because of the LB, all orders ended up in one managed server, as the reject-then-accept SOAP message sequence becomes cyclic with odd-even round-robin. So, enable pre-emptive authentication to avoid that.
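Pre-emptive authentication just means sending the Authorization header on the very first request, rather than waiting for the 401 challenge, which avoids the reject-then-resend pattern described above. With a plain HttpURLConnection it is one header (the endpoint URL and credentials below are placeholders):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class PreemptiveAuth {
    // Build the Basic Authorization header value up front
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    static HttpURLConnection open(String endpoint, String user, String password) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        // Sent pre-emptively: no 401 round-trip, so the LB sees one request per order
        conn.setRequestProperty("Authorization", basicAuth(user, password));
        conn.setRequestMethod("POST");
        return conn;
    }
}
```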
Btw, is your cartridge handling high-activity orders? If not, I have a suspicion that your pasted log message may be a red herring.
Cheers,
Daniel Ho
OSM Product Management -
How to install Oracle 9i EE on a clustered server (Windows 2000)
I need to install Oracle 9i on a clustered server (Windows 2000 Server) with Oracle Fail Safe. There are 2 nodes and one clustered disk with RAID 5.
The documentation from Oracle is NOT very helpful, as it does not specify how to install the Oracle DB on the 2 nodes. Where will the control files, etc., reside for each node?
Any inputs will be highly appreciated
Thanks
Masih
Hi,
reply to my own question:
In the docs provided with the downloads for Oracle Database Server 9i x64, I found that this version requires either
- Windows Server 2003 Datacenter Edition for 64-Bit Itanium 2 Systems
or
- Windows Server 2003 Enterprise Edition for 64-Bit Itanium 2 Systems
So I guess the only x64 platform supported under Windows Server 2003 (both Datacenter / Enterprise Editions) is Itanium CPUs...
...though this is not (!!!) clearly stated in the download link (other/newer versions are listed more precisely; I guess it's a matter of an old link: no Intel Xeon / AMD Opteron CPUs were available (?) when Oracle 9i database server was published).
So I'll give it a try with 10g database server ;-)))
Regards,
Thomas Heß -
We recently switched hardware and server software from Win SBS 2008 to 2012 R2 for a small network of roughly 40 clients (Win 7 Pro / Win 8.1 Pro), about 16 running concurrently at a given time, and one network printer, with the printer queue residing on the DC as well.
I have read that a single-server environment might not be ideal, in particular having no fail-over, but that is an accepted risk in this particular network.
Errors:
Error 1043: Timeout during name resolution request
Error 1129: Group policy updates could not be processed due to DC not available
Error 5719: Could not establish secure connection to DC, DC not available
Occasionally but disappears after a while
Error 134: As a result of a DNS resolution timeout could not reach time server
Symptoms
On Win 7 Clients
Network shares added through Group Policy sometimes will not show up
Network shares disconnect (red X) and, when accessed, return an access authorization error; after one or two clicks on the share, access is finally granted again
When the issue with accessing network shares occurs, it usually also affects Internet access, meaning a 'server not responding' error appears in the browser window when trying to open just any web page
nslookup during an incident returns a cannot-resolve error
ipconfig on client shows correct default router (VDSL Router) and DHCP / DNS Domain Controller
Also, the Win system log shows the above errors during these incidents; however, the number of incidents varies from 20 to 30
On Win 8.1 Clients
Same as above, with slight variation for network shares, apparently because Server 2012 and Win 8.1 clients manage drive shares differently. However, a network share refresh does not work with these clients. In most cases only a gpupdate /force brings the drive shares back, and usually only for the active session. After logoff/logon the shares are gone again.
The issue does not appear to be load related, since it occurs even if there are only one or two workstations active.
Server Configuration
Dell R320 PowerEdge 16GB / 4TB 7200RPM RAID10 / GBitEthernet
Zyxel 1910-48 Port Switch
VDSL 50Mbps Down / 20Mbps Up
Since the DC is the only local DNS and there are no plans to add another one or move DNS to another server, the DNS server is configured with its own address as preferred DNS, with three DNS forwarders: 1) VDSL Router 2) ISP DNS1 3) ISP DNS2
Currently only one Network card is active for problem determination reasons.
There appears to be no consensus concerning IPv6 enabled or disabled; I tried both with no apparent effect
I have set all network cards, server and client, to full duplex and the same speed, and also disabled offload functions within the adapter settings. Some improvements, but nothing consistent.
Best Practice Analyzer Results
DNS server scavenging not enabled
Root hint server XYZ must respond to NS queries for the root zone
More than one forwarding server should be configured (although 3 are configured)
NIC1 should be configured to use both a preferred and alternate DNS (there is only one DNS in this network)
I have found some instructions to apply changes to the clients through a hosts file, but I would rather understand whether this DNS response-time issue can be resolved on the server, for example through timing settings perhaps. Currently the DNS forwarders are set to 3 seconds.
Since a few people have reported issues with DNS, but most are working in multi-DNS, multi-DC environments, I could not really apply any of the suggestions made there. Perhaps there is someone like me running a single server who has overcome or experienced the same issues. Any help would be appreciated
Hello Milos, thx for your reply... my comments below
1. What does "switched" mean? You may mean migration or new installation. We do not know...
>> "Switched" is probably the incorrect term; "replaced" would be the appropriate wording. Before, there was an HP ProLiant server with SBS 2008 with a distinct domain, and now there is a Dell server with MS 2012 R2 with a distinct domain. Clients were removed from one (SBS) domain and added to the new Server 2012 domain. Other components did not change, for example the same network switch, VDSL router, workstations and printer.
2. Two DCs are the better alternative. Or back up very frequently. There are two groups of administrators: those who have lost a DC, and those who will experience this disaster in the near future.
>> Correct, and I am aware of that
3. NIC settings in W 7 and W 8.1, namely DNS points to DC (...and NOTHING else. No public IP or that of router DNS.))
>> Correct, this is how it's currently implemented. Clients point to the DC for DHCP, DNS and default router; no public IP or DNS. The only references to the ISP's DNS exist on the VDSL router itself, as provided by the ISP when establishing the VDSL link, and in the list of forwarders in the DNS server configuration. However, I have just recently added the ISP's DNS as forwarders for test purposes and will probably learn tomorrow morning whether this had any effect, for better or worse.
4. Do an nslookup on the RRs on the clients. The RR branch gives the client basic info on the LDAP parameters of AD.
>> Will post as soon as available
5. I do not use forwarders and the system works
>> OK, does this mean it works for you in a similar or the same infrastructure setup, or are you saying it is not required at all and I can remove any forwarders in a scenario like mine? If not required, can you explain a bit more why it is not required, apart from the fact that it works for you that way?
6. DHCP should sit on DC (DHCP on router is disabled)
>> Correct, no other device is configured to provide DHCP service other than DC and DHCP is currently running on DC
7. NIC settings in DC points to itself (loopback address 127.0.0.1)
>> Are you sure this is still correct and applies to Server 2012? I am reading articles stating that it should be the server's own IP, not the local loopback; or should the loopback be added as an alternate DNS in addition to the server's own IP?
8. Use IPCONFIG /FLUSHDNS whenever you change DNS settings.
>> OK, that was not done every time I changed settings, but I can do that next week. A reboot alone would not suffice, correct?
9. Test your system with dcdiag.
>> See result below
10. Share your findings.
Regards
Milos
Directory Server Diagnosis
Performing initial setup:
Trying to find home server...
Home Server = GSERVER2
* Identified AD Forest.
Done gathering initial info.
Doing initial required tests
Testing server: Default-First-Site-Name\GSERVER2
Starting test: Connectivity
......................... GSERVER2 passed test Connectivity
Doing primary tests
Testing server: Default-First-Site-Name\GSERVER2
Starting test: Advertising
......................... GSERVER2 passed test Advertising
Starting test: FrsEvent
......................... GSERVER2 passed test FrsEvent
Starting test: DFSREvent
......................... GSERVER2 passed test DFSREvent
Starting test: SysVolCheck
......................... GSERVER2 passed test SysVolCheck
Starting test: KccEvent
......................... GSERVER2 passed test KccEvent
Starting test: KnowsOfRoleHolders
......................... GSERVER2 passed test KnowsOfRoleHolders
Starting test: MachineAccount
......................... GSERVER2 passed test MachineAccount
Starting test: NCSecDesc
......................... GSERVER2 passed test NCSecDesc
Starting test: NetLogons
......................... GSERVER2 passed test NetLogons
Starting test: ObjectsReplicated
......................... GSERVER2 passed test ObjectsReplicated
Starting test: Replications
......................... GSERVER2 passed test Replications
Starting test: RidManager
......................... GSERVER2 passed test RidManager
Starting test: Services
......................... GSERVER2 passed test Services
Starting test: SystemLog
......................... GSERVER2 passed test SystemLog
Starting test: VerifyReferences
......................... GSERVER2 passed test VerifyReferences
Running partition tests on : ForestDnsZones
Starting test: CheckSDRefDom
......................... ForestDnsZones passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... ForestDnsZones passed test CrossRefValidation
Running partition tests on : DomainDnsZones
Starting test: CheckSDRefDom
......................... DomainDnsZones passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... DomainDnsZones passed test CrossRefValidation
Running partition tests on : Schema
Starting test: CheckSDRefDom
......................... Schema passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... Schema passed test CrossRefValidation
Running partition tests on : Configuration
Starting test: CheckSDRefDom
......................... Configuration passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... Configuration passed test CrossRefValidation
Running partition tests on : GS2
Starting test: CheckSDRefDom
......................... GS2 passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... GS2 passed test CrossRefValidation
Running enterprise tests on : GS2.intra
Starting test: LocatorCheck
......................... GS2.intra passed test LocatorCheck
Starting test: Intersite
......................... GS2.intra passed test Intersite
Server: gserver2.g2.intra
Address: 192.168.240.6
*** gserver2.g2.intra can't find g2: Non-existent domain
> gserver2
Server: gserver2.g2.intra
Address: 192.168.240.6
g2.intra
primary name server = gserver2.g2.intra
responsible mail addr = hostmaster.g2.intra
serial = 443
refresh = 900 (15 mins)
retry = 600 (10 mins)
expire = 86400 (1 day)
default TTL = 3600 (1 hour)
> wikipedia.org
Server: gserver2.g2.intra
Address: 192.168.240.6
Non-authoritative answer:
wikipedia.org MX preference = 10, mail exchanger = polonium.wikimedia.org
wikipedia.org MX preference = 50, mail exchanger = lead.wikimedia.org
polonium.wikimedia.org internet address = 208.80.154.90
polonium.wikimedia.org AAAA IPv6 address = 2620:0:861:3:208:80:154:90
lead.wikimedia.org internet address = 208.80.154.89
lead.wikimedia.org AAAA IPv6 address = 2620:0:861:3:208:80:154:89
Final benchmark results, sorted by nameserver performance:
(average cached name retrieval speed, fastest to slowest)
192.168.240. 6 | Min | Avg | Max |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
+ Cached Name | 0,001 | 0,002 | 0,003 | 0,001 | 100,0 |
+ Uncached Name | 0,027 | 0,076 | 0,298 | 0,069 | 100,0 |
+ DotCom Lookup | 0,041 | 0,048 | 0,079 | 0,009 | 100,0 |
---<-------->---+-------+-------+-------+-------+-------+
gserver2.g2.intra
Local Network Nameserver
195.186. 4.162 | Min | Avg | Max |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name | 0,022 | 0,023 | 0,025 | 0,000 | 100,0 |
- Uncached Name | 0,025 | 0,071 | 0,274 | 0,065 | 100,0 |
- DotCom Lookup | 0,039 | 0,040 | 0,043 | 0,001 | 100,0 |
---<-------->---+-------+-------+-------+-------+-------+
cns8.bluewin.ch
BLUEWIN-AS Swisscom (Schweiz) AG,CH
195.186. 1.162 | Min | Avg | Max |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name | 0,022 | 0,023 | 0,026 | 0,001 | 100,0 |
- Uncached Name | 0,025 | 0,072 | 0,299 | 0,066 | 100,0 |
- DotCom Lookup | 0,039 | 0,042 | 0,049 | 0,003 | 100,0 |
---<-------->---+-------+-------+-------+-------+-------+
cns7.bluewin.ch
BLUEWIN-AS Swisscom (Schweiz) AG,CH
8. 8. 8. 8 | Min | Avg | Max |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name | 0,033 | 0,040 | 0,079 | 0,011 | 100,0 |
- Uncached Name | 0,042 | 0,113 | 0,482 | 0,097 | 100,0 |
- DotCom Lookup | 0,049 | 0,079 | 0,192 | 0,039 | 100,0 |
---<-------->---+-------+-------+-------+-------+-------+
google-public-dns-a.google.com
GOOGLE - Google Inc.,US
UTC: 2014-11-03, from 14:33:12 to 14:33:29, for 00:17,648
15: 40
192.168.240. 6 | Min | Avg | Max |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
+ Cached Name | 0,001 | 0,002 | 0,004 | 0,000 | 100,0 |
+ Uncached Name | 0,025 | 0,074 | 0,266 | 0,063 | 100,0 |
+ DotCom Lookup | 0,042 | 0,048 | 0,075 | 0,007 | 100,0 |
---<-------->---+-------+-------+-------+-------+-------+
gserver2.g2.intra
Local Network Nameserver
195.186. 1.162 | Min | Avg | Max |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name | 0,022 | 0,024 | 0,029 | 0,001 | 100,0 |
- Uncached Name | 0,024 | 0,073 | 0,289 | 0,067 | 100,0 |
- DotCom Lookup | 0,039 | 0,041 | 0,043 | 0,001 | 100,0 |
---<-------->---+-------+-------+-------+-------+-------+
cns7.bluewin.ch
BLUEWIN-AS Swisscom (Schweiz) AG,CH
195.186. 4.162 | Min | Avg | Max |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name | 0,022 | 0,024 | 0,029 | 0,001 | 100,0 |
- Uncached Name | 0,025 | 0,073 | 0,286 | 0,065 | 100,0 |
- DotCom Lookup | 0,041 | 0,066 | 0,180 | 0,037 | 100,0 |
---<-------->---+-------+-------+-------+-------+-------+
cns8.bluewin.ch
BLUEWIN-AS Swisscom (Schweiz) AG,CH
8. 8. 8. 8 | Min | Avg | Max |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name | 0,033 | 0,038 | 0,077 | 0,009 | 100,0 |
- Uncached Name | 0,042 | 0,105 | 0,398 | 0,091 | 100,0 |
- DotCom Lookup | 0,049 | 0,066 | 0,141 | 0,025 | 100,0 |
---<-------->---+-------+-------+-------+-------+-------+
google-public-dns-a.google.com
GOOGLE - Google Inc.,US
UTC: 2014-11-03, from 14:39:59 to 14:40:12, for 00:13,363 -
Hi,
I am running the command below to move SQL Server mdf and ldf files from one drive to another (C drive to D drive), but I am getting the error below:
SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\abc.mdf". Operating system error 2: "2(The system cannot find the file specified.)".
use master
DECLARE @DBName nvarchar(50)
SET @DBName = 'CMP_143'
DECLARE @RC int
EXEC @RC = sp_detach_db @DBName
DECLARE @NewPath nvarchar(1000)
--SET @NewPath = 'E:\Data\Microsoft SQL Server\Data\';
SET @NewPath = 'D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\';
DECLARE @OldPath nvarchar(1000)
SET @OldPath = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\';
DECLARE @DBFileName nvarchar(100)
SET @DBFileName = @DBName + '.mdf';
DECLARE @LogFileName nvarchar(100)
SET @LogFileName = @DBName + '_log.ldf';
DECLARE @SRCData nvarchar(1000)
SET @SRCData = @OldPath + @DBFileName;
DECLARE @SRCLog nvarchar(1000)
SET @SRCLog = @OldPath + @LogFileName;
DECLARE @DESTData nvarchar(1000)
SET @DESTData = @NewPath + @DBFileName;
DECLARE @DESTLog nvarchar(1000)
SET @DESTLog = @NewPath + @LogFileName;
DECLARE @FILEPATH nvarchar(1000);
DECLARE @LOGPATH nvarchar(1000);
SET @FILEPATH = N'xcopy /Y "' + @SRCData + N'" "' + @NewPath + '"';
SET @LOGPATH = N'xcopy /Y "' + @SRCLog + N'" "' + @NewPath + '"';
exec xp_cmdshell @FILEPATH;
exec xp_cmdshell @LOGPATH;
EXEC @RC = sp_attach_db @DBName, @DESTData, @DESTLog
go
Can anyone please help with how to set the database offline? Currently I stopped the SQL Server services from services.msc and started the SQL Server Agent.
Should I stop both services when moving files from one drive to another?
Note: I tried the solution below, but it didn't work:
ALTER DATABASE <DBName> SET OFFLINE WITH ROLLBACK IMMEDIATE
Update:
Now I am getting the message:
Msg 15010, Level 16, State 1, Procedure sp_detach_db, Line 40
The database 'CMP_143' does not exist. Supply a valid database name. To see available databases, use sys.databases.
(3 row(s) affected)
(3 row(s) affected)
Msg 5120, Level 16, State 101, Line 1
Unable to open the physical file "D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\CMP_143.mdf". Operating system error 2: "2(The system cannot find the file specified.)".

First you should have checked the database mdf/ldf name and location by using the command:
Use CMP_143
Go
Sp_helpfile
Looks like your database CMP_143 was successfully detached, but the mdf/ldf location or name was different, which is why it did not get copied to the target location.
The database is already detached; that's why taking the db offline failed:
Msg 15010, Level 16, State 1, Procedure sp_detach_db, Line 40
The database 'CMP_143' does not exist. Supply a valid database name. To see available databases, use sys.databases.
EXEC @RC = sp_attach_db @DBName, @DESTData, @DESTLog
The attach step is failing because there is no mdf file at the target location:
Msg 5120, Level 16, State 101, Line 1
Unable to open the physical file "D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\CMP_143.mdf". Operating system error 2: "2(The system cannot find the file specified.)"
Solution:
Search for the physical files (mdf/ldf) in the OS, copy them to the target location, and then re-run sp_attach_db with the right location and name of the mdf/ldf. -
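As a side note, since this is SQL Server 2012 (MSSQL11), the detach/attach dance can be avoided entirely with ALTER DATABASE ... MODIFY FILE, which updates the catalog while the database stays attached. A minimal sketch, assuming the logical file names are CMP_143 and CMP_143_log (these are assumptions; verify them with sp_helpfile first):

```sql
-- Verify logical names and current physical paths first:
-- USE CMP_143;
-- EXEC sp_helpfile;

-- Take the database offline (works only while it is still attached).
ALTER DATABASE CMP_143 SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Point the catalog at the new locations (logical names are assumptions).
ALTER DATABASE CMP_143
    MODIFY FILE (NAME = CMP_143,
                 FILENAME = 'D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\CMP_143.mdf');
ALTER DATABASE CMP_143
    MODIFY FILE (NAME = CMP_143_log,
                 FILENAME = 'D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\CMP_143_log.ldf');

-- Now physically move the mdf/ldf files (e.g. with xcopy), then:
ALTER DATABASE CMP_143 SET ONLINE;
```

Because the database never gets detached, there is no risk of the "database does not exist" error from sp_detach_db running twice.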
Do I need DFSR in a single server environment?
I have a 2012 host, running a single 2012 guest. The guest is running as a DC with AD, DNS, DHCP, and File Services. DFSR is running, and it gives a warning every time my backup runs (the backup runs on the host). The warning is "The DFS Replication service stopped replication on volume F:" ... and a long message about the database, yada yada yada.
Do I need to run DFSR? Again, single server, no file replication to different offices. I'm not finding a clear answer to that question.
Second, Server Manager should, according to TechNet, have under the Tools option the ability to turn off DFSR. I cannot find that option. So, IF I can turn it off, can I simply disable the DFS Namespace and DFS Replication services?
I would prefer eliminating rather than ignoring warnings.
Thanks

Sorry, one more time. I have a single-server environment: there is NO upstream domain controller and no replication between DCs. There is ONE DC. So this is digressing into two questions. One, why do I need to run DFSR (again, lots of articles talk about how to turn it off, but not as a discussion of temporarily turning it off: https://msdn.microsoft.com/en-us/library/cc753144.aspx) in a single-server, single-domain, non-replicating environment?
Second, how do I address the warning I receive during my backup? It appears to be caused by a replication error to downstream servers; since there is no downstream server, I should be able to resolve it by turning DFSR off. I would like some documentation discussing turning it off in a non-DFS environment.
The DFS Replication service stopped replication on volume F:. This occurs when a DFSR JET database is not
shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.
Additional Information:
Volume: F:
GUID: 65E46942-B9D6-11E3-9400-00155D325402
Recovery Steps
1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due
to unexpected conflict resolution during the recovery of the replicated folders.
2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig
class. For example, from an elevated command prompt, type the following command:
wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="65E46942-B9D6-11E3-9400-00155D325402" call ResumeReplication
For more information, see http://support.microsoft.com/kb/2663685.
Jeff Ferris -
Best Practice to generate UUIDs in a Cluster-Server Environment
Hi all,
I just need some input on the best practices for generating UUIDs in the typical internet world, where multiple servers/JVMs are involved for load balancing, traffic distribution, etc. I know Java ships with a very efficient UUID generator API.
But still, that doesn't solve the issue in a multiple-server environment.
For the discussion's sake, let's assume I need it to be unique across the setup rather than near unique.
How do you guys approach it?
Thank you all in advance.

codeNombre wrote:
jverd wrote:
codeNombre wrote:
Thanks jverd,
So adding to the theory of "distinguishing all possible servers" in addition to a UUID on each server would be the way to go.
If you're unreasonably paranoid, sure.
I think it's a common problem and there are a large number of folks who might still be bugged about the "relative uniqueness" of UUIDs in the long run.
People who don't understand probability and scale, sure.
Again, coming back to my original problem in an "internet world": shouldn't a requirement like unique IDs between different servers be dealt with by generating the UUIDs at a layer before entering the multi-server setup? Where would that be? I don't have the answer..
Again, that is the POINT of the UUID class: so that you can generate as many IDs as you want and still be confident that nobody anywhere in the world has ever generated any of those same IDs. However, if your requirements say UUID is not good enough, then you need to define what is, and that means having a lot of foresight as to how this system will evolve and how long it will live, AND having total control over some aspect of your servers, AND having a process that is so good that it's LESS LIKELY for a human to screw up and re-use a "unique" server ID than the probabilities I presented in my previous post. -
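For what it's worth, jverd's point can be shown in a minimal Java sketch: `UUID.randomUUID()` (a random, version 4 UUID) is already collision-safe across JVMs for any realistic workload, and if traceability is wanted, a per-server prefix can be added. The node name `app01` and the helper `newId` are illustrative assumptions, not anything from this thread:

```java
import java.util.UUID;

public class NodeScopedId {

    // Hypothetical helper: prepend a per-server name to a random
    // (version 4) UUID. The prefix buys traceability (you can see which
    // JVM minted an ID), not extra uniqueness -- randomUUID() alone is
    // already effectively collision-free across JVMs.
    static String newId(String nodeName) {
        return nodeName + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        System.out.println(newId("app01"));
        System.out.println(newId("app01"));
    }
}
```

Each server would pass its own configured name; no coordination between JVMs is required at generation time.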
Can not add new VMS into existing clustered server pool
For some reason, we reinstalled VMM (3.1.1-416) with the existing UUID and rediscovered all VMSs. The process went smoothly and the whole system looks clean, without dead objects any more. The guest VMs are all working fine. However, the newly added VMS under unassigned servers is not displayed in "Available Servers" when we want to add it into the existing clustered server pool, but it is shown when we add it into an unclustered server pool. Does anyone have a clue or a possible way to debug what's wrong?
Thanks,
Shun-Jee Liu

The issue is resolved. The access group in shared iSCSI was not properly configured.
-
Installing Acrobat Pro 9.2 in a Terminal Server Environment
Hi everyone,
we're facing massive problems in our attempts to install Acrobat Pro 9.2 in a terminal server environment. It seems to be impossible to convert, for example, a .doc file into a PDF via the Adobe PDF printer using a restricted user account; it causes an "Access denied" popup. We solved this subproblem by granting the user write permission on the respective registry key (HKLM\Software\Adobe). But now, every time we try, the application freezes after the copy status bar is filled. Using a power user instead of a normal user solves this problem. So the question is: what's the relevant difference between the rights of those user groups?
Thanks for your help!

You can also try to update your Acrobat by applying patches one by one.
To update to the latest version of Acrobat, i.e. 9.5, you need to apply the following updates:
9.3.0 download location: < http://www.adobe.com/support/downloads/detail.jsp?ftpID=4605 >
9.3.2 download location < http://www.adobe.com/support/downloads/detail.jsp?ftpID=4654 >
9.3.3 download location < http://www.adobe.com/support/downloads/detail.jsp?ftpID=4695 >
9.4 download location < http://www.adobe.com/support/downloads/detail.jsp?ftpID=4851 >
9.4.2 download location < http://www.adobe.com/support/downloads/detail.jsp?ftpID=4931 >
9.4.5 download location < http://www.adobe.com/support/downloads/detail.jsp?ftpID=5118 >
9.4.6 download location < http://www.adobe.com/support/downloads/detail.jsp?ftpID=5237 >
9.5 download location < http://www.adobe.com/support/downloads/detail.jsp?ftpID=5330 >
You can download the above-mentioned patches from the corresponding download links and apply them one by one, in the same order as listed above.
This is a bit of a lengthy process, but I hope you will be able to update your Acrobat.
Thanks. -
Document.passivate() giving problem in the server environment
I have two J2EE web applications accessing PDF forms using the LiveCycle API. They both invoke a short-lived process with the RenderPDFForm service that takes a form and form data and renders a PDF form. The code where these applications communicate with the LiveCycle server is the same for both apps. The applications work fine in the local environment, but in the server environment one of them works and the other fails with a security certificate exception. These applications are deployed to the same server. The application that fails is failing during the document.passivate() call. Here is the log message.
com.adobe.idp.DocumentError: javax.net.ssl.SSLHandshakeException: com.ibm.jsse2.util.h: PKIX path validation failed: java.security.cert.CertPathValidatorException: The certificate expired at Wed Jul 01 15:28:21 CDT 2009; internal cause is:
java.security.cert.CertificateExpiredException: NotAfter: Wed Jul 01 15:28:21 CDT 2009
at com.adobe.idp.Document.passivateInitData(Document.java:1562)
at com.adobe.idp.Document.passivate(Document.java:1241)
at com.adobe.idp.Document.passivate(Document.java:1185)
at com.adobe.idp.DocumentManagerClient.passivate(DocumentManagerClient.java:236)
at com.adobe.idp.dsc.provider.impl.base.RequestOutputStream.defaultPassivate(RequestOutputSt ream.java:40)
at com.adobe.idp.DocumentRequestOutputStream.passivate(DocumentRequestOutputStream.java:56)
at com.adobe.idp.Document.writeObject(Document.java:872)
I know it looks like a security certificate exception, but the rest of the application works just fine. The application fails only during document.passivate().
Any ideas ?
Thanks,
Jytohi