Wfsdupld fails when run against 8.1.7 database on 64 bit Sun SPARC Solaris 8
Attempting to load the seed data.
Platform is 64 bit Solaris v8
Database is 8.1.7.0
Workflow 2.6
Everything is fine until we run the wfsdupld script. The script
fails with
ORA-29516 Aurora Assertion Failed: Assertion failure at
joncomp.c:127
jtc_active_clint_ncomp_slots (oracle/xml/parser/v2/DTD, 0)
returned 0
ORA-6512 at OWF_MGR.WF_EVENT_SYNCHRONIZE_PKG line 373
Any ideas?
I raised this as TAR #1838186.995 and received the following workaround, which does seem to have worked on 8.1.7.0:
I have taken a look at bug 2034596, which suggests a problem with the 8.1.7.0 database. However, upgrading the database to 8.1.7.2 merely replaced the error with a NullPointerException.
As a possible workaround it may be worth trying the following
(on a 8.1.7.0 database):
$ svrmgrl
SVRMGR>connect internal
SVRMGR>alter system flush shared_pool;
SVRMGR>alter system flush shared_pool;
SVRMGR>alter system flush shared_pool;
SVRMGR>alter java class "java/io/Serializable" check;
SVRMGR>alter system flush shared_pool;
SVRMGR>alter system flush shared_pool;
SVRMGR>alter system flush shared_pool;
SVRMGR>shutdown immediate;
SVRMGR> startup
Similar Messages
-
Reports fail when run against a different data source
Hello,
We have a VB.NET 2008 WinForms application running on Microsoft .NET 3.5. We are using Crystal Reports 2008 runtime, service pack 3 -- using the CrystalDecisions.Windows.Forms.CrystalReportViewer in the app to view reports. In the GAC on all our client computers, we have versions 12.0.1100.0 and 12.0.2000.0 of CrystalDecisions.CrystalReports.Engine, CrystalDecisions.Shared, and CrystalDecisions.Windows.Forms.
Please refer to another one of our posted forum issues, "Critical issue since upgrading from CR9 to CR2008", as these issues seem to be related:
Critical issue since upgrading from CR9 to CR2008
We were concerned with report display slowdown, and we seem to have solved that by using the Oracle Server driver (instead of either Microsoft's or Oracle's OLEDB driver). But now we must find a resolution to another piece of the puzzle, which is: why does a report break in the .NET viewer if the data source embedded in the .rpt file is different from the one you are trying to run the report against?
Problem:
If you have a production database name (e.g. "ProdDB") embedded in your .rpt file that you built your report from and try to run that report against a development database (e.g. "DevDB") (OR VICE VERSA -- it is the switch that is the important concept here), the report fails with a list of messages such as this:
Failed to retrieve data from the database
Details: [Database vendor code: 6550 ]
This only seems to happen if the source of the report data (i.e. the underlying query) is an Oracle stored procedure or a Crystal Reports SQL Command (the reports run fine against all data sources if the source is a table or a view). In trying different things to troubleshoot this, including adding a ReportDocument.VerifyDatabase() call after setting the connection information, the Crystal Reports viewer will spit out other nonsensical errors about being unable to find certain fields (e.g. "The field name is not known") or being unable to find the table (even though the source data should be coming from an Oracle stored procedure, not a table).
When the reports are run in the Crystal Reports Designer, they run fine no matter what database is being used; the problem only happens when they are run in the .NET viewer. It's almost as if something internally isn't getting fully "set" to the new data source -- we're really grasping at straws here.
For the sake of completeness of information, here is how we're setting the connection information
'-- Set database connection info for the main report
For Each oConnectionInfo In oCrystalReport.DataSourceConnections
oConnectionInfo.SetConnection(gsDBDataSource, "", gsDBUserID, gsDBPassword)
Next oConnectionInfo
'-- Set database connection info for each subreport
For Each oSubreport In oCrystalReport.Subreports
For Each oConnectionInfo In oSubreport.DataSourceConnections
oConnectionInfo.SetConnection(gsDBDataSource, "", gsDBUserID, gsDBPassword)
Next oConnectionInfo
Next oSubreport
... but in troubleshooting, we've even tried an "overkill" approach and added this code as well:
'-- Set database connection info for each table in the main report
For Each oTable In oCrystalReport.Database.Tables
With oTable.LogOnInfo.ConnectionInfo
.ServerName = gsDBDataSource
.UserID = gsDBUserID
.Password = gsDBPassword
For Each oPair In .LogonProperties
If UCase(CStr(oPair.Name)) = "DATA SOURCE" Then
oPair.Value = gsDBDataSource
Exit For
End If
Next oPair
End With
oTable.ApplyLogOnInfo(oTable.LogOnInfo)
Next oTable
'-- Set database connection info for each table in each subreport
For Each oSubreport In oCrystalReport.Subreports
For Each oTable In oSubreport.Database.Tables
With oTable.LogOnInfo.ConnectionInfo
.ServerName = gsDBDataSource
.UserID = gsDBUserID
.Password = gsDBPassword
For Each oPair In .LogonProperties
If UCase(CStr(oPair.Name)) = "DATA SOURCE" Then
oPair.Value = gsDBDataSource
Exit For
End If
Next oPair
End With
oTable.ApplyLogOnInfo(oTable.LogOnInfo)
Next oTable
Next oSubreport
... alas, it makes no difference. If we run the report against a database that is different from the one specified with "Set Datasource Location" in Crystal, it fails with nonsense errors.
Thanks for the reply, Ludek. We have made some breakthroughs, uncovered some Crystal bugs and workarounds, and we're probably 90% there, I hope.
For your first point, unfortunately the information on the Oracle 6550 error was generic, and not much help in our case. And for your second point, the errors didn't have anything to do with subreports at that time -- the error would manifest itself even in a simple, one-level report.
However, your third point (pointing us to KB 1553921) helped move us forward quite a bit more. For the benefit of all, here is a link to that KB article:
Link: [KB 1553921|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333533353333333933323331%7D.do]
We downloaded the tool referenced there and pointed it at a couple of our reports. The bottom line is that the code it generated uses a completely new area of the Crystal Reports .NET API which we had not used before -- the CrystalDecisions.ReportAppServer namespace. Using code based on what that RasConnectionInfo tool generated, we were able to gain greater visibility into some of the objects in the API and to uncover what I think qualifies as a genuine bug in Crystal Reports.
The CrystalDecisions.ReportAppServer.DataDefModel.ISCRTable class exposes a property called QualifiedName, something that isn't exposed by the more commonly-used CrystalDecisions.CrystalReports.Engine.Table class. When changing the data source with our old code referenced above (CrystalDecisions.Shared.ConnectionInfo.SetConnection), I saw that Crystal would actually change the Table.QualifiedName from something like "SCHEMAOWNER.PACKAGENAME.PROCNAME" to just "PROCNAME" (essentially stripping off the schema and package name). Bad, Crystal... VERY BAD! IMHO, Crystal potentially deserves to be swatted on the a** with the proverbial rolled-up newspaper.
I believe this explains why we were also able to generate errors indicating that field names or tables were not found -- because Crystal had gone and changed the QualifiedName to remove some key info identifying the database object! So, knowing this and using the code generated by the RasConnectionInfo tool, we were able to work around this bug with code that worked for most of our reports ("most" is the key word here -- more on that in a bit).
So, first of all, I'll post our new code. Here is the main area where we loop through all of the tables in the report and subreports:
'-- Replace each table in the main report with new connection info
For Each oTable In oCrystalReport.ReportClientDocument.DatabaseController.Database.Tables
oNewTable = oTable.Clone()
oNewTable.ConnectionInfo = GetNewConnectionInfo(oTable)
oCrystalReport.ReportClientDocument.DatabaseController.SetTableLocation(oTable, oNewTable)
Next oTable
'-- Replace each table in any subreports with new connection info
For iLoop = 0 To oCrystalReport.Subreports.Count - 1
sSubreportName = oCrystalReport.Subreports(iLoop).Name
For Each oTable In oCrystalReport.ReportClientDocument.SubreportController.GetSubreportDatabase(sSubreportName).Tables
oNewTable = oTable.Clone()
oNewTable.ConnectionInfo = GetNewConnectionInfo(oTable)
oCrystalReport.ReportClientDocument.SubreportController.SetTableLocation(sSubreportName, oTable, oNewTable)
Next oTable
Next iLoop
'-- Call VerifyDatabase() to ensure that the tables update properly
oCrystalReport.VerifyDatabase()
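The GetNewConnectionInfo() helper called above isn't shown in this post (it was deferred to a follow-up that got cut off). Purely as a hypothetical sketch of what such a helper might look like, based on the style of code the RasConnectionInfo tool generates: the Clone(True) call and the "QE_ServerDescription" attribute name are assumptions here, so verify both against your own tool-generated code before relying on them.

```vbnet
'-- HYPOTHETICAL SKETCH ONLY: build a new RAS ConnectionInfo for a table,
'-- pointing it at the new data source while leaving everything else
'-- (including the table's QualifiedName) untouched.
Private Function GetNewConnectionInfo(ByVal oTable As CrystalDecisions.ReportAppServer.DataDefModel.ISCRTable) _
        As CrystalDecisions.ReportAppServer.DataDefModel.ConnectionInfo
    '-- Clone the existing connection so driver-specific attributes survive
    Dim oConnectionInfo As CrystalDecisions.ReportAppServer.DataDefModel.ConnectionInfo = _
        oTable.ConnectionInfo.Clone(True)
    '-- Overwrite only the server and logon details (attribute name assumed)
    oConnectionInfo.Attributes("QE_ServerDescription") = gsDBDataSource
    oConnectionInfo.UserName = gsDBUserID
    oConnectionInfo.Password = gsDBPassword
    Return oConnectionInfo
End Function
```

The point of cloning rather than constructing from scratch is that only the server/logon pieces change, which is what avoids disturbing the QualifiedName discussed above.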
(Thanks to Colin Stynes for his post in the following thread, which describes how to handle the subreports):
Setting subreport connection info at runtime
There seems to be a limitation on the number of characters in a post on this forum (before all formatting gets lost), so please see my next post for the rest.... -
Sdo_filter fail when query against a spatial view in different schema
We have a table with X,Y coordinates and would like to run spatial queries against it. We do not want to change the table structure, so we opted to use a function-based index. USER_SDO_GEOM_METADATA is updated and the index is built. Then we created a view with a spatial column from the table. Everything works fine for the user who owns the table and view.
When we try to run a spatial query against the view as a different user, it fails with an error. However, if we substitute the select from my_view* with the actual SQL statement that created the view, it works. So it looks like Oracle refuses to acknowledge the spatial index if it is accessed via the view. Here are some simplified scripts:
--- connect as USER1.
--update meta data
INSERT INTO USER_SDO_GEOM_METADATA ( TABLE_NAME, COLUMN_NAME, DIMINFO, SRID ) VALUES
('LOCATIONS', 'MDSYS.SDO_GEOMETRY(2001,2264,SDO_POINT_TYPE(NVL(X_COORD,0),NVL(Y_COORD,0),NULL),NULL,NULL)',
SDO_DIM_ARRAY( SDO_DIM_ELEMENT('X', 1300000, 1600000, 1), SDO_DIM_ELEMENT('Y', 400000, 700000, 1) ), 2264 );
--created index
CREATE INDEX LOCA_XYGEOM_IDX ON LOCATIONS
( SDO_GEOMETRY(2001,2264,SDO_POINT_TYPE(NVL(X_COORD,0),NVL(Y_COORD,0),NULL),NULL,NULL)
) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
--create view
CREATE VIEW USER1.MY_VIEW AS SELECT ID ,X_COORD,Y_COORD, SDO_GEOMETRY(2001,2264,SDO_POINT_TYPE(NVL(X_COORD,0),NVL(Y_COORD,0),NULL),NULL,NULL) SHAPE
FROM USER1.LOCATIONS WHERE X_COORD>0 AND Y_COORD>0;
-- run spatial query from view; works fine for user1 but fails for user2.
SELECT SHAPE FROM (
SELECT * FROM USER1.MY_VIEW
) a WHERE sdo_filter (shape, sdo_geometry ('POLYGON ((1447000 540000, 1453000 540000, 1453000 545000, 1447000 545000, 1447000 540000))', 2264), 'querytype=window') = 'TRUE';
-- run spatial query from the table directly, simply replacing the view with the actual statements that created the view; works fine for user1 AND user2.
SELECT SHAPE FROM (
SELECT ID ,X_COORD,Y_COORD, SDO_GEOMETRY(2001,2264,SDO_POINT_TYPE(NVL(X_COORD,0),NVL(Y_COORD,0),NULL),NULL,NULL) SHAPE
FROM USER1.LOCATIONS WHERE X_COORD>0 AND Y_COORD>0
) a WHERE sdo_filter (shape, sdo_geometry ('POLYGON ((1447000 540000, 1453000 540000, 1453000 545000, 1447000 545000, 1447000 540000))', 2264), 'querytype=window') = 'TRUE';
When run against the view by user2, the error is:
ORA-13226: interface not supported without a spatial index
ORA-06512: at "MDSYS.MD", line 1723
ORA-06512: at "MDSYS.MDERR", line 8
ORA-06512: at "MDSYS.SDO_3GL", line 1173
13226. 00000 - "interface not supported without a spatial index"
*Cause: The geometry table does not have a spatial index.
*Action: Verify that the geometry table referenced in the spatial operator
has a spatial index on it.
Note: the SELECT SHAPE FROM (****) A WHERE SDO_FILTER(....) syntax comes from a third-party application; all we control is the part inside "(select ...)".
So it appears Oracle is treating the view differently. I have attempted to fake the view name into USER_SDO_GEOM_METADATA; it did not work. I also granted SELECT on the index table to user2; that did not work either.
If we re-create the view in the user2 schema, it works for user2 but not for user1, so it's not something we can do for every user.
I searched the forum and found no good match. A few posts talked about "union all" in a view causing the problem, but I do not have a union.
We are only using Oracle 10g Locator, not the full Spatial option.
Any ideas?
Thanks!
Edited by: liu.284 on Oct 4, 2011 12:08 PM
It seems to be a bug, where a function-based spatial index is not correctly handled in a view query transformation.
Not sure if the following works for you or not:
Add a new column "shape" (mdsys.sdo_geometry) to the table locations, use a trigger and x_coord/y_coord to set values for this new column, and just create a normal spatial index on this new column (dropping the function-based spatial index). Then create a view like:
CREATE VIEW USER1.MY_VIEW2 AS SELECT ID , X_COORD, Y_COORD, SHAPE
FROM USER1.LOCATIONS WHERE X_COORD>0 AND Y_COORD>0; -
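Spelled out, the reply's suggested workaround might look something like the sketch below. The trigger name, index name, backfill step, and metadata re-registration are my assumptions; the SRID and dimension bounds are copied from the original scripts. Treat it as a starting point, not a tested fix.

```sql
-- Sketch of the suggested workaround (object names are illustrative).
-- 1) Add a real geometry column.
ALTER TABLE user1.locations ADD (shape MDSYS.SDO_GEOMETRY);

-- 2) Keep it in sync with X_COORD/Y_COORD via a trigger.
CREATE OR REPLACE TRIGGER user1.locations_shape_trg
BEFORE INSERT OR UPDATE OF x_coord, y_coord ON user1.locations
FOR EACH ROW
BEGIN
  :NEW.shape := MDSYS.SDO_GEOMETRY(2001, 2264,
      MDSYS.SDO_POINT_TYPE(NVL(:NEW.x_coord, 0), NVL(:NEW.y_coord, 0), NULL),
      NULL, NULL);
END;
/

-- 3) Backfill existing rows.
UPDATE user1.locations
   SET shape = MDSYS.SDO_GEOMETRY(2001, 2264,
       MDSYS.SDO_POINT_TYPE(NVL(x_coord, 0), NVL(y_coord, 0), NULL),
       NULL, NULL);

-- 4) As USER1, re-register metadata against the real column and replace
--    the function-based index with a normal spatial index.
DELETE FROM user_sdo_geom_metadata WHERE table_name = 'LOCATIONS';
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES ('LOCATIONS', 'SHAPE',
        SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', 1300000, 1600000, 1),
                      SDO_DIM_ELEMENT('Y', 400000, 700000, 1)), 2264);

DROP INDEX loca_xygeom_idx;
CREATE INDEX loca_shape_idx ON user1.locations (shape)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX;
```

With a plain column and a plain spatial index, MY_VIEW2 (as in the reply) should no longer depend on the function-based index being recognized through the view.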
I have a production mobile Flex app that uses RemoteObject calls for all data access, and it's working well, except for a new remote call I just added that fails only when running a release build. The same call works fine when running on the device (iPhone) using a debug build. When running a release build, the result handler is never called (nor is the fault handler). Viewing the BlazeDS logs in debug mode, the call is received and sent back with data. I've narrowed it down to what seems to be a data size issue.
I have targeted one specific data call that returns in the String value a string length of 44kb, which fails in the release build (result or fault handler never called), but the result handler is called as expected in debug build. When I do not populate the String value (in server side Java code) on the object (just set it empty string), the result handler is then called, and the object is returned (release build).
The custom object being returned in the call is a very simple object, with getters/setters for the simple types boolean, int, and String, and one org.w3c.dom.Document type. This same object type is used on other RemoteObject calls (different data) and works fine (release and debug builds). I originally was returning a Document but, just to make sure this wasn't the problem, changed the value to be returned to a String, just to rule out XML/DOM issues in serialization.
I don't understand 1) why the release build vs. debug build behavior is different for a RemoteObject call, 2) why the calls work in debug build when sending over a somewhat large (but, not unreasonable) amount of data in a String object, but not in release build.
I haven't tried to find out exactly where the failure point in size is, but I'm not sure that's even relevant, since 44 KB isn't an unreasonable size to expect.
By turning on debug mode in BlazeDS, I can see the object and its attributes being serialized, and everything looks good there. The calls are received and processed appropriately in BlazeDS for both debug and release build testing.
Anyone have an idea on other things to try to debug/resolve this?
Platform testing is BlazeDS 4, Flash Builder 4.7, WebSphere 8 server, iPhone (iOS 7.1.2). I tried multiple Flex SDKs, from 4.12 to the latest 4.13, with no change in behavior.
Thanks!
After a week's worth of debugging, I found the issue.
The Java type returned from the call was defined as ArrayList. Changing it to List resolved the problem.
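For anyone hitting the same thing, the fix amounts to declaring the interface type on the remote method's signature. The class and method names below are made up for illustration; only the return-type change reflects the poster's fix:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a server-side class of the kind exposed to Flex
// via RemoteObject. Names are hypothetical.
class EventService {

    // Before (result handler never fired in release builds, per the post):
    //     public ArrayList<String> getEvents() { ... }

    // After: declare the List interface as the return type instead.
    public static List<String> getEvents() {
        List<String> events = new ArrayList<>();
        events.add("event-1");
        events.add("event-2");
        return events;
    }
}
```

The implementation still returns an ArrayList; only the declared type changes, which is generally good practice anyway (program to the interface).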
I'm not sure why ArrayList isn't a valid return type; I've been looking at the Adobe docs and still can't see why it isn't. And why it works in debug mode and not in a release build is even stranger. Maybe someone can shed some light on the logic here for me. -
11gR2 RAC install fail when running root.sh script on second node
I get the errors:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only 0
ORA-15080: synchronous I/O operation to a disk failed
[main] [ 2012-04-10 16:44:12.564 EDT ] [UsmcaLogger.logException:175] oracle.sysman.assistants.util.sqlEngine.SQLFatalErrorException: ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only 0
ORA-15080: synchronous I/O operation to a disk failed
I have tried the fix solutions from the Metalink note, but they did not fix the issue:
11GR2 GRID INFRASTRUCTURE INSTALLATION FAILS WHEN RUNNING ROOT.SH ON NODE 2 OF RAC USING ASMLIB [ID 1059847.1]
Hi,
it looks like the "shared device" you are using is not really shared.
The second node is trying to create the ASM diskgroup and the OCR and voting disks again. If this really were a shared device, it would have recognized that the disk is already in use.
So your VMware configuration must be wrong, and the disk you presented as a shared disk is not really shared.
Which VMware version did you use? It will not work correctly with the Workstation or Player editions, since shared disks only really work with the Server version.
If you are indeed using the Server version, could you paste your VM configurations?
Furthermore I recommend using Virtual Box. There is a nice how-to:
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
Sebastian -
LSMW Fails when run in B/G but works fine in Front end..why?
Hi All,
I am trying to run a batch process via LSMW. My files are accurate, no problem with them; everything works fine, but it fails when run in B/G and works absolutely fine in the front end. What is the difference when running in B/G?
The same thing happens when I try to execute an RFC through SAP JCo: it works when the debugger is on (I guess switching on the debugger is similar to running in B/G), but it doesn't work when the debugger is off. Yet when I execute that RFC directly in SE37 from the SAP GUI, it works fine; it fails when connected via JCo.
I am not having this issue with R/3 4.6C or mySAP ECC 6.0; I have this issue only in R/3 4.7.
Has anyone faced a similar situation? Please help.
thanks.
P.S. If this may help: the RFC and LSMW errors both pertain to a change in the address of US employees (infotype 0006).
Applying SAP note 928273 solved this issue.
-
LSMW fails when run in B/G works fine in Frontend..why?
Hi All,
I am trying to run a batch process via LSMW. My files are accurate, no problem with them; everything works fine, but it fails when run in B/G and works absolutely fine in the front end. What is the difference when running in B/G?
The same thing happens when I try to execute an RFC through SAP JCo: it works when the debugger is on (I guess switching on the debugger is similar to running in B/G), but it doesn't work when the debugger is off. Yet when I execute that RFC directly in SE37 from the SAP GUI, it works fine; it fails when connected via JCo.
I am not having this issue with R/3 4.6C or mySAP ECC 6.0; I have this issue only in R/3 4.7.
Has anyone faced a similar situation? Please help.
thanks.
P.S. If this may help: the RFC and LSMW errors both pertain to a change in the address of US employees (infotype 0006).
For LSMW it's the recording of transaction PA40 (employee hire fails when filling address details) and PA30 (change address), and the same is the case with the RFC; it's BAPI_ADDRESSEMPUS_CHANGE.
To elaborate, the error is: "Fill in all the mandatory fields."
Which I am very much doing; there are no hidden fields or anything. I have seen the screens etc. I AM filling all mandatory fields; in fact I am not leaving anything unfilled. The same screen goes through fine in the front end: I just click OK, OK, OK and boom, transaction complete, no complaints. But running in B/G is killing me.
I have to run the batch for 100,000 employees.
What defeats my logic is that it works fine in 4.6C and mySAP ECC 6.0 but not in 4.7.
Hruser
Message was edited by:
Hruser -
LSMW fails when run in B/G but works in Frontend..why?
Hi All,
I am trying to run a batch process via LSMW. My files are accurate, no problem with them; everything works fine, but it fails when run in B/G and works absolutely fine in the front end. What is the difference when running in B/G?
The same thing happens when I try to execute an RFC through SAP JCo: it works when the debugger is on (I guess switching on the debugger is similar to running in B/G), but it doesn't work when the debugger is off. Yet when I execute that RFC directly in SE37 from the SAP GUI, it works fine; it fails when connected via JCo.
I am not having this issue with R/3 4.6C or mySAP ECC 6.0; I have this issue only in R/3 4.7.
Has anyone faced a similar situation? Please help.
thanks.
P.S. If this may help: the RFC and LSMW errors both pertain to a change in the address of US employees (infotype 0006).
Applying SAP note 928273 solved this issue.
thank you. -
Opening Excel Workbook Fails when run from Scheduled Task on Windows Server 2008 R2
Hi,
I have a little vbs script that instantiates the Excel.Application object and then opens a work book to perform some tasks on it. The script runs fine when run from the command line. When I attempt to run it as a scheduled task (it is supposed to update
data that is pulled from a SQL Server at regular intervals), it fails with the following error:
Microsoft Office Excel cannot access the file 'c:\test\SampleWorkbook.xlsm'. There are several possible reasons: .....
The file does exist. The path reported in the error is correct. The account under which the task is running is the same account I use to run it from the command line. User Account Control is not enabled, and the task is set up to run with highest privileges.
When I run the same script through the Task Scheduler from a Windows Server 2003 machine, it works without issue.
I was just wondering if somebody on this forum has run into a similar issue in connection with Windows Server 2008 R2 and figured out what the magic trick is to make it work. I'm sure it is rights related, but I haven't quite figured out which rights are missing.
Thanks in advance for any advice you may have.
This is truly killing me ... trying to get it working on Windows Server 2012 without success.
I desperately need to automate running Excel macros in a "headless" environment, that is non-interactive, non-GUI, etc.
I can get it to work using Excel.Application COM, either via VBScript or Powershell, successfully on many other Windows systems in our environment - Windows Server 2008 R2, Windows 7 (32-bit), etc., -BUT-
The two servers we built out for running our automation process are Windows Server 2012 (SE) - and it just refuses to run on the 2012 servers - it gives the messages below from VBScript and PowerShell, respectively-
I have tried uninstalling and re-installing several different versions of Microsoft Excel (2007 Standard, 2010 Standard, 2010 Professional Plus, 32-bit vs. 64-bit, etc.), but it makes no difference.
Would be extremely grateful if any one out there has had any success in running Excel automation on Server 2012 in a non-interactive environment that they could share.
( I have tried adding the "%windir%\syswow64\config\systemprofile\desktop"
folder, which did fix the issue for me when testing on Windows Server 2008 R2, but sadly did not resolve it on Windows Server 2012 )
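For anyone else landing here: the Desktop-folder trick mentioned above is usually applied to both system profiles on a 64-bit OS (one for 64-bit Office, one for 32-bit Office under SysWOW64). This is the commonly cited workaround for headless Office COM automation; as noted, it helped on 2008 R2 but reportedly not on 2012:

```bat
rem Create the Desktop folder Office expects under both system profiles
mkdir "%windir%\System32\config\systemprofile\Desktop"
mkdir "%windir%\SysWOW64\config\systemprofile\Desktop"
```

Keep in mind Microsoft does not support server-side Office automation in non-interactive sessions, so even with this in place, behavior can vary by OS and Office version.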
[VBScript error msg]
Z:\TestExcelMacro.vbs(35, 1) Microsoft Office Excel: Microsoft Office Excel cannot
access the file 'Z:\TestExcelMacro.xlsm'. There are several possible reasons:
• The file name or path does not exist.
• The file is being used by another program.
• The workbook you are trying to save has the same name as a currently open work
[Powershell error msg]
Exception calling "Add" with "0" argument(s): "Microsoft Office Excel cannot open or save any more documents because there is not enough available memory or disk space.
To make more memory available, close workbooks or programs you no longer need.
To free disk space, delete files you no longer need from the disk you are saving to."
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : ComMethodTargetInvocation
You cannot call a method on a null-valued expression.
+ CategoryInfo : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : InvokeMethodOnNull -
Unit test runs perfectly fine with NUnit but fails when run from TestExplorer
Hello all,
I have a TestProject, Harmony.Tests. In there, I have a method AddApplicationEvent()
which calls another method Send(InvokeRequestMessage requestMessage) which calls a webservice (OperationHandlerBrokerWebService).
The code snippet looks like this. This is not the complete code, but the part where we are calling the web service; the failing line is the InvokeOperationHandler call.
OperationHandlerBrokerWebService brokerService = new OperationHandlerBrokerWebService();
brokerService.UseDefaultCredentials = true;
brokerService.Url = address;
brokerService.Timeout = timeoutInMilliseconds;
byte[] serializedResponseMessage = brokerService.InvokeOperationHandler(serializedRequestMessage);
The same test works and passes fine when I run it with NUnit, but it failed with the following exception when I tried to run it from Test Explorer.
Test Name: AddApplicationEvent
Test FullName: N4S.Harmony.Tests.CaseManagement.HarmonyFacadeTests.AddApplicationEvent
Test Source: d:\TFS\TMW\Dev\TMWOnline\Harmony\N4S.Harmony.Tests\CaseManagement\HarmonyFacadeTests.cs : line 665
Test Outcome: Failed
Test Duration: 0:00:00.296
Result Message:
SetUp : Message returned System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentException: Invalid token for impersonation - it cannot be duplicated.
at System.Security.Principal.WindowsIdentity.CreateFromToken(IntPtr userToken)
at System.Security.Principal.WindowsIdentity..ctor(SerializationInfo info)
at System.Security.Principal.WindowsIdentity..ctor(SerializationInfo info, StreamingContext context)
--- End of inner exception stack trace ---
at System.RuntimeMethodHandle._SerializationInvoke(Object target, SignatureStruct& declaringTypeSig, SerializationInfo info, StreamingContext context)
at System.Reflection.RuntimeConstructorInfo.SerializationInvoke(Object target, SerializationInfo info, StreamingContext context)
at System.Runtime.Serialization.ObjectManager.CompleteISerializableObject(Object obj, SerializationInfo info, StreamingContext context)
at System.Runtime.Serialization.ObjectManager.FixupSpecialObject(ObjectHolder holder)
at System.Runtime.Serialization.ObjectManager.DoFixups()
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream)
at N4S.Forms.OperationHandlerBroker.AMessage.DeserializeMessage(Byte[] serializedMessage)
at N4S.Forms.OperationHandlerBroker.WebServiceServer.BrokerService.InvokeOperationHandler(Byte[] serializedInvokeRequestMessage)
--- End of inner exception stack trace ---
expected: <0>
but was: <1>
Result StackTrace:
at N4S.Harmony.Tests.TestHelper.InvokeOperation(OperationHandler handler, OperationHandlerInput input, Boolean expectedToWork) in d:\TFS\TMW\Dev\TMWOnline\Harmony\N4S.Harmony.Tests\TestHelper.cs:line 136
at N4S.Harmony.Tests.TestHelper.LoginAsUser(String username, String password) in d:\TFS\TMW\Dev\TMWOnline\Harmony\N4S.Harmony.Tests\TestHelper.cs:line 394
at N4S.Harmony.Tests.TestHelper.Login(TestUserName requestedUser) in d:\TFS\TMW\Dev\TMWOnline\Harmony\N4S.Harmony.Tests\TestHelper.cs:line 377
at N4S.Harmony.Tests.TestHelper.LoginAsAdvisor() in d:\TFS\TMW\Dev\TMWOnline\Harmony\N4S.Harmony.Tests\TestHelper.cs:line 230
at N4S.Harmony.Tests.CaseManagement.HarmonyFacadeTests.Login() in d:\TFS\TMW\Dev\TMWOnline\Harmony\N4S.Harmony.Tests\CaseManagement\HarmonyFacadeTests.cs:line 76
at N4S.Harmony.Tests.CaseManagement.HarmonyFacadeTests.SetupTest() in d:\TFS\TMW\Dev\TMWOnline\Harmony\N4S.Harmony.Tests\CaseManagement\HarmonyFacadeTests.cs:line 67
I am not sure what is causing the issue. I checked the credentials and Windows identity during both test runs and there is no difference. Please advise!
Thanks,
Deepak
Hi Tina,
Thanks for your reply.
I do have the NUnit adapter installed. I even noticed that the test runs fine with the NUnit GUI and also if I run it through the Resharper Test Explorer window.
As you can see in the image above, the same test passes when I run it from the Resharper Unit Test Explorer window but fails when I run it from the Test Explorer window. I also captured the information in Fiddler.
There was a significant difference in the header Content-Length. Also, under the User-Agent property the protocol versions are different.
Not sure why the VSTest execution engine is picking a different version.
The UnitTest in question is calling a webservice method which in turn calls a method from another referenced project.
Web Service class
using System;
using System.Web.Services;
using N4S.Forms.OperationHandlerBroker.Server;
using NLog;
namespace N4S.Forms.OperationHandlerBroker.WebServiceServer
{
    /// <summary>
    /// The operation-handler broker service.
    /// </summary>
    [WebService(Description = "The N4S Forms Operation-Handler Broker Web-Service.", Name = "OperationHandlerBrokerWebService",
        Namespace = "N4S.Forms.OperationHandlerBroker.WebServiceServer")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    public class BrokerService : WebService
    {
        /// <summary>
        /// Calls <see cref="HandleRequest"/>. Updates performance-counters.
        /// </summary>
        /// <param name="serializedInvokeRequestMessage">the binary-serialized <see cref="InvokeRequestMessage"/></param>
        /// <returns>the binary-serialized response message</returns>
        [WebMethod(BufferResponse = true, CacheDuration = 0, Description = "Invokes the requested operation-handler and returns a binary-serialized response-message.", EnableSession = false)]
        public byte[] InvokeOperationHandler(byte[] serializedInvokeRequestMessage)
        {
            logger.Trace(Strings.TraceMethodEntered);
            PerformanceMonitor.RecordRequestStarted();
            InvokeRequestMessage requestMessage = (InvokeRequestMessage) AMessage.DeserializeMessage(serializedInvokeRequestMessage);
            InvokeResponseMessage responseMessage;
            try
            {
                responseMessage = HandleRequest(requestMessage);
                PerformanceMonitor.RecordSuccessfulRequest();
            }
            catch (Exception)
            {
                PerformanceMonitor.RecordFailedRequest();
                throw;
            }
            finally
            {
                PerformanceMonitor.RecordRequestEnded();
            }
            logger.Trace(Strings.TraceMethodExiting);
            return AMessage.SerializeMessage(responseMessage);
        }
    }
}
UnitTest snippet
OperationHandlerBrokerWebService brokerService = new OperationHandlerBrokerWebService();
brokerService.UseDefaultCredentials = true;
byte[] serializedResponseMessage = brokerService.InvokeOperationHandler(serializedRequestMessage);
Please advise.
Thanks,
Deepak -
Cloning context file on db tier fails when run non-interactively
Hi all,
I have a problem where cloning the DB tier context file using adclonectx.pl non-interactively using a pairsfile and noprompt fails in certain circumstances and continually returns this error
Target System Port Pool [0-99] : RC-00201: Error: Not a valid port pool number
If we clone from prod or uat (which are on different nodes) to non-prod it works fine, if we clone from a different non-prod environment (on the same node) it fails.
We can run adclonectx.pl interactively with the same pairsfile and it works, so I copied all of the variables in the log into the pairsfile and ran it non-interactively and it failed with the same error.
My understanding is that adclonectx.pl uses the source DB context file and the pairsfile to create the new context file. I've tried cloning the context file non-interactively from several different non-prod envs with the same error, so I don't think it's specific to 1 source env.
It seems to want to prompt for the port pool when it's on the same node as the source environment.
EBS 12.1.3, DB 11.2.0.3 RAC 2 node on Oracle Linux 5.
I've raised 3-9540409031 : adclonectx.pl on db Tier errors with RC-00201 when run non-interactively using pairsfile - but haven't got an answer yet.
This is the pairsfile with everything in it - generated from an interactive session that worked.
s_db_ons_remoteport = 6411
s_cmanport = 1532
s_clusterInterConnects = dxd1db01-ib
s_dbhost = dxd1db01-ib
s_dbSidLower = ebscnv1
s_dbhome4 = +DATA_DXD1
s_dbhome3 = +DATA_DXD1
s_dbSid = EBSCNV1
s_dbhome2 = +DATA_DXD1
s_dbhome1 = +DATA_DXD1
s_isAdmin = YES
s_clonestage = /u01/EBSDEV/product/11.2.0/appsutil/clone
s_jretop = /u01/EBSDEV/product/11.2.0/jdk/jre
s_db_rollback_segs = NOROLLBACK
s_db_util_filedir = /u01/EBSCNV/tmp
s_isForms = YES
s_undo_tablespace = APPS_UNDOTS1
s_temp = /u01/EBSDEV/product/11.2.0/appsutil/temp
s_database_type = RAC
s_dbuser = orebscnv
s_instName = EBSCNV1
s_dbGlnam = EBSCNV
s_domainname = mgmt.shared.health.nz
s_dbgroup = oinstall
s_hostname = dxd1db01-ib
s_jdktop = /u01/EBSDEV/product/11.2.0/jdk/jre
s_isConc = YES
s_instThread = 1
s_dbport = 1532
s_isWeb = YES
s_dbCluster = true
s_contextname = EBSCNV1_dxd1db01-ib
s_dbClusterInst = 2
s_dbdomain = mgmt.shared.health.nz
s_base = /u01/EBSCNV
s_db_ons_localport = 6311
s_contextfile = /u01/EBSCNV/tmp/EBSCNV1_dxd1db01-ib.xml
s_db_oh = /u01/EBSDEV/product/11.2.0
s_instNumber = 1
s_virtual_hostname = dxd1db01-ib
s_display = y
This is the adclonectx.pl command I use - I've checked all the env variables before running and they're all good:
perl ./adclonectx.pl \
contextfile=$SRCCTX \
template=$ORACLE_HOME/appsutil/template/adxdbctx.tmp \
outfile=$NEWCTX \
pairsfile=$PAIRSFILE \
initialnode
When run non-interactively I use this command - and as mentioned above, this works only under certain circumstances:
dummypw=dummypw
echo \$dummypw | perl ./adclonectx.pl \
contextfile=$SRCCTX \
template=$ORACLE_HOME/appsutil/template/adxdbctx.tmp \
outfile=$NEWCTX \
pairsfile=$PAIRSFILE \
initialnode noprompt
Any ideas? It's got 3 DBAs stumped...

That makes sense to me - the pool parameter should be in the pairsfile, or in the parameters when calling it at the command line.
You can force the pool to be changed, even when you're on the same server. Actually, I would use different pools for any environment, different server or not. And avoid the default pool as well. If you do it this way, one of the advantages is that you spot issues - like the one you have - much earlier. In that case, you would need that extra parameter on every run. -
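As a minimal sketch of the suggestion above: seed the port pool into the pairsfile before the non-interactive run, so adclonectx.pl has nothing left to prompt for. The variable name s_port_pool is an assumption here - confirm the exact name against a context file generated by an interactive clone on your target.

```shell
#!/bin/sh
# Sketch: append the target port pool to the pairsfile used by the
# non-interactive adclonectx.pl run.
# NOTE: "s_port_pool" is an assumed variable name -- verify it against
# a context file produced by an interactive clone before relying on it.
PAIRSFILE=/tmp/pairsfile.txt
: > "$PAIRSFILE"                      # start from an empty file for this demo
echo "s_dbport = 1532" >> "$PAIRSFILE"
echo "s_port_pool = 0" >> "$PAIRSFILE"
grep "s_port_pool" "$PAIRSFILE"
```

With the pool pinned in the pairsfile (or passed on the command line, if your version accepts it there), the noprompt run should no longer need to ask.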
Logon Failed when running BSP from Portal
Hello,
We are getting the following error for users when running BSP Application from Portal :
Logon Failed
What has happened ?
Call of URL http://<hostname>:<Portnumber>/sap/bc/bsp/sap/<BSP Application name> terminated
Note:
-Logon performed in system
What can I do ?
Check the validity of your SSO ticket for this system.
HTTP 401 : Unauthorized
Any help would be highly appreciated.
Thanks

Actually, when I run the BSP application in transaction SICF I get the following error:
BSP Error :
Calling the BSP Page was terminated due to an error.
Which is different from the one I already posted.
And as mentioned in the earlier error, I checked the validity of the SSO ticket on the Portal, which is valid until 2038.
Thanks
Edited by: PortalPerson on Aug 24, 2011 10:22 PM -
Execute Applescript in Automator: fails when running Automator action
I am trying to get a simple Automator action to switch spaces every so often. I'm following directions found on MacScripter. The AppleScript runs and works on its own, and when I run it from inside the Execute AppleScript window of Automator. But when I run the Automator action, the Execute AppleScript action fails. Here's the AppleScript code I've entered into the Execute AppleScript window:
tell application "System Events"
keystroke "2" using control down
end tell
Again, this runs and does what it should if I click the Run button in the Execute Applescript window of the automator action. It fails when I run the whole automator action. This Execute Applescript is the first action in the automator sequence.
What am I missing?
Thanks!

Are you using the Run AppleScript action's run handler? The parameters are used to connect the action to Automator:
(this text can be pasted into an Automator 'Run AppleScript' action)

on run {input, parameters}
    tell application "System Events"
        keystroke "2" using control down
    end tell
    return input
end run -
Deployment failing when running .bat script or command line file package
Hi guys,
I am trying to run a .bat file on a client using a program. My data source points to the script's folder.
It keeps failing with error 1.
I have tried making it run in 64-bit using this, without luck: http://madluka.wordpress.com/2012/09/24/configmgr-2012-64bit-file-system-redirection-bites-again/
Here is the content of my .bat file:
@ECHO OFF
REM If started as a 32-bit process on 64-bit Windows, re-launch
REM under the native 64-bit command processor via Sysnative.
IF NOT "%PROCESSOR_ARCHITEW6432%"=="AMD64" GOTO native
ECHO "Re-launching Script in Native Command Processor..."
%SystemRoot%\Sysnative\cmd.exe /c %0 %*
EXIT
:native
ECHO "Running Script in Native Command Processor..."
c:
cd \windows\System32
start cmd.exe /c shutdown -l
EXIT
I will get the same error when running a simple command line as well, instead of a .bat file.
Any ideas?

Maybe this will help you:
http://blog.coretech.dk/kea/configuration-manager-shutdown-utility
Use this instead of the "shutdown.exe -l" command. This tool can be used for logoff also. Hope it helps!
My blogs: Henk's blog and
Virtuall | Follow Me on:
Twitter | View My Profile on:
LinkedIn -
OLE Program works in debug mode fails when run from F8
Hello,
I have implemented code from this forum for sending documents to the printer, as below. Although it works well in debug mode, it fails when I execute it directly from SE38. Any idea?
Best Regards,
Didem GUNDOGDU
* Start Word via OLE (hidden), open the file, print it, then quit.
CREATE OBJECT gs_word 'WORD.APPLICATION'.
SET PROPERTY OF gs_word 'Visible' = '0'.
CALL METHOD OF gs_word 'Documents' = gs_documents.
CALL METHOD OF gs_documents 'Open' = gs_newdoc
  EXPORTING
    #1 = p_filep.
CALL METHOD OF gs_word 'ActiveDocument' = gs_actdoc.
CALL METHOD OF gs_actdoc 'PrintOut'.
CALL METHOD OF gs_word 'Quit'.

Hi Didem,
Just a suggestion: could you print sy-subrc after each method call? Perhaps that can give you a clue.
Regards,
John.