#Overflow error due to division with 1.#INF
Hi All,
I am working with BO 4.0. I want to divide two measure objects, A/B, but one of B's values is 1.#INF, so I am getting a #OVERFLOW error. I want to change the 1.#INF value in the measure to 1. How can I achieve this?
Thanks
Use NOERR() or NDIV0() in BEx, or use this formula in WebI:
=If FormatNumber([KF];"#,#")="1.#INF" Then 1 Else [KF]
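The same idea in plain Java, as a sketch (1.#INF is how Windows C runtimes print an IEEE-754 infinity): replace an infinite denominator with 1 before dividing, which is what the original question asks for.

```java
// Minimal sketch (not WebI/BEx code): guard a division so that an
// infinite denominator is treated as 1, avoiding the #OVERFLOW case.
public class InfGuard {
    static double divideWithInfGuard(double a, double b) {
        if (Double.isInfinite(b)) {
            b = 1.0;               // treat 1.#INF as 1, as the asker wants
        }
        return a / b;
    }

    public static void main(String[] args) {
        System.out.println(divideWithInfGuard(10.0, 2.0));                      // 5.0
        System.out.println(divideWithInfGuard(10.0, Double.POSITIVE_INFINITY)); // 10.0
    }
}
```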
Thanks
Similar Messages
-
Stack Overflow Error for JNI program with Jdk1.3
I wrote a JNI wrapper for a third-party software library (written in C) to use some of the exported functions it provides. My program runs fine under Sun JDK 1.2.2, but I get the following error when using JDK 1.3 to run the program (it's a runtime error; only the version of the runtime virtual machine matters):
# An EXCEPTION_STACK_OVERFLOW exception has been detected in native code outside
the VM.
# Program counter=0x9073337
A stack overflow was encountered at address 0x09073337.
I tried IBM JDK 1.2.2, and it gave me a similar stack overflow error.
The vendor of the third-party software denies any wrongdoing in their code, and I don't have their source code. A test client (simulating the Java client) I wrote in C works perfectly fine, and, as I mentioned earlier, the same Java program runs OK with JDK 1.2.2, without any change to my system stack size. Does anybody know what this is about and what the solution is?
Thanks!
I had the same exception occur in my JNI code and I have some advice on things to look for.
Symptoms: The C++ code runs fine when called in an native executable but when it is wrapped by a JNI call inside a DLL you get the following exception:
An unexpected exception has been detected in native code outside the VM.
Unexpected Signal : EXCEPTION_STACK_OVERFLOW occurred at PC=0x100d72e5
Function name=_chkstk
The address will be different of course.
In my tests I isolated the problem to an allocation of a char array like so at the top of one of my wrapped C++ methods:
char buf[650000];
As you can see, this code requests 650000 bytes of stack memory. When run in a native executable there was no problem, but when I ran it wrapped in the JNI call it blew up.
Conclusion: either the stack space available under JNI is much smaller, or the added overhead of my JNI wrapper exhausted the available stack space, or this is a stack-space issue related to DLLs.
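On the C side the usual fix is to move such a large buffer from the stack to the heap (malloc/new instead of char buf[650000]). On the Java side, one known workaround, assuming your VM honors the stackSize hint of the four-argument Thread constructor, is to run the stack-hungry native call on a thread created with an explicitly larger stack. A sketch, with deep recursion standing in for the JNI call:

```java
// Sketch: per-thread stack size as a workaround for a stack-hungry call.
// The stackSize argument of this Thread constructor is only a hint that
// the VM may honor (HotSpot on most platforms does). depth() stands in
// for the native call that overflows the default stack.
public class BigStackDemo {
    static int depth(int n) {
        return n == 0 ? 0 : 1 + depth(n - 1);
    }

    // Runs the deep recursion on a thread with the given stack size and
    // reports whether it completed without a StackOverflowError.
    static boolean runsWithStack(long stackBytes, int n) {
        final boolean[] ok = {false};
        Thread t = new Thread(null, () -> {
            try {
                depth(n);
                ok[0] = true;
            } catch (StackOverflowError e) {
                ok[0] = false;
            }
        }, "big-stack-worker", stackBytes);
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            return false;
        }
        return ok[0];
    }

    public static void main(String[] args) {
        // A recursion this deep typically overflows a default-sized stack
        // but fits comfortably in a 64 MB one.
        System.out.println(runsWithStack(64L * 1024 * 1024, 200_000));
    }
}
```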
Hope this helps. Anyone with insight on this please put in your 2 cents. -
Document control failed due to error in [DOCMGR-CANCEL] with a return code
I am getting this error while cancelling a purchase order: "Document control failed due to error in [DOCMGR-CANCEL] with a return code of [OTHER]."
I am doing the cancel like this:
Nav: Buyer Work Centre --> Orders
Search for the PO, then
select the order --> Cancel, then click the Go button.
I gave the Reason as Cancel
Communication method: email
Cancel Requisition: Yes
Please help with this; it's urgent.
Thanks,
Vijay.
Hi Vijay,
Check this MOS note.
Cancelling A Standard Purchase Order In Buyer Work Center Results In Error [Docmgr-Cancel] With A Return Code [ID 1338826.1]
Thanks
-Arif. -
Dealing with errors due to newly added/dropped columns
DB version:11g
I am not sure if I have created an unnecessarily large post to explain a simple issue. Anyway, here it is.
I have been asked to code a package for Archiving .
We'll have two schemas: the ORIGINAL schema and an ARCHIVE schema (connected via a DB link).
ORIGINAL Schema -------------------------> ARCHIVE Schema (via DB link)
When records of certain tables in the ORIGINAL schema meet the archiving criteria (based on number of days old, status code, etc.), they will be moved ('archived') to the ARCHIVE schema using an INSERT like:
insert into arch_original@dblink
  (col1,
   col2,
   col3)
select col1,
       col2,
       col3
from original_table;
The original table and its archive table have the same structure, except that the archive table has an additional column, archived_date, which records when a row was archived.
create table original (
  col1 varchar2(33),
  col2 varchar2(35),
  empid number
);
create table arch_original (
  col1 varchar2(33),
  col2 varchar2(35),
  empid number,
  archived_date date default sysdate not null
);
We have tables with lots of columns (there are many with more than 100), and when all the column names are listed explicitly as above, the code becomes huge.
Alternative Syntax:
So I thought of using the syntax
insert into arch_original select original.*, sysdate from original; -- sysdate populates the archived_date column
Even though the code looks simple and short, I've noticed a drawback to this approach.
Drawback:
For the next release, if developers decide to add/drop a column in the ORIGINAL table in the Original Schema, that change should be reflected in the archive_table's (ARCHIVE schema) DDL script as well. It is practically impossible to keep track of all these changes during the development phase.
If I use the
insert into arch_original select original.*, sysdate from original;
syntax, you will only realise that the table structure has changed when you encounter an error (due to a missing/new column) at runtime. But if you have all the column names listed explicitly, like
insert into arch_original@dblink
  (col1,
   col2,
   col3)
select col1,
       col2,
       col3
from original_table;
then you'll encounter the error during compilation itself. I prefer catching the error due to a missing/new column at compile time rather than at runtime.
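One middle ground, my own suggestion rather than something from this thread: generate the explicit column list once from the data dictionary (ALL_TAB_COLUMNS, or JDBC DatabaseMetaData.getColumns) at release time, so the compile-time checking is kept without hand-typing 100+ names. A sketch with the metadata lookup stubbed out as a plain list:

```java
import java.util.List;

// Sketch (an assumption, not from the thread): build the explicit
// archive INSERT from a list of column names fetched at build time.
public class ArchiveInsertBuilder {
    static String buildArchiveInsert(String source, String target, List<String> cols) {
        String colList = String.join(", ", cols);
        // archived_date is appended last and populated by SYSDATE, matching
        // the archive-table layout described above.
        return "INSERT INTO " + target + " (" + colList + ", archived_date) "
             + "SELECT " + colList + ", SYSDATE FROM " + source;
    }

    public static void main(String[] args) {
        System.out.println(buildArchiveInsert(
            "original", "arch_original@dblink", List.of("col1", "col2", "empid")));
    }
}
```

The generated text is pasted into the archiving package at release time, so a dropped or added column still fails at compilation, not at runtime.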
So what do you guys think? I shouldn't go for the
insert into arch_original select original.*, sysdate from original;
syntax because of the above drawback, right? What advantage would it bring if I make ARCHIVED_DATE the first column in the ARCHIVE tables?
The advantage is that if you add a column in the future on both the original and archived tables, the insert statement will keep working anyway:
SQL> create table x (a number, b number);
Table created.
SQL> create table y (arc_date date, a number, b number);
Table created.
SQL> insert into x values (1,1);
1 row created.
SQL> insert into x values (2,2);
1 row created.
SQL> select * from x;
A B
1 1
2 2
SQL> insert into y select sysdate, x.* from x;
2 rows created.
SQL> alter table x add (c number);
Table altered.
SQL> alter table y add (c number);
Table altered.
SQL> alter table x drop column b;
Table altered.
SQL> alter table y drop column b;
Table altered.
SQL> insert into x values (3,3);
1 row created.
SQL> insert into y select sysdate, x.* from x
2 where a=3;
1 row created.
SQL> select * from x;
A C
1
2
3 3
SQL> select * from y;
ARC_DATE A C
25-JAN-10 1
25-JAN-10 2
25-JAN-10 3 3
Max
[My Italian Oracle blog|http://oracleitalia.wordpress.com/2010/01/23/la-forza-del-foglio-di-calcolo-in-una-query-la-clausola-model/]
Edited by: Massimo Ruocchio on Jan 25, 2010 12:44 PM
Added more explicative example -
On some sites we get a sec_error_unknown_issuer SSL error due to the missing root certificate TC TrustCenter Class 2 L1 CA XI. Firefox is the only browser having this issue. Why is that certificate not preinstalled and shipped with Firefox?
Check sales.sauer-danfoss.com for details with Firefox 7.
Thanks
Stefan
You are not sending the TC TrustCenter Class 2 L1 CA XI intermediate certificate:
*http://sales.sauer-danfoss.com/
Web servers need to send all required intermediate certificates to build the chain to built-in root certificates.
You need to install that intermediate certificate on your server.
*http://www.trustcenter.de/en/infocenter/root_certificates.htm#3479
You can test the certificate chain via a site like this:
*http://www.networking4all.com/en/support/tools/site+check/ -
Bit stumped; data overflow error with DATETIME vs DATE or DATETIME2
I find myself in a slightly perplexing situation. In trying to replicate data to a SQL Server 2008 database I have no problems doing so from a DATE column on the Oracle side to either a DATE or DATETIME2 datatype on the SQL Server side. However, upon trying a DATETIME column I'm given the errors below: essentially a -2147217887, but GoldenGate marks it as a data overflow error. The thing is, a DATETIME2 is more like a TIMESTAMP column in Oracle, and a DATETIME is essentially a DATE. Why it would work with a DATE (less precise) or DATETIME2 (more precise) yet not a DATETIME (same precision) is a bit of a head-scratcher. The same defs file is used for each of the options.
Before anyone suggests using either destination datatype that works, I've no choice; it has to be a DATETIME column. The customer is always right, even when they are infuriatingly wrong.
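One hedged guess at the mechanism, not confirmed anywhere in this thread: SQL Server's DATETIME stores fractional seconds in 1/300-second ticks, so millisecond values land only on .000/.003/.007 boundaries, while DATE and DATETIME2 have no such tick granularity. A small demo of that documented rounding behavior:

```java
// Demo of SQL Server DATETIME fractional-second rounding: values are
// stored in 1/300-second ticks, so milliseconds snap to .000/.003/.007.
public class DatetimeTicks {
    // Round a millisecond value (0-999) the way DATETIME storage would.
    static int roundToDatetimeMillis(int ms) {
        long tick = Math.round(ms * 300.0 / 1000.0); // nearest 1/300 s
        return (int) Math.round(tick * 1000.0 / 300.0);
    }

    public static void main(String[] args) {
        System.out.println(roundToDatetimeMillis(1)); // 0
        System.out.println(roundToDatetimeMillis(5)); // 7
        System.out.println(roundToDatetimeMillis(9)); // 10
    }
}
```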
Anyone seen this before or have any suggestions?
Thanks very much in advance!!
Cheers,
Chris
trace
10:55:36.538 (366244) * --- entering READ_EXTRACT_RECORD --- *
10:55:36.538 (366244) exited READ_EXTRACT_RECORD (stat=0, seqno=-1, rba=-1156485006)
10:55:36.538 (366244) processing record for QA1_DW_MS_MAY04.LIEN
10:55:36.538 (366244) mapping record
10:55:36.538 (366244) entering perform_sql_statements (normal)
10:55:36.538 (366244) entering execute_statement (op_type=5,AWO_CUBE.LIEN)
10:55:36.599 (366305) executed stmt (sql_err=-2147217887)
10:55:36.599 (366305) exited perform_sql_statements (sql_err=-2147217887,recs output=6018)
10:55:36.599 (366305) aborting grouped transaction
10:55:36.619 (366325) aborted grouped transaction
10:55:36.619 (366325) committing work
10:55:36.619 (366325) Successfully committed transaction, status = 0
10:55:36.619 (366325) work committed
10:55:36.619 (366325) writing checkpoint
10:55:36.619 (366325) * --- entering READ_EXTRACT_RECORD --- *
10:55:36.619 (366325) exited READ_EXTRACT_RECORD (stat=400, seqno=-1, rba=-1156490736)
ggserr.log:
2012-06-02 10:55:36 WARNING OGG-00869 Oracle GoldenGate Delivery for ODBC, lien.prm: Parameter #: 1 Data Type: 129 DB Part: 5 Length: 9 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 2 Data Type: 129 DB Part: 5 Length: 9 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 3 Data Type: 129 DB Part: 7 Length: 5 Max Length: 128 Status: 8 Precision: 128 Scale: 0 Unavailable Parameter #: 4 Data Type: 129 DB Part: 5 Length: 6 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 5 Data Type: 129 DB Part: 7 Length: 8 Max Length: 56 Status: 8 Precision: 56 Scale: 0 Unavailable Parameter #: 6 Data Type: 129 DB Part: 5 Length: 6 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 7 Data Type: 129 DB Part: 7 Length: 9 Max Length: 128 Status: 8 Precision: 128 Scale: 0 Unavailable Parameter #: 8 Data Type: 129 DB Part: 7 Length: 8 Max Length: 15 Status: 8 Precision: 15 Scale: 0 Unavailable Parameter #: 9 Data Type: 129 DB Part: 5 Length: 23 Max Length: 29 Status: 8 Precision: 23 Scale: 3 Unavailable Parameter #: 10 Data Type: 129 DB Part: 5 Length: 5 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 11 Data Type: 129 DB Part: 5 Length: 23 Max Length: 29 Status: 6 Precision: 23 Scale: 3 Data Overflow Parameter #: 12 Data Type: 129 DB Part: 7 Length: 13 Max Length: 512 Status: 8 Precision: 0 Scale: 0 Unavailable Parameter #: 13 Data Type: 129 DB Part: 5 Length: 23 Max Length: 29 Status: 8 Precision: 23 Scale: 3 Unavailable Parameter #: 14 Data Type: 129 DB Part: 7 Length: 1 Max Length: 1 Status: 8 Precision: 1 Scale: 0 Unavailable Native Error: 0, 0 State: 0, 22007 Class: 0 Source: Line Number: 0 Description: Invalid date format.
2012-06-02 10:55:36 WARNING OGG-01004 Oracle GoldenGate Delivery for ODBC, lien.prm: Aborted grouped transaction on 'AWO_CUBE.LIEN', Database error -2147217887 ([SQL error -2147217887 (0x80040e21)] Parameter #: 1 Data Type: 129 DB Part: 5 Length: 9 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 2 Data Type: 129 DB Part: 5 Length: 9 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 3 Data Type: 129 DB Part: 7 Length: 5 Max Length: 128 Status: 8 Precision: 128 Scale: 0 Unavailable Parameter #: 4 Data Type: 129 DB Part: 5 Length: 6 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 5 Data Type: 129 DB Part: 7 Length: 8 Max Length: 56 Status: 8 Precision: 56 Scale: 0 Unavailable Parameter #: 6 Data Type: 129 DB Part: 5 Length: 6 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 7 Data Type: 129 DB Part: 7 Length: 9 Max Length: 128 Status: 8 Precision: 128 Scale: 0 Unavailable Parameter #: 8 Data Type: 129 DB Part: 7 Length: 8 Max Length: 15 Status: 8 Precision: 15 Scale: 0 Unavailable Parameter #: 9 Data Type: 129 DB Part: 5 Length: 23 Max Length: 29 Status: 8 Precision: 23 Scale: 3 Unavailable Parameter #: 10 Data Type: 129 DB Part: 5 Length: 5 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable Parameter #: 11 Data Type: 129 DB Part: 5 Length: 23 Max Length: 29 Status: 6 Precision: 23 Scale: 3 Data Overflow Parameter #: 12 Data Type: 129 DB Part: 7 Length: 13 Max Length: 512 Status: 8 Precision: 0 Scale: 0 Unavailable Parameter #: 13 Data Type: 129 DB Part: 5 Length: 23 Max Length: 29 Status: 8 Precision: 23 Scale: 3 Unavailable Parameter #: 14 Data Type: 129 DB Part: 7 Length: 1 Max Length: 1 Status: 8 Precision: 1 Scale: 0 Unavailable Native Error: 0, 0 State: 0, 22007 Class: 0 Source: Line Number: 0 Description: Invalid date format ).
report:
2012-06-02 10:55:36 WARNING OGG-01004 Aborted grouped transaction on 'AWO_CUBE.LIEN', Database error -2147217887 ([SQL error -2147217887 (0x80040e21)]
Parameter #: 1 Data Type: 129 DB Part: 5 Length: 9 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable
Parameter #: 2 Data Type: 129 DB Part: 5 Length: 9 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable
Parameter #: 3 Data Type: 129 DB Part: 7 Length: 5 Max Length: 128 Status: 8 Precision: 128 Scale: 0 Unavailable
Parameter #: 4 Data Type: 129 DB Part: 5 Length: 6 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable
Parameter #: 5 Data Type: 129 DB Part: 7 Length: 8 Max Length: 56 Status: 8 Precision: 56 Scale: 0 Unavailable
Parameter #: 6 Data Type: 129 DB Part: 5 Length: 6 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable
Parameter #: 7 Data Type: 129 DB Part: 7 Length: 9 Max Length: 128 Status: 8 Precision: 128 Scale: 0 Unavailable
Parameter #: 8 Data Type: 129 DB Part: 7 Length: 8 Max Length: 15 Status: 8 Precision: 15 Scale: 0 Unavailable
Parameter #: 9 Data Type: 129 DB Part: 5 Length: 23 Max Length: 29 Status: 8 Precision: 23 Scale: 3 Unavailable
Parameter #: 10 Data Type: 129 DB Part: 5 Length: 5 Max Length: 21 Status: 8 Precision: 20 Scale: 0 Unavailable
Parameter #: 11 Data Type: 129 DB Part: 5 Length: 23 Max Length: 29 Status: 6 Precision: 23 Scale: 3 Data Overflow
Parameter #: 12 Data Type: 129 DB Part: 7 Length: 13 Max Length: 512 Status: 8 Precision: 0 Scale: 0 Unavailable
Parameter #: 13 Data Type: 129 DB Part: 5 Length: 23 Max Length: 29 Status: 8 Precision: 23 Scale: 3 Unavailable
Parameter #: 14 Data Type: 129 DB Part: 7 Length: 1 Max Length: 1 Status: 8 Precision: 1 Scale: 0 Unavailable
Native Error: 0, 0
State: 0, 22007
Class: 0
Source: Line Number: 0
Description: Invalid date format
Edited by: chris.baron on Jun 3, 2012 10:36 AM
Not sure if this helps at all...
Datetime Pairs in Oracle BI (OBIEE) - Days, Hours, Minutes, Seconds
http://www.kpipartners.com/blog/bid/83328/Datetime-Pairs-in-Oracle-BI-OBIEE-Days-Hours-Minutes-Seconds
UPDATE: Sorry... didn't see this was for GoldenGate.
Edited by: 829166 on Jun 22, 2012 7:36 AM -
I am doing data acquisition using an NI PXI-4472 and buffered period measurement using an NI PXI-6602 simultaneously; my program gives a buffer overflow error
murali_vml,
There are two common buffer overflow and overwrite errors.
Overflow error -10845 occurs when the NI-DAQ driver cannot read data from the DAQ device's FIFO buffer fast enough to keep up with the acquired data as it flows to the buffer (i.e., the FIFO buffer overflows before all the original data can be read from it). This is usually due to limitations of your computer system, most commonly the result of slow processor speeds (< 200 MHz) in conjunction with PCMCIA DAQ boards, which have small FIFO buffers (e.g., the DAQCard-500). Sometimes using a DAQCard with a larger FIFO can solve the problem, but a better solution is to lower the acquisition rate or move to a faster system. Another cause of the -10845 error could be an interrupt-driven acquisition. For example, the PCMCIA bus does not support Direct Memory Access (DMA). If the system is tied up processing another interrupt (like performing a screen refresh or responding to a mouse movement) when it is time to move data from the board, then that data may get overwritten.
Overwrite error -10846 occurs when the data in the software buffer that you created for an analog input operation gets overwritten by new data before you can retrieve the existing data from the buffer. This problem can be solved by adjusting the parameters of your data acquisition, such as lowering the scan rate, increasing the buffer size, and/or increasing the number of scans to read from the buffer on each buffer read. Additionally, performing less processing in the loop can help avoid the -10846 error.
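The -10846 overwrite case is the classic circular-buffer failure mode: the producer laps a consumer that reads too slowly, so unread samples are lost. A toy illustration (plain Java, not NI-DAQ code):

```java
// Toy model of an acquisition FIFO: the driver writes one sample per
// iteration; the application reads only every `readEvery` iterations.
// If the producer laps the consumer, unread data is overwritten (the
// -10846 situation).
public class OverwriteDemo {
    static boolean overwrites(int bufferSize, int samples, int readEvery) {
        int produced = 0, consumed = 0;
        boolean overwrite = false;
        for (int sample = 0; sample < samples; sample++) {
            if (produced - consumed == bufferSize) {
                overwrite = true;   // producer lapped the consumer
                consumed++;         // oldest unread sample is lost
            }
            produced++;             // acquisition writes one sample
            if (sample % readEvery == 0) {
                consumed++;         // slow consumer reads occasionally
            }
        }
        return overwrite;
    }

    public static void main(String[] args) {
        // Consumer reads only every 3rd sample: a buffer of 4 gets lapped.
        System.out.println(overwrites(4, 10, 3));   // true
        // Consumer keeps up (reads every sample): no overwrite.
        System.out.println(overwrites(4, 10, 1));   // false
    }
}
```

Lowering the scan rate, enlarging the buffer, or reading more per iteration all amount to changing these three parameters in the driver's favor.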
See the NI-DAQ Function Reference Manual for a listing of all NI-DAQ error codes.
Have a great day. -
Stack overflow error while creating connection using Oracle 10g driver
Hi,
Our web application, built on servlets, runs on the iPlanet Web Server (on a Solaris machine). Earlier we used JDK 1.5 update 6 with the Oracle 9i driver and have now migrated to JDK 1.5 update 10 with the same driver. Everything went fine until we started testing the environment with the Oracle 10g driver: JDK 1.5 update 10 with the Oracle 10g driver throws a "Stack overflow error".
Driver version is - 10.2.0.2.0
This occurs only when I access the portal. When I wrote a single standalone main program and ran it on the Solaris machine, it worked fine. But when I try to access the portal with this driver on the classpath, it fails. Please let me know if anyone has any clues.
When I looked into it, I found that the "stack overflow error" occurs at the point where the line DriverManager.getConnection("url", "username", "pwd") executes.
Thanks in advance
below the stacktrace of the exception from webserver error log..
06/Mar/2007:04:20:40] failure (10198):
for host 202.54.182.136 trying to POST /wr/servlet/WorkRequest, service-j2ee reports: StandardWrapperValve[WorkRequest]: WEB2769: Allocate exception for servlet WorkRequest
javax.servlet.ServletException: WEB2778: Servlet.init() for servlet WorkRequest threw exception
at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:949)
at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:658)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:244)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:509)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:218)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:509)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:209)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:509)
at com.iplanet.ias.web.connector.nsapi.NSAPIProcessor.process(NSAPIProcessor.java:161)
at com.iplanet.ias.web.WebContainer.service(WebContainer.java:580)
----- Root Cause -----
java.lang.StackOverflowError
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:620)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
at java.net.URLClassLoader.access$100(URLClassLoader.java:56)
at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:268)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:620)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
at java.net.URLClassLoader.access$100(URLClassLoader.java:56)
at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:268)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:620)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
at java.net.URLClassLoader.access$100(URLClassLoader.java:56)
at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:268)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:620)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
at java.net.URLClassLoader.access$100(URLClassLoader.java:56)
at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:268)
at java.lang.ClassLoader.loadClass(
[06/Mar/2007:04:22:20] info (10198):
CORE5073: Web server shutdown in progress
[06/Mar/2007:04:22:21] info (14506):
CORE1116: Sun ONE Web Server 6.1SP5 (64-Bit) B12/02/2005 04:37
[06/Mar/2007:04:22:21] warning (14513):
CORE1251: On group ls1, servername pstst42.pedc.sbc.com does not match subject "" of certificate Server-Cert.
[06/Mar/2007:04:22:21] warning (14513):
CORE1250: In secure virtual server https-vts, urlhost does not match subject "" of certificate Server-Cert.
[06/Mar/2007:04:22:21] info (14513):
CORE5076: Using [Java HotSpot(TM) 64-Bit Server VM, Version 1.5.0_06] from [Sun Microsystems Inc.]
[06/Mar/2007:04:22:21] info (14513):
WEB0100: Loading web module in virtual server [https-vts] at [servlet]
[06/Mar/2007:04:22:21] info (14513):
WEB0100: Loading web module in virtual server [https-vts] at [vts/servlet]
[06/Mar/2007:04:22:21] info (14513):
WEB0100: Loading web module in virtual server [https-vts] at [cron/servlet]
[06/Mar/2007:04:22:21] info (14513):
WEB0100: Loading web module in virtual server [https-vts] at [find/servlet]
[06/Mar/2007:04:22:21] info (14513):
WEB0100: Loading web module in virtual server [https-vts] at [cb/servlet]
[06/Mar/2007:04:22:21] info (14513):
WEB0100: Loading web module in virtual server [https-vts] at [wr/servlet]
[06/Mar/2007:04:22:21] info (14513):
WEB0100: Loading web module in virtual server [https-vts] at [search]
[06/Mar/2007:04:22:25] info (14513):
HTTP3072: [LS ls1] ready to accept requests
[06/Mar/2007:04:22:25] info (14513):
CORE3274: successful server startup
Message was edited by:
Nandakumar_s
Yes, the request goes through a connection pool, but the weird thing is that the application throws the stack trace exactly where DriverManager.getConnection gets executed.
Not weird at all.
This is what I am guessing you did: you changed the driver and some other stuff, like configuration information.
Then when it blew up you tracked down where in your code you see the stack overflow. That happens to be on the connection line. That is not where the overflow 'occurs'; it merely represents where you saw it.
That line, however, isn't using the Oracle driver. What it is using is a connection pool of some sort. That connection pool is configured somewhere, and that configuration is self-referential (or perhaps refers to another driver which refers back to the original).
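A toy sketch of the misconfiguration being described, with hypothetical names: a wrapper whose configured delegate resolves back to itself recurses until the stack blows, and the error surfaces at the call site rather than where the bad configuration lives.

```java
// Sketch of a self-referential "pool": each getConnection() call
// re-enters itself, so the StackOverflowError appears at the call site,
// not in the configuration that caused the cycle.
public class SelfReferentialPool {
    interface ConnectionSource {
        Object getConnection();
    }

    static String demo() {
        // The wrapper's "delegate" is resolved (via config) to... itself.
        ConnectionSource[] holder = new ConnectionSource[1];
        holder[0] = () -> holder[0].getConnection();
        try {
            holder[0].getConnection();
            return "no overflow";
        } catch (StackOverflowError e) {
            return "StackOverflowError at getConnection()";
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```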
And that causes your stack overflow. -
Error in generating form with 6i
I have installed Designer 6i Release 2 with Form Developer 6i on NT 4.0.
When in the Design Editor I want to generate the form with Generate Module, the system generates "CDR-21600: A running Generator or Utility has failed."
The Action column also says: "It is possible that the internal cache is now in an inconsistent state. You are therefore recommended to close and restart the application."
Could anyone tell me what the problem is and how to solve it.
Thanks
Here is a document which describes some known causes of CDR-21600 errors. I hope it will help you.
PURPOSE
To describe some known causes of CDI-21600 errors and to suggest possible solutions and workarounds.
SCOPE & APPLICATION
This note was written for users of Oracle Designer releases 2.1.x and 6.0.
CDI-21600 errors occur most frequently during Design Capture and when generating forms with the Forms and WebServer generators.
Investigating CDI-21600 errors
In Oracle Designer Release 2.1.2 and Release 6.0, this error has
the form:
CDI-21600 'A running generator or utility has failed'
The Release 2.1.1 error message was: 'Generator or Utility throw
an Exception'
The CDI-21600 error message means that the generator is hitting
an unhandled exception, also known as a GPF (general protection
fault). The CDI-21600 error masks the underlying exception error.
To see the real error do the following:
1. Go into the Registry Editor (REGEDIT).
2. Navigate to HKEY_LOCAL_MACHINE\software\oracle\des2_70
3. Set EXCEPT_HANDLING to 0 (by default it is 1).
Repeat the action that resulted in the error.
Known Causes of CDI-21600 Errors and Possible Solutions
Some of the reasons why CDI-21600 errors occur are listed below.
1. A common cause of CDI-21600 errors is failure to install the necessary Developer patches.
See [NOTE:64630.1] Developer Patches required to run Designer with Developer
2. Check that Designer is running on a supported database. Also check that the TNS connection is correct.
See [NOTE:60705.1] Designer Certification Matrix (HTML)
3. Check for 'dangling' foreign keys, in other words FKs no longer owned by any table in the repository. Delete all invalid constraints.
Invalid constraints may be created if you use the repository dump utility to dump and restore external foreign keys referencing tables shared into the application system, without dumping and restoring the tables that own them.
If you restore a complete dump (rather than a 'skeleton' one), and then use the 'Reconnect Share Links' option when restoring, you may be able to resolve this problem.
To get a complete list of 'dangling' constraints in your repository, connect using SQL*Plus and use the following query:
SELECT app.name, key.name
FROM ci_application_systems app, ci_constraints key
WHERE key.table_reference IS NULL
AND key.application_system_owned_by = app.id;
You can also run CKAZANAL.ANAL_REFERENCES on your repository and delete all the invalid constraints that it finds. You can run the Repository Analyzer from: Front Panel -> Repository Administration Utility -> Utilities.
NOTE: There may be inconsistencies in the repository that the Repository Analyzer cannot fix. You might solve such problems by dropping all the tables of your application, recreating them from the ERD, then using the DDT and recreating your modules.
[BUG:847190] CDI-21600 during forms generation: 'dangling' foreign key
"Since the generator is running on a repository that contains invalid constraints and the Repository Analyzer solves the problem, bug closed as unfeasible to fix."
4. Check your modules for invalid or missing references such as missing window placements.
5. Try generating your module against default templates and object libraries.
6. When capturing forms or libraries, try capturing the form or library without application logic, then capture the application logic on its own.
See [NOTE:1064690.6] CDI-21600 when capturing design of form with application logic
[BUG:757541] DESCAP: CDI-21600 error reported when capturing with application logic
Fixed In Ver: 6.0
[BUG:926383] Duplicate of [BUG:757541]. This has been fixed in 2.1.2 patch 779559. However, you would be advised to apply a later patch such as 855635, which fixes more bugs in this area.
7. Make sure that all objects that are referenced by the form have been captured into the repository before capturing the form.
8. A CDI-21600 will occur if a lookup usage displays only one column of datatype DATE, or if the column of datatype DATE is displayed as the first item in the block.
Workaround
Add more column usages to the lookup block and do not display the DATE datatype column usage as the first item in the block.
9. [BUG:810472] CDI-21600 when 'Argument in Caller' is set
Fixed In Ver: 6.5.3.0
Workaround
Make sure that you have an argument in the called module that is mapped to the "Argument Passed Value" in the calling module. The only way to get this mapping back once the APV has the <Module Argument> label is to delete it and recreate it.
10. [BUG:801736] CDI-21600 on design capture of a form with subclassed object
Fixed In Ver: 6.0.3.1.0 (backport)
Fixed In Ver: 6.5
You have an item that has been subclassed to an object. Checking the Design Capture option 'Capture Control Blocks' causes the CDI-21600 error. Uncheck 'Capture Control Blocks' and the problem does not occur. Open the FMB in Forms*Builder and look at Data Blocks -> Items. Break the link to the object, save the FMB, and the form will capture (similar to [BUG:794872]). Alternatively, ensure the link can be established.
11. [BUG:850436] CDI-21600 on generation of a form with a template having a subclassed object group
You try to generate a form out of Designer that uses a user-defined template. If a collection of objects in the template is grouped into an object group, dragged into the object library and then either copied or subclassed into a form, when the form is generated you get a CDI-21600 error.
12. [BUG:822659] Module generation fails (CDI-21600) with multi-column PK having long prompt text
Fixed In Ver: 6.5.3.2
Module generation with a multi-column primary key having long prompt text causes CDI-21600 with the preference MSGSFT set.
Workaround
Shortening the prompt text of the PKs may not be applicable; you may lose end-user information.
You may have the same problem with a mandatory compound FK. CASEOFG tries to generate a message '<P1> must be entered', where <P1> contains all the prompts of the bound items from the FK. If you reduce the length of the prompts, or set MSGSFT = NULL or WEDI = S or the property Mandatory? = No, it works correctly.
13. [BUG:792542] Capturing application logic causes CDI-21600 (V2-style triggers)
Fixed In Ver: 6.5.5
After removal of the V2 triggers, the form captures/merges OK on 5.0.24.8, provided patch 875027 has not been applied.
14. [BUG:790877] CDI-21600 if the primary/foreign keys have no key components
Fixed In Ver: 6.5.11
Generating a module with tables having a primary key not correctly defined (no PK component) will cause a CDI-21600 error. This can occur when unloading a module from the RON. If you pick up the module (and only the module) in the unload set, the table and its PK are unloaded as a skeleton. Loading the .DAT file into a new application will create a PK without a component.
15. [BUG:771549] CDI-21600 if cannot connect to the DB with connect string in Options (Compile)
Fixed In Ver: 6.5.13
If you cannot connect to the DB with the connect string specified in Options (Compile), the Forms generator will fail with CDI-21600. This problem occurs when you cannot connect to the DB because:
- the username or password is wrong;
- or the SQL*Net alias is not defined in the TNSNAMES.ORA file;
- or the SQL*Net listener is not started;
- or the DB is down.
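Several of the causes in that list (listener not started, alias wrong, DB down) can be checked before going near Designer. As a rough sketch, a plain TCP probe tells you whether anything is listening at all; the host and port here are assumptions (1521 is only the default Oracle listener port), and a refused connection cannot tell you which of the causes applies:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class ListenerProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    // A refused or timed-out connection means the listener is down, the host
    // is wrong, or a firewall is in the way. Credential errors only show up
    // later, once the listener itself answers.
    public static boolean canReach(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(canReach("127.0.0.1", 1521, 2000)
            ? "listener reachable" : "listener NOT reachable");
    }
}
```

If the probe fails, fix the TNSNAMES.ORA entry or start the listener before retrying the generator.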
16. [BUG:785106] CDI-21600 when generating a master-detail form with Preserve Layout
[BUG:855812] is a duplicate of this bug.
Fixed In Ver: 5.0.24.6.0 (Bug:860426 Backport request for 2.1.2)
Fixed In Ver: 6.0
Fixed In Ver: 6.5.3
You have a master-detail form with the master's items partly on a tab canvas. Generate Module works OK. You enter Forms Builder and move some items on the tabs (just small changes; items stay on the same tabs). You change the look of the detail and change Records Displayed. Now in Designer you generate the module with Preserve Layout and get a CDI-21600 error. The problem might reproduce without making any changes in Forms Builder, just by generating with Preserve Layout.
17. [BUG:891306] If primary key column of lookup in check constraint comment of base table
Fixed In Ver: 6.5.5
Workaround
Do not use the name of the bound item that is based on the primary key column of the lookup table in a check constraint comment of the base table.
18. [BUG:896026] Forms gen throws assertion failure in CVINI/BUILDACTIONITEM@/CV/CVI/CVIBNI.CPP
Fixed In Ver: 6.5.7
The problem is caused by a PL/SQL definition (function, package or procedure) being defined as a called module for the module you are trying to generate. To resolve the problem and enable the module to be generated, remove all Called Modules that are PL/SQL definitions (functions, procedures or packages).
See [NOTE:2107207.6] CDI-21600 during generation of module or Assertion Failure \cv\cvi\cvibni.cpp
19. [BUG:812333] CDI-21600 generating a web module after adding an unbound item
Fixed In Ver: 6.5.3.0
Backport [BUG:1280667] raised to fix by 6.0.3.9
You add an unbound item (SQL expression) to a Web module. When you try to generate the module you get a CDI-21600 error. If you delete the unbound item, the Web module generates correctly.
In a test case the problem occurred during validation of the derivation text if the master module component was in a different module. A workaround was to rearrange module components so that this was not the case.
20. [BUG:1627963] CCVDIAG::TRACEGENERATORMESSAGE WHEN GENERATING INCORRECT DERIVATION EXPRESSION
Message
CDR-21605: Failed while processing Module <mod> in function CCVDiag::TraceGeneratorMessage BOF
Cause
The generator failed due to an unexpected error - the error indicates the object the generator was processing when it failed.
Helena -
I just upgraded from Adobe Acrobat 10 to 11.0.7, because I wanted to use the PDFMaker in IBM Notes 9.0.1. But after that, every time I try to delete a mail document in my mail database, I get an overflow error message. Deleting the line with the link to the local Adobe PDFMaker file for Notes in the notes.ini (AddInMenus=C:\PROGRA~2\Adobe\ACROBA~1.0\PDFMaker\Mail\LOTUSN~1\PDFMLO~1.DLL) solves the issue, but then you cannot use the PDFMaker anymore.
I don't know if this problem exists with 11.0.06, the first version which should support IBM Notes 9.x.
Any suggestion how to fix this problem?
Many thanks
Harald
FYI
Adobe released Acrobat 11.0.11.
Source: 11.0.11 Planned update, May 12, 2015 — Acrobat and Adobe Reader Release Notes
Bug fixes: http://www.adobe.com/devnet-docs/acrobatetk/tools/ReleaseNotes/11/11.0.11.html#bug-fixes
PDF creation: http://www.adobe.com/devnet-docs/acrobatetk/tools/ReleaseNotes/11/11.0.11.html#pdf-creation
3775740: Lotus Notes 9.0.1 gives overflow error when Acrobat pdfmaker 11.0.06 is installed. -
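Harald's workaround is effectively a one-line edit to notes.ini: remove the AddInMenus entry. For anyone who has to do this across several machines, here is a hedged sketch in Java of removing a key from an .ini-style file. The key name comes from the post above; the file path is yours to supply, and back up notes.ini before touching it:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class IniLineRemover {
    // Rewrites the file, dropping every line that starts with the given key
    // (e.g. "AddInMenus="). Returns the number of lines removed.
    public static int removeKey(Path iniFile, String keyPrefix) throws IOException {
        List<String> lines = Files.readAllLines(iniFile);
        List<String> kept = lines.stream()
                .filter(l -> !l.startsWith(keyPrefix))
                .collect(Collectors.toList());
        Files.write(iniFile, kept);
        return lines.size() - kept.size();
    }
}
```

Calling removeKey(Paths.get("C:\\notes\\notes.ini"), "AddInMenus=") would reproduce the manual edit; restoring the saved backup re-enables PDFMaker.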
Error due to out of memory condition
Hi,
system: Windows XP, 2 GB RAM, InDesign CS3
I placed two files in InDesign: the first is .ai (1.1 MB, probably exported from a CAD program), the second is .psd (53 MB). When I try to print to a .ps file, InDesign displays the message: "Export error: error due to out of memory condition". If I 1. export to PDF, everything is OK, or 2. rasterize the .ai in Photoshop and then import the resulting .psd into InDesign, printing to a .ps file is OK.
Does somebody have an answer to my problem?
My guess is "mistakes" referred to simply not outputting as intended by the designer. I had output issues when going directly to PDF and had to continue the old way of doing things by outputting to .ps and then distilling to PDF. The "mistakes" that occurred for me went nearly unnoticed. The publication was a 128+4-page product catalog that had a header at the top of each page. I never had any problems outputting in the past. At the time, I had just updated to CS3 (XP Pro SP3, dual core, 4 GB RAM). I was advised that outputting directly to PDF had been drastically improved. I output my files without "mistakes" or errors, or so I thought. Even after proofing, I didn't realize that the drop shadows for the header headline were not applied. The problem, though, wasn't that they were missing from all headers; they were only missing from about page 60 onward, halfway through. After getting the finished publication back, I didn't notice it for a few months. Once I noticed it, I investigated. Several troubleshooting hours later (after forum posts and other expert help), I/we concluded that it was simply a program deficiency. During my troubleshooting I looked at several areas, mainly focusing on the transparency settings. I output the document several times. I finally exported each page separately and found that that was the only way I could get every page to include the drop shadow in its header (past page 60 or so).
I have been past that issue for a while now. Now I am on Win 7, quad core, 6 GB of RAM, and don't have the outputting problem that I used to. But I occasionally get this "Error due to out of memory condition", which is BS because my hardware specs are beyond reasonable for what I typically create in my workflow. Most of the time my memory is only at about half of its capacity when I get this error, and not when I have Photoshop open with a large multipage file that I am working on. The out-of-memory error happens during file output or just when performing ordinary layout functions in InDesign.
When on my personal computer (Aluminum iMac 2.4 Core2 duo, 4 gigs of ram, 512 video, CS4 & CS5) I have neither of the above issues. -
Batch Processing error: Object variable or With block variable not set - 91
We are experiencing the following error when trying to execute FDM Batch Processing of files in our UAT environment. This error is not occurring in our DEV environment. I have seen this error before when the data file had been left open and FDM could not access the file, so it appears this error is usually due to file permissions. However, this time none of the files are open, and as far as we can see, FDM should have full access to the OpenBatch and Inbox folders etc.
Does anyone please have any suggestions, particularly on what account FDM will carry out the various tasks? Would it use a system account?
Error:
"Object variable or With block variable not set - 91"
FDM Log:
** Begin FDM Runtime Error Log Entry [2012-07-06 16:07:09] **
ERROR:
Code............................................. 75
Description...................................... Path/File access error
Procedure........................................ clsBatchLoad.fFileCollectionCreate
Component........................................ upsWBatchLoaderDM
Version.......................................... 1112
Thread........................................... 5828
IDENTIFICATION:
User............................................. admin
Computer Name.................................... *******
App Name......................................... *******
Client App....................................... WorkBench
CONNECTION:
Provider......................................... ORAOLEDB.ORACLE
Data Server......................................
Database Name.................................... *******
Trusted Connect.................................. False
Connect Status.. Connection Open
GLOBALS:
Location......................................... *******
Location ID...................................... 748
Location Seg..................................... 2
Category......................................... *******
Category ID...................................... 14
Period........................................... *******
Period ID........................................ 02/07/2011
POV Local........................................ False
Language......................................... 1033
User Level....................................... 1
All Partitions................................... True
Is Auditor....................................... False
I can confirm that there is definitely data present in our data files in this case.
Please note that this error only occurs when using the Batch Processing functionality of FDM Workbench (which requires files to be placed in the OpenBatch subfolder of the Inbox). I can load individual files fine when using the FDM Web Client.
As part of the first step of the batch load process, FDM Workbench moves files from the OpenBatch folder to a new folder which it creates in the Inbox\Batches directory. However, it is not even managing to do this, and gives the error below.
We have tried to share the OpenBatch folder to allow specific users to drop files into it. Consequently, I believe this suggests a security problem on the OpenBatch folder itself (please see original post). I have been told the privileges should be sufficient for FDM to make use of this folder too; however, I suspect this is not the case at present.
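One way to test the permission theory is to run a small probe under the same account FDM uses and see whether it can create and delete a file in the OpenBatch folder, since that is essentially what the batch loader does when it moves files to Inbox\Batches. This is a generic sketch, not an FDM API call; pass your actual OpenBatch path as the argument:

```java
import java.io.File;
import java.io.IOException;

public class FolderAccessProbe {
    // Tries to create and then delete a scratch file in the given folder,
    // which is exactly what a batch loader needs to do when it moves files
    // out of the folder. Returns true only if both operations succeed.
    public static boolean canCreateAndDelete(File folder) {
        if (!folder.isDirectory()) return false;
        try {
            File probe = File.createTempFile("fdmprobe", ".tmp", folder);
            return probe.delete();
        } catch (IOException | SecurityException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        File openBatch = new File(args.length > 0 ? args[0] : ".");
        System.out.println(openBatch + " writable: " + canCreateAndDelete(openBatch));
    }
}
```

If this prints false when run as the FDM service account, the "Path/File access error" (code 75) in the log is explained by folder permissions rather than by FDM itself.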
In the meantime, please let me know if this could be due to other causes. -
Numeric Overflow error on NPer function
I am trying to use the NPer function and it's acting really weird. If I use the same numbers that my database fields hold, it works; when I switch to database fields it works fine one time, and the next time I try to refresh the data it gives me this "numeric overflow" error.
In the Crystal help example, they show that the payment has to be a negative number, so I am following that. If I switch it to a positive number, then it works, but it gives me a negative number of months, and it's the wrong number.
Not sure what's going on and would really appreciate any help...
Thanks
Raj
Please re-post if this is still an issue, or purchase a case and have a dedicated support engineer work with you directly.
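For what it's worth, the sign convention Raj mentions is not Crystal-specific; it falls out of the standard annuity equation. A payment is a cash outflow, so it takes the opposite sign from the present value, and flipping the sign makes the logarithm below negative, which is exactly the negative number of months he saw. Here is a sketch of the textbook NPer formula (assumed to match Crystal's behavior, but this is not Crystal's actual implementation):

```java
public class Annuity {
    // Number of periods n solving: pv*(1+r)^n + pmt*((1+r)^n - 1)/r + fv = 0.
    // rate must be nonzero; pmt is negative for an outflow (a payment).
    public static double nper(double rate, double pmt, double pv, double fv) {
        double x = (pmt / rate - fv) / (pv + pmt / rate);
        return Math.log(x) / Math.log(1.0 + rate);
    }

    public static void main(String[] args) {
        // Paying off 1000 at 1% per period with payments of 100:
        System.out.println(nper(0.01, -100, 1000, 0)); // about 10.59 periods
        // Wrong sign on the payment yields a negative period count:
        System.out.println(nper(0.01, 100, 1000, 0));
    }
}
```

With the wrong sign the argument of the logarithm drops below 1 (or below 0, where the log is undefined), which is one plausible source of both the negative result and the overflow.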
-
Numeric overflow error using binary integer
Hi experts,
I am facing a numeric overflow error. After analyzing, we found that in the code below BINARY_INTEGER is causing the issue, as the input exceeds its range. I tried to replace BINARY_INTEGER with VARCHAR2(20), but it raises
"Error(580,20): PLS-00657: Implementation restriction: bulk SQL with associative arrays with VARCHAR2 key is not supported."
We need to remove this BINARY_INTEGER but I don't know how. Can anybody suggest what code change is required here? Thanks in advance. Cheers. Below is the code:
===================================================
PROCEDURE UpdateCost_(
  p_Cost_typ IN OUT NOCOPY CM_t
) IS
  TYPE ObjektIdTab_itabt IS TABLE OF ObjektId_tabt INDEX BY BINARY_INTEGER;
  v_cost_IdTab_itab ObjektIdTab_itabt;
  v_CM_ID INTEGER := p_Cost_typ.costm.CM_ID;
BEGIN
  SELECT CAST(MULTISET
         (SELECT Costwps.CMKostId
            FROM CM_Pos_r NRPos,
                 CMK_z_r costzpps,
                 CMG_Cost_v Costwps
           WHERE NRPos.CM_ID = v_CM_ID
             AND NRPos.SNRId_G = SNRCT.SNRPos.SNRId_G
             AND costzpps.CM_ID = NRPos.CM_ID
             AND costzpps.CMSNRPosId = NRPos.CMSNRPosId
             AND costzpps.Kost_s = Kost.Costnzl.Kost_s
             AND Costwps.CMKz_Id = costzpps.CMKz_Id
             AND Costwps.TypCode NOT IN
                 (SELECT kw.TypCode
                    FROM TABLE(Kost.Kostwt_tab) kw)
         ) AS ObjektId_tabt)
    BULK COLLECT
    INTO v_cost_IdTab_itab
    FROM TABLE(p_Cost_typ.SNR_tab) SNRCT,
         TABLE(SNRCT.Kost_tab) Kost;

  FOR v_i IN 1 .. v_cost_IdTab_itab.COUNT LOOP
    FOR v_j IN 1 .. v_cost_IdTab_itab(v_i).COUNT LOOP
      DELETE FROM CMG_Cost_v WHERE CMKostId = v_cost_IdTab_itab(v_i)(v_j);
    END LOOP;
  END LOOP;
END;
===================================================
Thanks for your reply. I tried with INDEX BY NUMBER, but Oracle says it is not a valid use of the INDEX BY clause. Moreover, I also tried removing the INDEX BY clause, but in that case we are not getting any data at all in the FOR loop. Some people say to use the EXTEND clause, but again I am not sure how to do so. Can you please show me code for this?
I know you are trying to help, but you need to STOP telling us what problem you have and SHOW US. Saying 'Oracle says' is useless. Post EXACTLY what code you are using, the EXACT steps you are using to compile that code, and the EXACT result that you are getting.
You also made no comment about the 'overflow' issue. A BINARY_INTEGER (PLS_INTEGER) has a very large range of values:
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/datatypes.htm#i10726
>
The PLS_INTEGER data type stores signed integers in the range -2,147,483,648 through 2,147,483,647, represented in 32 bits.
>
If you are trying to create a collection of more than 2 BILLION of anything you have a serious problem with either WHAT you are trying to do or HOW you are trying to do it. Your 'overflow' issue is more likely a symptom that you are really running out of memory. You should ALWAYS have a LIMIT clause when you do BULK COLLECT statements.
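The 32-bit range quoted here is the same as Java's int, so the wraparound behavior is easy to demonstrate outside PL/SQL (a hedged analogy, not Oracle code):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;            // 2,147,483,647
        System.out.println(max + 1);            // silently wraps to -2,147,483,648
        try {
            Math.addExact(max, 1);              // checked arithmetic raises instead
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```

The point of the analogy: an index type with a range of over 4 billion values is almost never the real bottleneck; running out of memory while filling the collection is.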
Also see this section in that doc: SIMPLE_INTEGER Subtype of PLS_INTEGER
You need to address your LIMIT issue first and then address any other issues that arise from actually executing the code.
Then see the section 'SELECT INTO Statement with BULK COLLECT Clause' in that doc
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/tuning.htm#BABEIACI
That section has an example that shows you do NOT need to use an INDEX BY clause to create collections as you are trying to do. So your 'not getting any data in the for loop' is NOT related to the lack of that clause.
That example also shows you that you do NOT use 'extends' when doing BULK COLLECT. The bulk collection automatically extends the collection as needed to hold the entire results (assuming you don't run out of memory for 2 BILLION things).
Example 12-22 in that same doc shows the proper way to use a double loop and a BULK COLLECT with a LIMIT clause
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/tuning.htm#BABCCJCB
Here is very simple sample code you can use in the SCOTT schema to understand how the double loop and LIMIT clauses work together.
>
The FETCH does a BULK COLLECT of all data into 'v'. It will either get all the data or none if there isn't any.
The LOOP construct is used with a LIMIT clause so that Oracle 'loops' back to get the next set of records. Run this example in the SCOTT schema and you will see how the LIMIT clause works.
I have 14 records in my EMP table.
DECLARE
CURSOR c1 IS (SELECT * FROM emp);
TYPE typ_tbl IS TABLE OF c1%rowtype;
v typ_tbl;
BEGIN
OPEN c1;
LOOP --Loop added
FETCH c1 BULK COLLECT INTO v LIMIT 3; -- process 3 records at a time
-- process the first 3 max records
DBMS_OUTPUT.PUT_LINE('Processing ' || v.COUNT || ' records.');
FOR i IN v.first..v.last LOOP
DBMS_OUTPUT.PUT_LINE(v(i).empno);
END LOOP;
EXIT WHEN c1%NOTFOUND;
END LOOP;
DBMS_OUTPUT.PUT_LINE('All done');
END;
In the FOR loop you would do any processing of the nested table you want to do, and you could use a FORALL to do an INSERT into another table.
>
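The LIMIT idea generalizes beyond PL/SQL: process the data in fixed-size chunks so memory is bounded by the chunk size rather than the total row count. As a language-neutral sketch in Java (the chunk size and the list are stand-ins for the LIMIT value and the cursor):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedProcessor {
    // Splits the ids into chunks of at most chunkSize and hands each chunk
    // to the worker. The in-memory footprint is one chunk, not the whole set,
    // which is the same idea as FETCH ... BULK COLLECT ... LIMIT chunkSize.
    public static int processInChunks(List<Integer> ids, int chunkSize,
                                      java.util.function.Consumer<List<Integer>> worker) {
        int chunks = 0;
        for (int from = 0; from < ids.size(); from += chunkSize) {
            int to = Math.min(from + chunkSize, ids.size());
            worker.accept(new ArrayList<>(ids.subList(from, to)));
            chunks++;
        }
        return chunks;
    }
}
```

With 14 rows and a chunk size of 3 (the numbers from the EMP example above), the worker runs five times, on chunks of 3, 3, 3, 3 and 2 rows.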
I strongly suggest that you modify your code to work with a VERY SMALL set of data until it works properly. Then expand it to work with all of the data needed, preferably by using an appropriate LIMIT clause of no more than 1000. -
Hi..
I have a problem with this code.

class test {
    test tt = new test(); //1
    String name1;
    test() {}
    test(String i) {
        name1 = i;
        //tt = new test(); //2
    }
    public static void main(String arg[]) {
        test t1 = new test("kj"); //3
    }
}

When I use line 2 (instead of line 1) for initializing the reference variable, I am not having any problem.
But if I use line 1, I am getting a stack overflow error.
I thought that calling a constructor recursively results in a stack overflow error.
But I am instantiating t1 with a one-arg constructor (line 3), for which tt (line 1) is initialized, so where is the recursion happening?
Can anyone please clarify?
Thanks.
mysha
Please use [code][/code] tags around your code - it makes it much easier to read.
I think you have it - consider this code:

public class StackOverflower {
    private StackOverflower so = new StackOverflower();
    public static void main(String[] args) {
        StackOverflower mySO = new StackOverflower();
    }
}

Running this will overflow the stack, since creating an instance of StackOverflower requires creating another instance of StackOverflower. This code, though:

public class NonStackOverflower {
    private NonStackOverflower nso = null;
    public NonStackOverflower() {
    }
    public NonStackOverflower(String s) {
        this.nso = new NonStackOverflower();
    }
    public static void main(String[] args) {
        NonStackOverflower myNSO = new NonStackOverflower();
    }
}

won't, since the creation of a new NonStackOverflower is not required to create a new NonStackOverflower instance.
Did that make sense? I may have gotten confused and failed to illustrate your situation with my code...
Good Luck
Lee
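Lee's diagnosis is easy to confirm at runtime, because the StackOverflowError raised by a self-constructing field initializer can be caught. A small self-contained demonstration (the same pattern as the code above, just wrapped so the error is observable):

```java
public class SelfConstructing {
    // This field initializer runs as part of every constructor call,
    // so constructing one instance recursively constructs another.
    private SelfConstructing child = new SelfConstructing();

    public static boolean overflows() {
        try {
            new SelfConstructing();
            return false;
        } catch (StackOverflowError e) {
            return true; // the recursion exhausted the stack
        }
    }

    public static void main(String[] args) {
        System.out.println("field initializer recursed: " + overflows());
    }
}
```

Commenting out the field initializer (as mysha's line 2 variant does, by moving the assignment into the String constructor only) breaks the cycle and no overflow occurs.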