PreparedStatement execution causes OutOfMemoryException
Hello,
I use a PreparedStatement to get data from the database. It works well unless there is a lot of data to fetch. If the result set has many rows (over 1000000) I get an OutOfMemory error. I tried using Statement instead of PreparedStatement and setting my parameter values manually, but it doesn't work (PostgreSQL database):
sqlQuery = "select * from employee_tmp where hire_date > ?";
PreparedStatement st = connection.prepareStatement(sqlQuery);
st.setDate(1, myDate);
ResultSet rs = st.executeQuery();
The result is OK, but with a lot of data I get an OutOfMemory error.
sqlQuery = "select * from employee_tmp where hire_date > " + myDate.toString();
Statement st = connection.createStatement();
ResultSet rs = st.executeQuery(sqlQuery);
Getting data from the database is fast, but I get all records, not only those from the first case.
Thank you in advance for any help.
Agata
agad wrote:
If the result set has many rows (over 1000000) I get OutOfMemory error. [...]

The PreparedStatement itself should definitely not cause an OutOfMemoryException. Are you perhaps looping through the ResultSet and storing the rows in a List or something? If it is truly the driver throwing it, then ask in a PostgreSQL driver forum, and probably file a bug report (with PostgreSQL, not Sun).
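The usual cause here is not the PreparedStatement but the driver's default fetch behavior: the PostgreSQL JDBC driver materializes the entire ResultSet in memory unless autocommit is off and a fetch size is set, in which case it streams rows through a server-side cursor. A minimal sketch of that pattern, reusing the query from the post (treat the fetch size of 1000 as an arbitrary choice):

```java
import java.sql.*;

public class StreamingFetch {
    public static void stream(Connection conn, java.sql.Date myDate) throws SQLException {
        // The PostgreSQL driver only uses a server-side cursor when
        // autocommit is off AND a fetch size is set; otherwise it
        // buffers every row of the result set in memory at once.
        conn.setAutoCommit(false);
        try (PreparedStatement st = conn.prepareStatement(
                "select * from employee_tmp where hire_date > ?")) {
            st.setFetchSize(1000);   // rows per round trip, not a row limit
            st.setDate(1, myDate);
            try (ResultSet rs = st.executeQuery()) {
                while (rs.next()) {
                    // process one row at a time here; collecting all rows
                    // in a List would reintroduce the OutOfMemoryError
                }
            }
        }
    }
}
```

With this in place, memory use stays bounded no matter how many rows the query returns.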
Similar Messages
-
Can creating a new object without an object reference cause an OutOfMemoryException
I am getting an OutOfMemoryException in my application. After looking at the logs and doing some analysis, I think creating a new object without attaching it to a reference is causing the issue.
The simplified code is as below:
void valuate(int tradeNum) {
    new CaptureTrade().process(tradeNum); //1
}
Will the above code, called n times, cause an OutOfMemoryException?
Should I use something like this instead:
void valuate(int tradeNum) {
    CaptureTrade ct = new CaptureTrade();
    ct.process(tradeNum); //2
}
Can the first program cause an OutOfMemoryException that would be rectified by the second piece of code?

ashay wrote:
Can the first program cause an OutOfMemoryException that would be rectified by the second piece of code? [...]

What happened when you tried it? -
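For what it's worth, the two forms in the question are equivalent as far as memory is concerned: the anonymous temporary in form 1 becomes unreachable, and therefore collectible, as soon as process() returns, just as the named local in form 2 does when valuate() returns. A small self-contained sketch (CaptureTrade is stubbed out here, since the real class is not shown):

```java
public class AllocationDemo {
    // Stand-in for the real CaptureTrade, which is not shown in the post.
    static final class CaptureTrade {
        private final byte[] payload = new byte[1024]; // some per-object state
        void process(int tradeNum) { payload[0] = (byte) tradeNum; }
    }

    // Form 1: anonymous temporary, collectible as soon as process() returns.
    static void valuateAnonymous(int tradeNum) {
        new CaptureTrade().process(tradeNum); //1
    }

    // Form 2: named local, collectible when the method returns.
    static void valuateLocal(int tradeNum) {
        CaptureTrade ct = new CaptureTrade();
        ct.process(tradeNum); //2
    }

    // Runs n iterations of each form; both stay within a small heap
    // because every temporary is unreachable right after use.
    static boolean run(int n) {
        for (int i = 0; i < n; i++) {
            valuateAnonymous(i);
            valuateLocal(i);
        }
        return true;
    }
}
```

If the application really is leaking, the reference chain keeping objects alive is somewhere else (a static collection, a cache, a listener), not in the choice between these two forms.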
Select PreparedStatement execution slowdown over time - Oracle 10.2.0.1
This is a very confusing issue. We have a batch-type application built on Hibernate using the Oracle JDBC 10.2.0.1 drivers, currently with a Spring-configured connection pool (I don't think this is the issue). Basically, the first step of the process involves looking up Subscriber information and related objects. This results in Hibernate pumping out lots and lots of SQL, executed using PreparedStatement. I've profiled the code and the time is being spent in the execution of the SQL, after the executeQuery call.
At the beginning performance is great, but over time, after about 50,000 invocations of the Subscriber search, the execution time starts to increase, eventually to the point where a search takes 5 times as long as it did at the beginning. I have checked the usual suspects (polling for a resource, memory consumption, etc.) and they all appear to be fine. And as I said, it is actually on the call to PreparedStatement.executeQuery that the time seems to grow. I created a test harness using a set of 100,000 Subscriber ids, replicated 40 times, and I can reliably get the application to slow down.
I've had a thought that it was maybe the size of the TCP/IP buffer, and that Hibernate was saturating it. I would prefer not to go playing with this, but I will if it's the way to go.
Has anyone come across this before? Is there some Oracle database kernel setting that I should be looking at here? I could really do with a little help here, so any advice given is much appreciated.
Jay

At the risk of stating the obvious...
UAT and Prod both use 10.2.0.3 and both work correctly. The 10.2.0.3 patchset has not yet been applied on dev, so dev is using an outdated version of Oracle. Why wouldn't you apply the 10.2.0.3 patchset to the dev environment and see if that solves the problem?
More generally, it seems very wrong to be running a later version of the database in UAT and in Prod than in dev. It seems particularly wrong to have a 10.1 development environment for a 10.2 production environment. It's also rather odd to have the development environment using Windows when the production environment is on Unix since that means any operating system dependent issues have to be discovered during UAT, which isn't particularly helpful.
Justin -
Thread execution causes Swing GUI to become unresponsive
Hello, I'm trying to refresh my memory of multithreaded Swing applications and have run into a wall. I have my main class, SimpleGUI, which creates the GUI and instantiates and starts the SimplePB thread. The problem is that the SimpleGUI main class, which has a button to start the SimplePB thread so that a progress bar is updated, ends up with an unresponsive GUI when the button is clicked. The button action code follows:
public void actionPerformed(ActionEvent ae) {
    System.out.println("Got action! " + ae);
    if (ae.getActionCommand().equals("Start")) {
        jb.setEnabled(false);
        spbt.run();
    }
}
Do I need to do the Swing interface in a multithreaded manner as well? If so, how? If not, what am I doing wrong? Any pointers are greatly appreciated, thanks.

Replace spbt.run(); with spbt.start();
At the moment you're not doing 'thread execution' at all; you are just running its code inline in the AWT thread, which freezes the GUI, and is exactly why you needed the thread in the first place. -
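The run-versus-start distinction can be shown without any GUI at all. In this sketch (class and method names are made up for illustration), run() executes the Runnable on whichever thread calls it, while start() spawns a new thread. In the Swing code above, the fix is spbt.start(), with any later component updates pushed back onto the event dispatch thread via SwingUtilities.invokeLater or a SwingWorker:

```java
public class RunVsStart {
    // Calling run() is an ordinary method call: the body executes on the
    // caller's own thread, which is why spbt.run() freezes the EDT.
    static String runInline() {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName());
        t.run();      // executes here, on the calling thread
        return name[0];
    }

    // Calling start() is what actually creates and runs a new thread.
    static String startNewThread() throws InterruptedException {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName());
        t.start();    // executes on a fresh thread
        t.join();     // wait so we can read the recorded name
        return name[0];
    }
}
```

runInline() records the caller's own thread name; startNewThread() records a different one.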
Huge StringBuffer causing OutOfMemoryException
Hello,
I'm using an IntegerBuffer to keep some values; the buffer sometimes contains up to 1000000 values and I have to append these values to a text file. I tried to add the values using a loop, appending them to a StringBuffer, then I use a BufferedWriter to append the result to the file.
This works fine for less than 500000 values; above that I receive an OutOfMemoryException.
So I thought about splitting the StringBuffer, appending the first part and then the second one to the file. But I don't know how to do this in an efficient way...

I wrote a little test program to see if mgumbs and others and I were totally talking out of our arses. I thought that not using the StringBuffer would be faster, since the SB would just be adding another layer. Surprisingly, the test shows that without the SB it is a little slower.
However...
1) It's only a little slower--roughly 6 seconds vs. 8 for 1,000,000 ints written out ten times.
2) It may be skewed slightly in favor of SB, since I gave the SB a large enough initial size to hold the entire String without having to expand its internal buffer.
3) For the non-SB case, I was lazy and wrapped the BufferedWriter in a PrintWriter. This made my coding easier, but the extra layer may have slowed things down a bit.
To me this is precisely a case where you write clean code that matches your intent, profile it, and then if it's too slow, look for bottlenecks, rather than assuming that the better design will be significantly slower.
If the I/O is really a bottleneck here, you might benefit from multithreading: one thread to turn the ints into Strings and shove them into a queue, and another thread to read the queue and write it out to disk. I doubt it would really be worth that complexity here though.
First, the output, then the code.
arg0: number of ints to write in one iteration
arg1: size of BufferedWriter used both for the SB case and the non-SB case
arg2: number of iterations
// 1-byte buffer, of course it's slow
:; java BufTest 1000000 1 10
41,740 ms with StringBuffer
44,564 ms without StringBuffer
:; java BufTest 1000000 8 10
10,555 ms with StringBuffer
12,778 ms without StringBuffer
:; java BufTest 1000000 128 10
5,878 ms with StringBuffer
8,292 ms without StringBuffer
:; java BufTest 1000000 32768 10
5,778 ms with StringBuffer
8,121 ms without StringBuffer
:; java BufTest 1000000 4000000 10
5,628 ms with StringBuffer
7,871 ms without StringBuffer
import java.io.*;
import java.text.*;

public class BufTest {
    private final int numInts_;
    private final int bufSize_;
    private final int numTries_;
    private BufferedWriter bwForWith_;
    private PrintWriter pwForWithout_;

    public static void main(String args[]) throws Exception {
        new BufTest(args).go();
    }

    public BufTest(String[] args) throws Exception {
        numInts_ = Integer.parseInt(args[0]);
        bufSize_ = Integer.parseInt(args[1]);
        numTries_ = Integer.parseInt(args[2]);
    }

    private void go() throws Exception {
        DecimalFormat df = new DecimalFormat("#,##0");
        try {
            bwForWith_ = new BufferedWriter(new FileWriter("with.txt"), bufSize_);
            long start = System.currentTimeMillis();
            for (int ix = 0; ix < numTries_; ix++) {
                withStringBuffer();
                bwForWith_.flush();
            }
            System.out.println(df.format(System.currentTimeMillis() - start) + " ms with StringBuffer");
        }
        finally {
            close(bwForWith_);
        }
        try {
            pwForWithout_ = new PrintWriter(new BufferedWriter(
                    new FileWriter("witout.txt"), bufSize_), false);
            long start = System.currentTimeMillis();
            for (int ix = 0; ix < numTries_; ix++) {
                withoutStringBuffer();
                pwForWithout_.flush();
            }
            System.out.println(df.format(System.currentTimeMillis() - start) + " ms without StringBuffer");
        }
        finally {
            close(pwForWithout_);
        }
    }

    private void withStringBuffer() throws Exception {
        // allow for an int's string to be up to 8 chars
        StringBuffer sbuf = new StringBuffer(numInts_ * 8);
        for (int ix = 0; ix < numInts_; ix++) {
            sbuf.append(ix);
        }
        String str = sbuf.toString();
        bwForWith_.write(str, 0, str.length());
    }

    private void withoutStringBuffer() throws Exception {
        for (int ix = 0; ix < numInts_; ix++) {
            pwForWithout_.write(String.valueOf(ix));
        }
    }

    private void close(Writer wr) {
        if (wr != null) {
            try {
                wr.flush();
            }
            catch (Throwable th) {}
            try {
                wr.close();
            }
            catch (Throwable th) {}
        }
    }
} -
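Coming back to the original OutOfMemoryException: the million-value StringBuffer is not needed at all, because a BufferedWriter already batches small writes. A minimal sketch that streams each value straight to the file, so memory use is constant no matter how many values are written (file handling and the 32 KB buffer size are arbitrary choices):

```java
import java.io.*;

public class StreamingWriter {
    // Writes count integers, one per line, directly through a
    // BufferedWriter instead of first accumulating them all in a
    // single huge StringBuffer. Returns the resulting file length.
    public static long writeInts(File out, int count) throws IOException {
        try (BufferedWriter bw = new BufferedWriter(new FileWriter(out), 32 * 1024)) {
            for (int ix = 0; ix < count; ix++) {
                bw.write(Integer.toString(ix));
                bw.newLine();
            }
        }
        return out.length();
    }
}
```

This sidesteps the OutOfMemoryException entirely, at the modest speed cost the benchmark above suggests.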
Perl Script execution causes system panic
Hi All,
I'm using a Perl script of my own for work purposes. The script was running fine yesterday, until this morning when I updated Xcode to 4.6.2.
Every time I execute the script, my computer shuts down and restarts after a kernel panic.
Here are the error report contents:
Interval Since Last Panic Report: 974 sec
Panics Since Last Report: 3
Anonymous UUID: AE85B099-0AAA-B563-0607-3EFA733CAEDF
Thu Apr 18 15:08:18 2013
panic(cpu 0 caller 0xffffff8018d1edba): "negative open count (c, 16, 2)"@/SourceCache/xnu/xnu-2050.22.13/bsd/miscfs/specfs/spec_vnops.c:1813
Backtrace (CPU 0), Frame : Return Address
0xffffff80b3db3c20 : 0xffffff8018c1d626
0xffffff80b3db3c90 : 0xffffff8018d1edba
0xffffff80b3db3cd0 : 0xffffff8018d23c46
0xffffff80b3db3d20 : 0xffffff8018d10cb6
0xffffff80b3db3d60 : 0xffffff8018cf08a1
0xffffff80b3db3db0 : 0xffffff8018cf0021
0xffffff80b3db3df0 : 0xffffff8018cf0b9e
0xffffff80b3db3e20 : 0xffffff8018d1100f
0xffffff80b3db3e50 : 0xffffff8018f55b8d
0xffffff80b3db3ec0 : 0xffffff8018c39ce9
0xffffff80b3db3ef0 : 0xffffff8018c3c7e8
0xffffff80b3db3f20 : 0xffffff8018c3c65e
0xffffff80b3db3f50 : 0xffffff8018c1b70d
0xffffff80b3db3f90 : 0xffffff8018cb84a3
0xffffff80b3db3fb0 : 0xffffff8018ccd4ac
BSD process name corresponding to current thread: ssh
Mac OS version:
12D78
Kernel version:
Darwin Kernel Version 12.3.0: Sun Jan 6 22:37:10 PST 2013; root:xnu-2050.22.13~1/RELEASE_X86_64
Kernel UUID: 3EB7D8A7-C2D3-32EC-80F4-AB37D61492C6
Kernel slide: 0x0000000018a00000
Kernel text base: 0xffffff8018c00000
System model name: MacBookPro4,1 (Mac-F42C89C8)
System uptime in nanoseconds: 1213636802424
last loaded kext at 70392580239: com.apple.filesystems.smbfs 1.8 (addr 0xffffff7f9ae0b000, size 229376)
last unloaded kext at 168892078080: com.apple.iokit.IOSCSIBlockCommandsDevice 3.5.5 (addr 0xffffff7f992ca000, size 90112)
loaded kexts:
foo.tun 1.0
foo.tap 1.0
com.apple.filesystems.smbfs 1.8
com.apple.driver.AppleBluetoothMultitouch 75.19
com.apple.driver.AppleHWSensor 1.9.5d0
com.apple.filesystems.autofs 3.0
com.apple.driver.DiskImages.ReadWriteDiskImage 345
com.apple.driver.DiskImages.RAMBackingStore 345
com.apple.driver.AudioAUUC 1.60
com.apple.driver.IOBluetoothSCOAudioDriver 4.1.3f3
com.apple.iokit.IOBluetoothSerialManager 4.1.3f3
com.apple.driver.AppleHDA 2.3.7fc4
com.apple.iokit.BroadcomBluetoothHCIControllerUSBTransport 4.1.3f3
com.apple.iokit.IOUserEthernet 1.0.0d1
com.apple.Dont_Steal_Mac_OS_X 7.0.0
com.apple.driver.ApplePolicyControl 3.3.0
com.apple.driver.AppleLPC 1.6.0
com.apple.driver.AppleUpstreamUserClient 3.5.10
com.apple.driver.AppleSMCPDRC 1.0.0
com.apple.driver.AppleSMCLMU 2.0.3d0
com.apple.GeForce 8.1.0
com.apple.driver.AppleBacklight 170.2.5
com.apple.driver.AppleMCCSControl 1.1.11
com.apple.driver.ACPI_SMC_PlatformPlugin 1.0.0
com.apple.driver.SMCMotionSensor 3.0.3d1
com.apple.driver.AppleUSBTCButtons 237.1
com.apple.driver.AppleUSBTCKeyboard 237.1
com.apple.driver.AppleIRController 320.15
com.apple.AppleFSCompression.AppleFSCompressionTypeDataless 1.0.0d1
com.apple.AppleFSCompression.AppleFSCompressionTypeZlib 1.0.0d1
com.apple.BootCache 34
com.apple.driver.XsanFilter 404
com.apple.iokit.IOAHCIBlockStorage 2.3.1
com.apple.driver.AppleIntelPIIXATA 2.5.1
com.apple.driver.AppleAHCIPort 2.5.1
com.apple.driver.AppleSmartBatteryManager 161.0.0
com.apple.driver.AppleUSBHub 5.5.5
com.apple.driver.AirPortBrcm43224 600.36.17
com.apple.driver.AppleFWOHCI 4.9.6
com.apple.iokit.AppleYukon2 3.2.3b1
com.apple.driver.AppleUSBEHCI 5.5.0
com.apple.driver.AppleUSBUHCI 5.2.5
com.apple.driver.AppleEFINVRAM 1.7
com.apple.driver.AppleRTC 1.5
com.apple.driver.AppleHPET 1.8
com.apple.driver.AppleACPIButtons 1.7
com.apple.driver.AppleSMBIOS 1.9
com.apple.driver.AppleACPIEC 1.7
com.apple.driver.AppleAPIC 1.6
com.apple.driver.AppleIntelCPUPowerManagementClient 196.0.0
com.apple.nke.applicationfirewall 4.0.39
com.apple.security.quarantine 2
com.apple.driver.AppleIntelCPUPowerManagement 196.0.0
com.apple.driver.IOBluetoothHIDDriver 4.1.3f3
com.apple.driver.AppleMultitouchDriver 235.29
com.apple.kext.triggers 1.0
com.apple.driver.DiskImages.KernelBacked 345
com.apple.iokit.IOSerialFamily 10.0.6
com.apple.driver.DspFuncLib 2.3.7fc4
com.apple.iokit.IOAudioFamily 1.8.9fc11
com.apple.kext.OSvKernDSPLib 1.6
com.apple.iokit.AppleBluetoothHCIControllerUSBTransport 4.1.3f3
com.apple.driver.AppleHDAController 2.3.7fc4
com.apple.iokit.IOHDAFamily 2.3.7fc4
com.apple.iokit.IOSurface 86.0.4
com.apple.iokit.IOBluetoothFamily 4.1.3f3
com.apple.iokit.IOFireWireIP 2.2.5
com.apple.driver.AppleGraphicsControl 3.3.0
com.apple.driver.AppleBacklightExpert 1.0.4
com.apple.driver.AppleSMBusController 1.0.11d0
com.apple.driver.IOPlatformPluginLegacy 1.0.0
com.apple.driver.IOPlatformPluginFamily 5.3.0d51
com.apple.nvidia.nv50hal 8.1.0
com.apple.NVDAResman 8.1.0
com.apple.iokit.IONDRVSupport 2.3.7
com.apple.iokit.IOGraphicsFamily 2.3.7
com.apple.driver.AppleSMC 3.1.4d2
com.apple.driver.AppleUSBMultitouch 237.3
com.apple.iokit.IOUSBHIDDriver 5.2.5
com.apple.driver.AppleUSBMergeNub 5.5.5
com.apple.driver.AppleUSBComposite 5.2.5
com.apple.iokit.IOSCSIArchitectureModelFamily 3.5.5
com.apple.iokit.IOATABlockStorage 3.0.2
com.apple.iokit.IOATAFamily 2.5.1
com.apple.iokit.IOAHCIFamily 2.3.1
com.apple.iokit.IOUSBUserClient 5.5.5
com.apple.iokit.IO80211Family 522.4
com.apple.iokit.IOFireWireFamily 4.5.5
com.apple.iokit.IONetworkingFamily 3.0
com.apple.iokit.IOUSBFamily 5.5.5
com.apple.driver.AppleEFIRuntime 1.7
com.apple.iokit.IOHIDFamily 1.8.1
com.apple.iokit.IOSMBusFamily 1.1
com.apple.security.sandbox 220.2
com.apple.kext.AppleMatch 1.0.0d1
com.apple.security.TMSafetyNet 7
com.apple.driver.DiskImages 345
com.apple.iokit.IOStorageFamily 1.8
com.apple.driver.AppleKeyStore 28.21
com.apple.driver.AppleACPIPlatform 1.7
com.apple.iokit.IOPCIFamily 2.7.3
com.apple.iokit.IOACPIFamily 1.4
com.apple.kec.corecrypto 1.0
Model: MacBookPro4,1, BootROM MBP41.00C1.B03, 2 processors, Intel Core 2 Duo, 2.4 GHz, 5 GB, SMC 1.27f3
Graphics: NVIDIA GeForce 8600M GT, GeForce 8600M GT, PCIe, 256 MB
Memory Module: BANK 0/DIMM0, 1 GB, DDR2 SDRAM, 667 MHz, 0xCE00000000000000, 0x4D342037305432393533455A332D43453620
Memory Module: BANK 1/DIMM1, 4 GB, DDR2 SDRAM, 667 MHz, 0x2C00000000000000, 0x3136485453353132363448592D3636374131
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8C), Broadcom BCM43xx 1.0 (5.10.131.36.16)
Bluetooth: Version 4.1.3f3 11349, 2 service, 18 devices, 1 incoming serial ports
Network Service: Ethernet, Ethernet, en0
Serial ATA Device: KINGSTON SV300S37A120G, 120.03 GB
Parallel ATA Device: ST9500420AS, 500.11 GB
USB Device: Built-in iSight, apple_vendor_id, 0x8502, 0xfd400000 / 2
USB Device: USB Receiver, 0x046d (Logitech Inc.), 0xc52b, 0x1a200000 / 3
USB Device: BRCM2046 Hub, 0x0a5c (Broadcom Corp.), 0x4500, 0x1a100000 / 2
USB Device: Bluetooth USB Host Controller, apple_vendor_id, 0x820f, 0x1a110000 / 4
USB Device: Apple Internal Keyboard / Trackpad, apple_vendor_id, 0x0230, 0x5d200000 / 3
USB Device: IR Receiver, apple_vendor_id, 0x8242, 0x5d100000 / 2
I can't currently use my script because of this and it is very crucial for me.
Does anyone have any idea why this is happening?
Thanks in advance!

rsonnens wrote:
Etresoft, Yes you are correct I did post the same link. Sorry that I did not notice it. However, as Gajillion also noted in his post, it is a kernel timing/race condition bug and NOT a 3rd party extension bug and can only be fixed by Apple. There is nothing someone can practically do to workaround the issue.
My suggestion for everyone having this issue is to enter a case into Apple's bug tracking system, and every time you get the error, be sure to allow the reporter app to send the info to Apple.
No one is having this issue. The only people who have reported it have extensive 3rd party software installations. I have never seen a Perl-induced kernel panic on my Mac, and I do some really crazy things with Perl: SOAP servers routed through launchd with my own transport protocols. No panics.
You are free to send in any panic or bug reports to Apple, should they arise. But I am quite confident that anyone encountering such a problem really can't be said to be running OS X anymore. Once you make that many modifications, it is some hybrid Linux-style monster. And yes, such things panic if you look at them the wrong way, just like Linux. -
ORA-12842: Cursor invalidated during parallel execution
Hi,
Database version: 9.2.0.6.0
OS : Red Hat AS 3.
I encountered this problem lately in one of our scripts.
The error message shows:
BEGIN SP_RPT77B_V2(SYSDATE -1); END;
ERROR at line 1:
ORA-12842: Cursor invalidated during parallel execution
ORA-06512: at "REPORTADMIN.SP_RPT77B_V2", line 273
ORA-06512: at line 1
Elapsed: 00:02:49.60
Does anyone have any clues on what this error message means? Is there a way to rectify the problem?
Any advice, thanks.

Hi!
Check the error description --
ORA-12842 schema modified during parallel execution
Cause: Schema modified during the parse phase of parallel processing.
Action: No action required.
And the other error is --
ORA-06512 at string line string
Cause: Backtrace message as the stack is unwound by unhandled exceptions.
Action: Fix the problem causing the exception or write an exception handler for this condition. Or you may need to contact your application administrator or database administrator.
Regards.
Satyaki De. -
1. I have a SELECT query with an Explain Plan cost of 39 for some specific inputs.
2. When this query is run with the same inputs directly on the Oracle DB through a PL/SQL client or TOAD,
the first run takes around 13 seconds. Later runs take less than 1 second.
The query outputs 94 rows.
3. When this query is run with java.sql.Statement with the same inputs hard-coded in the query string,
it takes less than 1 second consistently.
4. When this query is run with java.sql.PreparedStatement with the same inputs bound,
it consistently takes 12 to 25 minutes.
5. This increased query execution time occurs only for specific input values.
For other inputs, query execution time is normal, within 0-2 seconds.
6. With PreparedStatement, when the query string is modified by adding a single space,
the query executes in around 13 seconds.
Every time the query string is modified by adding a space, the next execution takes around 13 seconds.
But when it is called successively without modifying the query string, execution goes beyond 12 minutes.
What could be the reason for PreparedStatement execution taking so much time?
Is it something related to the PreparedStatement cache?
Edited by: 872289 on Jul 13, 2011 9:54 AM

We are using JDBC through a JBoss App Server datasource, i.e. oracle-ds.xml.
Content of oracle-ds.xml
<datasources>
<!-- Uncomment the following two blocks and put appropriate SERVERNAME, SID, User and password -->
<local-tx-datasource>
<jndi-name>JNDI_ORADS</jndi-name>
<connection-url>jdbc:oracle:thin:@scorpio:1521:devdb</connection-url>
<driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
<user-name>prod64</user-name>
<password>prod64</password>
<min-pool-size>10</min-pool-size>
<max-pool-size>50</max-pool-size>
<exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
<metadata>
<type-mapping>Oracle9i</type-mapping>
</metadata>
</local-tx-datasource>
</datasources>
Content of JSP code where the PreparedStatement is used:
<%@ page import = "java.util.*, java.sql.Timestamp,java.sql.*,java.io.*"%>
<%@ page import ="com.prod.db.utils.*"%>
<%!
public void testMethod() throws Exception {
Connection w_conn = ConnectionHelper.getConnection();
String w_query =
"SELECT usv.timesheetid, ts.weekstartdate, ts.userid, ts.STATUS, ts.timesubmitted, " +
"usv.projectid, usv.projectname, usv.projectcode, " +
"usv.itemtype, usv.itemid, usv.itemname, usv.itemcode, usv.itemcategory, " +
"usv.STATUS, usv.billable, usv.hoursremaining, usv.percentcomplete, " +
"usv.billable_prev, usv.hoursremaining_prev, usv.percentcomplete_prev, " +
"ptl.startdate, ptl.timesheetid,ptl.actualhours, ptl.actualhours_prev, usv.orderofselection, " +
"usv.billable_sub, usv.hoursremaining_sub, usv.percentcomplete_sub, ptl.actualhours_sub, usv.itemmodified, " +
"usv.startdate, ptl.breakupavailable, ptl.commentavailable, ptl.rejectionstatus, " +
"ptl.tl_billablehours, usv.pstatus,usv.SEQNUMBER, usv.TSITEMTYPE, " +
"usv.CanEdit, usv.isHidden, usv.actualenddate, " +
"nvl(w.wbscode, '99999.99999.99999') as prjWBS,ptl.dailyremhours, usv.enterpriseId,ptl.dailyremhours_changed " +
"FROM timesheet ts, (TIMESHEET_VIEW_ROUTE usv Left Outer Join projecttimelogs ptl " +
"on usv.projectid = ptl.projectid " +
"AND usv.itemtype = ptl.itemtype " +
"AND usv.itemid = ptl.itemid " +
"AND usv.startdate <= ptl.startdate " +
"AND trunc(usv.enddate) >= trunc(ptl.enddate) " +
"AND usv.timesheetid = ptl.timesheetid " +
"AND (nvl(ptl.actualHours, 0) > 0 OR nvl(ptl.dailyRemHours, 0) > 0)) " +
"left outer join wbshierarchy w on usv.enterpriseid = w.ownerid " +
"AND usv.projectid = w.itemid and w.ownertype = 'Ent' " +
"AND w.itemtype = 'Prj' " +
"WHERE ts.userid = usv.userid " +
"AND usv.userid = ? " +
"AND (usv.timesheetid = ? OR usv.timesheetid = -1) " +
"AND (ts.timesheetid = usv.timesheetid or usv.timesheetid = -1) " +
"AND ts.weekstartdate = ? " +
"AND usv.startdate < ts.weekstartdate + 7 " +
"AND (usv.enddate >= ts.weekstartdate OR usv.enddate IS NULL) " +
"AND (usv.projectid = ? OR -1 = ?) " +
"AND (usv.STATUS != 'Deleted' OR usv.STATUS IS NULL) " +
"AND (usv.actualfinish IS NULL OR usv.actualfinish >= ts.weekstartdate) " +
"AND ('All' = ? OR USV.ITEMTYPE = ?) " +
"AND (-1 = ? OR USV.ITEMID = ?) " +
"ORDER BY prjWbs, projectid DESC, TSITEMTYPE, itemcode, itemtype, itemid, orderofselection, 21 ";
PreparedStatement w_stmt = w_conn.prepareStatement(w_query);
w_stmt.setInt(1, 50000);
w_stmt.setInt(2, 121540);
w_stmt.setTimestamp(3, Utilities.getTimestamp("11-JUL-2011","dd-MMM-yyyy"));
w_stmt.setInt(4, -1);
w_stmt.setInt(5, -1);
w_stmt.setString(6, "All");
w_stmt.setString(7, "All");
w_stmt.setInt(8, -1);
w_stmt.setInt(9, -1);
System.out.println("_____******_____ preparedstatement start ");
long t = System.currentTimeMillis();
ResultSet w_result = w_stmt.executeQuery();
t = System.currentTimeMillis() - t;
int i = 0;
while(w_result.next()) i++;
System.out.println("resultset rows" + i);
System.out.println("_____******_____ preparedstatement finish t=" + t);
w_result.close();
w_stmt.close();
w_conn.close();
}
%>
<%
testMethod();
%> -
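One likely suspect (not confirmed by the post) is Oracle bind-variable peeking: with literals, the optimizer resolves the "match everything" predicates such as ('All' = ? OR USV.ITEMTYPE = ?) at parse time, but with binds it must pick one plan for all inputs, and the values peeked at first execution can lock in a plan that is terrible for others. Modifying the query text forces a hard parse, which is why adding a space "fixes" it for one run. A hedged sketch of one common workaround, building the optional filters into the SQL only when they actually restrict the result (table and column names simplified from the query above):

```java
import java.sql.*;

public class DynamicFilterQuery {
    // Appends a filter clause only when it actually filters, instead of
    // the always-true bind patterns ('All' = ? OR ...) and (-1 = ? OR ...).
    static String buildSql(String itemType, int itemId) {
        StringBuilder sql = new StringBuilder(
                "SELECT * FROM timesheet_view_route usv WHERE usv.userid = ? ");
        if (!"All".equals(itemType)) {
            sql.append("AND usv.itemtype = ? ");
        }
        if (itemId != -1) {
            sql.append("AND usv.itemid = ? ");
        }
        return sql.toString();
    }

    public static PreparedStatement prepare(Connection conn, int userId,
                                            String itemType, int itemId)
            throws SQLException {
        PreparedStatement st = conn.prepareStatement(buildSql(itemType, itemId));
        int p = 1;
        st.setInt(p++, userId);
        if (!"All".equals(itemType)) st.setString(p++, itemType);
        if (itemId != -1) st.setInt(p++, itemId);
        return st;
    }
}
```

Each distinct combination of filters then gets its own cursor and its own plan, so the generic "all items" execution no longer poisons the selective one.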
Upgrade Failed, WL Express 6.1 to 8.1
Last night, we finally upgraded our WL servers from 6.1 to 8.1 (Express). This lasted about 4 hours, and then the connection pool started throwing a massive number of errors (so many that we had to roll back at 6 a.m. this morning).
Configuration: WL express 8.1 sp3, DB2 7.2 fp 10,using the COM.ibm.db2.jdbc.DB2XADataSource XA driver.
1. Is this a supported configuration?
2. There were a few different errors that were noted:
a)com.scholarone.persistence.PersistenceException: Error during save: COM.ibm.db2.jdbc.DB2Exception: [IBM][CLI Driver][DB2/LINUX] SQL0913N Unsuccessful execution caused by deadlock or timeout. Reason code "68". SQLSTATE=57033
at com.scholarone.persistence.StoredProcedurePersistence.save... (rest of stack omitted). These were happening at about 1 or 2 a minute.
About 30 minutes later, these started showing up:
b)com.scholarone.persistence.PersistenceException: Error during save: java.sql.SQLException: Unexpected exception while enlisting XAConnection java.sql.SQLException: Transaction rolled back: Transaction timed out after 84 seconds Xid=BEA1-4A81FA38DB6F85A97DD2(190522938),Status=Active,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=84,seconds left=60,activeThread=Thread[ExecuteThread: '22' for queue: 'weblogic.kernel.Default',5,Thread Group for Queue: 'weblogic.kernel.Default'],XAServerResourceInfo[mc-prod-db2pool]=(ServerResourceInfo[mc-prod-db2pool]=(state=started,assigned=none),xar=mc-prod-db2pool,re-Registered = false),SCInfo[scholarone+mc-prod-mcv3-wl01]=(state=active),local properties=({weblogic.jdbc.jta.mc-prod-db2pool=weblogic.jdbc.wrapper.TxInfo@b5bc5a2}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=mc-prod-mcv3-wl01+10.10.40.70:7051+scholarone+t3+, XAResources={},NonXAResources={})],CoordinatorURL=mc-prod-mcv3-wl01+10.10.40.70:7051+scholarone+t3+)
at weblogic.jdbc.jta.DataSource.enlist(Lweblogic.transaction.Transaction;)V(Optimized Method)
at weblogic.jdbc.wrapper.JTAConnection.checkConnection()Ljava.sql.Connection;(Optimized Method)
at weblogic.jdbc.wrapper.Statement.checkStatement()V(Statement.java:234)
at weblogic.jdbc.wrapper.Statement.preInvocationHandler(Ljava.lang.String;[Ljava.lang.Object;)V(Statement.java:83)
at weblogic.jdbc.wrapper.CallableStatement_COM_ibm_db2_jdbc_app_DB2CallableStatement.getObject(I)Ljava.lang.Object;(Unknown Source)
at com.scholarone.persistence.StoredProcedurePersistence.save(Lcom.scholarone.valueobject.ValueObject;Ljava.lang.String;)V(StoredProcedurePersistence.java:564)
at.....
c)com.scholarone.persistence.PersistenceException: java.sql.SQLException: Internal error: Cannot obtain XAConnection weblogic.common.resourcepool.ResourceDeadException: COM.ibm.db2.jdbc.DB2Exception: [IBM][CLI Driver] CLI0106E Connection is closed. SQLSTATE=08003
at COM.ibm.db2.jdbc.app.SQLExceptionGenerator.throw_SQLException(LCOM.ibm.db2.jdbc.app.DB2Connection;)V(SQLExceptionGenerator.java:186)
at COM.ibm.db2.jdbc.app.SQLExceptionGenerator.check_return_code(LCOM.ibm.db2.jdbc.app.DB2Connection;I)V(SQLExceptionGenerator.java:438)
at COM.ibm.db2.jdbc.app.DB2Connection.getTransactionIsolation()I(DB2Connection.java:1194)
at weblogic.jdbc.wrapper.XAConnection.getTransactionIsolation()I(XAConnection.java:838)
at weblogic.jdbc.common.internal.XAConnectionEnvFactory.refreshResource(Lweblogic.common.resourcepool.PooledResource;Z)V(XAConnectionEnvFactory.java:143)
at weblogic.jdbc.common.internal.XAConnectionEnvFactory.refreshResource(Lweblogic.common.resourcepool.PooledResource;)V(XAConnectionEnvFactory.java:113)
at weblogic.common.resourcepool.ResourcePoolImpl.refreshResource(Lweblogic.common.resourcepool.PooledResource;)V(ResourcePoolImpl.java:1533)
at weblogic.common.resourcepool.ResourcePoolImpl.checkResource(Lweblogic.common.resourcepool.PooledResource;I)V(ResourcePoolImpl.java:1512)
at weblogic.common.resourcepool.ResourcePoolImpl.checkAndReturnResource(Lweblogic.common.resourcepool.PooledResourceWrapper;I)Lweblogic.common.resourcepool.PooledResource;(ResourcePoolImpl.java:1402)
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ILweblogic.common.resourcepool.PooledResourceInfo;Z)Lweblogic.common.resourcepool.PooledResource;(ResourcePoolImpl.java:295)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(Lweblogic.security.acl.internal.AuthenticatedSubject;IZ)Lweblogic.jdbc.common.internal.ConnectionEnv;(ConnectionPool.java:451)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(Lweblogic.security.acl.internal.AuthenticatedSubject;I)Lweblogic.jdbc.common.internal.ConnectionEnv;(ConnectionPool.java:359)
at weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(Lweblogic.security.acl.internal.AuthenticatedSubject;Ljava.lang.String;Ljava.lang.String;I)Lweblogic.jdbc.common.internal.ConnectionEnv;(ConnectionPoolManager.java:80)
at weblogic.jdbc.jta.DataSource.getXAConnectionFromPool(Lweblogic.transaction.Transaction;)Lweblogic.jdbc.wrapper.XAConnection;(DataSource.java:1425)
at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(Lweblogic.jdbc.wrapper.XAConnection;Lweblogic.jdbc.wrapper.JTAConnection;Z)Lweblogic.jdbc.wrapper.XAConnection;(Optimized Method)
at weblogic.jdbc.jta.DataSource.getConnection()Ljava.sql.Connection;(DataSource.java:396)
at weblogic.jdbc.jta.DataSource.connect(Ljava.lang.String;Ljava.util.Properties;)Ljava.sql.Connection;(DataSource.java:354)
at weblogic.jdbc.common.internal.RmiDataSource.getConnection()Ljava.sql.Connection;(RmiDataSource.java:305)
at com.scholarone.persistence.SQLPersistence.openConnection()Ljava.sql.Connection;(SQLPersistence.java:325)
at ........
d)com.scholarone.common.BaseController - Error in DocumentController.submitDocument (Owner unassigned)
java.lang.NullPointerException at weblogic.jdbc.wrapper.JTAConnection.checkConnection()Ljava.sql.Connection;(Optimized Method) at weblogic.jdbc.wrapper.Connection.clearPreparedStatement(Ljava.lang.String;)Z(Connection.java:166) at weblogic.jdbc.wrapper.PreparedStatement.executeUpdate()I(PreparedStatement.java:117) at com.scholarone.persistence.StoredProcedurePersistence.save(Lcom.scholarone.valueobject.ValueObject;Ljava.lang.String;)V(StoredProcedurePersistence.java:555) at com.scholarone.valueobject.ValueObject.save(Ljava.lang.Integer;Ljava.lang.String;Lcom.scholarone.persistence.GenericPersistenceStrategy;)V(ValueObject.java:336) at com.scholarone.valueobject.ValueObject.save(Ljava.lang.Integer;Lcom.scholarone.persistence.GenericPersistenceStrategy;)V(ValueObject.java:276) at ......
e)com.scholarone.persistence.PersistenceException: java.sql.SQLException: Internal error: Cannot obtain XAConnection weblogic.common.resourcepool.ResourceDeadException: 0:[IBM][CLI Driver] SQL30090N Operation invalid for application execution environment. Reason code = "06". SQLSTATE=25000
at com.scholarone.persistence.StoredProcedurePersistence.loadObject(Lcom.scholarone.valueobject.ValueObjectConfiguration;Ljava.util.Vector;Ljava.lang.String;)Lcom.scholarone.valueobject.ValueObject;(StoredProcedurePersistence.java:167)
at com.scholarone.valueobject.ValueObjectFactory.findObjectByCriteria(Ljava.lang.String;Ljava.util.Vector;Ljava.lang.String;Ljava.lang.String;)Lcom.scholarone.valueobject.ValueObject;(ValueObjectFactory.java:241)
at ......
These exceptions continued through the night, until we finally decided to roll back to WL 6.1.
The settings on the connection pool for 8.1 were:
Remove Infected Connections Enabled(on) (which we have since determined that we will turn off)
Test Reserved Connections(on)
Allow Shrinking (on)
Keep XA Connection Till Transaction Complete(on)
initial: 10
max: 70
increment: 5
the rest are the defaults.
-There were no code changes associated with this release.
-There is no history of this happening with the codebase and WL 6.1.
any suggestions, ideas, shots in the dark would be much appreciated.
Greg Mowery
[email protected]
Hi. You should probably open an official support case to
get this all managed and shepherded to success, but among
the exceptions, it shows IBM JDBC connections dying...
Joe
Greg Mowery wrote:
Last night, we finally upgraded our WL servers to 8.1 from 6.1 (Express). This lasted about 4 hours, and then the connection pool started throwing a massive number of errors (so much so that we had to roll back at 6 am this morning).
Configuration: WL express 8.1 sp3, DB2 7.2 fp 10,using the COM.ibm.db2.jdbc.DB2XADataSource XA driver.
1. is this a supported configuration?
2. There were a few different errors that were noted:
a)com.scholarone.persistence.PersistenceException: Error during save: COM.ibm.db2.jdbc.DB2Exception: [IBM][CLI Driver][DB2/LINUX] SQL0913N Unsuccessful execution caused by deadlock or timeout. Reason code "68". SQLSTATE=57033
at com.scholarone.persistence.StoredProcedurePersistence.save...(rest of meaningless stack) These were happening @ about 1 or 2 a minute.
about 30 minutes later, these started showing up:
b)com.scholarone.persistence.PersistenceException: Error during save: java.sql.SQLException: Unexpected exception while enlisting XAConnection java.sql.SQLException: Transaction rolled back: Transaction timed out after 84 seconds Xid=BEA1-4A81FA38DB6F85A97DD2(190522938),Status=Active,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=84,seconds left=60,activeThread=Thread[ExecuteThread: '22' for queue: 'weblogic.kernel.Default',5,Thread Group for Queue: 'weblogic.kernel.Default'],XAServerResourceInfo[mc-prod-db2pool]=(ServerResourceInfo[mc-prod-db2pool]=(state=started,assigned=none),xar=mc-prod-db2pool,re-Registered = false),SCInfo[scholarone+mc-prod-mcv3-wl01]=(state=active),local properties=({weblogic.jdbc.jta.mc-prod-db2pool=weblogic.jdbc.wrapper.TxInfo@b5bc5a2}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=mc-prod-mcv3-wl01+10.10.40.70:7051+scholarone+t3+, XAResources={},NonXAResources={})],CoordinatorURL=mc-prod-mcv3-wl01+10.10.40.70:7051+scholarone+t3+)
at weblogic.jdbc.jta.DataSource.enlist(Lweblogic.transaction.Transaction;)V(Optimized Method)
at weblogic.jdbc.wrapper.JTAConnection.checkConnection()Ljava.sql.Connection;(Optimized Method)
at weblogic.jdbc.wrapper.Statement.checkStatement()V(Statement.java:234)
at weblogic.jdbc.wrapper.Statement.preInvocationHandler(Ljava.lang.String;[Ljava.lang.Object;)V(Statement.java:83)
at weblogic.jdbc.wrapper.CallableStatement_COM_ibm_db2_jdbc_app_DB2CallableStatement.getObject(I)Ljava.lang.Object;(Unknown Source)
at com.scholarone.persistence.StoredProcedurePersistence.save(Lcom.scholarone.valueobject.ValueObject;Ljava.lang.String;)V(StoredProcedurePersistence.java:564)
at.....
c)com.scholarone.persistence.PersistenceException: java.sql.SQLException: Internal error: Cannot obtain XAConnection weblogic.common.resourcepool.ResourceDeadException: COM.ibm.db2.jdbc.DB2Exception: [IBM][CLI Driver] CLI0106E Connection is closed. SQLSTATE=08003
at COM.ibm.db2.jdbc.app.SQLExceptionGenerator.throw_SQLException(LCOM.ibm.db2.jdbc.app.DB2Connection;)V(SQLExceptionGenerator.java:186)
at COM.ibm.db2.jdbc.app.SQLExceptionGenerator.check_return_code(LCOM.ibm.db2.jdbc.app.DB2Connection;I)V(SQLExceptionGenerator.java:438)
at COM.ibm.db2.jdbc.app.DB2Connection.getTransactionIsolation()I(DB2Connection.java:1194)
at weblogic.jdbc.wrapper.XAConnection.getTransactionIsolation()I(XAConnection.java:838)
at weblogic.jdbc.common.internal.XAConnectionEnvFactory.refreshResource(Lweblogic.common.resourcepool.PooledResource;Z)V(XAConnectionEnvFactory.java:143)
at weblogic.jdbc.common.internal.XAConnectionEnvFactory.refreshResource(Lweblogic.common.resourcepool.PooledResource;)V(XAConnectionEnvFactory.java:113)
at weblogic.common.resourcepool.ResourcePoolImpl.refreshResource(Lweblogic.common.resourcepool.PooledResource;)V(ResourcePoolImpl.java:1533)
at weblogic.common.resourcepool.ResourcePoolImpl.checkResource(Lweblogic.common.resourcepool.PooledResource;I)V(ResourcePoolImpl.java:1512)
at weblogic.common.resourcepool.ResourcePoolImpl.checkAndReturnResource(Lweblogic.common.resourcepool.PooledResourceWrapper;I)Lweblogic.common.resourcepool.PooledResource;(ResourcePoolImpl.java:1402)
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ILweblogic.common.resourcepool.PooledResourceInfo;Z)Lweblogic.common.resourcepool.PooledResource;(ResourcePoolImpl.java:295)
at weblogic.jdbc.common.internal.ConnectionPool.reserve(Lweblogic.security.acl.internal.AuthenticatedSubject;IZ)Lweblogic.jdbc.common.internal.ConnectionEnv;(ConnectionPool.java:451)
[email protected] -
We're currently using READ_ONLY_AUTOCOMMIT as the setting for HS_TRANSACTION_MODEL in an attempt to minimize the potential for locking in the remote database. Nevertheless, read locks are created by DG4ODBC use by default and persist within the remote database for the duration of long-running queries, which makes use of READ_ONLY_AUTOCOMMIT prohibitive in our environment. Is there a way to totally disable use of a transaction model for DG4ODBC to achieve "pure read only" access? In other words, is it possible to change the interaction between the DG4ODBC agent and the ODBC driver in a way that prevents creation of read locks within the remote UniVerse 10.1 database?
It would be ideal for the interaction between the DG4ODBC agent and the ODBC driver to emulate the MS Excel / MS Query interaction that occurs using the same ODBC driver as is used for the DG4ODBC configuration. When accessing the remote data source using MS Excel / MS Query, there is no locking on the remote server whatsoever. I've performed a couple of different types of ODBC traces, and some differences between Excel and DG4ODBC that seem relevant/noteworthy are listed below:
Excerpts from Excel / MS Query ODBC trace:
===========================
SQLSetConnectOption
0x01010000
SQL_LOGIN_TIMEOUT
0x2D000000
SQL_SUCCESS
SQLDriverConnect
0x01010000
0xF8070200
[16]DSN=dsn.ODBC;
SQL_NTS
[16]DSN=dsn.ODBC;
1024
16
SQL_DRIVER_COMPLETE
SQL_SUCCESS
SQLGetInfo
0x01010000
SQL_DATA_SOURCE_READ_ONLY
[1]N
2048
1
SQL_SUCCESS
The reference to "SQL_DATA_SOURCE_READ_ONLY" above seems indicative of the behavior I would like DG4ODBC to emulate. There is no such reference within the ODBC trace output for the DG4ODBC test:
Excerpts from DG4ODBC trace:
===================
SQLSetConnectOption
0x01010000
SQL_AUTOCOMMIT
SQL_AUTOCOMMIT_ON
SQL_SUCCESS
SQLDriverConnect
0x01010000
0x00000000
[36]DSN=dsn.ODBC;UID=userid;PWD=password;
SQL_NTS
[36]DSN=dsn.ODBC;UID=userid;PWD=password;
1024
36
SQL_DRIVER_NOPROMPT
SQL_SUCCESS
The references above to "SQL_AUTOCOMMIT" and "SQL_AUTOCOMMIT_ON" in the case of DG4ODBC seem in line with the current READ_ONLY_AUTOCOMMIT setting for HS_TRANSACTION_MODEL and the default DG4ODBC behavior, where a transaction is set even for "read only" access to the FDS (foreign data source).
In one other ODBC trace I performed, the following error is reported, which seems related to the default DG4ODBC behavior / HS_TRANSACTION_MODEL:
UCI SQLExecute() returned -1
SQLSTATE : S1000 Native Error : 950151 [U2][SQL Client][UNIVERSE]UniVerse/SQL: Isolation levels are not supported for file types 1, 19 and 25
Facility: DBCAPERR Severity: ERROR Error ID: 46 Extern error: 950151 Message: UCI Error. Func: SQLExecute(); State: S1000; uniVerse code: 950151; Msg: [U2][SQL Client][UNIVERSE]UniVerse/SQL: Isolation levels are not supported for file types 1, 19 and 25.
So while the nature and effect of the locking that occurs in the FDS may be largely dependent on the architecture and conventions of the foreign data source, it certainly seems possible to achieve "pure read only" access to the remote data source with the ODBC driver we're using by altering the behavior/calls issued to the driver by the client/agent. MS Excel / MS Query as client/agent exhibits the desired behavior. Is it possible to alter the behavior of the DG4ODBC agent in some way to achieve the same end result?
Overall, results of the DG4ODBC tests so far have been very promising, but this issue of locking is turning out to be a real problem for large files and, hence, long-running queries in our environment, standing in the way of making this a viable/practical option for production use.
Any help with achieving SQL_DATA_SOURCE_READ_ONLY with DG4ODBC would be greatly appreciated!!
Thread containing additional relevant background info: https://forums.oracle.com/forums/thread.jspa?threadID=2313253
Regards, Glenn
Edited by: WileyCoyote on Dec 19, 2011 10:46 AM
Edited by: WileyCoyote on Mar 8, 2012 11:00 AM
This bit of output from the remote AIX-based UniVerse 10.1 system illustrates the locks being created as a result of DG4ODBC-based access:
LIST-READU EVERY
Active File Locks:
Device.... Inode.... Netnode Userno Lmode Pid Login Id
3014661 4263309 0 65417 7 FS 81396 root
Active Group Locks: Record Group Group Group
Device.... Inode.... Netnode Userno Lmode G-Address. Locks ...RD ...SH ...EX
3014661 4263309 0 65417 4 IN A00 1 0 0 0
3014661 4263309 0 65417 5 IN 2E00 1 0 0 0
3014661 1968116 0 61273 9 IN 25600 1 0 0 0
Active Record Locks:
Device.... Inode.... Netnode Userno Lmode Pid Login Id Item-ID.............
3014661 4263309 0 65417 4 RL 81396 root 01907474001
3014661 4263309 0 65417 5 RL 81396 root 01606473001
3014661 4263309 0 65417 10 RL 81396 root 01693774001
3014661 4263309 0 65417 11 RL 81396 root 01392773001
Statements executed while connected to the Oracle XE 11gR2 database (for the purpose of extracting data from remote UniVerse tables into corresponding Oracle tables) that cause locks such as those shown above:
insert into target_table select * from source_table@dg4odbc_sid;
create table target_table as (select * from source_table@dg4odbc_sid);
However, this is the general form currently in use for the daily ETL operation, which executes automatically via crontab, shell, and SQL*Plus scripts for the tables scheduled thus far:
begin
etl_&1;
end;
/
The anonymous PL/SQL block of the form shown above executes the stored procedure for the respective source/target table identified by the parameter &1 passed to the SQL*Plus script. Assuming a value of "target_table" for parameter &1, for instance, the definition of the procedure etl_target_table referenced as etl_&1 above would be:
create or replace
procedure etl_target_table as
begin
insert into target_table select * from target_table@dg4odbc_sid;
commit;
end;
I adopted this convention to eliminate code redundancy and to provide the additional control needed for a couple of tables that call for extra filtering via a WHERE clause (to exclude problematic rows) or that require an explicit column list on the SELECT instead of '*' (to omit columns on the remote system containing problematic data). I'm pointing this out because I suspect the use of PL/SQL here may have a bearing on the ability to accomplish the objective prompting this thread. The most important thing is to somehow eliminate locking on the remote system; I would rework the structure of the programming and scripting in whatever way is necessary to accommodate a purely read-only transaction model and DG4ODBC behavior that eliminates creation of a transaction and, hence, the locks created in the remote UniVerse database.
The general approach outlined above is in place and working OK for a series of smaller tables, each of which takes only seconds or a few minutes to extract into the corresponding Oracle tables. There are a number of larger tables in the remote system that take much longer to scan for extraction. Locking invariably occurs for these larger tables and persists for the duration of statement execution, causing production users/processes of the remote system to wait.
Also, if locks already exist for any of the remote tables at the time DG4ODBC extraction begins then the DG4ODBC process may fail with this error:
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[Rocket U2][UVODBC][2701121]Error ID: 39 Severity: ERROR Facility: DBCAPERR -
Serialization failure. {40001,NativeErr = 950261}
ORA-02063: preceding 2 lines from DSN
This excerpt from the UniVerse ODBC guide seems to explain the "serialization" error above:
"Serialization failure. The transaction to which the prepared statement belonged was terminated to prevent
deadlock. Error ID: 39, Severity: ERROR, Facility: DBCAPERR"
So hopefully this provides additional insight into the nature of the problem we're having and why we wish to change the default behavior of DG4ODBC in this context; i.e., prevent setting a transaction that in turn causes locks in the remote database, when all we wish to do is retrieve data from the FDS, as is done by other client tools such as Excel or MS Query using the same ODBC DSN on which the DG4ODBC configuration is based.
Use of DG4ODBC is actually working out better than any other alternative I've tried thus far for extraction to support the more complex data analysis the business calls for than the UniVerse "multivalue" datastore allows, so I hope there's something that can be done that lets us continue using the Oracle and DG4ODBC approach. Thanks for your assistance!
Regards, Glenn
Edited by: WileyCoyote on Dec 20, 2011 11:01 AM
Edited by: WileyCoyote on Mar 8, 2012 11:03 AM -
Getting this error: STORAGE_PARAMETERS_WRONG_SET
Hello Experts,
We keep on getting this runtime error when we run a certain report on our production server. Please help with how this can be solved. Thank you guys and take care!
Check OSS Note 72765 (STORAGE_PARAMETERS_WRONG_SET with report execution):
Cause and prerequisites
The report called up processes so much data, that the memory requirement exceeds the memory actually available.
Reason: for the direct output of the report following selection, the data is transferred via the memory.
Solution
Execute the report in two steps: create an extract first, and output this extract directly later. -
Problem with automatic payment transaction F110
Hi
We are facing a problem with the automatic payment transaction F110 on SAP ECC 6.0.
When there are multiple invoices for the same customer, with the cash discount amount entered manually by the user (not a cash discount percentage):
For example, there are 5 invoices with a cash discount amount.
A payment method is defined at document level for some of the invoices (3), and for 2 invoices no payment method is defined at document level (a payment method is also defined in the customer master for those customers).
The following dump occurs:
A settlement will be created, but F110 is cancelled with a short dump on the customer master balance update:
Database error text........: "[IBM][CLI Driver][DB2] UNSUCCESSFUL EXECUTION
CAUSED BY DEADLOCK OR TIMEOUT. REASON CODE 00C90088, TYPE OF RESOURCE , AND
RESOURCE NAME"
Internal call code.........: "[RSQL/UPDT/KNC1 ]"
"DBIF_RSQL_SQL_ERROR" "CX_SY_OPEN_SQL_DB"
"SAPLF005" or "LF005F01"
"ZAHLVERHALTEN_FORTSCHREIBEN"
Any help in this is highly appreciated
Thanks
Sarath
1615356 F110 Code improvements
1255455 F110 Exception BCD_FIELD_OVERFLOW during item output
1237330 F110 Error if more than one down payment
1105073 F110: Program termination DBIF_RSQL_SQL_ERROR -
Java heap problem, can't set heap size
Hi,
I am currently facing a Java heap problem. With a large number of objects in an ArrayList (approx. 70000), I am getting a Java heap exception.
Current task: I am collecting records from a MySQL 5.5 database, where I have one table with about 4 million entries. So far I have been successful fetching records using setMaxRows() (although I have failed to use setFetchSize()). Whenever the function tries to finish its processing and put the records into the ArrayList, Java throws a memory (Java heap) exception. I would like to hear your ideas for a concrete solution. Information about my development environment and attempted solutions follows.
IDE: Eclipse Europa
Server: Tomcat 6
Database: MySQL 5.5
Operating System: Windows XP
Tried solutions: I have tried setting the Java heap size for Tomcat up to 3 GB, but it does not work.
Source code:
// calculate count(*) for the desired query
QueryResultSize = getEvalautionQueryResults(true, 0, user, movie,
        board, hoster, from, until);

if (QueryResultSize > 0) {
    // re-run the query until all results (QueryResultSize) have been
    // loaded into the links list
    try {
        while (counter < QueryResultSize) {
            EvaluationObject eval;
            String q = "";
            String query = this.getEvaluationQueryString(false, user,
                    movie, board, hoster, from, until);
            if (counter < fetchsize) {
                q = query + " and fetchid > 0 order by fetchid;";
            } else {
                q = query + " and fetchid > " + Long.toString(lastFetchId)
                        + " order by fetchid;";
            }
            // just for test, PreparedStatement execution is called here
            pstmt = conn.prepareStatement(q);
            pstmt.setMaxRows(fetchsize);
            ResultSet rs = pstmt.executeQuery();

            while (rs.next()) {
                ResultSet nextRs;
                String boardd = "";
                URI uri = null;
                try {
                    lastFetchId = Long.parseLong(rs.getString("fetchid"));
                    boardd = rs.getString("board");
                    uri = URI.create(boardd);
                    // (the posted snippet is garbled here; presumably
                    // shortBoard is derived from uri)
                } catch (Exception ex) { // appeared as "} else {" in the post
                    shortBoard = "";
                }

                query = "SELECT * FROM board WHERE board='http://"
                        + shortBoard + "' OR board='http://www."
                        + shortBoard + "';";
                if (statement.isClosed()) {
                    statement = conn.createStatement();
                }
                nextRs = statement.executeQuery(query);
                if (nextRs.next()) {
                    eval = new EvaluationObject(); // constructor args lost in forum formatting
                    links.add(eval);
                } else {
                    eval = new EvaluationObject(); // constructor args lost in forum formatting
                    links.add(eval);
                }
                if (counter > fetchsize)
                    if (isCounterChanged) {
                        model.notifyLogListener(null, counter
                                + " Links has been copied", "",
                                LogEvent.ERROREVENT);
                        System.out.println(counter + " Links has been copied");
                        isCounterChanged = false;
                    }
            }
            // because each time the result set loads 5000 results
            counter += fetchsize;
            isCounterChanged = true;
        }
    } catch (SQLException e) {
        e.printStackTrace();
        model.notifyLogListener(null, e.getMessage(), "",
                LogEvent.ERROREVENT);
    }
}
Regards,
Romi
PS: Thanks in advance
I am not quite sure whether this is the right place to post the question. If it is not and you know the answer, please don't hesitate to reply.
With large number of Objects in ArrayList ( approx 70000)
By itself that isn't that big. If each object was 1000 bytes then that would only be 70 MB.
Tried Solutions: I have tried to set Java heap size at Tomcat upto 3gb but it do not work.
If you set it correctly and it did not change anything in how your application worked, then it is a bug in your code.
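For what it's worth, the paging idea in the posted loop can work if each page is processed and discarded rather than accumulated into `links`. Here is a minimal sketch of that keyset-pagination pattern; the `fetchid` column and the MySQL `LIMIT` syntax follow the posted code, while the row-callback interface and class name are my own assumptions, not the poster's API:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.function.Consumer;

public class KeysetPager {

    // Build the query for one page: resume after the last fetchid seen.
    static String pageQuery(String baseQuery, long lastFetchId, int pageSize) {
        return baseQuery + " and fetchid > " + lastFetchId
                + " order by fetchid limit " + pageSize;
    }

    // Walk the whole result in bounded pages, handing each row to a
    // callback instead of collecting everything into one ArrayList.
    static void forEachRow(Connection conn, String baseQuery, int pageSize,
                           Consumer<ResultSet> rowHandler) throws SQLException {
        long lastFetchId = 0;
        while (true) {
            int rowsInPage = 0;
            try (PreparedStatement ps =
                     conn.prepareStatement(pageQuery(baseQuery, lastFetchId, pageSize));
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    lastFetchId = rs.getLong("fetchid");
                    rowHandler.accept(rs);   // process the row, do not retain it
                    rowsInPage++;
                }
            }
            if (rowsInPage < pageSize) {
                break;   // last (short) page reached
            }
        }
    }
}
```

Separately, MySQL Connector/J can stream rows one at a time: create the statement forward-only and read-only and call `setFetchSize(Integer.MIN_VALUE)`, which keeps the driver from materializing the whole result set client-side.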
You posted your code, but however you did it, it is wrong; you need to use the forum correctly, using code tags to post the code so it is readable. -
Bind variable and parse_call
If I use a bind variable in SQL, will it reduce the value of the PARSE_CALLS column in V$SQLAREA?
I am using a bind variable in the following statement:
declare
y number;
begin
:x:=101;
select salary into y from hr.employees where employee_id=:x;
end;
but subsequent executions of the PL/SQL code show an increase in the PARSE_CALLS column value.
Why is that so?
If I am using a bind variable, PARSE_CALLS should be reduced.
I am using Oracle 9.2.
To reduce the number of parse calls, you must hold cursors open. It is important to distinguish between a hard parse, a soft parse, and a session cursor cache hit. Using bind variables helps reduce the frequency of hard parses, where among other things memory is allocated to add a SQL statement to the library cache. Effectively using the SESSION_CACHED_CURSORS parameter helps reduce the number of soft parses, which among other things involve syntax and security checking, as well as obtaining the library cache latch to search for an already parsed and optimized version of the SQL statement. I would suggest taking a look at the following link and reading down through to the section "Reducing Parse Calls with Oracle Forms":
http://download.oracle.com/docs/cd/E11882_01/server.112/e10821/memory.htm#i34608
Also, take a look at the following Oracle Magazine article, specifically the section describing Execute to Parse:
http://www.oracle.com/technology/oramag/oracle/09-sep/o59asktom.html
Now a test case to possibly see what happens:
The set up:
CREATE TABLE T1 (EMPLOYEE_ID NUMBER);
INSERT INTO T1
SELECT
ROWNUM
FROM
DUAL
CONNECT BY
LEVEL<=1000;
COMMIT;
VARIABLE X NUMBER
Now the test case:
EXEC :X:=10
SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X;
EMPLOYEE_ID
10
SELECT
HASH_VALUE,
CHILD_NUMBER,
PARSE_CALLS,
EXECUTIONS
FROM
V$SQL
WHERE
SQL_TEXT='SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X';
HASH_VALUE CHILD_NUMBER PARSE_CALLS EXECUTIONS
4179534647 0 1 1
SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X;
EMPLOYEE_ID
10
SELECT
HASH_VALUE,
CHILD_NUMBER,
PARSE_CALLS,
EXECUTIONS
FROM
V$SQL
WHERE
SQL_TEXT='SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X';
HASH_VALUE CHILD_NUMBER PARSE_CALLS EXECUTIONS
4179534647 0 2 2
SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X;
EMPLOYEE_ID
10
SELECT
HASH_VALUE,
CHILD_NUMBER,
PARSE_CALLS,
EXECUTIONS
FROM
V$SQL
WHERE
SQL_TEXT='SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X';
HASH_VALUE CHILD_NUMBER PARSE_CALLS EXECUTIONS
4179534647 0 3 3
SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X;
EMPLOYEE_ID
10
SELECT
HASH_VALUE,
CHILD_NUMBER,
PARSE_CALLS,
EXECUTIONS
FROM
V$SQL
WHERE
SQL_TEXT='SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X';
HASH_VALUE CHILD_NUMBER PARSE_CALLS EXECUTIONS
4179534647 0 4 4
COLUMN NAME FORMAT A27
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
MS.STATISTIC#=SN.STATISTIC#
AND SN.NAME LIKE 'session cursor cache%';
NAME VALUE
session cursor cache hits 1111
session cursor cache count 148
SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X;
EMPLOYEE_ID
10
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
MS.STATISTIC#=SN.STATISTIC#
AND SN.NAME LIKE 'session cursor cache%';
NAME VALUE
session cursor cache hits 1112
session cursor cache count 149
EXEC :X:=20
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
MS.STATISTIC#=SN.STATISTIC#
AND SN.NAME LIKE 'session cursor cache%';
NAME VALUE
session cursor cache hits 1112
session cursor cache count 149
SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X;
EMPLOYEE_ID
20
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
MS.STATISTIC#=SN.STATISTIC#
AND SN.NAME LIKE 'session cursor cache%';
NAME VALUE
session cursor cache hits 1114
session cursor cache count 150
EXEC :X:=30
SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X;
EMPLOYEE_ID
30
SELECT
SN.NAME,
MS.VALUE
FROM
V$MYSTAT MS,
V$STATNAME SN
WHERE
MS.STATISTIC#=SN.STATISTIC#
AND SN.NAME LIKE 'session cursor cache%';
NAME VALUE
session cursor cache hits 1116
session cursor cache count 150
SELECT
HASH_VALUE,
CHILD_NUMBER,
PARSE_CALLS,
EXECUTIONS
FROM
V$SQL
WHERE
SQL_TEXT='SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X';
HASH_VALUE CHILD_NUMBER PARSE_CALLS EXECUTIONS
4179534647 0 7 7
In the above, the first execution caused a hard parse, which is not obvious from the V$SQL output. The second execution was a soft parse, as was the third. The fourth execution was satisfied by the session cursor cache (unfortunately, I did not capture the number of session cursor cache hits before the fourth execution). After the seventh execution, V$SQL still shows the number of PARSE_CALLS equal to the number of EXECUTIONS, but the last four executions were satisfied without obtaining the library cache latch as would be required by a soft parse.
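The shared-cursor effect above can be illustrated without a database: a hard parse is needed for each distinct SQL text, and a bind variable keeps the text identical across values while literals do not. A small self-contained sketch (the statement texts borrow the table and column from the test case; no real executions happen):

```java
import java.util.HashSet;
import java.util.Set;

public class CursorTextDemo {

    // With a bind variable the statement text the server sees is the same
    // for every value, so the library cache needs only one parent cursor;
    // with literals, every distinct value yields a distinct text.
    static int distinctTexts(boolean useBind, int[] values) {
        Set<String> texts = new HashSet<>();
        for (int v : values) {
            String sql = useBind
                    ? "SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= :X"
                    : "SELECT EMPLOYEE_ID FROM T1 WHERE EMPLOYEE_ID= " + v;
            texts.add(sql);
        }
        return texts.size();
    }
}
```

Three executions with binds produce one distinct text (one cursor to hard parse); three executions with literals produce three.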
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Out of control thermocouples
Hey, I've got a weird problem and I'm hoping someone can help. I've been using a program for about 6 months now that basically maintains a cold temperature at one side of a heat sink, and maintains a hot temperature at the other side. Then I have 25 thermocouples hooked up in between to measure temperatures so I can do analysis on the material in the middle. My problem is that all of a sudden the thermocouple readings have been out of control and unpredictable. I was adding automation to the program so it could run multiple tests at once and in the process this problem developed. The strange part is that the problem is also happening in the old version of the program now too, and that has been working perfectly for the last 6 months. I have 25 thermocouples hooked up and the temperatures will all look about right, but then 3 to 5 of them will be completely wrong randomly. What's interesting about this problem is that when I change the thermocouple that the heaters use to maintain the set point temperature, the values that were wrong become right again and some of the other thermocouples start giving invalid data. The setup uses one NI 9213 and 3 NI 9211 inputs for the thermocouples. It also uses a NI 9401 to control the heaters through a pulse width modulation electric box we have. Anyone ever experience something like this or have any ideas? Thanks for any help
Solved!
Go to Solution.
Hello acolbourn,
Have you tried hooking the thermocouples that were giving you bad data back into another port that you were previously getting good data with? This will help you figure out whether the problem is the thermocouples or the card inputs. Since you said the issues were fixed by replacing the thermocouples, that makes me think the issue is the thermocouples... but I would still try plugging the "bad" thermocouples into another port just to see if you still get bad data.
How do you have your code set up to read the signal? Do you have multiple tasks reading from the different cards, or do you have one task with all the inputs on it? I noticed you are only reading one sample at a time; you may want to try reading multiple samples so that you have a buffer and are less likely to get that error. Also, if you were in highlight execution, then this error would have happened because of how much highlight execution slows down the program, and not because of an issue with the code. When you debug with DAQmx, data overwrite errors are common. This is mainly because most debugging involves slowing down execution to see what's going on, but slowing down execution causes timed events to not function correctly, so this may be the cause of the -200279 error.
I hope this helps!
-Nathan H
Software Developer
National Instruments