MySQL large table migration
Hi, I'm hoping this is an easy one and I have missed something obvious.
We have MySQL tables with between 30 million and 300 million rows. When migrating these tables, Migration Workbench appears to execute a SELECT * FROM the table, which runs the client out of memory and takes forever to execute.
How do we tell it to be a little more sensible?
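(For anyone hitting the same wall: a common manual workaround, not something OMWB exposes directly as far as I know, is to page through the table on an indexed key instead of issuing one giant SELECT *. A sketch, with placeholder table and column names:)

```sql
-- Hypothetical keyset-pagination sketch: fetch rows in batches of 10,000
-- using an indexed primary key. Repeat, substituting the last id seen in
-- the previous batch, until no rows are returned.
SELECT *
FROM bigtable              -- placeholder table name
WHERE id > 0               -- replace 0 with the last id from the previous batch
ORDER BY id
LIMIT 10000;
```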
Hi,
At this point, there is only a Windows install for the MySQL plugin. However, you can run the Workbench on a Windows machine and attach to the UNIX server on which your MySQL server resides by entering the details for the host machine in Step 2 of the Capture Wizard in the Workbench.
It is not possible to change the default userid field in OMWB. It is set to the 'root' user because, in order to do a migration, a user needs extensive privileges to access the relevant catalog tables and other users' tables (the 'root' user in MySQL should have these privileges). Additionally, for security reasons, a migration should only be attempted by a 'super user' such as 'root'.
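(If using root directly is undesirable at your site, a dedicated account with broad read access should in principle work the same way — OMWB itself still defaults to root, and the account name and password below are made up:)

```sql
-- Hypothetical dedicated migration account with read access to all schemas,
-- including the mysql catalog tables; tighten to match your security policy.
GRANT SELECT ON *.* TO 'migrator'@'%' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```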
I hope this helps,
Tom.
Similar Messages
-
HS connection to MySQL fails for large table
Hello,
I have set up an HS connection to a MySQL database using a MySQL ODBC 3.51 DSN. My Oracle box runs version 10.2.0.1 on Windows 2003 R2. The MySQL version is 4.1.22, running on a different machine with the same OS.
I completed the connection through a database link, which works fine in SQLPLUS when selecting small MySQL tables. However, I keep getting an out-of-memory error when selecting certain large tables from the MySQL database. Previously, I had tested the DSN and run the same SELECT in Access, and it didn't give any error. This is the error thrown by SQLPLUS:
SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
select * from progressnotes@mysql_rmg where "encounterID" = 224720
ERROR at line 1:
ORA-00942: table or view does not exist
[Generic Connectivity Using ODBC][MySQL][ODBC 3.51
Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL server during query
(SQL State: S1T00; SQL Code: 2013)
ORA-02063: preceding 2 lines from MYSQL_RMG
I traced the HS connection and here is the result from the .trc file:
Oracle Corporation --- THURSDAY JUN 12 2008 11:19:51.809
Heterogeneous Agent Release
10.2.0.1.0
(0) [Generic Connectivity Using ODBC] version: 4.6.1.0.0070
(0) connect string is: defTdpName=MYSQL_RMG;SYNTAX=(ORACLE8_HOA, BASED_ON=ORACLE8,
(0) IDENTIFIER_QUOTE_CHAR="",
(0) CASE_SENSITIVE=CASE_SENSITIVE_QUOTE);BINDING=<navobj><binding><datasources><da-
(0) tasource name='MYSQL_RMG' type='ODBC'
(0) connect='MYSQL_RMG'><driverProperties/></datasource></datasources><remoteMachi-
(0) nes/><environment><optimizer noFlattener='true'/><misc year2000Policy='-1'
(0) consumerApi='1' sessionBehavior='4'/><queryProcessor parserDepth='2000'
(0) tokenSize='1000' noInsertParameterization='true'
noThreadedReadAhead='true'
(0) noCommandReuse='true'/></environment></binding></navobj>
(0) ORACLE GENERIC GATEWAY Log File Started at 2008-06-12T11:19:51
(0) hoadtab(26); Entered.
(0) Table 1 - PROGRESSNOTES
(0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]MySQL client ran out of
(0) memory (SQL State: S1T00; SQL Code: 2008)
(0) (Last message occurred 2 times)
(0)
(0) hoapars(15); Entered.
(0) Sql Text is:
(0) SELECT * FROM "PROGRESSNOTES"
(0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL
(0) server during query (SQL State: S1T00; SQL Code: 2013)
(0) (Last message occurred 2 times)
(0)
(0) [A00D] Failed to open table MYSQL_RMG:PROGRESSNOTES
(0)
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [S1000] [9013]General error in nvITrans_Commit - rc = -1. Please refer to the
(0) log file for details.
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [S1000] [9013]General error in nvITrans_Rollback - rc = -1. Please refer to
(0) the log file for details.
(0) Closing log file at THU JUN 12 11:20:38 2008.
I have read the MySQL documentation, and apparently there is a "Don't Cache Result (forward only cursors)" parameter in the ODBC DSN that needs to be checked so that results are kept on the MySQL server side instead of being cached on the driver side. However, checking that parameter doesn't work for the HS connection; instead, the SQLPLUS session throws the following message when selecting the same large table:
SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
select * from progressnotes@mysql_rmg where "encounterID" = 224720
ERROR at line 1:
ORA-02068: following severe error from MYSQL_RMG
ORA-28511: lost RPC connection to heterogeneous remote agent using
SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.120)(PORT=1521))(CONNECT_DATA=(SID=MYSQL_RMG)))
Curiously enough, after checking the parameter, the Access connection through the ODBC DSN seems to improve!
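(Side note: the "Don't Cache Result" checkbox maps to an option bit in the MyODBC 3.51 connection settings, so it can also be set in the DSN itself rather than through the GUI. Treat the exact bit value and the server/database names below as assumptions to verify against the Connector/ODBC documentation:)

```
; MyODBC 3.51 DSN fragment (odbc.ini-style sketch).
; OPTION bit 1048576 is believed to correspond to
; "Don't Cache Result (forward only cursors)".
[MYSQL_RMG]
Driver   = MySQL ODBC 3.51 Driver
Server   = mysql-host        ; placeholder MySQL host
Database = mydb              ; placeholder database name
OPTION   = 1048576
```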
Is there an additional parameter that needs to be set in inithsodbc.ora, perhaps? These are the current HS parameters:
# HS init parameters
HS_FDS_CONNECT_INFO = MYSQL_RMG
HS_FDS_TRACE_LEVEL = ON
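(In the same vein, the gateway's own fetch buffering can be tuned. HS_FDS_FETCH_ROWS is a documented HS init parameter; whether it helps in this particular case is untested, and the value below is purely illustrative:)

```
# inithsodbc.ora sketch -- existing parameters plus a fetch-size tweak
HS_FDS_CONNECT_INFO = MYSQL_RMG
HS_FDS_TRACE_LEVEL = ON
HS_FDS_FETCH_ROWS = 100    # rows fetched per round trip from the ODBC source
```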
My SID_LIST_LISTENER entry is:
(SID_DESC =
(PROGRAM = HSODBC)
(SID_NAME = MYSQL_RMG)
(ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
)
Finally, here is my TNSNAMES.ORA entry for the HS connection:
MYSQL_RMG =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.120)(PORT = 1521))
(CONNECT_DATA =
(SID = MYSQL_RMG)
)
(HS = OK)
)
Your advice will be greatly appreciated,
Thanks,
Luis
Message was edited by: lmconsite
First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
The root cause of the problem you describe could be a timeout in the ODBC driver (especially taking your comment into account: it happens only for larger tables):
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
indicates that the driver or the database aborted the connection due to a timeout.
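(The server-side timeouts can be inspected and raised directly on the MySQL server; the variable names are standard MySQL system variables, and the values below are illustrative only:)

```sql
-- Inspect the current timeout settings
SHOW VARIABLES LIKE '%timeout%';
-- Raise them for long-running queries (requires the SUPER privilege)
SET GLOBAL wait_timeout = 28800;
SET GLOBAL net_write_timeout = 600;
```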
Check the wait_timeout MySQL variable on the server and increase it. -
Java.lang.NullPointerException during MySQL 5.0 migration
The error occurs at the end of the migration process for 150 tables and 4,917 columns, with either "Quick Migrate" or "Capture Schema".
Additional error:
oracle.dbtools.migration.workbench.core.ui.AbstractMigrationProgressRunnable.start(AbstractMigrationProgressRunnable.java:141)
Any ideas as to the issue?
Source DB - MySQL5.0, 150 tables, approx 4,900 columns.
Target DB - 10.2.0.3 Ent
sqldeveloper -
CVS Version Internal to Oracle SQL Developer (client-only)
Java(TM) Platform 1.6.0_07
Oracle IDE 1.5.1.54.40
Versioning Support 1.5.1.54.40
Thanks in advance,
Tim
Additional details for sqldeveloper
About
Oracle SQL Developer 1.5.1
Version 1.5.1
Build MAIN-5440
Copyright © 2005,2008 Oracle. All Rights Reserved.
IDE Version: 11.1.1.0.22.49.42
Product ID: oracle.sqldeveloper
Product Version: 11.1.1.54.40
Version
Component Version
========= =======
CVS Version Internal to Oracle SQL Developer (client-only)
Java(TM) Platform 1.6.0_07
Oracle IDE 1.5.1.54.40
Versioning Support 1.5.1.54.40
Properties
Name Value
==== =====
apple.laf.useScreenMenuBar true
awt.toolkit sun.awt.windows.WToolkit
class.load.environment oracle.ide.boot.IdeClassLoadEnvironment
class.load.log.level CONFIG
class.transfer delegate
com.apple.macos.smallTabs true
com.apple.mrj.application.apple.menu.about.name "SQL_Developer"
com.apple.mrj.application.growbox.intrudes false
file.encoding Cp1252
file.encoding.pkg sun.io
file.separator \
ice.browser.forcegc false
ice.pilots.html4.ignoreNonGenericFonts true
ice.pilots.html4.tileOptThreshold 0
ide.AssertTracingDisabled true
ide.bootstrap.start 13479365579083
ide.build MAIN-5440
ide.conf C:\sqldeveloper\sqldeveloper\bin\sqldeveloper.conf
ide.config_pathname C:\sqldeveloper\sqldeveloper\bin\sqldeveloper.conf
ide.debugbuild false
ide.devbuild false
ide.extension.search.path sqldeveloper/extensions:jdev/extensions:ide/extensions
ide.firstrun false
ide.java.minversion 1.5.0
ide.launcherProcessId 4684
ide.main.class oracle.ide.boot.IdeLauncher
ide.patches.dir ide/lib/patches
ide.pref.dir C:\Documents and Settings\Administrator\Application Data\SQL Developer
ide.pref.dir.base C:\Documents and Settings\Administrator\Application Data
ide.product oracle.sqldeveloper
ide.shell.enableFileTypeAssociation C:\sqldeveloper\sqldeveloper.exe
ide.splash.screen splash.gif
ide.startingArg0 C:\sqldeveloper\sqldeveloper.exe
ide.startingcwd C:\sqldeveloper
ide.user.dir C:\Documents and Settings\Administrator\Application Data\SQL Developer
ide.user.dir.var IDE_USER_DIR
ide.work.dir C:\Documents and Settings\Administrator\My Documents\SQL Developer
ide.work.dir.base C:\Documents and Settings\Administrator\My Documents
java.awt.graphicsenv sun.awt.Win32GraphicsEnvironment
java.awt.printerjob sun.awt.windows.WPrinterJob
java.class.path ..\..\ide\lib\ide-boot.jar
java.class.version 50.0
java.endorsed.dirs C:\Java\jdk1.6.0_07\jre\lib\endorsed
java.ext.dirs C:\Java\jdk1.6.0_07\jre\lib\ext;C:\WINDOWS\Sun\Java\lib\ext
java.home C:\Java\jdk1.6.0_07\jre
java.io.tmpdir c:\temp\
java.library.path C:\sqldeveloper;.;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\oracle\product\10.2.0\client_1\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Universal Extractor\bin;C:\Program Files\IDM Computer Solutions\UltraEdit-32;C:\Program Files\Diskeeper Corporation\Diskeeper\;c:\Embarcadero\PerformanceCenter
java.naming.factory.initial oracle.javatools.jndi.LocalInitialContextFactory
java.runtime.name Java(TM) SE Runtime Environment
java.runtime.version 1.6.0_07-b06
java.specification.name Java Platform API Specification
java.specification.vendor Sun Microsystems Inc.
java.specification.version 1.6
java.util.logging.config.file logging.conf
java.vendor Sun Microsystems Inc.
java.vendor.url http://java.sun.com/
java.vendor.url.bug http://java.sun.com/cgi-bin/bugreport.cgi
java.version 1.6.0_07
java.vm.info mixed mode
java.vm.name Java HotSpot(TM) Client VM
java.vm.specification.name Java Virtual Machine Specification
java.vm.specification.vendor Sun Microsystems Inc.
java.vm.specification.version 1.0
java.vm.vendor Sun Microsystems Inc.
java.vm.version 10.0-b23
jdbc.driver.home /C:/oracle/product/10.2.0/client_1/
jdbc.library /C:/oracle/product/10.2.0/client_1/jdbc/lib/ojdbc14.jar
line.separator \r\n
oracle.home C:\sqldeveloper
oracle.ide.util.AddinPolicyUtils.OVERRIDE_FLAG true
oracle.translated.locales de,es,fr,it,ja,ko,pt_BR,zh_CN,zh_TW
oracle.xdkjava.compatibility.version 9.0.4
orai18n.library /C:/oracle/product/10.2.0/client_1/jlib/orai18n.jar
os.arch x86
os.name Windows XP
os.version 5.1
path.separator ;
reserved_filenames con,aux,prn,lpt1,lpt2,lpt3,lpt4,lpt5,lpt6,lpt7,lpt8,lpt9,com1,com2,com3,com4,com5,com6,com7,com8,com9,conin$,conout,conout$
sun.arch.data.model 32
sun.boot.class.path C:\Java\jdk1.6.0_07\jre\lib\resources.jar;C:\Java\jdk1.6.0_07\jre\lib\rt.jar;C:\Java\jdk1.6.0_07\jre\lib\sunrsasign.jar;C:\Java\jdk1.6.0_07\jre\lib\jsse.jar;C:\Java\jdk1.6.0_07\jre\lib\jce.jar;C:\Java\jdk1.6.0_07\jre\lib\charsets.jar;C:\Java\jdk1.6.0_07\jre\classes
sun.boot.library.path C:\Java\jdk1.6.0_07\jre\bin
sun.cpu.endian little
sun.cpu.isalist
sun.desktop windows
sun.io.unicode.encoding UnicodeLittle
sun.java2d.ddoffscreen false
sun.jnu.encoding Cp1252
sun.management.compiler HotSpot Client Compiler
sun.os.patch.level Service Pack 3
user.country US
user.dir C:\sqldeveloper\sqldeveloper\bin
user.home C:\Documents and Settings\Administrator
user.language en
user.name zsysadmin
user.timezone America/Los_Angeles
user.variant
windows.shell.font.languages
Extensions
Name Identifier Version Status
==== ========== ======= ======
Check For Updates oracle.ide.webupdate 11.1.1.0.22.49.42 Loaded
Code Editor oracle.ide.ceditor 11.1.1.0.22.49.42 Loaded
Database Connection Support oracle.jdeveloper.db.connection 11.1.1.0.22.49.42 Loaded
Database Object Explorers oracle.ide.db.explorer 11.1.1.0.22.49.42 Loaded
Database UI oracle.ide.db 11.1.1.0.22.49.42 Loaded
Diff/Merge oracle.ide.diffmerge 11.1.1.0.22.49.42 Loaded
Extended IDE Platform oracle.javacore 11.1.1.0.22.49.42 Loaded
External Tools oracle.ide.externaltools 11.1.1.0.22.49.42 Loaded
Feedback oracle.ide.feedback 11.1.1.0.22.49.42 Loaded
File Support oracle.ide.files 11.1.1.0.22.49.42 Loaded
File System Navigator oracle.sqldeveloper.filenavigator 11.1.1.54.40 Loaded
Help System oracle.ide.help 11.1.1.0.22.49.42 Loaded
History Support oracle.jdeveloper.history 11.1.1.0.22.49.42 Loaded
Import/Export Support oracle.ide.importexport 11.1.1.0.22.49.42 Loaded
JTDS JDBC Driver oracle.sqldeveloper.thirdparty.drivers.sqlserver 11.1.1.54.11 Loaded
Log Window oracle.ide.log 11.1.1.0.22.49.42 Loaded
Mac OS X Adapter oracle.ideimpl.apple 11.1.1.0.22.49.42 Loaded
MySQL JDBC Driver oracle.sqldeveloper.thirdparty.drivers.mysql 11.1.1.54.11 Loaded
Navigator oracle.ide.navigator 11.1.1.0.22.49.42 Loaded
Object Gallery oracle.ide.gallery 11.1.1.0.22.49.42 Loaded
Object Viewer oracle.sqldeveloper.oviewer 11.1.1.54.40 Loaded
Oracle IDE oracle.ide 11.1.1.0.22.49.42 Loaded
Oracle Microsoft Access Browser oracle.sqldeveloper.thirdparty.access 11.1.1.54.40 Loaded
Oracle Migration Workbench oracle.sqldeveloper.migration 11.1.1.54.40 Loaded
Oracle Migration Workbench - MS Access oracle.sqldeveloper.migration.msaccess 11.1.1.54.40 Loaded
Oracle Migration Workbench - MySQL oracle.sqldeveloper.migration.mysql5 11.1.1.54.40 Loaded
Oracle Migration Workbench - SQLServer oracle.sqldeveloper.migration.sqlserver2005 11.1.1.54.40 Loaded
Oracle Migration Workbench - Translation Core oracle.sqldeveloper.migration.translation.core 11.1.1.54.44 Loaded
Oracle Migration Workbench - Translation MS Access oracle.sqldeveloper.migration.translation.msaccess 11.1.1.54.40 Loaded
Oracle Migration Workbench - Translation MS SQL Server oracle.sqldeveloper.migration.translation.sqlserver 11.1.1.54.40 Loaded
Oracle Migration Workbench - Translation MySQL oracle.sqldeveloper.migration.translation.mysql 11.1.1.54.40 Loaded
Oracle Migration Workbench - Translation Sybase oracle.sqldeveloper.migration.translation.sybase 11.1.1.54.44 Loaded
Oracle Migration Workbench - Translation UI oracle.sqldeveloper.migration.translation.gui 11.1.1.54.40 Loaded
Oracle MySQL Browser oracle.sqldeveloper.thirdparty.mysql 11.1.1.54.40 Loaded
Oracle SQL Developer oracle.sqldeveloper 11.1.1.54.40 Loaded
Oracle SQL Developer Extras oracle.sqldeveloper.extras 11.1.1.54.40 Loaded
Oracle SQL Developer Reports oracle.sqldeveloper.report 11.1.1.54.40 Loaded
Oracle SQL Developer SearchBar oracle.sqldeveloper.searchbar 11.1.1.54.40 Loaded
Oracle SQL Developer TimesTen oracle.sqldeveloper.timesten 1.5.1.1.2 Loaded
Oracle SQL Server Browser oracle.sqldeveloper.thirdparty.sqlserver 11.1.1.54.40 Loaded
Oracle Sybase Browser oracle.sqldeveloper.thirdparty.sybase 1.2.1.54.40 Loaded
Oracle XML Schema Support oracle.sqldeveloper.xmlschema 11.1.1.54.40 Loaded
OrindaBuild Java Service Generator (Demo) com.orindasoft.app.procbuilder.sqldeveloper 5.1.20081208 Loaded
PROBE Debugger oracle.jdeveloper.db.debug.probe 11.1.1.0.22.49.42 Loaded
Peek oracle.ide.peek 1.0 Loaded
Replace With oracle.ide.replace 11.1.1.0.22.49.42 Loaded
Runner oracle.ide.runner 11.1.1.0.22.49.42 Loaded
SQL Worksheet Window oracle.sqldeveloper.sqlworksheet 11.1.1.54.40 Loaded
Search Bar oracle.ide.searchbar 11.1.1.0.0 Loaded
Snippet Window oracle.sqldeveloper.snippet 11.1.1.54.40 Loaded
Sybase 12 oracle.sqldeveloper.migration.sybase12 11.1.1.54.40 Loaded
Sybase 15 oracle.sqldeveloper.migration.sybase15 11.1.1.54.40 Loaded
Tuning oracle.sqldeveloper.tuning 11.1.1.54.40 Loaded
VHV oracle.ide.vhv 11.1.1.0.22.49.42 Loaded
Versioning Support oracle.jdeveloper.vcs 11.1.1.0.22.49.42 Loaded
Versioning Support for CVS oracle.jdeveloper.cvs 11.1.1.0.22.49.42 Loaded
Versioning Support for Subversion oracle.jdeveloper.subversion 11.1.1.0.22.49.42 Loaded
Web Browser and Proxy oracle.ide.webbrowser 11.1.1.0.22.49.42 Loaded
oracle.ide.dependency oracle.ide.dependency 11.1.1.0.22.49.42 Loaded
oracle.ide.indexing oracle.ide.indexing 11.1.1.0.22.49.42 Loaded
Edited by: user518195 on Nov 9, 2008 3:59 PM
Mireille,
Thanks for the input. I did both steps you suggested, with the same result, also with the 11.1.0.6 client. The stack trace and environment are noted below.
Still looking for a solution.
Cheers,
Tim
Stack trace;
java.lang.Exception: java.lang.NullPointerException
at oracle.dbtools.migration.workbench.core.ui.AbstractMigrationProgressRunnable.start(AbstractMigrationProgressRunnable.java:141)
at oracle.dbtools.migration.workbench.core.CaptureInitiator.launch(CaptureInitiator.java:93)
at oracle.dbtools.raptor.controls.sqldialog.ObjectActionController.handleEvent(ObjectActionController.java:146)
at oracle.ide.controller.IdeAction.performAction(IdeAction.java:524)
at oracle.ide.controller.IdeAction.actionPerformedImpl(IdeAction.java:855)
at oracle.ide.controller.IdeAction.actionPerformed(IdeAction.java:496)
at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1995)
at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2318)
at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387)
at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242)
at javax.swing.AbstractButton.doClick(AbstractButton.java:357)
at javax.swing.plaf.basic.BasicMenuItemUI.doClick(BasicMenuItemUI.java:1220)
at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(BasicMenuItemUI.java:1261)
at java.awt.Component.processMouseEvent(Component.java:6041)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3265)
at java.awt.Component.processEvent(Component.java:5806)
at java.awt.Container.processEvent(Container.java:2058)
at java.awt.Component.dispatchEventImpl(Component.java:4413)
at java.awt.Container.dispatchEventImpl(Container.java:2116)
at java.awt.Component.dispatchEvent(Component.java:4243)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4322)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:3986)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:3916)
at java.awt.Container.dispatchEventImpl(Container.java:2102)
at java.awt.Window.dispatchEventImpl(Window.java:2440)
at java.awt.Component.dispatchEvent(Component.java:4243)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:599)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:273)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:183)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:173)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:168)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:160)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:121)
Caused by: java.lang.NullPointerException
at oracle.dbtools.migration.workbench.plugin.MySQL5Capturer.captureColumnDetails(MySQL5Capturer.java:405)
at oracle.dbtools.migration.workbench.plugin.MySQLCapturer.captureObjects(MySQLCapturer.java:176)
at oracle.dbtools.migration.workbench.plugin.MySQL5Capturer.captureObjects(MySQL5Capturer.java:134)
at oracle.dbtools.migration.capture.OnlineCaptureWorker.capturePerTableImpl(OnlineCaptureWorker.java:188)
at oracle.dbtools.migration.capture.CaptureWorker.capturePerTable(CaptureWorker.java:526)
at oracle.dbtools.migration.capture.CaptureWorker.captureType(CaptureWorker.java:283)
at oracle.dbtools.migration.capture.CaptureWorker.runCapture(CaptureWorker.java:231)
at oracle.dbtools.migration.workbench.core.ui.CaptureRunner.doWork(CaptureRunner.java:65)
at oracle.dbtools.migration.workbench.core.ui.AbstractMigrationProgressRunnable.run(AbstractMigrationProgressRunnable.java:161)
at oracle.dbtools.migration.workbench.core.ui.MigrationProgressBar.run(MigrationProgressBar.java:569)
at java.lang.Thread.run(Thread.java:619)
Properties
Name Value
==== =====
apple.laf.useScreenMenuBar true
awt.toolkit sun.awt.windows.WToolkit
class.load.environment oracle.ide.boot.IdeClassLoadEnvironment
class.load.log.level CONFIG
class.transfer delegate
com.apple.macos.smallTabs true
com.apple.mrj.application.apple.menu.about.name "SQL_Developer"
com.apple.mrj.application.growbox.intrudes false
file.encoding Cp1252
file.encoding.pkg sun.io
file.separator \
ice.browser.forcegc false
ice.pilots.html4.ignoreNonGenericFonts true
ice.pilots.html4.tileOptThreshold 0
ide.AssertTracingDisabled true
ide.bootstrap.start 453697421257
ide.build MAIN-5440
ide.conf C:\sqldeveloper\sqldeveloper\bin\sqldeveloper.conf
ide.config_pathname C:\sqldeveloper\sqldeveloper\bin\sqldeveloper.conf
ide.debugbuild false
ide.devbuild false
ide.extension.search.path sqldeveloper/extensions:jdev/extensions:ide/extensions
ide.firstrun false
ide.java.minversion 1.5.0
ide.launcherProcessId 5044
ide.main.class oracle.ide.boot.IdeLauncher
ide.patches.dir ide/lib/patches
ide.pref.dir C:\Documents and Settings\Administrator\Application Data\SQL Developer
ide.pref.dir.base C:\Documents and Settings\Administrator\Application Data
ide.product oracle.sqldeveloper
ide.shell.enableFileTypeAssociation C:\sqldeveloper\sqldeveloper.exe
ide.splash.screen splash.gif
ide.startingArg0 C:\sqldeveloper\sqldeveloper.exe
ide.startingcwd C:\sqldeveloper
ide.user.dir C:\Documents and Settings\Administrator\Application Data\SQL Developer
ide.user.dir.var IDE_USER_DIR
ide.work.dir C:\Documents and Settings\Administrator\My Documents\SQL Developer
ide.work.dir.base C:\Documents and Settings\Administrator\My Documents
java.awt.graphicsenv sun.awt.Win32GraphicsEnvironment
java.awt.printerjob sun.awt.windows.WPrinterJob
java.class.path ..\..\ide\lib\ide-boot.jar
java.class.version 50.0
java.endorsed.dirs C:\Java\jdk1.6.0_07\jre\lib\endorsed
java.ext.dirs C:\Java\jdk1.6.0_07\jre\lib\ext;C:\WINDOWS\Sun\Java\lib\ext
java.home C:\Java\jdk1.6.0_07\jre
java.io.tmpdir c:\temp\
java.library.path C:\sqldeveloper;.;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\oracle\product\11.1.0\client_1\bin;C:\oracle\product\10.2.0\client_1\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Universal Extractor\bin;C:\Program Files\IDM Computer Solutions\UltraEdit-32;C:\Program Files\Diskeeper Corporation\Diskeeper\;c:\Embarcadero\PerformanceCenter
java.naming.factory.initial oracle.javatools.jndi.LocalInitialContextFactory
java.runtime.name Java(TM) SE Runtime Environment
java.runtime.version 1.6.0_07-b06
java.specification.name Java Platform API Specification
java.specification.vendor Sun Microsystems Inc.
java.specification.version 1.6
java.util.logging.config.file logging.conf
java.vendor Sun Microsystems Inc.
java.vendor.url http://java.sun.com/
java.vendor.url.bug http://java.sun.com/cgi-bin/bugreport.cgi
java.version 1.6.0_07
java.vm.info mixed mode
java.vm.name Java HotSpot(TM) Client VM
java.vm.specification.name Java Virtual Machine Specification
java.vm.specification.vendor Sun Microsystems Inc.
java.vm.specification.version 1.0
java.vm.vendor Sun Microsystems Inc.
java.vm.version 10.0-b23
jdbc.driver.home /C:/oracle/product/11.1.0/client_1/
jdbc.library /C:/oracle/product/11.1.0/client_1/jdbc/lib/ojdbc5.jar
line.separator \r\n
oracle.home C:\sqldeveloper
oracle.ide.util.AddinPolicyUtils.OVERRIDE_FLAG true
oracle.translated.locales de,es,fr,it,ja,ko,pt_BR,zh_CN,zh_TW
oracle.xdkjava.compatibility.version 9.0.4
orai18n.library /C:/oracle/product/11.1.0/client_1/jlib/orai18n.jar
os.arch x86
os.name Windows XP
os.version 5.1
path.separator ;
reserved_filenames con,aux,prn,lpt1,lpt2,lpt3,lpt4,lpt5,lpt6,lpt7,lpt8,lpt9,com1,com2,com3,com4,com5,com6,com7,com8,com9,conin$,conout,conout$
sun.arch.data.model 32
sun.boot.class.path C:\Java\jdk1.6.0_07\jre\lib\resources.jar;C:\Java\jdk1.6.0_07\jre\lib\rt.jar;C:\Java\jdk1.6.0_07\jre\lib\sunrsasign.jar;C:\Java\jdk1.6.0_07\jre\lib\jsse.jar;C:\Java\jdk1.6.0_07\jre\lib\jce.jar;C:\Java\jdk1.6.0_07\jre\lib\charsets.jar;C:\Java\jdk1.6.0_07\jre\classes
sun.boot.library.path C:\Java\jdk1.6.0_07\jre\bin
sun.cpu.endian little
sun.cpu.isalist
sun.desktop windows
sun.io.unicode.encoding UnicodeLittle
sun.java2d.ddoffscreen false
sun.jnu.encoding Cp1252
sun.management.compiler HotSpot Client Compiler
sun.os.patch.level Service Pack 3
user.country US
user.dir C:\sqldeveloper\sqldeveloper\bin
user.home C:\Documents and Settings\Administrator
user.language en
user.name zsysadmin
user.timezone America/Los_Angeles
user.variant
windows.shell.font.languages
-
Split a large table into multiple packages - R3load/MIGMON
Hello,
We are in the process of reducing the export and import downtime for the UNICODE migration/Conversion.
In this process, we have identified a couple of large tables which were taking a long time to export and import by a single R3load process.
Step 1:> We ran the System Copy --> Export Preparation
Step 2:> System Copy --> Table Splitting Preparation
We created a file listing the large tables that need to be split into multiple packages, and were able to create a total of 3 WHR files for the following tables under the DATA directory of the main EXPORT directory.
SplitTables.txt (Name of the file used in the SAPINST)
CATF%2
E071%2
Which means, we would like each of the above large tables to be exported using 2 R3load processes.
Step 3:> System Copy --> Database and Central Instance Export
During the SAPinst process, at the 'Split STR Files' screen, we selected the option 'Split Predefined Tables' and selected the file that lists the predefined tables.
Filename: SplitTable.txt
CATF
E071
When we started the export process, we did not see the above tables being processed by multiple R3load processes.
They were each exported by a single R3load process.
In the order_by.txt file, we have found the following entries...
order_by.txt----
# generated by SAPinst at: Sat Feb 24 08:33:39 GMT-0700 (Mountain
Standard Time) 2007
default package order: by name
CATF
D010TAB
DD03L
DOKCLU
E071
GLOSSARY
REPOSRC
SAP0000
SAPAPPL0_1
SAPAPPL0_2
We have selected a total of 20 parallel jobs.
Here are my questions:
a> What are we doing wrong here?
b> Is there a different way to specify/define a large table into multiple packages, so that it gets exported by multiple R3load processes?
I really appreciate your response.
Thank you,
Nikee
Hi Haleem,
As for your queries are concerned -
1. With R3ta, you split large tables using a WHERE clause; WHR files get generated. If you have mentioned CDCLS%2 in the input file for table splitting, it generates 2~3 WHR files (CDCLS-1, CDCLS-2 & CDCLS-3, depending upon the WHERE conditions).
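The WHR-file idea can be illustrated with a small sketch. Assuming a numeric key column (the name ID here is made up) whose min/max are known, the range is cut into N contiguous WHERE predicates, one per R3load process:

```python
def split_where_clauses(column, min_val, max_val, parts):
    """Cut [min_val, max_val] into `parts` contiguous WHERE predicates."""
    step = (max_val - min_val + 1) // parts
    clauses = []
    lo = min_val
    for i in range(parts):
        # The last chunk absorbs any remainder so the whole range is covered.
        hi = max_val if i == parts - 1 else lo + step - 1
        clauses.append(f'"{column}" BETWEEN {lo} AND {hi}')
        lo = hi + 1
    return clauses

# e.g. CDCLS%2 -> two predicates, one per R3load process:
for clause in split_where_clauses("ID", 1, 1000000, 2):
    print(clause)
```

Note that R3ta derives its actual split points from the data distribution rather than an even arithmetic cut, so treat this purely as a sketch of the concept.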
2. While using MIGMON (for the sequential/parallel export-import process), you have the choice of Package Order in the properties file.
E.g : For Import - In the import_monitor_cmd.properties, specify
Package order: name | size | file with package names
orderBy=/upgexp/SOURCE/pkg_imp_order.txt
And in pkg_imp_order.txt, I have specified the import package order as
BSIS-7
CDCLS-3
SAPAPPL1_184
SAPAPPL1_72
CDCLS-2
SAPAPPL2_2
CDCLS-1
Similarly, you can specify the export package order as well in the export properties file.
I hope this clarifies your doubt
Warm Regards,
SANUP.V -
Accessing MySQL InnoDB tables via JDBC using Oracle SQL Developer
I had posted a problem in the Oracle SQL Developer forum with how that application (v1.1) accesses MySQL InnoDB tables and someone replied that the "[data migration] team created the integration with MySQL", so I am posting here in hopes of learning more about the problem.
Here's a summary:
When I use Oracle SQL Developer to query MySQL InnoDB tables, I need to issue a "commit" before I do each query to ensure that I am getting current results. Otherwise, it appears to be using a snapshot as-of the last commit. Is this a problem with SQL Developer, or a JDBC configuration, or MySQL problem even?
The full details are here:
Re: MySQL InnoDB tables
Hi,
I've posted a response to your original thread.
Regards,
Dermot. -
How could I migrate large multi-million-row tables from Oracle 8i to 9i? Please explain the process.
Vijay,
this is the wrong forum for this question. Ask yourself these questions though:
Is this an in-situ upgrade? If so, there will be data migration scripts as part of the upgrade; otherwise, there may be transportable tablespaces you can use.
Are the machines disparate? Is the data temporal in nature? Can you subsection the data move, holding back the volatile data until you need to switch over?
MySQL, importing table in OWB
Hi,
when I try to import a MySQL table definition into OWB, it looks like OWB (or hsodbc) first fetches all the records (select * from table) from the table I'm trying to import. With smaller tables I don't notice it, but there is one table with 55 million records, so the hsodbc.exe process starts to consume all the memory and after a while says it has finished importing... but the metadata of the (large) table has not been imported.
I've got a feeling it has got something to do with the ODBC driver, but I am not sure.
Anyone with an idea on how to solve this?
Thanks a lot for the tip -- I don't know if the DB2 driver supports them or not, but I'll look. There's a tab in the ODBC driver config for specifying custom parameters, so it's possible.
Overall I find the ODBC connection works OK, but it's not that great for large data volumes. It's too slow for one thing. It's very convenient to be able to go from Oracle straight to DB2 without having to extract to flat files in between though. -
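For what it's worth, the generic remedy for any client that materialises an entire large result set is to stream it in fixed-size batches. A minimal Python sketch of the pattern, using the stdlib sqlite3 module purely as a stand-in for the real ODBC/JDBC driver:

```python
import sqlite3

def stream_rows(cursor, sql, batch_size=10000):
    """Yield rows one at a time while fetching in bounded batches."""
    cursor.execute(sql)
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            break
        yield from batch

# Tiny in-memory table standing in for the 55-million-row source.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE big(id INTEGER)")
cur.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(25)])

rows = list(stream_rows(cur, "SELECT id FROM big ORDER BY id", batch_size=10))
print(len(rows))  # 25
```

Whether memory actually stays bounded depends on the driver honouring the fetch size; some drivers buffer the full result set client-side by default, so a driver-level streaming or cursor option may also be needed.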
Large table, primary key constraint
I have migrated a table from 8i to 9i that is over 300 million rows. I migrated the table to a 9i database without constraints or indexes.
I have successfully created a composite index of two columns, t1 varchar2(512), t2 varchar2(32). This index took nearly 16 hours to create.
I am now trying to create a primary key based on that index with the following sql:
alter table table1
add constraint table1_t1_t2_pk primary key(t1,t2)
using index table1_t1_t2_idx
nologging
This process has taken over 24 hours and is well into the second day. Studio reports it will take an additional 15 hours to create.
My questions are these:
1. Is my syntax okay?
2. I thought that by creating a primary key on an existing index, another index would not be created. I thought it would be faster this way. Why is it taking a lot longer to create than the index it is based upon?
3. Is there a more efficient method (other than parallel query) to create this index/constraint on such a large table? What happens when I go to production and need to recreate this index after a failure? I have never had to do this before. I can't be down for 48 hours to create an index. What other alternatives do I have?
The table is partit
Long postings are being truncated to ~1 kB at this time.
Is INDEX table1_t1_t2_idx UNIQUE? If it's not, that might explain why building the primary key constraint takes longer.
I think the USING INDEX clause with an existing index is intended mainly for different UNIQUE constraints to share the same index. In your situation I think you would be better off just building the primary key constraint.
Cheers, APC -
Pagination query help needed for large table - force a different index
I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
SELECT members.*
FROM members,
     ( SELECT RID, rownum rnum
       FROM ( SELECT rowid as RID
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
AND RID = members.rowid
The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
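The rownum window in this pattern is plain arithmetic; a small helper (the name is made up) that maps a 1-based page number to the two bounds used above:

```python
def page_bounds(page, page_size=100):
    """Return (lo, hi) for a 1-based page: inner WHERE rownum <= hi, outer WHERE rnum >= lo."""
    hi = page * page_size       # inner filter: rownum <= hi
    lo = hi - page_size + 1     # outer filter: rnum >= lo
    return lo, hi

print(page_bounds(1))   # (1, 100)
print(page_bounds(3))   # (201, 300)
```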
The problem I have is this:
SELECT rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
SELECT /*+ index(members, joindate_idx) */ rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
SELECT /*+ first_rows(100) */ rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
SELECT members.*   -- Select all data from members table
FROM members,      -- members table added to FROM clause
     ( SELECT RID, rownum rnum
       FROM ( SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
AND RID = members.rowid   -- Merge the members table on the rowid we pulled from the inner queries
Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
Thanks!
Lakmal Rajapakse wrote:
OK here is an example to illustrate the advantage:
SQL> set autot traceonly
SQL> select * from (
2 select a.*, rownum x from
3 (
4 select a.* from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 )
9 where x >= 1100
10 /
101 rows selected.
Execution Plan
Plan hash value: 3711662397
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 1 | VIEW | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1200 | 506K| 192 (0)| 00:00:03 |
| 4 | TABLE ACCESS BY INDEX ROWID| EVENTS | 253M| 34G| 192 (0)| 00:00:03 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 1200 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("X">=1100)
2 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
443 consistent gets
0 physical reads
0 redo size
25203 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
SQL>
SQL>
SQL> select * from aoswf.events a, (
2 select rid, rownum x from
3 (
4 select rowid rid from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 ) b
9 where x >= 1100
10 and a.rowid = rid
11 /
101 rows selected.
Execution Plan
Plan hash value: 2308864810
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 201K| 261K (1)| 00:52:21 |
| 1 | NESTED LOOPS | | 1200 | 201K| 261K (1)| 00:52:21 |
|* 2 | VIEW | | 1200 | 30000 | 260K (1)| 00:52:06 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 253M| 2895M| 260K (1)| 00:52:06 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 253M| 4826M| 260K (1)| 00:52:06 |
| 6 | TABLE ACCESS BY USER ROWID| EVENTS | 1 | 147 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("X">=1100)
3 - filter(ROWNUM<=1200)
Statistics
8 recursive calls
0 db block gets
117 consistent gets
0 physical reads
0 redo size
27539 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
Lakmal (and OP),
Not sure what advantage you are trying to show here. But considering that we are talking about pagination query here and order of records is important, your 2 queries will not always generate output in same order. Here is the test case:
SQL> select * from v$version ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter pga
NAME TYPE VALUE
pga_aggregate_target big integer 103M
SQL> create table t nologging as select * from all_objects where 1 = 2 ;
Table created.
SQL> create index t_idx on t(last_ddl_time) nologging ;
Index created.
SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
40617 rows created.
SQL> commit ;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
PL/SQL procedure successfully completed.
SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME CREATED
47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
47672 ALL$OLAP2_CUBE_DIM_USES 28-JUL-2009 08:08:39
47681 ALL$OLAP2_CUBE_MEASURE_MAPS 28-JUL-2009 08:08:39
47682 ALL$OLAP2_FACT_LEVEL_USES 28-JUL-2009 08:08:39
47685 ALL$OLAP2_AGGREGATION_USES 28-JUL-2009 08:08:39
47692 ALL$OLAP2_CATALOGS 28-JUL-2009 08:08:39
47665 ALL$OLAPMR_FACTTBLKEYMAPS 28-JUL-2009 08:08:39
47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS 28-JUL-2009 08:08:39
47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS 28-JUL-2009 08:08:39
47669 ALL$OLAP9I2_HIER_DIMENSIONS 28-JUL-2009 08:08:39
47666 ALL$OLAP9I1_HIER_DIMENSIONS 28-JUL-2009 08:08:39
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> set autotrace traceonly
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
2 ;
11 rows selected.
Execution Plan
Plan hash value: 44968669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 180 (2)| 00:00:03 |
| 1 | SORT ORDER BY | | 1200 | 91200 | 180 (2)| 00:00:03 |
|* 2 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 3 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 4 | COUNT STOPKEY | | | | | |
| 5 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 6 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 7 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("T".ROWID="T1"."RID")
3 - filter("RN">=1190)
4 - filter(ROWNUM<=1200)
Statistics
1 recursive calls
0 db block gets
348 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
343 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
11 rows selected.
Execution Plan
Plan hash value: 168880862
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 1 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 2 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 5 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 6 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("T".ROWID="T1"."RID")
2 - filter("RN">=1190)
3 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
349 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
175 recursive calls
0 db block gets
388 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> set autotrace off
SQL> spool off
As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join but does not affect the other query. -
My product has some very large tables (using RANGE_PAGING). I want to allow the end user to do a Select All operation, and then press a command button to act on all selected rows, but want my backing code to detect a Select All has been done, rather than attempt to retrieve all rows from the table. Is this doable with a RichTable?
I'm porting an existing UI over to ADF -- the old UI handled this by having a separate select all button (column header), which would get unset if any row got unselected, and the backing code could interrogate that. I was wondering if there was a more ADF-ish way of handling this.
Using Oracle JDeveloper 11g Release 1 (11.1.1.6.0)
Edited by: user12614476 on Dec 5, 2011 1:58 PM
(added JDeveloper version info)
Are you really using 11.1.1.6? If so, I guess you should be asking in one of the internal Oracle forums.
But, no, there's no "more ADF-ish" way to my knowledge :)
John -
Retrieve data from a large table from ORACLE 10g
I am working with a Microsoft Visual Studio Project that requires to retrieve data from a large table from Oracle 10g database and export the data into the hard drive.
The problem here is that I am not able to connect to the database directly because of license issue but I can use a third party API to retrieve data from the database. This API has sufficient previllege/license permission on to the database to perform retrieval of data. So, I am not able to use DTS/SSIS or other tool to import data from the database directly connecting to it.
Here my approach is...first retrieve the data using the API into a .net DataTable and then dump the records from it into the hard drive in a specific format (might be in Excel file/ another SQL server database).
When I try to retrieve the data from a large table having over 13 lakh (1.3 million) records (3-4 GB) into a DataTable using the Visual Studio project, I get an Out of Memory exception.
But is there any better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?
Any help on this problem will be highly appreciated.
Thanks in advance...
-Jahedur Rahman
Edited by: Jahedur on May 16, 2010 11:42 PM
Girish... Thanks for your reply... But I am sorry for the confusion. Let me explain that...
1."export the data into another media into the hard drive."
What does it mean by this line i.e. another media into hard drive???
ANS: Sorry...I just want to write the data in a file or in a table in SQL server database.
2."I am not able to connect to the database directly because of license issue"
Huh?? I have never heard of a user not being able to connect to the db because of a license. What error/message are you getting?
ANS: My company uses a 3rd party application that uses ORACLE 10g. My company is licensed to use the 3rd party application (APP+Database is a package) and did not purchase an ORACLE license to use it directly. So I cannot connect to the database directly.
3. I am not sure which API you are talking about, but I am running an application with a Visual Studio data grid or similar kind of control, in which I can select (via a SELECT query) as many rows as I need; no issue.
ANS: This API is provided by the 3rd party application vendor. I can pass a query to it and it returns a datatable.
4."better way to retrieve the records chunk by chunk and do the export without loosing the state of the data in the table?"
ANS: As I get a system error (out of memory) when I select all rows into a DataTable at once, I wanted to retrieve the data in multiple phases.
E.g: 1 to 20,000 records in 1st phase
20,001 to 40,000 records in 2nd phase
40,001 to ...... records in 3nd phase
and so on...
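One robust way to implement those phases is keyset pagination: remember the last key of each chunk and ask for the rows after it, rather than re-counting from the start each time. A sketch with the stdlib sqlite3 standing in for the vendor API (table and column names are invented):

```python
import sqlite3

def next_chunk(conn, last_id, limit):
    """Fetch the next `limit` rows with id greater than `last_id` (id must be indexed)."""
    return conn.execute(
        "SELECT id FROM src WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, limit)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src(id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO src VALUES (?)", [(i,) for i in range(1, 51)])

exported, last_id = [], 0
while True:
    chunk = next_chunk(conn, last_id, 20)  # 20 rows per phase here; 20,000 in the real case
    if not chunk:
        break
    exported.extend(r[0] for r in chunk)   # in reality: write the chunk out to Excel/SQL Server
    last_id = chunk[-1][0]

print(len(exported))  # 50
```

Unlike row-position ranges (rows 1-20,000, 20,001-40,000, ...), this stays correct even if rows are inserted or deleted between phases, as long as the key is stable.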
Please let me know if this does not clarify your confusion... :)
Thanks...
-Jahedur Rahman
Edited by: user13114507 on May 12, 2010 11:28 PM -
Need help in optimisation for a select query on a large table
Hi Gurus
Please help in optimising the code. It takes 1 hr for 3-4000 records. It's very slow.
My Select is reading from a table which contains 10 Million records.
I am writing the select on the large table and retrieving values from it by comparing against my table, which has 3-4 k records.
I am pasting the code. please help
Data: wa_i_tab1 type tys_tg_1 .
DATA: i_tab TYPE STANDARD TABLE OF tys_tg_1.
Data : wa_result_pkg type tys_tg_1,
wa_result_pkg1 type tys_tg_1.
SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1 from
/BIC/PZREB_SDAT *******************THIS TABLE CONTAINS 10 MILLION RECORDS
into CORRESPONDING FIELDS OF table i_tab
FOR ALL ENTRIES IN RESULT_PACKAGE***************CONTAINS 3000-4000 RECORDS
where
/bic/ZREB_SDAT = RESULT_PACKAGE-/BIC/ZREB_SDAT
AND
AGREEMENT = RESULT_PACKAGE-AGREEMENT
AND /BIC/ZLITEM1 = RESULT_PACKAGE-/BIC/ZLITEM1.
sort RESULT_PACKAGE by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
sort i_tab by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
loop at RESULT_PACKAGE into wa_result_pkg.
read TABLE i_tab INTO wa_i_tab1 with key
/BIC/ZREB_SDAT =
wa_result_pkg-/BIC/ZREB_SDAT
AGREEMENT = wa_result_pkg-AGREEMENT
/BIC/ZLITEM1 = wa_result_pkg-/BIC/ZLITEM1.
IF SY-SUBRC = 0.
move wa_i_tab1-/BIC/ZSETLRUN to
wa_result_pkg-/BIC/ZSETLRUN.
wa_result_pkg1-/BIC/ZSETLRUN = wa_result_pkg-/BIC/ZSETLRUN.
modify RESULT_PACKAGE from wa_result_pkg1
TRANSPORTING /BIC/ZSETLRUN.
ENDIF.
CLEAR: wa_i_tab1,wa_result_pkg1,wa_result_pkg.
endloop.
Hi,
1) Check whether the RESULT_PACKAGE internal table contains any duplicate records based on the WHERE condition, like below.
2) Remove the INTO CORRESPONDING FIELDS OF TABLE; use INTO TABLE instead.
Refer to the code below:
RESULT_PACKAGE1[] = RESULT_PACKAGE[].
sort RESULT_PACKAGE1 by /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
delete adjacent duplicates from RESULT_PACKAGE1 comparing /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
from /BIC/PZREB_SDAT
into table i_tab
FOR ALL ENTRIES IN RESULT_PACKAGE1
where
/bic/ZREB_SDAT = RESULT_PACKAGE1-/BIC/ZREB_SDAT
AND
AGREEMENT = RESULT_PACKAGE1-AGREEMENT
AND /BIC/ZLITEM1 = RESULT_PACKAGE1-/BIC/ZLITEM1.
And one more thing: you are getting 10 million records, so use PACKAGE SIZE in your SELECT query.
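The overall shape of this fix (de-duplicate the driver keys, fetch once, then join in memory by key) can be sketched outside ABAP as well. A Python stand-in, with field names abbreviated from the ones above:

```python
def unique_keys(result_package):
    """De-duplicated composite keys to drive the database SELECT."""
    return sorted({(r["sdat"], r["agreement"], r["litem"]) for r in result_package})

def enrich(result_package, fetched_rows):
    """Copy the settlement-run value onto each package row via a hash lookup."""
    lookup = {(r["sdat"], r["agreement"], r["litem"]): r["setlrun"]
              for r in fetched_rows}
    for row in result_package:
        key = (row["sdat"], row["agreement"], row["litem"])
        if key in lookup:                 # mirrors the sy-subrc = 0 check
            row["setlrun"] = lookup[key]
    return result_package

pkg = [{"sdat": "20070224", "agreement": "A1", "litem": "L1", "setlrun": None},
       {"sdat": "20070224", "agreement": "A1", "litem": "L1", "setlrun": None}]
db = [{"sdat": "20070224", "agreement": "A1", "litem": "L1", "setlrun": "R1"}]

print(unique_keys(pkg))  # one key, although the package has two rows
enrich(pkg, db)
```

The hash lookup plays the role the sorted internal table plus READ TABLE WITH KEY plays in the ABAP version.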
Refer the following link also For All Entry for 1 Million Records
Regards,
Dhina..
Edited by: Dhina DMD on Sep 15, 2011 7:17 AM -
How to subdivide 1 large TABLE based on the output of a VIEW
I am searching for a decent method / example code to subdivide a large table (into a global temp table (GTT) for further processing) based on a list of numeric/alphanumeric which is the resultset from a view.
I am groping with the following strategy in PL/SQL:
1 -- set up cursor, execute the view (so I have the list of identifiers)
2 -- create a second cursor (or loop?) which:
accepts each of the identifiers in turn
executes a query (EXECUTE IMMEDIATE?) on the larger table
INSERTs (or appends?) each resultset into the GTT
3 -- Then the GTT contains just the required subset of the larger table for further processing and eventual import into iReport for reporting.
Can anyone point me to code that would "spoon feed" me on this? Or suggest the best / better way to go about it?
The scale of the issue here -- GTT is defined and ready to go, the larger table contains approx 40,000 rows and I need to extract a dozen subsets or so which add up to approx 1000 rows.
Thanks,
Rob
Welcome to the forum!
>
I am searching for a decent method / example code to subdivide a large table (into a global temp table (GTT) for further processing) based on a list of numeric/alphanumeric which is the resultset from a view.
Can anyone point me to code that would "spoon feed" me on this? Or suggest the best / better way to go about it?
The scale of the issue here -- GTT is defined and ready to go, the larger table contains approx 40,000 rows and I need to extract a dozen subsets or so which add up to approx 1000 rows.
>
No - there is no code to point you to.
As many of the previous responses indicate part of the concern is that you seem to have already chosen and partially implemented a solution but the information you provided makes us question whether you have adequately analyzed and defined the actual problem and processing that needs to happen. Here's why I have questions about your approach
1. GTT - a red flag issue - these tables are generally not needed in Oracle. So when you, or anyone says they plan to use one it raises a red flag. People want to be sure you really need one rather than not using a table at all, or just using a regular table instead of a GTT.
2. Double nested CURSOR loops - a DOUBLE red flag issue - this is almost always SLOW-BY-SLOW (row-by-row) processing at its worst. It is seldom needed, doesn't perform well and won't scale. People are going to question this choice and rightfully so.
3. EXECUTE IMMEDIATE - a red flag issue or at least a yellow/warning flag. This is definitely a legitimate methodology when it is needed, but many times developers resort to it when it isn't needed because it seems easier than doing the hard work of actually defining ALL of the requirements. It seems easier because it appears that it will allow and work for those 'unexpected' things that seem to come up in new development.
Unfortunately most of those unexpected things come up because the developer did not adequately define all of the requirements. The code may execute when those things arise but it likely won't do the right thing.
Seeing all three of those red flag issues in the same question is like waving a red flag at a charging bull. The responses you get are all likely to be of the 'DO NOT DO THAT' variety.
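To make points 2 and 3 concrete: for a job of this shape (pull ~1,000 rows out of ~40,000 into a work table), the double cursor loop and EXECUTE IMMEDIATE usually collapse into a single set-based statement. A minimal sketch - the table, view, and column names (big_table, work_table, key_list_view, id, payload) are hypothetical placeholders, not anything from your schema:

```sql
-- One set-based INSERT ... SELECT replaces the nested cursor loops:
-- the subset keys come straight from the view, no row-by-row fetching.
INSERT INTO work_table (id, payload)
SELECT b.id, b.payload
  FROM big_table b
 WHERE b.id IN (SELECT v.id FROM key_list_view v);

COMMIT;
```

Oracle processes that as one statement, so extracting a dozen subsets totalling ~1,000 rows is a trivial operation - no PL/SQL loop is needed at all.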
You are correct that a work table is appropriate when there is business logic to be applied to a set of data that cannot be applied using SQL alone. Use a regular table unless
1. you plan to have multiple sessions working with the table simultaneously,
2. each session needs to work with ONLY their own data in that table and not data from other sessions
3. the data does NOT need to be available after the session ends
4. you actually need a GTT to take advantage of the automatic data preservation (ON COMMIT PRESERVE/DELETE) functionality
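For reference, the ON COMMIT behaviour in point 4 is declared when the GTT is created. A sketch with hypothetical names:

```sql
-- Global temporary table: each session sees only its own rows.
-- PRESERVE ROWS: data survives COMMIT but vanishes when the session ends.
CREATE GLOBAL TEMPORARY TABLE work_gtt (
  id      NUMBER,
  payload VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- The alternative, ON COMMIT DELETE ROWS, clears the table at every COMMIT.
```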
Remember - when a session ends, the data in the GTT is gone. That can make it very difficult to troubleshoot data-related problems, since a different session can't see what data is in the table. Even if a GTT is needed for the final product, it is very useful to develop with a regular table so that the data can be examined after test runs to help find and fix problems. Then, after development is complete and initial testing is done, a GTT can be substituted and final testing performed.
So the main remaining question is why you need to perform multiple dynamic queries to get the data populated into the work table? Especially why is a nested cursor loop needed? My suspicion is that you have the queries stored in a query table and one of your loops extracts the query and executes it dynamically.
How many queries are we talking about? Do these queries change from run to run? Please provide more detail of the process and an example query for the selection filtering as well as a typical dynamic query you plan to use. -
OutOfMemory error when trying to display large tables
We use JDeveloper 10.1.3. Our project uses ADF Faces + EJB3 Session Facade + TopLink.
We have a large table (over 100K rows) which we try to show to the user via an ADF Read-only Table. We build the page by dragging the facade findAllXXX method's result onto the page and choosing "ADF Read-only Table".
The problem is that during execution we get an OutOfMemory error. The Facade method attempts to extract the whole result set and to transfer it to a List. But the result set is simply too large. There's not enough memory.
Initially, I was under the impression that the table iterator would be running queries that automatically fetch just a chunk of the db table data at a time. Sadly, this is not the case. Apparently, all the data gets fetched. And then the iterator simply iterates through a List in memory. This is not what we needed.
So, I'd like to ask: is there a way for us to show a very large database table inside an ADF Table? And when the user clicks on "Next", to have the iterator automatically execute queries against the database and fetch the next chunk of data, if necessary?
If that is not possible with ADF components, it looks like we'll have to either write our own component or simply use the old code that we have which supports paging for huge tables by simply running new queries whenever necessary. Alternatively, each time the user clicks on "Next" or "Previous", we might have to intercept the event and manually send range information to a facade method which would then fetch the appropriate data from the database. I don't know how easy or difficult that would be to implement.
Naturally, I'd prefer to have that functionality available in ADF Faces. I hope there's a way to do this. But I'm still a novice and I would appreciate any advice.

Hi Shay,
We do use search pages and we do give the users the opportunity to specify search criteria.
The trouble comes when the search criteria are not specific enough and the result set is huge. Transferring the whole result set into memory will be disastrous, especially for servers used by hundreds of users simultaneously. So, we'll have to limit the number of rows fetched at a time. We should do this either by setting the Maximum Rows option for the TopLink query (or using rownum<=XXX inside the SQL), or through using a data provider that supports paging.
I don't like the first approach very much because I don't have a good recipe for calculating the optimum number of Maximum Rows for each query. By specifying some average number of, say, 500 rows, I risk fetching too many rows at once and I also risk filling the TopLink cache with objects that are not necessary. I can use methods like query.dontMaintainCache() but in my case this is a workaround, not a solution.
I would prefer fetching relatively small chunks of data at a time and not limiting the user to a certain number of maximum rows. Furthermore, this way I won't fetch large amounts of data at the very beginning and I won't be forced to turn off the caching for the query.
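The "fetch a small chunk at a time" approach maps onto the classic Oracle ROWNUM windowing pattern. A sketch - your_table, its ordering column, and the :first_row/:last_row bind variables are hypothetical placeholders supplied per page:

```sql
-- Fetch only rows first_row..last_row of the ordered result set.
-- The inner ROWNUM <= :last_row predicate lets Oracle stop fetching early.
SELECT *
  FROM (SELECT q.*, ROWNUM rn
          FROM (SELECT * FROM your_table ORDER BY id) q
         WHERE ROWNUM <= :last_row)
 WHERE rn >= :first_row;
```

Each "Next" or "Previous" click re-runs the query with a new window, so only one page of rows is ever held in memory at a time.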
Regarding the "ADF Developer's Guide", I read there that "To create a table using a data control, you must bind to a method on the data control that returns a collection. JDeveloper allows you to do this declaratively by dragging and dropping a collection from the Data Control Palette."
So, it looks like I'll have to implement a collection which, in turn, implements the paging functionality that I need. Is the TopLink object you are referring to some type of collection? I know that I can specify a collection class that TopLink should use for queries through the query.useCollectionClass(...) method. But if TopLink doesn't provide the collection I need, I will have to write that collection myself. I still haven't found the section in the TopLink documentation that says what types of Collections are natively provided by TopLink. I can see other collections like oracle.toplink.indirection.IndirectList, for example. But I have not found a specific discussion on large result sets with the exception of Streams and Cursors and I feel uneasy about maintaining cursors between client requests.
And I completely agree with you about reading the docs first and doing the programming afterwards. Whenever time permits, I always do that. I have already read the "ADF Developer's Guide" with the exception of chapters 20 and 21. And I switched to the "TopLink Developer's Guide" because it seems that we must focus on the model. Unfortunately, because of the circumstances, I've spent a lot of time reading and not enough time practicing what I read. So, my knowledge is kind of shaky at the moment and perhaps I'm not seeing things that are obvious to you. That's why I tried using this forum -- to ask the experts for advice on the best method for implementing paging. And I'm thankful to everyone who replied to my post so far. -
Performance during joining large tables
Hi,
I have to maintain a report which gets data from many large tables as below. Currently it uses a join statement to join all 8 tables, which causes very slow performance.
SELECT
into corresponding fields of table equip
FROM caufv
join afih on afih~aufnr = caufv~aufnr
join iloa on iloa~iloan = afih~iloan
join iflos on iflos~tplnr = iloa~tplnr
join iflotx on iflos~tplnr = iflotx~tplnr
join vbak on vbak~aufnr = caufv~aufnr
join equz on equz~equnr = afih~equnr
join equi on equi~equnr = equz~equnr
join vbap on vbak~vbeln = vbap~vbeln
WHERE
Please suggest another way; I'm a newbie in ABAP. I tried using FOR ALL ENTRIES IN but it did not work. I would really appreciate it if you could leave me some sample lines of code.
Thanks,

Hi Dear,
I would suggest not using an inner join across that many tables (eight), especially when they are huge. Instead, use FOR ALL ENTRIES wherever possible. But before using FOR ALL ENTRIES, check that the driver table is not initial - an empty driver table selects ALL rows. If an inner join cannot be avoided, try to minimise it; use an inner join only between header and item tables.
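A minimal sketch of that pattern using the tables from your join: keep the inner join only between the header/item pair (caufv/afih), then read the next table with FOR ALL ENTRIES. The internal tables lt_orders and lt_iloa are hypothetical and assumed to be declared with matching field names:

```abap
* Inner join only between header and item.
SELECT caufv~aufnr afih~iloan afih~equnr
  INTO CORRESPONDING FIELDS OF TABLE lt_orders
  FROM caufv
  INNER JOIN afih ON afih~aufnr = caufv~aufnr.

* An empty driver table would select ALL rows of iloa - check first.
IF lt_orders IS NOT INITIAL.
  SELECT iloan tplnr
    INTO CORRESPONDING FIELDS OF TABLE lt_iloa
    FROM iloa
    FOR ALL ENTRIES IN lt_orders
    WHERE iloan = lt_orders-iloan.
ENDIF.
```

The remaining tables (iflos, iflotx, vbak, equz, equi, vbap) can be read the same way, each driven by the result of the previous step.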
Hope this helps you solve your problem. Feel free to ask if you have any doubts.
Regards,
Vijay