Performance issue when inserting into spatial indexed table with JDBC

We have a table named 'feature' which has an SDO_GEOMETRY column, and we created a spatial index on that column:
CREATE TABLE feature (id NUMBER, description VARCHAR2(100), oshape SDO_GEOMETRY);
CREATE INDEX feature_sp_idx ON feature(oshape) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
Then we executed the following SQL to insert about 800 records into that table (we tried this with both DbVisualizer and
our Java application, both using the JDBC driver to connect to an Oracle 11gR2 database):
insert into feature (id, description, oshape) values (1001, xxx, xxxxx);
insert into feature (id, description, oshape) values (1002, xxx, xxxxx);
...
insert into feature (id, description, oshape) values (1800, xxx, xxxxx);
We encountered the same problem as this topic:
Performance of insert with spatial index
It takes nearly 1 second to insert one record, compared to 50 records inserted per second without the spatial index,
which is a 50x drop in performance when inserting with the spatial index.
However, when we copied and pasted those insert scripts into Oracle Client (same test and same table with the spatial index), we got a totally different performance result:
more than 50 records inserted per second, just as fast as insertion without the spatial index.
Is it because Oracle Client is not using JDBC? Perhaps JDBC gets something wrong when updating spatially indexed tables.

Normally JDBC uses auto-commit, so each insert can cause a commit.
I don't know about Oracle Client, but in SQL*Plus an insert is just an insert,
and you execute "commit" to explicitly commit your changes.
So maybe this is the reason.
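For reference, here is a minimal JDBC sketch of that fix: turn off auto-commit, batch the 800 inserts, and commit once at the end, so the spatial index is synchronized once per transaction rather than once per row. The connection URL, credentials, and the point geometry are placeholders, not values from the original post.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchFeatureInsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", "user", "password")) {
            conn.setAutoCommit(false); // one commit per insert is what kills throughput
            String sql = "INSERT INTO feature (id, description, oshape) VALUES (?, ?, "
                       + "SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(?, ?, NULL), NULL, NULL))";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int id = 1001; id <= 1800; id++) {
                    ps.setInt(1, id);
                    ps.setString(2, "feature " + id);
                    ps.setDouble(3, -122.0); // placeholder longitude
                    ps.setDouble(4, 37.0);   // placeholder latitude
                    ps.addBatch();
                }
                ps.executeBatch(); // one round trip for the whole batch
            }
            conn.commit(); // single commit: the spatial index is maintained once
        }
    }
}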

Similar Messages

  • Nologging direct-path insert into an indexed table

    Hello,
    Does anyone have an idea how I can suppress generation of undo logs for direct-path insert into an indexed table on 11.2.0.1.0:
    CREATE TABLE TBL(ID NUMBER) NOLOGGING;
    CREATE INDEX IDX ON TBL(ID) NOLOGGING;
    INSERT /*+ APPEND */ INTO TBL SELECT /*+ APPEND */ ROWNUM FROM ...; -- Source table has 400,000,000+ rows
    Regards,
    Angel Tsankov

    Please do not post duplicates - Why does Oracle not use direct-path insert when instructed to do so - please continue the discussion in your original thread
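    As an aside (not from this thread): even with a working direct-path insert, undo is still generated for index maintenance. One common pattern is to mark the index unusable for the load and rebuild it afterwards. A sketch using the TBL/IDX names from the post; source_table stands in for the elided source:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DirectPathLoad {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/orcl", "user", "password");
                 Statement stmt = conn.createStatement()) {
                conn.setAutoCommit(false); // we commit explicitly after the load
                // Skip index maintenance during the load; this is where most
                // of the undo for an indexed direct-path insert comes from
                stmt.execute("ALTER INDEX idx UNUSABLE");
                stmt.execute("ALTER SESSION SET skip_unusable_indexes = TRUE");
                // The APPEND hint belongs on the INSERT, not on the subquery
                stmt.execute("INSERT /*+ APPEND */ INTO tbl SELECT ROWNUM FROM source_table");
                conn.commit(); // direct-path rows are only visible after commit
                // Rebuild without redo once the data is in place
                stmt.execute("ALTER INDEX idx REBUILD NOLOGGING");
            }
        }
    }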

  • Urgent: Performance Issue DELETE, INSERT INTO SELECT, UPDATE

    Hi,
    NEED ASSISTANCE TO OPTIMIZE THE INSERT STATEMENT (insert into select):
    =================================================
    We have a report.
    As per the current design, the following steps are used to populate the custom table which is used for reporting purposes:
    1) DELETE all the records from the custom table XXX_TEMP_REP.
    2) INSERT records in custom table XXX_TEMP_REP (Assume all the records related to type A)
    using
    INSERT..... INTO..... SELECT.....
    statement.
    3) Update records in XXX_TEMP_REP
    using some custom logic for the records populated .
    4) INSERT records in custom table XXX_TEMP_REP (Records related to type B)
    using
    INSERT..... INTO..... SELECT.....
    statement.
    Stats gathered related to Insert statement are:
    Event Wait Information
    SID 460 is waiting on event : db file sequential read
    P1 Text : file#
    P1 Value : 20
    P2 Text : block#
    P2 Value : 435039
    P3 Text : blocks
    P3 Value : 1
    Session Statistics
    redo size : 293.84 M
    parse count (hard) : 34
    parse count (total) : 1217
    user commits : 3
    Transaction and Rollback Information
    Rollback Used : 35.1796875 M
    Rollback Records : 355886
    Rollback Segment Number : 12
    Rollback Segment Name : _SYSSMU12$
    Logical IOs : 1627182
    Physical IOs : 136409
    RBS Startng Extent ID : 14
    Transaction Start Time : 09/29/10 04:22:11
    Transaction_Status : ACTIVE
    Please suggest how this can be optimized.
    Regards,
    Narender

    Hello,
    Is there any relation with the Oracle Forms tool?
    Francois
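    For what it's worth (this was never answered in the thread), the usual first steps for this delete-and-reload pattern are to TRUNCATE instead of DELETE and to use a direct-path insert, folding the step-3 update logic into the SELECT where possible. A rough sketch with hypothetical source names (xxx_source, rec_type):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ReportRefresh {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/orcl", "user", "password");
                 Statement stmt = conn.createStatement()) {
                conn.setAutoCommit(false);
                // TRUNCATE is DDL: near-instant and generates almost no undo,
                // unlike the DELETE in step 1 (35 MB of rollback above)
                stmt.execute("TRUNCATE TABLE xxx_temp_rep");
                // Direct-path insert writes above the high-water mark and avoids
                // most undo; a CASE in the SELECT can replace the separate UPDATE
                stmt.execute(
                    "INSERT /*+ APPEND */ INTO xxx_temp_rep "
                  + "SELECT s.*, CASE WHEN s.rec_type = 'A' THEN 'TYPE_A' ELSE 'TYPE_B' END "
                  + "FROM xxx_source s");
                conn.commit();
            }
        }
    }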

  • Error when inserting into an internal table

    Hello.
    I get the following error.
    An exception occurred that is explained in detail below.
    The exception, which is assigned to class 'CX_SY_OPEN_SQL_DB', was not caught
      and
    therefore caused a runtime error.
    The reason for the exception is:
    In a SELECT access, the read file could not be placed in the target
    field provided.
    Either the conversion is not supported for the type of the target field,
    the target field is too small to include the value, or the data does not
    have the format required for the target field.
    I did a join of 5 tables and am inserting into an internal table whose type contains all the fields that I am pulling from the other tables. Can someone please help? Thanks.

    Performance-wise it is not advisable to use CORRESPONDING FIELDS, so take care that the structure of the internal table exactly matches the fields you are selecting, in the same order.
    ex:
    types: begin of ty_vbap,
               vbeln  type vbeln,
               posnr type posnr,
              end of ty_vbap.
    data: it_vbap type standard table of ty_vbap.
    select vbeln posnr into table it_vbap from vbap where ...
    кαятιк

  • Strange PDC issue when inserting compressor on a group with side chain

    Hi Forum People!
    Seem to have hit a brick wall trouble shooting this issue, and hope someone here can help.
    Whenever I insert a compressor on a bus and send a sidechain trigger to the compressor, PDC stops working for all multi-output synths. I can sidechain stereo tracks without an issue, but not group tracks with multiple tracks routed through the group.
    For the record, as soon as I remove the send to the sidechain, PDC works normally.
    Is this a limitation of Logic's PDC, or is there a workaround that I haven't discovered?
    Many thanks for your time
    Steve

    LOL! Sorry, didn't have time to respond yesterday. The problem was the way in which I'd set-up the group. Here's how I had the group set up, which broke PDC:
    I created an auxiliary track (the group), inserted a compressor, and routed my drums direct to this group.
    Here's how I fixed the problem:
    instead of creating an aux channel for the group, I routed my drum outputs through an unused bus, which in turn created the group channel automatically.
    I've only been using Logic for 1 month, having used Cubase for nearly 20 years, so I'm realizing some things are done very differently.
    Thanks chaps!
    Steve

  • ORA-06502: error when inserting into table via db link with long datatype

    Folks,
    I am getting the following error:
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small.
    This occurs when an insert is done via a database link into a table that has a LONG data type for one of the columns, and the string contains some carriage returns and/or line feeds.
    I have checked by removing the db link, and inserting into a local table with identical column data types, where there is no error.
    So this might be another db link bug?
    So I need to remove the carriage returns and/or line feeds
    in my pl/sql block in the page process. I have tried
    l_text := REPLACE(l_text, CHR(10), ' ');
    l_text := REPLACE(l_text, CHR(13), NULL);
    but still getting the ORA-06502. Would really appreciate some advice here, please.
    Cheers
    KIM

    Scott,
    Time to 'fess up':
    My fault, sorry: the error was coming from another page process where I had allowed insufficient string length for one of the variables, and my error message did not identify the page process clearly.
    This leads me to make a request for future releases, could the system error messages state which page process caused the problem?
    One other thing I notice, and this might be a feature not a fault, the page processes are numbered: "Page Process:      3 of 5". However process 3 is not the 3rd one to be processed, and probably refers to the order in which they are created. Should the number reflect the process order?
    Cheers
    KIM

  • SQL Query produces different results when inserting into a table

    I have an SQL query which produces different results when run as a simple query to when it is run as an INSERT INTO table SELECT ...
    The query is:
    SELECT   mhldr.account_number
    ,        NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
    ,        COUNT(1) num_apps
    FROM     app_parties ap
    ,        (
    SELECT   accsta.account_number
    ,        actply.party_sysid
    ,        RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
    FROM     activity_players actply
    ,        account_status accsta
    WHERE    1 = 1
    AND      actply.table_id (+) = 'ACCGRP'
    AND      actply.acttyp_code (+) = 'MHLDRM'
    AND      NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
    AND      actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
    ) mhldr
    WHERE    1 = 1
    AND      ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
    GROUP BY mhldr.account_number;
    The INSERT INTO code:
    TRUNCATE TABLE applicant_summary;
    INSERT /*+ APPEND */
    INTO     applicant_summary
    (  account_number
    ,  main_borrower_status
    ,  num_apps
    )
    SELECT   mhldr.account_number
    ,        NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
    ,        COUNT(1) num_apps
    FROM     app_parties ap
    ,        (
    SELECT   accsta.account_number
    ,        actply.party_sysid
    ,        RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
    FROM     activity_players actply
    ,        account_status accsta
    WHERE    1 = 1
    AND      actply.table_id (+) = 'ACCGRP'
    AND      actply.acttyp_code (+) = 'MHLDRM'
    AND      NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
    AND      actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
    ) mhldr
    WHERE    1 = 1
    AND      ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
    GROUP BY mhldr.account_number;
    When run as a query, this code consistently returns 2 for the num_apps field (for a certain group of accounts), but when run as an INSERT INTO command, the num_apps field is logged as 1. I have secured the tables used within the query to ensure that nothing is changing the data in the underlying tables.
    If I run the query as a cursor for loop with an insert into the applicant_summary table within the loop, I get the same results in the table as I get when I run as a stand alone query.
    I would appreciate any suggestions for what could be causing this odd behaviour.
    Cheers,
    Steve
    Oracle database details:
    Oracle Database 10g Release 10.2.0.2.0 - Production
    PL/SQL Release 10.2.0.2.0 - Production
    CORE 10.2.0.2.0 Production
    TNS for 32-bit Windows: Version 10.2.0.2.0 - Production
    NLSRTL Version 10.2.0.2.0 - Production

    stevensutcliffe wrote:
    Yes, using COUNT(*) gives the same result as COUNT(1).
    I have found another example of this kind of behaviour:
    Running the following INSERT statements produce different values for the total_amount_invested and num_records fields. It appears that adding the additional aggregation (MAX(amount_invested)) is causing problems with the other aggregated values.
    Again, I have ensured that the source data and destination tables are not being accessed / changed by any other processes or users. Is this potentially a bug in Oracle?
    Just as a side note, these are not INSERT statements but CTAS statements.
    The only non-bug explanation for this behaviour would be a potential query rewrite happening only under particular circumstances (but not always) in the lower integrity modes "trusted" or "stale_tolerated". So if you're not aware of any corresponding materialized views, your QUERY_REWRITE_INTEGRITY parameter is set to the default of "enforced" and your explain plan doesn't show any "MAT_VIEW REWRITE ACCESS" lines, I would consider this as a bug.
    Since you're running on 10.2.0.2 it's not unlikely that you hit one of the various "wrong result" bugs that exist(ed) in Oracle. I'm aware of a particular one I've hit in 10.2.0.2 when performing a parallel NESTED LOOP ANTI operation which returned wrong results, but only in parallel execution. Serial execution was showing the correct results.
    If you're performing parallel ddl/dml/query operations, try to do the same in serial execution to check if it is related to the parallel feature.
    You could also test if omitting the "APPEND" hint changes anything but still these are just workarounds for a buggy behaviour.
    I suggest to consider installing the latest patch set 10.2.0.4 but this requires thorough testing because there were (more or less) subtle changes/bugs introduced with [10.2.0.3|http://oracle-randolf.blogspot.com/2008/02/nasty-bug-introduced-with-patch-set.html] and [10.2.0.4|http://oracle-randolf.blogspot.com/2008/04/overview-of-new-and-changed-features-in.html].
    You could also open a SR with Oracle and clarify if there is already a one-off patch available for your 10.2.0.2 platform release. If not it's quite unlikely that you are going to get a backport for 10.2.0.2.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to execute a Java method when a row is inserted into a database table?

    I have the need to fire off a Java method when a row is inserted into a database table. I am unfortunately working with MySQL, which only recently added triggers, and these new triggers cannot execute a Java application on any event.
    What I am looking for is an event-driven approach such that when a row is inserted into a specific table I can fire off a Java method (sitting in a Tomcat container) that will take the contents and send it to a web service.
    It has been mentioned that JMS may have the ability to poll and monitor a database table. Just wondering if anyone could point me in the right direction.
    thanks
    JavaTek

    A service handler might be the right way to run some code at the end of a service call (another way would be to make use of filters).
    First, make sure your static table is merged with ServiceHandlers.
    Second, change your custom method name into one that is not already in the service definition of COLLECTION_COPY_LOT (preferably a unique method name like collectionCopyLotLastAction that describes its purpose) and remove the following line from your code:
    m_service.doCodeEx("",this);
    Now create a service definition for COLLECTION_COPY_LOT in your custom component based on the original COLLECTION_COPY_LOT (copy and paste from the original service definition) and add your own method collectionCopyLotLastAction as the last step in the service. Play with the load order to make sure CS is using your service definition of COLLECTION_COPY_LOT instead of the original.
    regards,
    Fabian
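    Since MySQL triggers cannot call out to Java, one workable substitute for the original question (not covered in the reply above) is a small polling worker that watches the table for new rows by primary key and forwards them, roughly what the JMS suggestion amounts to. A minimal sketch; the events table, its id and payload columns, and the web-service call are all hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class TablePoller {
        public static void main(String[] args) throws Exception {
            long lastSeenId = 0; // high-water mark of rows already forwarded
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user", "password")) {
                while (true) {
                    try (PreparedStatement ps = conn.prepareStatement(
                            "SELECT id, payload FROM events WHERE id > ? ORDER BY id")) {
                        ps.setLong(1, lastSeenId);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                lastSeenId = rs.getLong("id");
                                forwardToWebService(rs.getString("payload"));
                            }
                        }
                    }
                    Thread.sleep(5000); // poll every 5 seconds
                }
            }
        }

        static void forwardToWebService(String payload) {
            // placeholder for the actual web-service call
            System.out.println("forwarding: " + payload);
        }
    }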

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    Hope you can help; it involves a performance issue when creating a report / query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back only Batch Status = 'Posted', the query ran past 20 minutes before we cancelled it, which is way too long. If I remove the condition the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 mins. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I’ve been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
    I think the problem is with the column using DECODE. When querying the column in TOAD the value 'P' is returned, but in Discoverer the condition is applied to the decoded value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
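    A likely explanation, offered here as a suggestion rather than something from the thread: because the view wraps STATUS in that DECODE, a condition on the decoded text can only be evaluated by decoding every row, hence the full scan of GL_JE_BATCHES in the second plan. Filtering on the raw status code keeps the predicate indexable. A hedged sketch against the base table (column names assumed from the Oracle GL schema):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PostedBatches {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/orcl", "user", "password");
                 // 'P' is the stored code that the DECODE above renders as 'Posted';
                 // comparing the raw column lets an index on STATUS be used
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT je_batch_id, name FROM gl.gl_je_batches WHERE status = ?")) {
                ps.setString(1, "P");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong(1) + " " + rs.getString(2));
                    }
                }
            }
        }
    }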

  • Partitioning spatially indexed tables

    I am a complete novice when it comes to partitioning, so can someone tell me any advantages/disadvantages/problems in partitioning a large RTREE spatially indexed table (using an SDO_GEOMETRY column) containing several million rows?

    Just to make sure you understand what is and isn't possible:
    When partitioning a table with spatial data, you cannot partition based on the geometry column.
    Spatial index partitioning is only available on a table that has been partitioned by range.
    All of the manageability benefits associated with partitioning data apply to data with a spatial column as well. If the index has been built using partitioning, you can do things like store different index partitions in different tablespaces, rebuild index partitions, archive index partitions, and exchange, split, or merge partitions.
    In terms of performance, there are several benefits that can be achieved:
    If the partition key can be specified as a predicate on the query, then there can be a large performance improvement.
    If you can't specify a partition key in the query, then if the index is stored across different tablespaces then concurrency can improve if multiple sessions are accessing the index.
    Single stream performance can be improved through the use of parallelism as well.
    Most of the above is available in 9iR1, although exchange partition and parallelism only became available as of 9iR2.
    If there is no parallelism, and no partition key predicate can be specified in spatial queries, and there is only single session access to the database, and you don't need the manageability benefits, then partitioning probably isn't for you.
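    To make the range-partitioning requirement above concrete, here is a sketch with made-up table, column, and tablespace names; the USER_SDO_GEOM_METADATA insert is required before any spatial index can be created:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PartitionedSpatialTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/orcl", "user", "password");
                 Statement stmt = conn.createStatement()) {
                // Partition key must be a non-geometry column, here a year column
                stmt.execute(
                    "CREATE TABLE roads (id NUMBER, built_year NUMBER, geom SDO_GEOMETRY) "
                  + "PARTITION BY RANGE (built_year) ("
                  + "  PARTITION p_old VALUES LESS THAN (2000) TABLESPACE ts1,"
                  + "  PARTITION p_new VALUES LESS THAN (MAXVALUE) TABLESPACE ts2)");
                // Spatial metadata must exist before the index is created
                stmt.execute(
                    "INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid) "
                  + "VALUES ('ROADS', 'GEOM', SDO_DIM_ARRAY("
                  + "SDO_DIM_ELEMENT('X', -180, 180, 0.05), "
                  + "SDO_DIM_ELEMENT('Y', -90, 90, 0.05)), 8307)");
                // LOCAL gives one index partition per table partition, enabling
                // per-partition rebuilds and partition pruning on queries
                stmt.execute(
                    "CREATE INDEX roads_sidx ON roads(geom) "
                  + "INDEXTYPE IS MDSYS.SPATIAL_INDEX LOCAL");
            }
        }
    }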

  • Oracle Retail 13 - Performance issues when open, save, approving worksheets

    Hi Guys,
    Recently we started facing performance issues when working with Oracle Retail 13 worksheets from within the Java GUI on client desktops.
    We run Oracle Retail 13.1 on Oracle Database 11g R1 and Application Server 10g at the latest release.
    Issues:
    - Opening, saving, or approving worksheets with approx 9 thousand items takes up to 15 minutes.
    - Time for smaller worksheets is also around 10 minutes just to open a worksheet.
    - Just opening multiple worksheets also takes "ages", up to 10-15 minutes.
    Questions:
    - Is this the expected performance for such worksheets?
    - What is your experience with Oracle Retail 13 in terms of performance while working with worksheets - how much time does it normally take to open, edit, and save a worksheet?
    - What are the average expected times for such operations?
    Any feedback and hints would be much appreciated.
    Cheers!!

    Hi,
    I guess you mean Order/Buyer worksheets?
    This is not normal; it should be quicker, a matter of seconds to at most a minute.
    Database-side tuning is where I would look for clues.
    And the obvious question: remember any changes to anything that may have caused the issue? Are the table and index statistics freshly gathered?
    Best regards, Erik Ykema

  • Inconsistent datatypes error when inserting into an object

    I am trying to insert some test data into the table emp. I have managed to successfully create the objects and types, but when I try to insert into the emp table I get an inconsistent datatypes error, even though I have checked the datatypes and they all seem fine. Can anyone help me?
    thanks
    CREATE OR REPLACE TYPE Address_T AS object
    (ADDR1                VC2_40,
    ADDR2               VC2_40,
    CITY_TX          VC2_40,
    COUNTY_CD          VC2_40,
    POST_CD          VC2_40);
    CREATE OR REPLACE TYPE PERSON_T AS OBJECT (
    LNAME_TX           NAME_T,
    FNAME_TX           NAME_T,
    BIRTH_DATE          DATE,
    TELEPHONE          VC2_20,
    EMAIL               VC2_40);
    CREATE OR REPLACE TYPE EMP_T AS OBJECT (
    EMP_ID     NUMBER (10),
    PERSON     PERSON_T,
    ADDRESS ADDRESS_T,
    HIRE_DATE DATE);
    CREATE TABLE EMP OF EMP_T
    (EMP_ID NOT NULL PRIMARY KEY);
    INSERT INTO EMP VALUES (1,           
    PERSON_T('PETCH',
    'GAVIN',
    '23-JAN-80',
    '(01964)550700',
    '[email protected]'),
    ADDRESS_T('67 CANADA',
    'WALKINGTON',
    'BEVERLEY',
    'EAST YORKSHIRE',
    'HU17 7RL'),
    '11-FEB-02');
    ERROR at line 1:
    ORA-00932: inconsistent datatypes

    Gavin,
    What is your VC2_40 and NAME_T type definition? Your insert used these as varchar2, which is a built-in type not a user-defined type. If you explicitly define these to be varchar2's, the insert statement works fine.
    Regards,
    Geoff
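    Following Geoff's point, a sketch of the same types with the attributes declared as built-in VARCHAR2s (lengths guessed from the VC2_40 / VC2_20 names), which makes the original INSERT statement valid:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateEmpTypes {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/orcl", "user", "password");
                 Statement stmt = conn.createStatement()) {
                // Plain VARCHAR2 attributes, so string literals bind directly
                stmt.execute(
                    "CREATE OR REPLACE TYPE address_t AS OBJECT ("
                  + " addr1 VARCHAR2(40), addr2 VARCHAR2(40), city_tx VARCHAR2(40),"
                  + " county_cd VARCHAR2(40), post_cd VARCHAR2(40))");
                stmt.execute(
                    "CREATE OR REPLACE TYPE person_t AS OBJECT ("
                  + " lname_tx VARCHAR2(40), fname_tx VARCHAR2(40), birth_date DATE,"
                  + " telephone VARCHAR2(20), email VARCHAR2(40))");
            }
        }
    }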

  • ORA-07445 (solaris 9.0.) when inserting into LONG column

    I am getting a core dump when inserting into a table with a LONG column, running 9.0.1 on Solaris.
    ORA-07445: exception encountered: core dump [kghtshrt()+68] [SIGSEGV] [Address not mapped to object] [0x387BBF0] [] []
    If anyone has ANY input, please provide it ... I am desperate at this point.
    I am trying to avoid upgrading to 9.2.0 to solve this problem.
    regards -
    jerome

    You should report this in a service request on http://metalink.oracle.com.
    It is a shame that you put all the effort here to describe your problem, but on the other hand you can now also copy & paste the question to Oracle Support.
    Because you are using 10.2.0.3, I am guessing that you have a valid service contract...

  • Inserting into a base table of a materialized view takes a lot of time

    Dear All,
    I have created a materialized view that refreshes on commit, with query rewrite enabled. I have also created a materialized view log on the base table. But inserting into the base table takes a lot of time. Can you please tell me why?

    Dear Rahul,
    Here is my materialized view:
    create materialized view mv_test on prebuilt table refresh force on commit
    enable query rewrite as
    SELECT P.PID,
    SUM(HH_REGD) AS HH_REGD,
    SUM(INPRO_WORKS) AS INPRO_WORKS,
    SUM(COMP_WORKS) AS COMP_WORKS,
    SUM(SKILL_WAGE) AS SKILL_WAGE,
    SUM(UN_SKILL_WAGE) AS UN_SKILL_WAGE,
    SUM(WAGE_ADVANCE) AS WAGE_ADVANCE,
    SUM(MAT_AMT) AS MAT_AMT,
    SUM(DAYS) AS DAYS,
    P.INYYYYMM,P.FIN_YEAR
    FROM PROG_MONTHLY P
    WHERE SUBSTR(PID,5,2)<>'PP'
    GROUP BY PID,P.INYYYYMM,P.FIN_YEAR;
    Please also tell me whether ENABLE QUERY REWRITE causes any performance degradation.
    Thanks & Regards
    Kris
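    One possible cause, offered as a suggestion since the thread has no answer: REFRESH FORCE ON COMMIT silently falls back to a complete refresh inside every commit when the MV does not meet the fast-refresh rules for aggregates. For an aggregate MV those rules include an MV log WITH ROWID and INCLUDING NEW VALUES covering all referenced columns, plus COUNT(*) and a COUNT for each SUM in the select list. A trimmed sketch (two measures only, prebuilt-table and WHERE clause omitted, names from the post):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class FastRefreshSetup {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/orcl", "user", "password");
                 Statement stmt = conn.createStatement()) {
                // The log must capture new values for every referenced column,
                // otherwise an ON COMMIT refresh cannot be FAST
                stmt.execute(
                    "CREATE MATERIALIZED VIEW LOG ON prog_monthly "
                  + "WITH ROWID (pid, inyyyymm, fin_year, hh_regd, days) "
                  + "INCLUDING NEW VALUES");
                // COUNT(*) and a COUNT per SUM let deltas be applied incrementally
                stmt.execute(
                    "CREATE MATERIALIZED VIEW mv_test REFRESH FAST ON COMMIT "
                  + "ENABLE QUERY REWRITE AS "
                  + "SELECT pid, inyyyymm, fin_year, "
                  + " SUM(hh_regd) AS hh_regd, COUNT(hh_regd) AS cnt_hh_regd, "
                  + " SUM(days) AS days, COUNT(days) AS cnt_days, COUNT(*) AS cnt_all "
                  + "FROM prog_monthly GROUP BY pid, inyyyymm, fin_year");
            }
        }
    }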

  • How to select data from the 3rd row of Excel to insert into a SQL Server table using SSIS

    Hi,
    I have Excel files with headers in the first two rows. I want to skip those two rows and select data from the 3rd row onward to insert into a SQL Server table using SSIS. The 3rd row contains the column names.

                         CUSTOMER DETAILS
                         REGION
    COL1   COL2   COL3   COL4   COL5   COL6   COL7   COL8   COL9   COL10   COL11
    1      XXX    yyyy   zzzz
    2      XXX    yyyy   zzzzz
    3      XXX    yyyy   zzzzz
    4      XXX    yyyy   zzzzz
    The first two rows have merged cells with headings in Excel; I want to skip the first two rows and select the data from the 3rd row onward to insert into SQL Server using SSIS.
    Set range within Excel command as per below
    See
    http://www.joellipman.com/articles/microsoft/sql-server/ssis/646-ssis-skip-rows-in-excel-source-file.html
    Visakh

Maybe you are looking for

  • EJB 3.0  Stateless Local Interface not found

    Using JDeveloper 1.3 it appears that an EJB 3.0 local stateless session bean cannot be found using injection or lookup. It works with a remote interface. Context context = new InitialContext(); SessionEJB sessionEJB = (SessionEJB)context.lookup("Sessio

  • Loop stop when pressing the minimize button??

    Hi, I've made a data-acquisition VI; when I run it and press, for example, the minimize button of my VI, it stops until I release the minimize button. Is there anything to avoid this? So that when I press the minimize button my VI just goes on wi

  • Error: Illegal destination type 'H'

    Hello All, We've maintained an RFC destination type 'H' for the ALE connection from the HR system to FI.  When running tcode PC00_M99_CIPE the following error is received: System error during RFC call BAPI_FIXACCOUNT_GETLIST: Illegal destination type

  • Booting to different OS

    I have very little experience using Macs and am having trouble with the Mac G4 that I use at work. I restarted my computer and it booted to an older version, OS 9.2. At some point OS X Panther was installed, but it's not booting up to tha

  • ORA-01991 invalid password file 'string'

    Hi all, I am getting the message ORA-01991 invalid password file 'string'. Do I need to create a new password file? If so, what is the syntax and where do I write it? Can something be done which avoids creating a new password file?