Performance problem parsing queries against remote databases.

Hello experts,
We are facing a performance issue on ORACLE 9.2.0.5
1. We select from a synonym (S_ppm) of a view (ppm). The view is a UNION ALL of two tables in two databases on two Solaris machines, accessed over database links:
create or replace view ppm as
  select * from ppm_P
  union all
  select * from creactor.ppm_P@SMFD;
drop synonym S_ppm;
create synonym S_ppm for ppm;
The tkprof results for the select statement on S_ppm:
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count   cpu  elapsed  disk  query  current  rows
Parse     7278  3.54     7.88     0      0     1015     0
Execute   7616  1.09     1.53     0      0        0     0
Fetch     7109  2.36     4.03     0  13722        0  6098
total    22003  7.00    13.44     0  13722     1015  6098
Misses in library cache during parse: 5
2. We increased to three databases on three Solaris machines:
create or replace view ppm as
  select * from ppm_P
  union all
  select * from creactor.ppm_P@SMFD
  union all
  select * from creactor.ppm_P@SMFE;
drop synonym S_ppm;
create synonym S_ppm for ppm;
The tkprof results for the select statement on S_ppm:
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count   cpu  elapsed  disk  query  current  rows
Parse     6214  4.96    11.96     0      1      872     0
Execute   6503  1.00     1.31     0      0        0     0
Fetch     6069  3.40     6.28     0  11708        0  5205
total    18786  9.37    19.56     0  11709      872  5205
Misses in library cache during parse: 5
4. Similarly, we increased the machines and databases to four.
The tkprof results for the select statement on S_ppm:
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count    cpu  elapsed  disk  query  current  rows
Parse     6473   7.19    17.79     0      1      908     0
Execute   6774   1.14     1.43     0      0        0     0
Fetch     6323   4.87     9.46     0  12201        0  5424
total    19570  13.21    28.70     0  12202      908  5424
Misses in library cache during parse: 5
5. Similarly, we increased the machines and databases to five.
The tkprof results for the select statement on S_ppm:
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count    cpu  elapsed  disk  query  current  rows
Parse     6095   8.58    21.91     0      0      849     0
Execute   6379   1.16     1.42     0      0        0     0
Fetch     5954   5.99    11.78     0  11494        0  5108
total    18428  15.73    35.12     0  11494      849  5108
We see that the parse elapsed time increases abnormally: from 7.88 to 11.96 to 17.79 to 21.91 seconds.
What can be the reason, and how can we improve it?
We found a bug on Oracle 9.2.0.5:
Metalink Bug Id 4913460: "A soft parse occurs each time a query having embedded remote tables over a database link is executed."
But Oracle has no patch for 9.2.0.5, and we can't upgrade to any higher version.
Can anyone help?
Thanks and regards
Rinchen
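
A quick way to check whether each execute really triggers a soft parse (the behaviour Bug 4913460 describes) is to compare the session-level parse and execute statistics before and after running the workload. A sketch, assuming you know the test session's SID (the 123 below is hypothetical):

```sql
-- If 'parse count (total)' grows in step with 'execute count' while
-- 'parse count (hard)' stays flat, every execution is soft-parsed.
SELECT sn.name, ss.value
FROM   v$sesstat  ss,
       v$statname sn
WHERE  ss.statistic# = sn.statistic#
AND    ss.sid = 123   -- hypothetical SID of the session running the selects
AND    sn.name IN ('parse count (total)',
                   'parse count (hard)',
                   'execute count');
```

If the counts confirm the bug, note that per the bug text the soft parse occurs per execution regardless of cursor caching, so reducing the number of executions (or of UNION ALL branches over links) is the more reliable workaround.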


Similar Messages

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file (15 MB) and facing a clear performance problem. A simple XML validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
      tempCLOB,
      targetFile,
      DBMS_LOB.getLength(targetFile),
      dest_offset,
      src_offset,
      nls_charset_id(CONSTANT_CHARSET),
      lang_context,
      conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 minutes on an HP-UX machine (4 GB RAM, 2 CPUs; Oracle version 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the track too much :-) one more thing in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too... the same file as above takes almost 40 minutes to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
      tempCLOB,
      targetFile,
      DBMS_LOB.getLength(targetFile),
      dest_offset,
      src_offset,
      nls_charset_id(CONSTANT_CHARSET),
      lang_context,
      conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is:
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse") STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and the type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML schema.
    Thanks,
    Vineet

  • Problem in querying from a remote database

    Hi,
    I have a table named "gatepass" in database A.
    I created a public synonym in database B for the table "gatepass" in database A:
    create public synonym gatepass for schema.gatepass@databaselink;
    When I query that table in DB B from the SQL prompt, it fetches data.
    But when I query that table through a form, the form hangs (goes into a working state) and ends.
    Where is the problem?
    Please help, thanks.
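
    Before suspecting Forms itself, it may help to confirm the synonym and link behave identically outside the form; a sketch using the placeholder names from the post:

```sql
-- Run both in database B: first via the public synonym, then directly
-- over the link. If the direct query also stalls, the link (not Forms)
-- is the bottleneck.
SELECT COUNT(*) FROM gatepass;
SELECT COUNT(*) FROM schema.gatepass@databaselink;
```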

    Here is a key statement......*All was working fine until last week when users started reporting that data was loading slowly.*
    So the key question is......*What changed last week?*
    Did anyone make database changes?
    Did some software get updated?
    Did the network configuration change?
    Did patches get applied at any level?
    Did you perform some sort of low level maintenance that you don't think would affect anything?
    Did the data substantially change?
    Did more users get added to the system?
    What has changed?
    Regards
    Tim

  • Parsing Schema, Remote Database

    Hi,
    [1]During application creation we encounter screens where we are to select table/view ...there we also have a combo-box ...Parsing Schema ... I always see one option available ..the user I am logged-in. As the field is a combo...I guess we could have many listed in that combo? How?
    [2]Is it possible to include pages that fetch data from a different database (rather than the one on which we have ApEx installed) ... I guess one method would be to use dblinks ....can we do it from non-Oracle databases?
    Thanks

    Devang,
    Your first question is impossible to parse with all the embedded ellipses. Let me guess at an answer that might help: In various wizards you see comboboxes with schemas to choose from. This list includes the current application's parsing schema and all schemas that have granted select privilege on at least one table or view to the parsing schema.
    The answer to question 2 is yes, use database links. For non-Oracle sources, see the literature on Heterogeneous Services.
    Scott

  • Performance problem in 9.2.0 database

    Hello All,
    Database: 9.2.0.7
    OS: Windows 2003 Server Standard Edition
    RAM: 4 GB
    The buffer cache hit ratio on this server is around 83%, where it was normally around 98% before I did some maintenance activities.
    I did some maintenance activities on this database in January, which included the steps below:
    1. In production, I deleted old data from the production tables.
    2. Reorganized tablespaces and tables.
    3. Rebuilt indexes for those tables.
    4. Finally, collected statistics for those tables.
    Now, after this activity, the buffer cache hit ratio is very low.
    Can anyone please advise on how to increase the hit ratio?
    TIA,
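
    For reference, the hit ratio the poster quotes is conventionally derived from v$sysstat; a sketch (standard statistic names, though the ratio by itself is a weak tuning indicator):

```sql
-- Classic buffer cache hit ratio. A drop after a reorg often reflects
-- changed execution plans rather than a broken cache.
SELECT ROUND((1 - phy.value / (cur.value + con.value)) * 100, 2)
         AS buffer_cache_hit_ratio_pct
FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE  cur.name = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads';
```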

    ORCLDB wrote:
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                                     % Total
    Event                                               Waits    Time (s) Ela Time
    CPU time                                                        7,029    55.87
    PX Deq: Signal ACK                                 77,056       1,950    15.50
    db file sequential read                         6,119,051       1,302    10.35
    db file scattered read                          3,544,645       1,054     8.38
    log file sync                                     127,427         551     4.38
    How long was the snapshot interval ?
    How many CPU in the machine ?
    How many users do you have to support ?
    Is your operating system 32 bit or 64 bit ?
    Is this an OLTP system or a datawarehouse / DSS type of system ?
    First thoughts
    You probably have some changes in execution plans because you made some objects smaller and more densely packed - leading Oracle to think that tablescans and index fast full scans would be efficient. (This is a GUESS based on the large number of "db file scattered read" and the assumption that this is NOT a data warehouse).
    You probably have a large amount of spare RAM (your machine has 4GB, your cache is 300MB) and should increase the size of the cache. (This is a guess based on the fact that your db file reads are averaging less than 0.3 milliseconds so are almost sure to be coming out of a local filesystem cache.) The change may decrease the amount of CPU you are using, because all that copying between caches will be using CPU.
    The parallel execution probably needs to be stopped (based on your comments to another user about rebuilding indexes in parallel) because this can result in lots of tablescans and index fast full scans - which aren't necessarily going to use direct path reads because Oracle may be doing tablescans that go "serial to parallel". Set the suspect indexes back to "no parallel".
    For confirmation about where the time goes - check the "SQL Ordered by Reads" section of the report and look at the execution paths of the top two or three SQL statements. (Note: if you set statspack to take snapshots at level 6, you can usually get the actual execution plans of the top SQL by running the script $ORACLE_HOME/rdbms/admin/sprepsql.sql.)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    A general reminder about "Forum Etiquette / Reward Points": http://forums.oracle.com/forums/ann.jspa?annID=718
    If you never mark your questions as answered people will eventually decide that it's not worth trying to answer you because they will never know whether or not their answer has been of any use, or whether you even bothered to read it.
    It is also important to mark answers that you thought helpful - again it lets other people know that you appreciate their help, but it also acts as a pointer for other people when they are researching the same question, moreover it means that when you mark a bad or wrong answer as helpful someone may be prompted to tell you (and the rest of the forum) what's so bad or wrong about the answer you found helpful.

  • Problems in accessing a remote database

    I want to access an MS Access database located at an "IP".
    For this purpose I used the URL:
    String url = "jdbc:odbc:@127.0.0.1:80:eDatt";
    Here eDatt is the name of the database, and it is a system DSN.
    But when I run the program I get the following error:
    Unable to connect
    java.sql.SQLException: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
         at sun.jdbc.odbc.JdbcOdbc.createSQLException(JdbcOdbc.java:6031)
         at sun.jdbc.odbc.JdbcOdbc.standardError(JdbcOdbc.java:6188)
         at sun.jdbc.odbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:2458)
         at sun.jdbc.odbc.JdbcOdbcConnection.initialize(JdbcOdbcConnection.java:320)
         at sun.jdbc.odbc.JdbcOdbcDriver.connect(JdbcOdbcDriver.java:163)
         at java.sql.DriverManager.getConnection(DriverManager.java:517)
         at java.sql.DriverManager.getConnection(DriverManager.java:177)
         at eDattDatabaseHandler.<init>(eDattDatabaseHandler.java:27)
         at eDattDatabaseHandler.main(eDattDatabaseHandler.java:196)
    Connectiobn is SuccessfUL
    java.lang.NullPointerException
         at eDattDatabaseHandler.logOn(eDattDatabaseHandler.java:109)
         at eDattDatabaseHandler.main(eDattDatabaseHandler.java:205)
    Exception in thread "main"

    Does this work, String url = "jdbc:odbc:eDatt"; ?

  • Using rman to backup a remote database

    I am after a good guide or advice on how (what) to install and (how to) run Oracle RMAN
    to backup a remote Oracle database.
    RMAN must be installed and run on a Solaris (SunOS 5.8) box remote from the Solaris
    box (also SunOS 5.8) running the Oracle database but connected to it via a TCP/IP
    network.
    The Oracle Recovery Manager and Database version is 9.2.0.1.0.
    RMAN must perform the backup of the remote database over the network and store
    the backup on a disk local to the RMAN box.
    The backups may be full or incremental and may be "hot" backups taken while the
    Oracle database is open.
    The disk files are to be then backed up to tape using Veritas.
    What software do I need to install on the RMAN box ?
    What software do I need to install on the remote Oracle database box ?
    Thanks,
    Brett Morgan.

    Regarding "writes to a NFS-mounted disk": RMAN writes to disk or to tape (when it is bundled, with the libraries needed, with tape backup software such as Veritas). The disk device can be local, or a disk mounted from another host or storage array (over NFS as Werner mentioned, SMB, and others).

  • Trouble creating forms from remote database

    Hi,
    We are having problems with connecting to remote databases with portal. We can create reports fine from the remote databases but when we try to create forms, it gives the error:
    'This Table does not exist or you do not have the required privileges (WWV-13020)'.
    We've created a link to the remote db, and synonyms for each of the tables. I don't understand because it seems we can build reports and other components...just not forms.
    If anyone has any ideas as to what the problem could be, any assistance you may provide is greatly appreciated!
    Thanks!

    Originally posted by Dmitry Nonkin ([email protected]):
    Sarah,
    It could be dull, but anyway - make sure that you granted all the required grants (INSERT, UPDATE, DELETE) in addition to SELECT to the application schema.
    Also, at which step of the wizard do you receive this error?
    Thanks,
    Dmitry
    Dmitry - we're having the same problems...
    I've had our DBA team grant us every possible privilege we can find - but I still can't create a form from within the portal to a remote database. The wizard lets me put in the table name, e.g. asset.wban_trans@drsrch, and does come up with the proper columns on the layout page of the wizard, but when I click the "Finish" button I get the following errors:
    Line/Column  Error
    733/21 PLS-00454: with a returning into clause, the table expression cannot be remote or a subquery
    733/9 PL/SQL: SQL Statement ignored
    841/16 PLS-00454: with a returning into clause, the table expression cannot be remote or a subquery
    841/9 PL/SQL: SQL Statement ignored
    The form is then listed in my applications list, but if I try and run it I get the following error:
    Error: An unexpected error occurred: ORA-04063: has errors
    ORA-04063: package body "PORTAL.FORM_0305092506" has errors
    ORA-06508: PL/SQL: could not find program unit being called (WWV-16016)
    Thanks for any help....
    Christina
    null

  • DB Performance problem

    Hi Friends,
    We are experiencing performance problem with our oracle applications/database.
    I run the OEM and I got the following report charts:
    http://farm3.static.flickr.com/2447/3613769336_1b142c9dd.jpg?v=0
    http://farm4.static.flickr.com/3411/3612950303_1f83a9f20.jpg?v=0
    Are there any clues that these charts can give re: performance problem?
    What other OEM charts can help diagnose the performance problem?
    Thanks a lot in advance

    ytterp2009 wrote:
    Hi Charles,
    This is the output of:
    SELECT
    SUBSTR(NAME,1,30) NAME,
    SUBSTR(VALUE,1,40) VALUE
    FROM
    V$PARAMETER
    ORDER BY
    UPPER(NAME);
    (snip)
    Are there parameters that need tuning?
    Thanks
    Thanks for posting the output of the SQL statement. The output answers several potential questions (note to other readers: shift the values in the SQL statement's output down by one row).
    Parameters which I found to be interesting:
    control_files                 C:\ORACLE\PRODUCT\10.2.0\ORADATA\BQDB1\C
    cpu_count                     2
    db_block_buffers              995,648 = 8,156,348,416 bytes = 7.6 GB
    db_block_size                 8192
    db_cache_advice               on
    db_file_multiblock_read_count 16
    hash_area_size                131,072
    log_buffer                    7,024,640
    open_cursors                  300
    pga_aggregate_target          2.68435E+12 = 2,684,350,000,000 = 2,500 GB
    processes                     950
    sessions                      1,200
    session_cached_cursors        20
    shared_pool_size              570,425,344
    sga_max_size                  8,749,318,144
    sga_target                    0
    sort_area_retained_size       0
    sort_area_size                65536
    use_indirect_data_buffers     TRUE
    workarea_size_policy          AUTO
    From the above, the server is running on Windows, and based on the value for use_indirect_data_buffers is running a 32 bit version of Windows using a windowing technique to access memory (database buffer cache only) beyond the 4GB upper limit for 32 bit applications. By default, 32 bit Windows limits each process to a maximum of 2GB of memory utilization. This 2GB limit may be raised to 3GB through a change in the Windows configuration, but a certain amount of the lower 4GB region (specifically in the upper 2GB of that region) must be used for the windowing technique to access the upper memory (the default might be 1GB of memory, but verify with Metalink).
    By default on Windows, each session connecting to the database requires 1MB of server memory for the initial connection (this may be decreased, see Metalink), and with SESSIONS set at 1,200, 1.2GB of the lower 2GB (or 3GB) memory region would be consumed just to let the sessions connect, before any processing is performed by the sessions.
    The shared pool is potentially consuming another 544MB (0.531GB) of the lower 2GB (or 3GB) memory region, and the log buffer is consuming another 6.7MB of memory.
    Just with the combination of the memory required per thread for each session, the memory for the shared pool, and the memory for the log buffer, the server is very close to the 2GB memory limit before the clients have performed any real work.
    Note that the workarea_size_policy is set to AUTO, so as long as that parameter is not adjusted at the session level, the sort_area_size and sort_area_retained_size have no impact. However, the 2,500 GB specification (very likely an error) for the pga_aggregate_target is definitely a problem as the memory must come from the lower 2GB (or 3GB) memory region.
    If I recall correctly, a couple years ago Dell performed testing with 32 bit servers using memory windowing to utilize memory above the 4GB limit. Their tests found that the server must have roughly 12GB of memory to match (or possibly exceed) the performance of a server with just 4GB of memory which was not using memory windowing. Enabling memory windowing and placing the database buffer cache into the memory above the 4GB limit has a negative performance impact - Dell found that once 12GB of memory was available in the server, performance recovered to the point that it was just as good as if the server had only 4GB of memory. You might reconsider whether or not to continue using the memory above the 4GB limit.
    db_file_multiblock_read_count is set to 16 - on Oracle 10.2.0.1 and above this parameter should be left unset, allowing Oracle to automatically configure the parameter (it will likely be set to achieve 1MB multi-block reads with a value of 128).
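
    A sketch of the parameter change described above (10.2+ syntax; the auto-tuned value takes effect once the explicit setting is removed from the spfile and the instance restarted):

```sql
-- Remove the explicit setting so Oracle auto-tunes the read count.
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE = SPFILE SID = '*';
```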
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Connection to the remote database

    I met a problem connecting to a remote database. The version of Oracle8i I am using is 8.1.5. I tried to create a database link to a remote database. The Oracle8i (8.1.5) documentation gives this example:
    CREATE DATABASE LINK sales.hq.acme.com
    CONNECT TO scott IDENTIFIED BY tiger
    USING 'sales';
    SELECT * FROM [email protected];
    When I did the same thing, as follows:
    DROP DATABASE LINK drugnet.cohpa.ucf.edu;
    CREATE DATABASE LINK drugnet.cohpa.ucf.edu
    CONNECT to userid identified by mypassword
    using 'drugnet';
    (or using 'drugnet.cohpa.ucf.edu')
    select * from [email protected];
    the db link was created, but when I ran the query I got an error:
    ERROR at line 1:
    ORA-02085: database link DRUGNET.COHPA.UCF.EDU connects to DRUGNET
    This message means the name of the db link should be the same as the name of the database to which it connects, but that is exactly what I did, so the message really confused me.
    Somebody suggested the following way, but it didn't work either:
    create database link myName
    connect to myUser identified by mypassword
    using 'service_name';
    Can anybody give me some clues?
    Thanks a lot
    Bing
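
    ORA-02085 is governed by the global_names parameter: when it is TRUE, the link name must match the remote database's global name. A sketch of how to check, using the names from the post (SHOW PARAMETER is SQL*Plus syntax):

```sql
SHOW PARAMETER global_names

-- What does the remote database call itself?
SELECT * FROM global_name@drugnet.cohpa.ucf.edu;

-- Either recreate the link with a name matching that value, or
-- (if your site's policy allows) relax the check for the session:
ALTER SESSION SET global_names = FALSE;
```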


  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding a thread, "Interactive report performance problem over database link", that was posted by Samo.
    The issue that I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by 45 seconds.
    The query looks like this (due to sensitivity issues, I cannot disclose real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-oracle) which are connected using Oracle Gateway.
    Now if I run the above query without the apex_item.checkbox function, the response is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have resolved the issue by creating a collection, but it’s not a good practice.
    I would like to get ideas from people how to resolve or speed-up the query?
    Any idea how to use sub-factoring for the above scenario? Or others method (creating view or materialized view are not an option).
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me if both tables are from the same remote source, it looks like they're possibly not?), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. Now in most circumstances this 'materialises' the results of the inner select statement. This means that we 'get' the results then do something with them afterwards. It's a handy trick when dealing with remote sites as sometimes you want the remote database to do the work. The reason that I ask you to use the MATERIALIZE hint for testing is just to force this, in 99.99% of cases this can be removed later. Using the WITH statement is also handled differently to inline view like SELECT * FROM (SELECT... but the same result can be mimicked with a NO_MERGE hint.
    Looking at your case I would be interested to see what the explain plans and results would be for something like the following two statements (sorry - you're going to have to check them, it's late!)
    WITH a AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_one),
    b AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_two),
    sourceqry AS
      (SELECT  b.col3 x
             , a.col1 y
             , a.col2 z
       FROM a
           , b
       WHERE a.col3 = 12345
       AND   a.col4 = 100
       AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1, x), y, z
    FROM sourceqry;

    WITH a AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_one),
    b AS
      (SELECT /*+ MATERIALIZE */ *
       FROM table_two)
    SELECT  apex_item.checkbox(1, b.col3), a.col1, a.col2
    FROM a
        , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5;

    If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results, but different from the original query.
    We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This hinders tuning normally, but I don't think this is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times - in APEX, SQL Developer, Toad, SQL*Plus etc.?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Interactive report performance problem over database link

    Hi gurus,
    I have an interactive report that retrieves values from two joined tables, both placed on the same remote database. It takes 45 seconds to populate (refresh) a page after issuing a search. If I catch the actual select that is generated by apex (the one with count(*) over ()...) and run it from a sql environment, it takes 1 or 2 seconds. What on earth consumes the rest of the time, and how can I speed this up? I would like to avoid creating and maintaining local materialized views if possible.
    Regards
    Samo

    Hi
    APEX normally needs to return the full result set for the purposes of pagination (where you have it set to something along the lines of show x-y of z), changing this or the max row count can affect performance greatly.
    The driving site hint would need to refer to the name of the view, but can be a bit temperamental with this kind of thing. The materialize hint only works for sub-factored queries (in a 'WITH blah AS (SELECT /*+ MATERIALIZE */ * FROM etc. etc.)'). They tend to materialize anyway and it's best not to use hints like that for production code unless there is absolutely no other option, but just sub-factoring without hints can often have a profound effect on performance. For instance:
    WITH a AS
      (SELECT c1,
              c2,
              c3,
              c4,
              c5
       FROM schema1.view1)
    , b AS
      (SELECT c1,
              c2,
              c3
       FROM schema1.view2)
    SELECT *
    FROM a, b
    WHERE a.c5 = b.c3
    may produce a different plan, or even just sub-factoring one of the external tables may have an effect.
    You need to try things out here, experiment, and keep looking at the execution plans. You should also change Toad's row fetch setting to all 9's to get a real idea of performance.
    Let me know how you get on.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • RMAN duplicate target database from active database - performance problem

    Hello. I’m running into a major performance problem when trying to duplicate a database from a target located inside our firewall to an auxiliary located outside our firewall. Both target and auxiliary are located in the same equipment room just on different subnets. Previously I had the auxiliary located on the same subnet as the target behind the firewall and duplicating a 4.5T database took 12 hours. Now with the auxiliary moved outside the firewall attempting to duplicate the same 4.5T database is estimated to exceed 35 hours. The target is a RAC instance using ASM and so is the auxiliary. Ping, tnsping, traceroutes to and from target and auxiliary all indicate no problem or latency. Any ideas on things to consider while hunting for this elusive performance decrease?
    Thanks in advance.

    It would obviously appear network related. Have you captured any network/firewall metrics? Are all components set to full duplex? Would it be possible to take the firewall down temporarily and then test the throughput? Do you encounter any latency if you were to copy a large file across the subnets?
    You may want to check V$RMAN_BACKUP_JOB_DETAILS, V$BACKUP_SYNC_IO or V$BACKUP_ASYNC_IO when the backup is running.
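
    Those views can be queried while the duplicate runs; a sketch against v$backup_async_io (column names per the Oracle reference):

```sql
-- Per-file throughput of the in-flight RMAN I/O; compare the rates seen
-- inside and outside the firewall.
SELECT sid, type, status, filename, effective_bytes_per_second
FROM   v$backup_async_io
WHERE  status = 'IN PROGRESS';
```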

  • Remote database problem with Geometry data type

    Hello!
    I'm trying to insert and update a spatial table in a remote database. The syntax looks like:
    insert into tableA@remotedb (col1, col2)
    select col1, col2 from tableA
    where col3='abc';
    This works fine with regular tables. But when I try it on spatial tables, I get this error:
    'remote operations not permitted on object tables or user-defined type columns'.
    My table contains geometry datatype and user-defined datatype. Does anyone know how to solve this problem?
    Thanks!

    I created 2 temp tables in the remote db. One contains an MDSYS.SDO_GEOMETRY column without a user-defined column; the other has a user-defined column without MDSYS.SDO_GEOMETRY. Both got the same error: ORA-22804: remote operations not permitted on object tables or user-defined type columns.
    But strangely, when I tried to insert into a local table from a remote table with an MDSYS.SDO_GEOMETRY column, it worked! I tried the same thing with the user-defined column, but it didn't work.
    I wonder why inserting an sdo_geometry column from the local db into the remote db didn't work, but it worked the other way round. And inserting a user-defined column between databases didn't work at all!
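
    Given that pulling object types over a link works while pushing them fails with ORA-22804, one workaround is to run the statement on the remote database as a pull. A sketch, assuming a link (here hypothetically named srclink) exists from the remote database back to the source:

```sql
-- Run this ON the remote database: SELECTing object-type columns over a
-- link is permitted; INSERTing into a remote object table is not.
INSERT INTO tableA (col1, col2)
SELECT col1, col2
FROM   tableA@srclink
WHERE  col3 = 'abc';
```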

  • Problem connecting SQL Developer to a remote database on Mac OS X Snow Leopard

    Hi everyone, sorry for my poor English; it is not my first language.
    I'm trying to connect SQL Developer to a remote database and it does not work, showing this error: The Network Adapter could not establish the connection.
    Before running SQL Developer I installed the Oracle Instant Client and SQL*Plus. I use the same tnsnames.ora file as on my Windows machine, and SQL*Plus (on Snow Leopard) connects perfectly.
    I set the path to my tnsnames.ora in SQL Developer's setup, but I can't get it to work.
    Can anyone help me please? Thanks a lot.

    Duplicate thread Problem running Sql developer in Mac OSX Snow Leopard.
