Dedicated database & cursor_sharing

I have an Oracle 11gR2 database on AIX which is running in dedicated server mode.
Based on the application documentation, I set the cursor_sharing parameter to EXACT.
But I wonder: can there be any cursor sharing at all if a database runs in dedicated mode?

PrafullaNath wrote:
Oracle says we should keep cursor_sharing=similar. Could you point to any documentation that says this?
EXACT is the default value:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10820/initparams041.htm#REFRN10025
Using any value other than the default requires thorough application testing, since non-default settings have caused a lot of trouble in the past (wrong results were returned).
Nicolas.
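For reference, a quick way to verify and, if needed, set the parameter; this is a minimal sketch, and SCOPE=BOTH assumes the instance uses an spfile:
SQL> show parameter cursor_sharing
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
cursor_sharing                       string      EXACT
SQL> --// cursor_sharing is dynamic, so no restart is needed
SQL> alter system set cursor_sharing = 'EXACT' scope = both;
System altered.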

Similar Messages

  • Dedicated database session?

Is there a dedicated database connection for a given OBIEE user, or are connections shared between users? If they are shared, is there any way to make a user have a dedicated database connection? Is this possible in any way?
    Swapan.

No. Users connect to the BI server; from there, the BI server connects to the source databases or retrieves data from the cache server.

  • HTTP post data from the Oracle database to another web server

    Hi ,
I have searched the forum and the net on this, and yes, I have followed the links
    http://awads.net/wp/2005/11/30/http-post-from-inside-oracle/
    http://manib.wordpress.com/2007/12/03/utl_http/
and Eddie Awad's blog on the same topic. I was successful in calling the servlet, but I keep getting errors.
I am using Oracle 10g, and my servlet is part of an ADF BC JSF application.
My requirement is that I have a BLOB table in another DB, and our Oracle Forms application, based on yet another DB, has to view the documents. Viewing BLOBs over db links is not possible. So option 1 is to call a procedure, passing the doc_blob_id parameter, and call the web server passing the parameters.
The errors I am getting are:
1. The parameters passed returned null.
2. Since my servlet directly downloads the document on the response OutputStream, it gives this error:
'com.evermind.server.http.HttpIOException: An established connection was aborted by the software in your host machine'
Any help please. I am running out of time.
    Thanks

    user10264958 wrote:
My requirement is that I have a BLOB table in another DB and our Oracle Forms application based on another DB has to view the documents. Viewing BLOBs over db links is not possible.
Incorrect. You can use remote LOBs via a database link. However, you cannot use a local LOB variable (called a LOB locator) to reference a remote LOB. A LOB variable/locator is a pointer, and that pointer cannot reference a LOB that resides on a remote server. So simply do not use a LOB variable locally, as it cannot reference a remote LOB.
Instead, provide a remote interface that can deal with that LOB remotely, dereference that pointer on the remote system, and pass the actual contents being pointed at to the local database.
The following demonstrates the basic approach. How one designs and implements the actual remote interface needs to be decided with existing requirements taken into consideration. I simply used a very basic wrapper function.
    SQL> --// we create a database link to our own database as it is easier for demonstration purposes
    SQL> create database link remote_db connect to scott identified by tiger using
      2  '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1521))(CONNECT_DATA=(SID=dev)(SERVER=dedicated)))';
    Database link created.
    SQL> --// we create a table with a CLOB that we will access via this db link
    SQL> create table xml_files( file_id number, xml_file clob );
    Table created.
    SQL> insert into xml_files values( 1, '<root><text>What do you want, universe?</text></root>' );
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> --// a local select against the table works fine
    SQL> select x.*, length(xml_file) as "SIZE" from xml_files x;
       FILE_ID XML_FILE                                                                                SIZE
             1 <root><text>What do you want, universe?</text></root>                                    53
    SQL> --// a remote select against the table fails as we cannot use remote pointers/locators
    SQL> select * from xml_files@remote_db x;
    ERROR:
    ORA-22992: cannot use LOB locators selected from remote tables
    no rows selected
SQL> --// we create an interface on the remote db to deal with the pointer for us
    SQL> create or replace function ReturnXMLFile( fileID number, offset integer, amount integer ) return varchar2 is
      2          buffer  varchar2(32767);
      3  begin
      4          select
      5                  DBMS_LOB.SubStr( x.xml_file, amount, offset )
      6                          into
      7                  buffer
      8          from    xml_files x
      9          where   x.file_id = fileID;
    10 
    11          return( buffer );
    12  end;
    13  /
    Function created.
    SQL> --// we now can access the contents of the remote LOB (only in 4000 char chunks using this example)
    SQL> select
      2          file_id,
      3          ReturnXMLFile@remote_db( x.file_id, 1, 4000 ) as "Chunk_1"
      4  from       xml_files@remote_db x;
       FILE_ID Chunk_1
             1 <root><text>What do you want, universe?</text></root>
    SQL> --// we can also copy the entire remote LOB across into a local LOB and use the local one
    SQL> declare
      2          c               clob;
      3          pos             integer;
      4          iterations      integer;
      5          buf             varchar2(20);   --// small buffer for demonstration purposes only
      6  begin
      7          DBMS_LOB.CreateTemporary( c, true );
      8 
      9          pos := 1;
    10          iterations := 1;
    11          loop
    12                  buf := ReturnXMLFile@remote_db( 1, pos, 20 );
    13                  exit when buf is null;
    14                  pos := pos + length(buf);
    15                  iterations := iterations + 1;
    16                  DBMS_LOB.WriteAppend( c, length(buf), buf );
    17          end loop;
    18 
    19          DBMS_OUTPUT.put_line( 'Copied '||length(c)||' byte(s) from remote LOB' );
    20          DBMS_OUTPUT.put_line( 'Read Iterations: '||iterations );
    21          DBMS_OUTPUT.put_line( 'LOB contents (1-4000):'|| DBMS_LOB.SubStr(c,4000,1) );
    22 
    23          DBMS_LOB.FreeTemporary( c );
    24  end;
    25  /
    Copied 53 byte(s) from remote LOB
    Read Iterations: 4
    LOB contents (1-4000):<root><text>What do you want, universe?</text></root>
    PL/SQL procedure successfully completed.
The concern is the size of the LOB. It does not always make sense to access the entire LOB in the database. What if that LOB is 100GB in size? Irrespective of how you do it, selecting that LOB column from that table will require 100GB of data to be transferred from the database to your client.
So you need to decide WHY you want the LOB on the client (which will be the local PL/SQL code in the case of a LOB on a remote database). Do you need the entire LOB? Do you need a specific piece of it? Do you need the database to first parse that LOB into a more structured data struct and then pass specific information from that struct to you? Etc.
The bottom line, however, is that you can use remote LOBs. It is simply that you cannot use a local pointer variable to point at and dereference a remote LOB.
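Following the same pattern as the answer above, a small remote helper can report the LOB's length first, so the local code can decide whether fetching it makes sense at all. This is only a sketch; ReturnXMLFileLength is a hypothetical name in the spirit of the wrapper above, and DBMS_LOB.GetLength runs on the remote side, where the locator is valid:
SQL> --// remote helper: dereference the locator where it is valid and return only the length
SQL> create or replace function ReturnXMLFileLength( fileID number ) return integer is
  2          len     integer;
  3  begin
  4          select  DBMS_LOB.GetLength( x.xml_file )
  5          into    len
  6          from    xml_files x
  7          where   x.file_id = fileID;
  8          return( len );
  9  end;
 10  /
Function created.
SQL> --// called over the db link before deciding how much (or whether) to fetch
SQL> select ReturnXMLFileLength@remote_db( 1 ) as "LOB_LENGTH" from dual;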

  • Is transparent gateway needed to connect to IS Cache database?

    Hello,
I have been asked by one of our developers how to create a connection from his Oracle 11.2 database to SQL Server and also to InterSystems Caché databases without having to use a Transparent Gateway.
    Is this possible, and if so, how can it be done?
    (thanks in advance)

    Hi,
You say "I guess it means that Oracle Database Gateway is FREE (per se)", but to make the point again: only the Database Gateway for ODBC (DG4ODBC) is free. The other gateways need a license.
If you run on Windows, then many of the Microsoft ODBC drivers are free or included as part of other products, so you do not need to pay for them; also, many non-Oracle database providers include an ODBC driver as part of the product (MySQL, for example), so again you do not need to pay for anything else to use DG4ODBC.
To interface with a Caché database you can use DG4ODBC, but you need to provide the ODBC driver. There is no 'dedicated' database gateway for that product. That is why we provide DG4ODBC, which can interface with any non-Oracle database or datastore for which a compatible ODBC driver is available.
    This note available on My Oracle Support has information about the install notes on various platforms -
    Note.1083703.1 Master Note for Oracle Gateway Products
    Regards,
    Mike
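For reference, a DG4ODBC setup typically has three configuration pieces plus a database link. The sketch below uses placeholder names throughout (the DSN cache_dsn, gateway SID DG4CACHE, host, port, ORACLE_HOME path, and credentials are all assumptions to adapt):
# initDG4CACHE.ora (in $ORACLE_HOME/hs/admin) -- points the gateway at the ODBC DSN
HS_FDS_CONNECT_INFO = cache_dsn
HS_FDS_TRACE_LEVEL = off
# listener.ora -- a SID entry that runs the dg4odbc executable
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DG4CACHE)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0)
      (PROGRAM = dg4odbc)
    )
  )
# tnsnames.ora -- HS=OK marks the entry as a heterogeneous service
DG4CACHE =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))
    (CONNECT_DATA=(SID=DG4CACHE))
    (HS=OK)
  )
Then, in the Oracle database:
SQL> create database link cache_db connect to "odbc_user" identified by "odbc_pwd" using 'DG4CACHE';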

  • Server cleanup wizard problem - unable to connect to the WSUS Server Database.

I'm trying to run the Server Cleanup Wizard. It starts to run and then after a while gives me this error:
    The WSUS administration console was unable to connect to the WSUS Server Database.
    Verify that SQL server is running on the WSUS Server. If the problem persists, try restarting SQL.
    System.Data.SqlClient.SqlException -- Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
    The statement has been terminated.
    Source
    .Net SqlClient Data Provider
    Stack Trace:
       at Microsoft.UpdateServices.Internal.BaseApi.SoapExceptionProcessor.DeserializeAndThrow(SoapException soapException)
       at Microsoft.UpdateServices.Internal.DatabaseAccess.AdminDataAccessProxy.ExecuteSPSearchUpdates(String updateScopeXml, String preferredCulture, ExtendedPublicationState publicationState)
       at Microsoft.UpdateServices.Internal.BaseApi.Update.SearchUpdates(UpdateScope searchScope, ExtendedPublicationState publicationState, UpdateServer updateServer)
       at Microsoft.UpdateServices.Internal.BaseApi.UpdateServer.GetUpdates(UpdateScope searchScope)
       at Microsoft.UpdateServices.UI.AdminApiAccess.UpdateManager.GetUpdates(ExtendedUpdateScope filter)
       at Microsoft.UpdateServices.UI.AdminApiAccess.WsusSynchronizationInfo.InitializeDerivedProperties()
       at Microsoft.UpdateServices.UI.AdminApiAccess.WsusSynchronizationInfo.get_NewUpdatesCount()
       at Microsoft.UpdateServices.UI.SnapIn.Pages.SyncResultsListPage.GetSyncInfoRow(WsusSynchronizationInfo syncInfo)
       at Microsoft.UpdateServices.UI.SnapIn.Pages.SyncResultsListPage.GetListRows()
    Thanks

     Some questions:
    Are there any other databases running on this Std Edition SQL service?
    [a] Yes there are, we have Kaspersky enterprise DB, Report Server DB and local application DB.
    Are there any other services running on this WSUS Server?
    [b] Yes there are, we have Active Directory, Kaspersky enterprise, SQL Server 2005, and WSUS all on the same server.
    How many days since your WSUS server was first installed?
    [c] It's been about a year now.
    What is the physical size of the SUSDB.mdf file?
    [d] 9,666,752 KB
    What is the hardware configuration of this machine, including disk drives?
    [e] Intel Xeon 1.86, 2GB Ram, HD C: 39GB - E: 25.2, running Windows Server 2003 R2 SP2.
    How many client systems are you servicing from this WSUS Server?
    [f] Around 40.
What products/classifications are you synchronizing?
    [g] Windows XP-vista, Windows Server 2003, Office 2003-2007, SQL Server 2005.
Okay, for starters, you have an underpowered/overextended machine running Active Directory, ASP.NET, and a database server, all on a sub-2GHz CPU with 2GB RAM and not enough disk spindles. The machine has had WSUS running for about a year, and the WSUS database has grown to 9GB.
    There's no doubt in my mind that some of your performance issues are directly related to disk and database fragmentation.
    There's also no doubt that some of your performance issues are directly related to memory starvation.
    I'd suggest the following long-term fixes:
    1. Get a second machine. Make it a dedicated database server. Provision it accordingly to support servicing multiple database applications.
2. Lacking #1, this machine needs more memory. It also needs more disk spindles. At a minimum, the databases being serviced should be on a dedicated physical drive; ideally there would be two dedicated drives allocated to supporting database services.
    For the short-term fixes, do this:
1. During after-hours time, if you don't already have one, build a temporary machine that can act as a DC/GC while you take this machine temporarily offline.
2. Shut down the Update Services service, the SQL Server database engine, and any other services dependent on the SQL Server database engine (Kaspersky and other reporting applications). Disconnect from the network to temporarily eliminate DC traffic. (You could also shut down the AD services, but disconnecting the network cable is ever-so-much easier.) Defragment ALL drives.
3. Restart ONLY the SQL Server service. Obtain the SQL script to reindex the WSUS database; a rough sketch of the idea follows this list.
4. Restart ONLY the Update Services service. Attempt the Server Cleanup Wizard again. Run it in two passes: pass 1 performing everything except removing unused updates, and pass 2 running only the removal of unused updates.
5. After completion of the Server Cleanup Wizard, reconnect the machine to the network and resume all other services.
6. If you're able to complete step 4, secure the services of a well-qualified DBA to determine whether there are any misconfigurations in your SQL Server setup that would account for why your WSUS database is 9GB in size, such as improperly configured autogrowth parameters. Based on the products you're updating and only forty clients, 9GB is about 3x the maximum size I would expect to see in a WSUS database. It's possible this is simply caused by excess unused updates; it's possible it's caused by fragmentation; it's probable it was caused by unnecessary autogrowth of the database due to insufficient update maintenance. You'll want a DBA to assist you in shrinking that database after you successfully run the database maintenance and the Server Cleanup Wizard.
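For the reindex step above, the core idea looks roughly like the following T-SQL. This is only a sketch, not Microsoft's official WSUS reindex script; SUSDB is the standard WSUS database name, and sp_MSforeachtable is an undocumented but commonly used helper:
USE SUSDB;
GO
-- rebuild every index in the database to remove logical fragmentation
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';
GO
-- then refresh optimizer statistics
EXEC sp_updatestats;
GO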
    Lawrence Garvin, M.S., MCITP(x2), MCTS(x5), MCP(x7), MCBMSP
    Principal/CTO, Onsite Technology Solutions, Houston, Texas
    Microsoft MVP - Software Distribution (2005-2009)

Running Oracle APEX on a different server from the main production database

    Hi,
I'm new to APEX. I have a large prod DB, 20TB+. We want some sort of simple front end to our data warehouse, and we would like to use APEX.
However, we do not want to install the APEX schemas etc. in our production database.
What we want to do is install the APEX binaries and a dedicated database for the APEX schemas on a separate server, then use this installation to run UIs for our main production database.
Is this sort of configuration possible?
    Thanks and regards,
    Pritesh
    Edited by: jainprit on Nov 7, 2010 10:29 PM

    Well yes, you can use database links to access your DWH tables/views from your APEX Database.
    Search the forum, there should be plenty of threads dealing with database links and APEX.
    brgds,
    Peter
    Blog: http://www.oracle-and-apex.com
    ApexLib: http://apexlib.oracleapex.info
    BuilderPlugin: http://builderplugin.oracleapex.info
    Work: http://www.click-click.at
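A minimal sketch of the pattern Peter describes, with placeholder names (dwh_link, DWHPROD, dwh_ro, and sales_facts are all invented for the example): create a link from the APEX database to the warehouse and wrap the remote table in a local view, so APEX reports can treat it like a local object.
SQL> --// link from the APEX database to the production DWH
SQL> create database link dwh_link connect to dwh_ro identified by secret using 'DWHPROD';
SQL> --// a local view hides the link from the APEX application
SQL> create or replace view sales_facts_v as select * from sales_facts@dwh_link;
Keep in mind that every report then pulls its rows across the link, so restrictive WHERE clauses (or materialized views refreshed over the link) matter for a 20TB+ warehouse.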

  • Database file sizes

    Hi All,
Is there any specific guideline on the size of data files in MSSQL?
The best-practices documents say you can keep the number of data files equal to the number of processors. When installing the SAP system, it creates 3 data files by default. In production systems, these 3 files are currently very large. So is it a good option to restrict the growth of these files, add another 3 new data files, and allow those to grow?
    regards,
    dev

    Hi dev,
there's a whitepaper published on Juergen Thomas' blog (http://blogs.msdn.com/b/saponsqlserver/archive/2009/06/24/new-sap-on-sql-server-2008-whitepaper-released.aspx) that states the following:
    - Small sized systems, where 4 data files should be fine. These systems usually run on dedicated database servers that have around 4 cores.
    - Medium sized systems, where at least 8 data files are required. These systems usually run on dedicated database servers that have between 8 and 16 CPU cores.
    - Large sized systems where a minimum of 16 data files are required. These are usually systems that run on hardware that has between 16 and 32 CPU cores.
- Xtra-large sized systems. Upcoming hardware over the next few years will certainly support up to 256 CPU cores. However, we don't necessarily expect a lot of customers to deploy this kind of hardware for one dedicated database server servicing one database of an SAP application. For XL systems we recommend 32 to 64 data files.
For more information check out the whitepaper (the link currently returns a 404, which should be fixed soon).
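Mechanically, the change described in the question would look roughly like this in T-SQL; the database name, file names, path, and sizes are placeholders, and actual values should follow the whitepaper's sizing advice:
-- cap the growth of an existing, already-large data file
ALTER DATABASE PRD MODIFY FILE (NAME = PRDDATA1, FILEGROWTH = 0);
-- add a new data file; keeping files equally sized helps SQL Server's proportional-fill algorithm
ALTER DATABASE PRD ADD FILE
  (NAME = PRDDATA4, FILENAME = 'E:\PRDDATA4.ndf', SIZE = 50GB, FILEGROWTH = 2GB);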

  • HP Systems Insight Manager - Database questions, what is this database for?

    Hello,
I am either missing this information completely in the installation guide or it doesn't exist, but I am looking for details of the database that HP SIM needs to install.
- I assume this database will be used as a CMS?
- What information will this database be storing?
- If I have multiple HP servers, is it best to put this database on a remote dedicated database server? Will each HP machine that has SIM installed point to this database?
    Thanks.

    Hello,
New to this site, so here we go with an issue on HP Systems Insight Manager v7.4.0.
Systems Insight Manager (the SIM server) receives alerts from a second server when that server restarts, but no other alerts, although it should be receiving all alerts.
When looking at the management agent on the second server, I try to send a test trap; although it is sent, it is not received on the SIM server.
I do get alerts on the SIM server (and receive an email, which has been set up to be sent for all alerts) when the second server is restarted.
As I have set all alerts to be logged, I should be getting spammed by the second server, but it's all very quiet.
Any ideas, anyone? I am stumped.

  • What is Minimum database requirement for EBS R12

What is the minimum database requirement for EBS R12? For example, does it work only with Enterprise Edition, or also with Standard Edition?
    Regards,
    Sandeep V

The question is very interesting and very important. The link does not answer whether Standard Edition can be used. Obviously, Rapid Install installs an EE database. But we can move the database to another dedicated database server.
And the question becomes: can we move it to a Standard Edition database server? Huge ramifications on costs. Extremely important. Nowhere could I find a positive answer. I wonder if anyone has found one? AFAIK, it is not supported.
My own research shows that the VIS database (built at home) has partitioned tables in the APPLSYS and APPS schemas, and partitioning is not supported in Standard Edition. Correct.
I wonder if we can migrate the database and manually convert the several partitioned tables to non-partitioned ones in Standard Edition. Will EBS R12 break? Will Oracle Support cancel its support? Oracle will not support this; please log an SR to confirm this with Oracle Support.
    Thanks,
    Hussein
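To check the partitioning point from this thread yourself, a query like the following against an EBS database should do; dba_part_tables is the standard dictionary view listing partitioned tables:
SQL> select owner, table_name
  2  from   dba_part_tables
  3  where  owner in ('APPLSYS', 'APPS')
  4  order  by owner, table_name;
If this returns rows, the database is using a feature that Standard Edition does not license.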

  • Cursor_sharing=similar or cursor_sharing=exact

Hi,
I have a doubt regarding setting the cursor_sharing parameter to EXACT or SIMILAR at session level.
On my database, cursor_sharing is set to SIMILAR at system level.
    test >show parameter cursor
    NAME                                 TYPE                             VALUE
cursor_sharing                       string                           SIMILAR
I have fired a simple select statement without setting any cursor_sharing at session level:
    TEST >variable b1 number;
    TEST >exec :b1:=7499;
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.00
    TEST >select empno,job from emp where job='SALESMAN' and empno=:b1;
         EMPNO JOB
      7499 SALESMAN
Checking the hash value for the query fired:
    test >select sql_text,invalidations,hash_value,executions,loads from v$sqlarea
    16:14:50   2  where sql_text like '%select empno,job from%';
    SQL_TEXT
    INVALIDATIONS HASH_VALUE EXECUTIONS      LOADS
    select empno,job from emp where job=:"SYS_B_0" and empno=:b1
            0 3727168047          1          1
The literal job='SALESMAN' is converted into the system-generated bind variable job=:"SYS_B_0" because my cursor_sharing=similar.
I fired the same statement after setting cursor_sharing=exact at session level:
    TEST >alter session set cursor_sharing=exact;
    Session altered.
    Elapsed: 00:00:00.00
16:15:25 TEST >select empno,job from emp where job='SALESMAN' and empno=:b1;
Checking the hash value for the newly fired query with cursor_sharing=exact:
    SQL_TEXT
    INVALIDATIONS HASH_VALUE EXECUTIONS      LOADS
    select empno,job from emp where job='SALESMAN' and empno=:b1
                0 2065003705          1          1
    select empno,job from emp where job=:"SYS_B_0" and empno=:b1
            0 3727168047          1          1
The literal job='SALESMAN' is not converted into a bind variable because my cursor_sharing=exact.
In the same session, I fired the same query after setting cursor_sharing=similar, to check which hash value would be shared.
    16:15:28 TEST >alter session set cursor_sharing=similar;
    Session altered.
    Elapsed: 00:00:04.09
    17:27:54 TEST >select empno,job from emp where job='SALESMAN' and empno=:b1;
         EMPNO JOB
          7499 SALESMAN
    16:28:26 test >select sql_text,invalidations,hash_value,executions,loads from v$sqlarea
    17:28:13   2  where sql_text like '%select empno,job from%';
    SQL_TEXT
    INVALIDATIONS HASH_VALUE EXECUTIONS      LOADS
    select empno,job from emp where job='SALESMAN' and empno=:b1
            0 2065003705          2          2
    select empno,job from emp where job=:"SYS_B_0" and empno=:b1
            0 3727168047          1          1
The hash value 2065003705 (cursor_sharing=exact) is shared, as its executions column changed from 1 to 2.
So after setting cursor_sharing=similar, why is the hash value 3727168047 (cursor_sharing=similar) not shared? I guess something is cached at session level, but I want to know the exact reason.
Then I flushed the shared pool:
    test >alter system flush shared_pool;
    System altered.
    Elapsed: 00:00:03.09
    17:39:40 test >select sql_text,invalidations,hash_value,executions,loads from v$sqlarea
    17:39:44   2  where sql_text like '%select empno,job from%';
    SQL_TEXT
    INVALIDATIONS HASH_VALUE EXECUTIONS      LOADS
    select empno,job from emp where job='SALESMAN' and empno=:b1
            0 2065003705          0          2
The hash value 3727168047 (cursor_sharing=similar) is removed, but not hash value 2065003705.
What is the reason behind that?
    Regards,
    Meeran

    Meeran wrote:
The hash value 2065003705 (cursor_sharing=exact) is shared as the executions column is changed from 1 to 2.
So after setting cursor_sharing=similar, why is the hash value 3727168047 (cursor_sharing=similar) not shared? I guess something is cached at session level, but I want to know the exact reason.
Because there is already a query in the shared pool with the same literal value, Oracle doesn't have to take the hash 3727168047 and substitute the bind; it already has a plan for that exact statement text, which is 2065003705.
I think that with CURSOR_SHARING=similar set again, if you try the query with job='ANALYST' then it will use plan 3727168047 and substitute the bind with 'ANALYST' (see the sketch after this reply).
Again I flushed the shared pool
    test >alter system flush shared_pool;
    System altered.
    Elapsed: 00:00:03.09
    17:39:40 test >select sql_text,invalidations,hash_value,executions,loads from v$sqlarea
    17:39:44   2  where sql_text like '%select empno,job from%';
    SQL_TEXT
    INVALIDATIONS HASH_VALUE EXECUTIONS      LOADS
select empno,job from emp where job='SALESMAN' and empno=:b1
            0 2065003705          0          2
The hash value 3727168047 (cursor_sharing=similar) is removed, not hash value 2065003705. What is the reason behind that?
If you have noticed, the executions for hash 2065003705 are 0 after the shared-pool flush. It seems that with flush shared_pool, Oracle doesn't flush the queries with literals and just resets their stats, as literals are always the culprits.
You may also want to read this; it is a good document on CURSOR_SHARING.
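The prediction above is easy to test in the same session; a sketch (prompts and output are illustrative only):
TEST >alter session set cursor_sharing=similar;
Session altered.
TEST >select empno,job from emp where job='ANALYST' and empno=:b1;
If the explanation is right, v$sqlarea should then show executions = 2 for hash value 3727168047, because 'ANALYST' is replaced by :"SYS_B_0" and the rewritten text matches the existing cursor.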

  • PL/SQL reuse in ADF

Is disabling application module pooling good practice for intranet applications where the number of users is known in advance (although in the thousands)? This was suggested as a solution to limit the amount of recoding that would be required of existing PL/SQL programs that make extensive use of global temp tables, package variables, etc. By disabling pooling (and a few other tweaks) the user accesses the db with the same session, and hence database state is preserved between requests. But in my experience a lot of PL/SQL code is written to fit specific use cases (mostly in Forms) without considering the need to reuse the code in any other context (although I must admit that this can easily happen in any other language). I have repeatedly tried to explain that state is better maintained and much easier to deal with if it sits in the application, but perhaps I'm wrong. Any advice will be much appreciated.

    Hi,
it's not enough. If you want to build an ADF application that behaves like Forms, then you need to look at implementing dynamic JDBC credentials so that users have dedicated database connections (which is what Forms does). Note that using application module pooling and database connection pooling guarantees you better performance, so the question is what changes are required to the PL/SQL (global PL/SQL variables need to be removed, for sure) so that the logic can be used in ADF applications that follow best practices, which is that they run as true Java EE web applications. A sketch of the kind of change follows below.
    Frank
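To illustrate the kind of PL/SQL change Frank mentions, state that lives in package globals can be passed explicitly instead, so the logic no longer assumes one long-lived session per user. A sketch with invented names:
SQL> create or replace package pkg_report as
  2    g_dept  number;                      --// session-scoped global: breaks with pooled connections
  3    procedure run;                       --// old style: reads g_dept set by an earlier call
  4    procedure run( p_dept in number );   --// stateless overload: safe with connection pooling
  5  end pkg_report;
  6  /
The stateless overload can be called from ADF on any pooled connection, because everything it needs arrives as a parameter.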

  • PL/SQL procedure is 10x slower when running from weblogic

    Hi everyone,
we've developed a PL/SQL procedure performing reporting. The original solution was written in Java, but due to performance problems we decided to switch this particular piece to PL/SQL. Everything works fine as long as we execute the procedure from SQL Developer: the batch processing of 20000 items finishes in about 80 seconds, which is a serious improvement over the previous solution.
But once we call the very same procedure (on exactly the same data) from WebLogic, the performance seriously drops: instead of 80 seconds it suddenly runs for about 23 minutes, which is 10x slower. And we don't know why this happens :-(
We've profiled the procedure (in both environments) using DBMS_PROFILER, and we've found that when the procedure is executed from WebLogic, one of the SQL statements runs noticeably slower and consumes about 800 seconds (90% of the total run time) instead of 0.9 seconds (2% of the total run time), but we're not sure why: in both cases this query is executed 32742 times, giving 24 ms vs. 0.03 ms on average.
    The SQL is
    SELECT personId INTO v_personId FROM (            
            SELECT personId FROM PersonRelations
            WHERE extPersonId LIKE v_person_prefix || '%'
) WHERE rownum = 1;
Basically it returns the ID of a person according to some external ID (or a prefix of the ID). I do understand why this query might be a performance problem (the LIKE operator, etc.), but I don't understand why it runs quite fast when executed from SQL Developer and 10x slower when executed from WebLogic (on exactly the same data, etc.).
We're using Oracle 10gR2 with WebLogic 10, running on a separate machine; there are no other intensive tasks, so there's nothing that could interfere with the Oracle process. According to the 'top' command, the wait time is below 0.5%, so there should be no serious I/O problems. We've even checked the JDBC connection pool settings in WebLogic, but I doubt this issue is related to JDBC (and everything looks fine anyway). The statistics are fresh and the results are quite consistent.
    Edited by: user6510516 on 17.7.2009 13:46

The setup is quite simple: the database is running on a dedicated database server (development only). Generally there are no 'intensive' tasks running on this machine, especially not when the procedure I'm talking about was executed. The application server (WebLogic 10) is running on a different machine, so it does not interfere with the database (in this case it was my own workstation).
No, the procedure is not called 20000x. We have a table with a batch of records we need to process, each with a given flag (say processed=0). The procedure reads them using a cursor and processes the records one by one. By 'processing' I mean computing some sums, updating another table, etc., and finally switching the record to processed=1. I.e. the procedure looks like this:
    CREATE PROCEDURE process_records IS
        v_record records_to_process%ROWTYPE;
    BEGIN
         OPEN records_to_process;
         LOOP
              FETCH records_to_process INTO v_record;
              EXIT WHEN records_to_process%NOTFOUND;
              -- process the record (update table A, insert a record into B, delete from C, query table D ....)
              -- and finally mark the row as 'processed=1'
         END LOOP;
         CLOSE records_to_process;
END process_records;
The procedure is actually part of a package and the cursor 'records_to_process' is defined in the body. One of the queries executed in the procedure is the SELECT mentioned above (the one that jumps from 2% to 90%).
    So the only thing we actually do in Weblogic is
    CallableStatement cstmt = connection.prepareCall("{call ProcessPkg.process_records}");
cstmt.execute();
and that's it; there is only one JDBC call, so network overhead shouldn't be a problem.
There are 20000 rows we use for testing; we just update them to processed=0 (and clear some of the other tables). So each run actually uses exactly the same data and code paths and produces the very same results. Yet when executed from SQL Developer it takes 80 seconds, and when executed from WebLogic it takes 800 seconds :-(
The only difference I've just noticed is that in SQL Developer we're using PL/SQL notation, i.e. "BEGIN ProcessPkg.process_records; END;" instead of "{call }", but I guess that's irrelevant. And yet another difference: WebLogic uses the JDBC driver from 10gR2, while SQL Developer is bundled with the JDBC driver from 11g.
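One way to narrow this down is to check whether the two environments get different execution plans for the same statement. Both v$sql and DBMS_XPLAN.DISPLAY_CURSOR exist in 10gR2; the LIKE filter below is just a placeholder for locating the statement:
SQL> --// find the child cursors for the suspect statement
SQL> select sql_id, child_number, plan_hash_value, executions, elapsed_time
  2  from   v$sql
  3  where  sql_text like 'SELECT personId%';
SQL> --// then compare the plans of the interesting children
SQL> select * from table( DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child_no) );
If the plan hash values differ between the SQL Developer run and the WebLogic run, comparing v$sql_optimizer_env for the two children usually points at the session setting responsible (NLS parameters and optimizer_mode are common differences between client tools).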

  • Installation of Preferences Pane on remote Server OSX fails

On an OS X Server Mac mini which I connect to with RDC, I am unable to install any preference pane programs such as Slink or Squeeze.
I have full admin access to that machine; it's a dedicated database server, nothing exotic running on it, latest OS 10.6.4, but I get the message:
"You can't open the xxx preferences pane because it is not available to you at this time. To see this preference pane, you may need to connect a device to your computer."
There is no mouse or screen connected to this mini, RDC access only.
The preference pane icons become visible but remain greyed out.
Remote Management is activated in Server Admin.
A very similar configuration hosted elsewhere gives no problem. Same settings, same RDC access, same OS.
At this point I'm at a loss. Clues, anyone?
    TIA,
    Peter

The problem isn't the filesystem per se. It's bad software quality when pathnames are defined incorrectly or multiple times with different notation. Often there are just one or two letters with the wrong upper or lower case. I guess that is the case with the current Flash installer; with previous versions I had no problems.
On 03.11.2013 at 00:41, Mike M <[email protected]> wrote:
Re: Installation of Flash Player 11.9 under OSX fails with case-sensitive filesystem!?
created by Mike M in Installing Flash Player
My Mini isn't case sensitive, so I can't really say, but I had two iMacs when I worked at Intuit that were case sensitive, and they had all kinds of software installation issues. One of the Mac developers at Mountain View told me that case-sensitive shouldn't be chosen as a format except when a system is isolated and used as a testing server. For a workstation or home computer, it's a terrible choice of setup.

  • Global temp table trigger error on Oracle AS

We have a set of triggers that load a temp table in the before-delete/update row trigger and process the table in an after-statement trigger. The data that's loaded is a complaintid, and when the complaintid is selected into a variable in the after-statement trigger, a NO_DATA_FOUND error is fired. This only happens on Oracle AS, both 9i and 10g; it does not happen on JBoss. All the app servers use connection pooling, and they are 9i Enterprise Edition dedicated database servers.
Is it possible that there is a bug in Oracle AS that allows multiple sessions to affect the same global variables?
Sorry it's a long one, but I wanted to include everything I could.
    table creation script.
    CREATE GLOBAL TEMPORARY TABLE TEMPEVENTS
    (     COMPLAINTID VARCHAR2(20) NOT NULL ENABLE,
         COMPLAINTEVENTID NUMBER NOT NULL ENABLE,
         STARTDATE DATE,
         EVENTTYPE NUMBER,
         EVENTSUBTYPE NUMBER,
         DELETED NUMBER
    ) ON COMMIT DELETE ROWS
    Before update trigger-- as a test I saved the data in a permanent table and all columns have usable values.
    create or replace trigger BeforeUpdateReportDataROW
    BEFORE Delete or Update of deleted, startdate on complaintevents
    FOR EACH ROW
    BEGIN
    IF (DELETING AND :old.deleted = 0) OR (UPDATING AND :new.deleted=1 AND :old.startDate = :new.startDate) THEN
         TEMPDATA.v_triggerType := 'D';
    ELSIF UPDATING AND :old.deleted=1 AND :old.startDate = :new.startDate THEN /*undeleting*/
         TEMPDATA.v_triggerType := 'U';
    ELSIF UPDATING AND :old.startDate != :new.startDate THEN /*new date*/
         TEMPDATA.v_triggerType := 'S';
    ELSE
         TEMPDATA.v_triggerType := 'N';
    END IF;
    TEMPDATA.v_NumEntries := TEMPDATA.v_NumEntries + 1;
    TEMPDATA.v_complaintids(TEMPDATA.v_NumEntries) := :old.complaintid;
    TEMPDATA.v_complainteventids(TEMPDATA.v_NumEntries) := :old.complainteventid;
    END;
    After statement trigger -- the error happens on the
    SELECT complaintid
    INTO complaintid
    FROM complaintevents
    WHERE complaintid = tempdata.v_complaintids (loop_index)
    AND complainteventid = tempdata.v_complainteventids (loop_index);
statement. This is all one transaction. The complaintid is loaded from the complaintevents table; it is not a primary key, and there is more than one record in the complaintevents table for each complaintid.
create or replace trigger complaintevents_del_upd_trig
AFTER DELETE OR UPDATE OF deleted, startdate
ON complaintevents
DECLARE
     complaintid        VARCHAR2 (20);
     loop_index         NUMBER;
     hold_complaintid   VARCHAR2 (20);
BEGIN
     IF tempdata.v_triggertype = 'D' THEN /*deleting event*/
          hold_complaintid := ' ';
          FOR loop_index IN 1 .. tempdata.v_numentries
          LOOP
               SELECT complaintid
                 INTO complaintid
                 FROM complaintevents
                WHERE complaintid = tempdata.v_complaintids (loop_index)
                  AND complainteventid = tempdata.v_complainteventids (loop_index);
               IF hold_complaintid != complaintid THEN
                    INSERT INTO tempevents
                         (complaintid, complainteventid, startdate, eventtype,
                          eventsubtype, deleted)
                         SELECT complaintid, complainteventid, startdate, eventtype,
                                eventsubtype, deleted
                           FROM complaintevents
                          WHERE complaintid = tempdata.v_complaintids (loop_index);
               END IF;
               DELETE tempevents
                WHERE complainteventid = tempdata.v_complainteventids (loop_index)
                   OR deleted = 1;
               hold_complaintid := complaintid;
          END LOOP;
     ELSIF tempdata.v_triggertype = 'U' THEN /*undeleting*/
          hold_complaintid := ' ';
          FOR loop_index IN 1 .. tempdata.v_numentries
          LOOP
               SELECT complaintid
                 INTO complaintid
                 FROM complaintevents
                WHERE complaintid = tempdata.v_complaintids (loop_index)
                  AND complainteventid = tempdata.v_complainteventids (loop_index);
               IF hold_complaintid != complaintid THEN
                    INSERT INTO tempevents
                         (complaintid, complainteventid, startdate, eventtype,
                          eventsubtype, deleted)
                         SELECT complaintid, complainteventid, startdate, eventtype,
                                eventsubtype, deleted
                           FROM complaintevents
                          WHERE complaintid = tempdata.v_complaintids (loop_index);
               END IF;
               DELETE tempevents
                WHERE deleted = 1;
               INSERT INTO tempevents
                    (complaintid, complainteventid, startdate, eventtype,
                     eventsubtype, deleted)
                    SELECT complaintid, complainteventid, startdate, eventtype,
                           eventsubtype, 0
                      FROM complaintevents
                     WHERE complainteventid = tempdata.v_complainteventids (loop_index);
               hold_complaintid := complaintid;
          END LOOP;
     ELSIF tempdata.v_triggertype = 'S' THEN /*date change*/
          hold_complaintid := ' ';
          FOR loop_index IN 1 .. tempdata.v_numentries
          LOOP
               SELECT complaintid
                 INTO complaintid
                 FROM complaintevents
                WHERE complaintid = tempdata.v_complaintids (loop_index)
                  AND complainteventid = tempdata.v_complainteventids (loop_index);
               IF hold_complaintid != complaintid THEN
                    INSERT INTO tempevents
                         (complaintid, complainteventid, startdate, eventtype,
                          eventsubtype, deleted)
                         SELECT complaintid, complainteventid, startdate, eventtype,
                                eventsubtype, deleted
                           FROM complaintevents
                          WHERE complaintid = tempdata.v_complaintids (loop_index);
               END IF;
               DELETE tempevents
                WHERE deleted = 1;
               hold_complaintid := complaintid;
          END LOOP;
     ELSE
          RETURN;
     END IF;
END;

CREATE GLOBAL TEMPORARY TABLE test_glb ON COMMIT DELETE ROWS
AS SELECT * FROM test1;
BTW, I'm assuming you are just using the SELECT statement to copy the definition of test1. Since DDL statements commit, there will be no rows in test_glb after creating it.
Edited by: William Robertson on Feb 23, 2009 7:31 AM
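Separately from the GTT point, one thing worth checking in the original triggers: nothing ever resets TEMPDATA.v_NumEntries, and PL/SQL package state lives for the whole session. With an app server's connection pooling, a pooled session is reused across requests, so entries left over from an earlier statement would make the after-statement loop look up complaintevents rows that no longer exist, which is one plausible source of the NO_DATA_FOUND. The classic mutating-table pattern adds a before-statement trigger to reinitialize the state; a sketch, assuming TEMPDATA's collections are PL/SQL index-by tables:
create or replace trigger BeforeUpdateReportDataSTMT
BEFORE Delete or Update of deleted, startdate on complaintevents
BEGIN
     --// reset package state so each statement starts clean,
     --// even when a pooled connection reuses the same session
     TEMPDATA.v_NumEntries := 0;
     TEMPDATA.v_complaintids.DELETE;
     TEMPDATA.v_complainteventids.DELETE;
END;
/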

  • How to use single installation of Nokia Ovi Suite ...

I want to help my adult son set up his (different-model) Nokia smartphone.
How can the Nokia Ovi Suite already installed for one user be used for a different user?

Hey,
If I have understood your issue correctly, you want your entire family to share the same Nokia Ovi Suite data (contacts, messages, calendar, tasks, etc.) through a single installation? If my understanding is wrong, please do correct/educate me.
AFAIK, Nokia Ovi Suite does not work the same way PC Suite works, at least when it comes to PIM sync (contacts, messages, calendar, tasks, etc.).
With PC Suite, you can connect more than one device and sync/store messages/contacts/calendar etc. separately, then switch to the next device; there is no dedicated database that stores any of your data when you sync with PC Suite.
With Ovi Suite, there is a database where the user's info is stored, so you cannot sync multiple phones' data with a single Nokia Ovi Suite.
If this is what you expected from our forum, read my signature and do the needful.
If my post helped you, click on the Kudos button, and if my solution works for you, accept it as the solution.
