DB Aggregate Locking

Could someone explain how the DB handles record locking when an aggregate function is called? For instance:
select count(*)
into   v_count
from   x;
...Is there a lock maintained on table x for the duration of the transaction so no rows can be inserted or deleted?

Hi,
crottyan wrote:
Could someone explain how the DB handles record locking when an aggregate function is called? For instance:
select count(*)
into   v_count
from   x;
...Is there a lock maintained on table x for the duration of the transaction so no rows can be inserted or deleted?
No, a query by itself never keeps anyone from changing the table. Whether or not the query uses an aggregate function doesn't change that.
The output you get will reflect the contents of the table at the time you started the query. That's called Read Consistency, and it's been a feature of Oracle since version 4.
Edited by: Frank Kulash on Jul 17, 2012 12:37 PM
You can use "SELECT ... FOR UPDATE" if you really want to keep other sessions from changing the rows in your query, but the only reason for doing so is that you might change them yourself. You can't use FOR UPDATE together with aggregate functions.
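A minimal sketch of the difference (the table x is from the original question; the status column is made up for illustration):

```sql
-- A plain query never blocks writers: it sees a read-consistent
-- snapshot of the table as of the moment the query started.
SELECT COUNT(*) FROM x;

-- FOR UPDATE locks the selected rows until COMMIT or ROLLBACK,
-- which only makes sense if you intend to change them yourself.
SELECT * FROM x WHERE status = 'PENDING' FOR UPDATE;

-- Aggregates and FOR UPDATE cannot be combined; this raises an error:
-- SELECT COUNT(*) FROM x FOR UPDATE;
```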

Similar Messages

  • Aggregate Locking

I'm trying to understand the aggregate locking concept. On the planning area, I have set neither the liveCache lock nor the detailed lock. Here are my CVCs:
    Location Product ShipTo
    L1           P1         SH1
    L2           P1         SH1
When I open the planning book with show locations for filter product P1, double-clicking L1 opens it in change mode. In another session, when I again show locations for product P1 and double-click L2, it also opens in change mode. I thought that with the aggregate lock, the first session would also set a lock on location L2, since P1 exists with L2 in a CVC. Any ideas?

    Hi Sajeev,
This is correct behaviour, and it depends on the steps you follow and the selection you make.
If you specify one or more values for a certain characteristic, the system searches the many possible values for a unique assignment. The system then locks the data corresponding to the determined values, depending on the selected locking method. (However, this does not apply to mass processing.)
For example, product A is only available in location 0001. If, in this case, you specify product A in a selection, the system automatically enhances the selection by adding 'location 0001' and locks the data for product A in location 0001. However, a different user can simultaneously create product A in location 0002 and enter data for it. If you subsequently make a new selection with product A, the system locks all data for product A in locations 0001 and 0002.
Please refer to the link [Locking in Demand Planning |http://help.sap.com/saphelp_scm50/helpdata/en/77/d4103d2669752de10000000a114084/frameset.htm] for a detailed understanding.
I am sure this will resolve your query.
    Regards,
    Digambar
    Edited by: Digambar Narkhede on Jun 22, 2010 8:47 AM

  • Timestamp Locking with Aggregate

    Hi,
I’ve run into a little problem whilst upgrading to TopLink 904. I have several classes mapped to tables with timestamp locking enabled. The timestamp field is mapped through an aggregate.
If I enable ‘Store Version in Cache’ at class level, new objects and their timestamps are correctly inserted into the database on commit. However, the timestamps are not updated in the respective TopLink objects (they seem to be cached as null). This is problematic, as our app needs this info.
    If I disable the ‘Store Version in Cache’ option (which worked for us in Toplink 903), all inserts fail with the message:
    2004.07.26 10:47:42.500--UnitOfWork(2086370)--Thread[main,5,main]--java.lang.NullPointerExceptionjava.lang.NullPointerException
         at oracle.toplink.internal.descriptors.ObjectBuilder.mergeChangesIntoObject(ObjectBuilder.java:1458)
         at oracle.toplink.internal.sessions.MergeManager.mergeChangesOfWorkingCopyIntoOriginal(MergeManager.java:516)
         at oracle.toplink.internal.sessions.MergeManager.mergeChanges(MergeManager.java:173)
         at oracle.toplink.publicinterface.UnitOfWork.mergeChangesIntoParent(UnitOfWork.java:2501)
         at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(UnitOfWork.java:931)
         at oracle.toplink.publicinterface.UnitOfWork.commit(UnitOfWork.java:743)
    A solution is to put the timestamp field in a base class, and let the subclasses inherit it. Inserts work and timestamps are correctly updated in the Toplink objects post commit. Before I go change everything around, does anyone know why this doesn’t work when the timestamp fields are mapped through aggregates in Toplink 904?
    Thanks,
    Steve


  • Optimistic Locking fails when version field is part of a Aggregate

I'm trying to persist a mapped object using TopLink 9.0.3.
The object uses optimistic locking, and the timestamp versioning field is part of an aggregate descriptor. This works well in the Workbench (it does not complain).
Unfortunately, it does not work whenever I use the UnitOfWork to register and commit the changes.
Sample code:
Object original;
UnitOfWork unitOfWork = ...
Object clone = unitOfWork.registerExistingObject(original);
clone.setBarcode("bliblalbu");
unitOfWork.commit();
This throws a nasty OptimisticLockException, complaining about a missing versioning field:
    LOCAL EXCEPTION STACK:
    EXCEPTION [TOPLINK-5004] (TopLink - 9.0.3 (Build 423)): oracle.toplink.exceptions.OptimisticLockException
    EXCEPTION DESCRIPTION: An attempt was made to update the object [BusinessObject:{id:12382902,shorttext:null,barcode:bliblablu,ownerLocation:null,IdEntryName:0,idCs:20579121}], but it has no version number in the identity map.
    It may not have been read before the update was attempted.
    CLASS> de.grob.wps.domain.model.BusinessObjectBO PK> [12382902]
         at oracle.toplink.exceptions.OptimisticLockException.noVersionNumberWhenUpdating(Unknown Source)
         at oracle.toplink.descriptors.VersionLockingPolicy.addLockValuesToTranslationRow(Unknown Source)
         at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.updateObjectForWrite(Unknown Source)
         at oracle.toplink.queryframework.WriteObjectQuery.executeCommit(Unknown Source)
         at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.executeWrite(Unknown Source)
         at oracle.toplink.queryframework.WriteObjectQuery.execute(Unknown Source)
         at oracle.toplink.queryframework.DatabaseQuery.execute(Unknown Source)
         at oracle.toplink.publicinterface.Session.internalExecuteQuery(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(Unknown Source)
         at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
         at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
         at oracle.toplink.internal.sessions.CommitManager.commitAllObjects(Unknown Source)
         at oracle.toplink.publicinterface.Session.writeAllObjects(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitAndResume(Unknown Source)
         at de.grob.wps.dwarf.domainstore.toplink.ToplinkTransaction.commit(ToplinkTransaction.java:60)
         at de.grob.wps.dwarf.domainstore.toplink.ToplinkPersistenceManager.commit(ToplinkPersistenceManager.java:396)
         at de.grob.wps.dwarf.domainstore.toplink.ToplinkPersistenceManagerTest.testPersistSerializableWithBusinessObjects(ToplinkPersistenceManagerTest.java:87)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at junit.framework.TestCase.runTest(TestCase.java:154)
         at junit.framework.TestCase.runBare(TestCase.java:127)
         at junit.framework.TestResult$1.protect(TestResult.java:106)
         at junit.framework.TestResult.runProtected(TestResult.java:124)
         at junit.framework.TestResult.run(TestResult.java:109)
         at junit.framework.TestCase.run(TestCase.java:118)
         at junit.framework.TestSuite.runTest(TestSuite.java:208)
         at junit.framework.TestSuite.run(TestSuite.java:203)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:392)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:276)
     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:167)
So what can I do to fix this problem? BTW, the object I am trying to persist has been read from the database, and the IDE debugger shows that the aggregate object contains java.sql.Timestamp instances.

Sorry guys, my debugger fooled me. The locking field wasn't initialized in the database. That caused the problem, which is now fixed.
    Thx anyway.
    Bye
    Toby

  • Aggregate-collection-mapping combined with optimistic-locking

I've tried to set up an aggregate-collection mapping and everything seems to be okay. But one thing is still missing: using this aggregate collection in combination with optimistic locking. Defining optimistic locking (specifically, timestamp locking) doesn't work: the generated SQL statement lacks the wanted timestamp field. Any ideas are welcome.

Hi,
Aggregate collection mappings do support optimistic locking. You MUST map and store the version value in the object, not the cache. Aggregate objects are not cached independently of their parents, so they cannot store their version value in the cache.
Make sure that you have:
- provided a direct-to-field (non-read-only) mapping for the version field
- defined the locking policy to store the version value in the object, not the cache
    Example:
     descriptor.descriptorIsAggregateCollection();
     descriptor.useTimestampLocking("VERSION", false); // false = store the version in the object, not the cache
     // SECTION: DIRECTTOFIELDMAPPING
     DirectToFieldMapping directtofieldmapping3 = new DirectToFieldMapping();
     directtofieldmapping3.setAttributeName("version");
     directtofieldmapping3.setIsReadOnly(false);
     directtofieldmapping3.setFieldName("VERSION");
     descriptor.addMapping(directtofieldmapping3);

  • Key figure fixing in aggregate level partially locking

    Hi Guys,
When fixing a cell in the planning book, we get the message "One or more cells could not be completely fixed".
1. If a material has only one CVC in the MPOS, its quantity can be fixed correctly without any issues.
2. If a material has more than one CVC combination and we try to fix the quantity for one of them, it is only partially fixed and we get the above message.
3. It also does not allow us to fix the quantity at aggregate level.
We are on SCM 7.0.
Is there a precondition that a material must have only one CVC combination in order to fix?
Why does it not allow fixing one CVC combination at detail level when a material has multiple CVC combinations?
Is key figure fixing at aggregate level not allowed?
Please clarify.
    Thanks
    Saravanan V

    Hi,
It is not mandatory to assign a standard KF to be able to fix. However, the custom InfoObject that you created must be of type APO KF, not BW KF.
That said, let us try to address your first problem.
You can fix at an aggregate level. However, there are a few points to remember.
Let us consider a couple of scenarios.
1) Your selection ID shows a number of products. You select all the products at one go, load the data, and try to fix at this level. This is not possible.
2) In your selection ID, you have selected a product division. You load the data for a single product division and try to fix at this level. This is the true aggregate level, and fixing should be possible here.
    Hope this helps.
    Thanks
    Mani Suresh

  • Keyfigure fixing at aggregate level

    HI,
I am using APO V4.
1. I plan to fix my key figure only at aggregate level. My key figure is open for input at all levels (plant, material group, material), and data can be entered manually at any point. But when it comes to key figure fixing, I want to allow fixing only at aggregate level; i.e., if the key figure's aggregate level is material, fixing should be allowed there and at no other level.
I was thinking of a drill-down macro, but my key figure should stay open for input at all levels irrespective of fixing. Is it manageable somehow that the key figure can only be fixed at material level?
2. This is a different issue from the above: I fixed the key figure for a month, then ran a copy macro onto the fixed key figure, but the fixed values were overwritten once the copy job completed, and when I check the planning book the values are still shown as fixed. Do I need any macro-specific setting for this during the copy macro?
    Thanks in Advance
    ARUN R Y

    Hi Gurus,
    I am having some problems with Key Figure Fix.
    The business requirement:
Some fields must be kept fixed and locked during the interactive planning performed in transaction /SAPAPO/SDP94.
What was done in order to achieve it:
We created macros using standard features to fix and lock some cells in accordance with the business requirements. The first macro disaggregates the planning to the last level available for collaboration. Another macro, using the FIX_CALC and ROW_INPUT functions, fixes and locks some cells (3 levels). After these activities the planning view is aggregated and ready for collaboration.
These macros run automatically every time a user opens a specific planning view.
The problem:
After aggregation, when the disaggregated level is called again, all cells are shown as fixed and locked. If we perform the fix and lock manually, the system runs correctly, i.e., only the cells that were fixed and locked keep this status after the aggregating and disaggregating processes.
Step-by-step:
• Set up the macro in /SAPAPO/ADVM – MacroBuilder to use FIX_CALC at aggregated level. The first argument is set with ROW ATTRIBUTES, the second with VALUES: FIX_CALC(Cota de Vendas; Cota de Vendas). With this setting, the aggregated line will be fixed.
• Set up the macro in /SAPAPO/ADVM – MacroBuilder to use FIX_CALC at level 1. The first argument is set with ROW ATTRIBUTES, the second with VALUES: FIX_CALC(Cota de Vendas; Cota de Vendas). With this setting, the level 1 line will be fixed.
• Set up the macro in /SAPAPO/ADVM – MacroBuilder to use ROW_INPUT at aggregated level: Cota de Vendas = ROW_INPUT(0). With this setting, the aggregated line will be locked.
• Set up the macro in /SAPAPO/ADVM – MacroBuilder to use ROW_INPUT at level 1: Cota de Vendas = ROW_INPUT(0). With this setting, the level 1 line will be locked.
• Now go to /SAPAPO/SDP94.
• Disaggregate the planning view.
• Run the FIX_CALC and ROW_INPUT macros.
• Aggregate the planning view.
• Disaggregate the planning view again. You can see the problem.
    Thanks,

  • Standard tables used for Aggregates

Could anyone please help me with a list of all the standard tables associated with aggregates in BW 3.x?

    Hi,
    RSDDAGGR                       Status of the active aggregates in the Infocube
    RSDDAGGRCOMP                   Description of the aggregates
    RSDDAGGRDIR                    Directory of the aggregates
    RSDDAGGRDIR_M                  Directory of the aggregates
    RSDDAGGREF                     Aggregates, useable InfoObjects
    RSDDAGGRENQUEQUE               Table to define lock argument
    RSDDAGGRMODSTATE               Status of change run for aggregates
    RSDDAGGRT                      Aggregate texts
    RSDDCVERREPAGGR                Aggregates that should be refilled after c
    RSDDSTATAGGR                   Statistics data BW for aggregate selection
    RSDDSTATAGGRDEF                Statistics data OLAP: Navigation step / ag
    RSDMESC_AGGR_IND               Aggregated Data: Indexes
    RSDMESC_AGGR_VAL               Aggregated Data: Values for Indexes
    RSDPAGGR                       Index of Dummy Aggregate from InfoCubes wi
    RSICAGGR                       Aggregation management of the IC for the M
    RSICAGGR2                      Aggregation administration for aggregates
    Thanks,
    -VIjay

  • Locking a Table

I just want to understand how a lock works on a table; I am very confused.
Let me set out my requirement:
I am inserting a new record into a table (batches), for which one of the columns (no_batch) is an incrementing value. To achieve this, I select the max value of that column from the same table and add 1 to it.
(Please note: I cannot use a sequence for this,
nor can I have a UNIQUE constraint on this column...)
Then I insert the new record.
The problem I faced was:
When multiple users tried to insert records at the same time,
selecting and then inserting on the table allowed duplicate values into the no_batch column of the batches table.
Is there any way to lock the table exclusively while a user is accessing it, until he/she issues a COMMIT or ROLLBACK on the table,
so that the next user has to wait for the resource?
The time delay is okay for me.
Here is the skeleton of my insert procedure:
PROCEDURE insert_record IS
BEGIN
   <set of statements>
   SELECT MAX(no_batch) + 1
   INTO   var_no_batch
   FROM   batches
   WHERE  <conditions>;
   INSERT INTO batches VALUES
      (val1, val2, val3, var_no_batch, ........);
   COMMIT;
END;
I have an alternative, but I don't know whether it will work:
Can I issue an UPDATE statement that updates 0 rows on the same table, so that it gets locked, and then SELECT and INSERT on the same table (which I guess will work), with a COMMIT releasing the lock on the table?
Let me know if this is correct.
This is very urgent.
Any help and suggestions would be greatly appreciated. Thanks in advance.

    Thanks for your Help and Suggestions...
Sorry, I cannot use a SELECT ... FOR UPDATE clause, since the select uses an aggregate function.
There are some other procedures that access this table (SELECT statements only). Will this lock also block the other users who access the table through those SELECT procedures? Please let me know.
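For what it's worth, here is a sketch of how LOCK TABLE could serialize the inserts (column names other than no_batch are placeholders, since the original post elides them; the lock is released automatically at COMMIT or ROLLBACK, and plain SELECTs from other sessions are never blocked by it, thanks to read consistency):

```sql
-- Sketch only: serialize concurrent inserts with an exclusive table lock.
-- Other sessions' plain SELECTs still run; only writers (and other
-- LOCK TABLE calls) wait.
CREATE OR REPLACE PROCEDURE insert_record IS
   var_no_batch  batches.no_batch%TYPE;
BEGIN
   LOCK TABLE batches IN EXCLUSIVE MODE;  -- waits here if another session holds it

   SELECT NVL(MAX(no_batch), 0) + 1
   INTO   var_no_batch
   FROM   batches;

   INSERT INTO batches (col_a, col_b, no_batch)  -- col_a, col_b: placeholder columns
   VALUES ('val1', 'val2', var_no_batch);

   COMMIT;  -- releases the table lock
END;
/
```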

  • UNCAUGHT_EXCEPTION in Aggregate activation

    Hi,
    We are activating and filling up the aggregate and we got the following error message:
    ABAP/4 processor: UNCAUGHT_EXCEPTION     
I checked the dump, and the message is:
    Runtime Errors         UNCAUGHT_EXCEPTION                                                        
    Exceptn                CX_RSR_X_MESSAGE                                                                               
    ShrtText                                                                               
    An exception that could not be caught occurred.                                                                               
    What happened?                                                                               
    The exception 'CX_RSR_X_MESSAGE' was raised but was not caught at any stage in the call hierarchy.                                                                               
    Since exceptions represent error situations, and since the system could  not react adequately to this error, the current program, 'SAPLRRMS', had to be terminated.                                                                               
    Any help would be greatly appreciated.
    Regards
    Nagendra

    Hi,
It is a common memory error. Fill the aggregates one at a time.
Also check the system load in SM37; when there are a lot of processes running in the system, trigger the fill later, when the load is lower.
Check whether any dependent rollup or change run is running before refilling. If any are running, refill after they finish.
There may be locks in the system; check for DB locks and user locks.
**** Don't cancel the job; it may cause data inconsistency.
    Warm Regards,
    Vijay

  • ROLLUP locked resource to build indexces

    Hi,
I am running a rollup in which some aggregates are being filled; a direct SQL statement was running to fill the aggregate. While this rollup was running, another process to drop the indexes of a different cube also started.
The rollup took 2 hours, and only after it finished were the index drops executed.
The two are actually different data targets, so why did the index drops wait for the rollup process to finish?
Is any database-level tuning required to run these in parallel?
Could anyone advise?
    Thanks & Regards
    Srini

    Hi,
There must be common master data shared between the two InfoCubes, e.g. a Material or Customer characteristic common to both.
While the roll-up is in progress, the aggregates are updated/adjusted with the new transactions (which carry master data and transaction data), and this leads to locking of the master data. That is why only one process could execute at a time.
    Thanks/Tarak.

  • What are aggregate objects ?

    Hi,
Can anybody please tell me what aggregate objects are?
    Regards,
    Anirban.

Aggregate objects
Views, match codes and lock objects are called aggregate objects, since they are formed from several related objects.
Views: A view is a virtual table. It contains data that is really stored in other tables; the contents of the view are generated dynamically when it is called from a program.
Lock objects: These objects are used to lock access to database records in a table. This mechanism enforces data integrity, i.e., two users cannot update the same data at the same time. With lock objects you can lock a table field or a whole table.
Match codes: A tool that helps us to search for data records in the system.
    http://www.sappoint.com/faq/faqabdic.pdf
    Message was edited by: Rahul Kavuri

  • Cubes locked in terminated change runs

    Hi guys,
A couple of cubes are locked in a terminated change run. How do I resolve this, given that there are no locks set for them in SM12 or DB01?
What is the way to go?
    Thanks,
    Your help will be rightly appreciated

Please try going to the monitor for the attribute change run, either from RSA1 >> Tools >> Apply change run or from transaction RSATTR, which will schedule/display the attribute change run. There you have an option called the change run monitor (monitor icon). It shows the status of the change run, including which InfoObjects are affected and whether it is hitting the aggregates of any cubes, etc.
You can also try transaction CHANGERUNMONI / program RSDDS_CHANGERUN_MONITOR to view this data.
The problem is that another change run may have been triggered in parallel to yours. In any SAP system, only one attribute change run can run at a time: if one attribute change run is already running (from any process chain or project) and a second one starts at the same time, the second fails due to the locking problem, and the entire data load fails.
This blog explains how to prevent this from happening:
http://sapbwneelam.blogspot.com/2007/08/how-to-avoid-attribute-change-run.html
    Hope this helps.
    Award points if useful.

  • Netweaver scroll lock

    Hey Guys,
We have a NetWeaver output based on an aggregate of several data sources.
The problem is that when a user clicks on a line to open a lower layer, the browser always refreshes and jumps back to the top of the page. This is very annoying.
Hence my question: is it possible to 'lock' the view so that the user stays on the same line?
    Thank you very much,
    Filip

I got this problem in Excel running under Parallels.
I finally got it; I hope this helps:
If you're working in Windows on a Mac via Boot Camp or virtualization software (Fusion or Parallels), you can either press:
Fn + Alt + F12
Or go to:
Start -> All Programs -> Accessories -> Accessibility -> On-Screen Keyboard

  • Table Row Frequent Lock

    Dear All,
Recently we changed our remote connection from a 128 KB leased line to a high-speed (1 MB) MPLS-based IP VPN. Since then, Oracle table rows are getting locked frequently. We are using Windows 2003 Server Standard with an Oracle 9i (2.0.7) database. The server has 4 GB total memory and a 3.2 GHz dual processor.
When we increase sga_max_size (to 1400 MB or above), an "ORA-12500: TNS: listener failed..." error appears and users can't connect to our database server.
    Memory Configuration is:-
    Shared Pool - 352MB
    Buffer Cache - 584MB
    Large Pool - 128MB
    Java Pool - 16MB
    SGA Max Size - 1281.573 MB
    Aggregate PGA Target - 259MB
Please help me correct the memory configuration if it is wrong, or suggest any other solution to sort out the table-row locking problem.
    Thanking you all in advance,
    Manesh

At an initial guess, it sounds like you are chasing the wrong aspect of the problem.
You do not describe what the application is doing, but logically it appears that you are getting more transactions because of the higher access speed; in other words, the client is no longer being throttled back.
You also do not describe the actual kind of lock. It intrigues me that specific rows would need to be locked; that implies the application is attempting to capture the rows ahead of time. This sort of feels like the application may have been ported from another system, or designed around a totally different locking mechanism than what Oracle uses, forcing Oracle to try to work in a way it is not supposed to work.
Simply throwing more resources at the problem will probably not solve it; it may alleviate it for a while. Better would be to understand what is actually happening and correct the root cause.
I encourage you to read Tom Kyte's "Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions" for some tools to help understand and isolate the actual problem. (http://apress.com/book/bookDisplay.html?bID=10008)
    As for the ORA-12500, I believe Metalink has a few notes about that.
