Mass Change in Recipe management (Transaction RMWB) - change specification

Hello ALL,
I am working on a requirement: mass change in Recipe Management (transaction RMWB).
Requirement: for a given specification number, get the list of all recipes where this particular specification is used (this is achieved by standard functionality in RMWB).
When a particular specification is changed in a recipe, perform a mass change of that specification across all the recipes.
Things I have found already:
This has to be done in a similar fashion to the status change, which is done using the ABAP class CL_RCP_MSC_STATUS.
I would appreciate your help and suggestions on this.
Regards,
Nikhil

Hi Beth,
Yes, we actually did the same thing per your specs: non-status-related changes (though confined to preferred recipes), substituting substance X for substance Y in dependent formulas of recipes.
The only option I am aware of currently requires custom ABAP work using the supplied Mass-Change framework.
Regards

Similar Messages

  • BAPI or Function Module name for Recipe Management (tcode RMWB).

    Hi All,
    Can anyone please let me know a BAPI or function module name that maps to transaction RMWB (Recipe Management Workbench)?
    Thanking you in advance for your reply.
    Regards,
    Chirag Mehta.

    Hi Pavan,
    Thanks first for replying.
    Sorry that I did not specify the details of the requirement before.
    My requirement is that I have to either change an existing formula or create a new formula for the recipe.
    Can you please advise me on any FM or BAPI for this requirement?
    Regards,
    Chirag Mehta.

  • Mass Change in Recipe Management

    Is anyone doing non-status related mass changes in RM?  Specifically, is anyone substituting substance X for substance Y in dependent formulas inside recipes?  If so, was the related work in-house ABAP development or SAP CDP?  Any helpful hints would be very much appreciated.
    Beth Perry

    Hi Beth,
    Yes, we actually did the same thing per your specs: non-status-related changes (though confined to preferred recipes), substituting substance X for substance Y in dependent formulas of recipes.
    The only option I am aware of currently requires custom ABAP work using the supplied Mass-Change framework.
    Regards

  • Change of Call Manager - if I change a Call Manager, do I need to buy a migration license?

    Hello,
    A client wants to change the box of the Call Manager. My question is: do I need to buy a migration license too?

    Hi,
    What I understand is that you are changing the hardware of the Call Manager, and the same Call Manager version will be installed on the new hardware?
    If that is the case, you do not require any migration license. You just need to get the license rehosted by writing a mail to [email protected].
    Regards,
    Aman

  • Recipe Management error while creating Recipe

    Hi,
    While creating recipe in Recipe Management (Tcode: RMWB), in the process tab, I am unable to create a STAGE no.
    When I try to create stage (4 digit numeric) it gives me an error message "Change number 500000000000 does not exist"
    Long text of error message
    Diagnosis
    One of the following situations caused the error message:
    1. You want to edit a BOM or routing using change number 500000000000.
    2. You entered change number 500000000000 in order to display or change the change master.
    Change number 500000000000 which you entered does not exist in the system.
    Procedure
    Check your entry. Correct the change number if appropriate.
    Please help in solving the above issue.
    Rgd,
    Jag

    Hi Jag,
    Are there recipes in the system (after an upgrade)?
    When you start with Recipe Management, the system automatically creates a dummy change number (with the profile setting RCP01). The change number is stored in table RCPC_AENNR; to read the change number, please use the function module RCP899_DUMMY_AENNR_READ.
    If change number 500000000000 is the dummy one, please correct the change management settings, since the change number is missing.
    Best regards,
    Roland Freudenberg

  • Mass change function in Recipe Management

    Hi All,
    Please provide the configuration details of the mass change function in Recipe Management.

    Are you asking for the IMG path?
    For starters, see - http://help.sap.com/erp2005_ehp_04/helpdata/EN/72/9a167ae92b46fca46751c976babe5d/content.htm?frameset=/EN/be/e1763ae5d73023e10000000a11402f/frameset.htm

  • Mass change of Recipe with Change Number for Future

    Hi ,
    I would like to do a Mass change of Recipes with Change Number for Future Plan,
    is there any way I can do that? or Transaction where I can get that info as I have to do it for lot of recipes,
    Regards,
    Paartha

    Rajesha,
    MM12 will get the change number from CC01, where we define the change number for various things like material master, BOM, document, etc. I would like to provide the ECN number whenever I make a change, for example in MM12 (material master), CS02 (BOM), CA02 (routing), or C201 (recipe); every time I can enter the same ECN number obtained from CC01 (Create Change Master). You can see the ECN number field on the C202 screen as well.
    So now I would like to change recipes by giving this ECN number, for a mass change of one field called Labor Hours inside C201.
    Is there any way I can do a mass change instead of (avoiding) the process: go to C201, enter the ECN number, go inside, and change labor hours for each and every recipe?
    Hope you understood. Looking for a reply from you and the SDN gurus.
    Regards,
    Paartha

  • Mass change in fk02-payment transaction-bank name

    Dear All:
    I want to do mass changes in FK02 - payment transaction tab - bank data - bank name field.
    The table name is BNKA, the field name is BANKA.
    Please suggest a t-code for the same.
    Vijay

    As far as I know, the bank name field is non-editable, and it flows from the bank key, which means the name is bank-key-specific. So you have to change the bank name of the bank key. For this, I think an LSMW or a BAPI needs to be created for the bank key change transaction code FI02.
    Regards,
    Indranil

  • Issue on Saving Changed List UIBB contents in PLM - Recipe Management

    Dear All,
    I have a requirement to enhance the standard PLM Change/Display Recipe screen. It is enhanced to have two buttons: a Get Moisture Loss button and a Calculate Moisture Loss button. On selecting Get Moisture Loss, the system shows a pop-up where the user can enter a moisture loss value, and on selecting Calculate Moisture Loss, the system performs some actions based on the value entered in the pop-up and updates the nutrient list in the Calculation tab.
    I am able to see the calculated (updated) values in the list, but on selecting the Save button I get an error: "Data has not been saved since it was last saved".
    Can anybody please guide me on how to solve this issue in saving the data back?
    I have enhanced the GET_DATA method of the standard feeder class /PLMU/CL_FRW_G_FEEDER_LIST to update the nutrient list.
    Thanks in advance,
    Rinzy Deena Mathews.

    Hi All
    Solved the issue myself. First and foremost, the list values are the ones that get calculated at runtime; we can only save the enhanced field values. In order to trigger the SAVE, we need to enhance the standard structure with the new fields.
    Thanks and Regards
    Rinzy Deena Mathews.

  • While trying to change a BOM with transaction CS02, a runtime error appears

    While trying to change a BOM with transaction CS02, a runtime error appears.
    On the initial screen, the user entered material, plant, BOM usage, and valid-from date. After executing, the item list was displayed; in it he wanted to delete one item. He deleted the selected item, and while saving he gets a runtime error.
    Developer trace
    ABAP Program SAPLKED1_WRITE_CE4_BPS1                 .
    Source LKED1_WRITE_CE4_BPS1U01                  Line 30.
    Error Code SAPSQL_ARRAY_INSERT_DUPREC.
    Module  $Id: //bas/640_REL/src/krn/runt/absapsql.c#17 $ SAP.
    Function HandleRsqlErrors Line 775.
    RABAX: level LEV_RX_STDERR completed.
    RABAX: level LEV_RX_RFC_ERROR entered.
    RABAX: level LEV_RX_RFC_ERROR completed.
    RABAX: level LEV_RX_RFC_CLOSE entered.
    RABAX: level LEV_RX_RFC_CLOSE completed.
    RABAX: level LEV_RX_IMC_ERROR entered.
    RABAX: level LEV_RX_IMC_ERROR completed.
    RABAX: level LEV_RX_DATASET_CLOSE entered.
    RABAX: level LEV_RX_DATASET_CLOSE completed.
    RABAX: level LEV_RX_RESET_SHMLOCKS entered.
    RABAX: level LEV_RX_RESET_SHMLOCKS completed.
    RABAX: level LEV_RX_ERROR_SAVE entered.
    RABAX: level LEV_RX_ERROR_SAVE completed.
    RABAX: level LEV_RX_ERROR_TPDA entered.
    RABAX: level LEV_RX_ERROR_TPDA completed.
    RABAX: level LEV_RX_PXA_RELEASE_RUDI entered.
    RABAX: level LEV_RX_PXA_RELEASE_RUDI completed.
    RABAX: level LEV_RX_LIVE_CACHE_CLEANUP entered.
    RABAX: level LEV_RX_LIVE_CACHE_CLEANUP completed.
    RABAX: level LEV_RX_END entered.
    RABAX: level LEV_RX_END completed.
    RABAX: end RX_RFC
    In sm21
    Perform rollback
    Run-time error "SAPSQL_ARRAY_INSERT_DUPREC" occurred
         Short dump "090618 110101 donalda 11557 " generated
    Runtime Error          SAPSQL_ARRAY_INSERT_DUPREC
    Exception              CX_SY_OPEN_SQL_DB
           Occurred on     18.06.2009 at   11:01:01
    The ABAP/4 Open SQL array insert results in duplicate database records.
    What happened?
    Error in ABAP application program.
    The current ABAP program "SAPLKED1_WRITE_CE4_BPS1" had to be terminated because
    one of the
    statements could not be executed.
    This is probably due to an error in the ABAP program.
    What can you do?
    Print out the error message (using the "Print" function)
    and make a note of the actions and input that caused the
    error.
    To resolve the problem, contact your SAP system administrator.
    You can use transaction ST22 (ABAP Dump Analysis) to view and administer
    termination messages, especially those beyond their normal deletion
    date.
    Error analysis
    An exception occurred. This exception is dealt with in more detail below
    . The exception, which is assigned to the class 'CX_SY_OPEN_SQL_DB', was
    neither
    caught nor passed along using a RAISING clause, in the procedure
    "RKE_WRITE_CE4__BPS1" "(FUNCTION)"
    Since the caller of the procedure could not have expected this exception
    to occur, the running program was terminated.
    The reason for the exception is:
    If you use an ABAP/4 Open SQL array insert to insert a record in
    the database and that record already exists with the same key,
    this results in a termination.
    (With an ABAP/4 Open SQL single record insert in the same error
    situation, processing does not terminate, but SY-SUBRC is set to 4.)
    How to correct the error
    The exception must either be prevented, caught within the procedure
    "RKE_WRITE_CE4__BPS1"
    "(FUNCTION)", or declared in the procedure's RAISING clause.
    To prevent the exception, note the following:
    Use an ABAP/4 Open SQL array insert only if you are sure that none of
    the records passed already exists in the database.
    You may able to find an interim solution to the problem
    in the SAP note system. If you have access to the note system yourself,
    use the following search criteria:
    "SAPSQL_ARRAY_INSERT_DUPREC" CX_SY_OPEN_SQL_DB
    "SAPLKED1_WRITE_CE4_BPS1" or "LKED1_WRITE_CE4_BPS1U01"
    "RKE_WRITE_CE4__BPS1"
    If you cannot solve the problem yourself, please send the
    following documents to SAP:
    1. A hard copy print describing the problem.
       To obtain this, select the "Print" function on the current screen.
    2. A suitable hardcopy printout of the system log.
       To obtain this, call the system log with Transaction SM21
       and select the "Print" function to print out the relevant
       part.
    3. If the programs are your own programs or modified SAP programs,
       supply the source code.
       To do this, you can either use the "PRINT" command in the editor or
       print the programs using the report RSINCL00.
    4. Details regarding the conditions under which the error occurred
       or which actions and input led to the error.

    Hi,
    You are getting this because you are trying to do a mass update to the database.
    Please check whether the notes below are applicable to your system:
    Note 453313 - DBIF_RSQL_ERROR_INTERNAL for mass insert
    Note 869534 - AFS MRP doesn't work properly with all BOM item categories
    Thanks,
    Rishi Abrol
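The "Error analysis" section of the dump above spells out the key semantics: an Open SQL array insert terminates with a shortdump on a duplicate key, while a single-record insert merely sets SY-SUBRC to 4. A rough, purely illustrative Java analogy of those two behaviors (not ABAP, no database involved; the class and method names are invented for the sketch):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicateInsertDemo {
    static final Map<String, String> table = new HashMap<>();

    // "Single-record insert": a duplicate key is reported via a return code
    // (analogous to SY-SUBRC = 4), and processing continues.
    static int insertSingle(String key, String value) {
        return table.putIfAbsent(key, value) == null ? 0 : 4;
    }

    // "Array insert": a duplicate key aborts the operation with an exception
    // (analogous to the SAPSQL_ARRAY_INSERT_DUPREC shortdump; unlike the
    // database case, earlier puts are not rolled back in this sketch).
    static void insertArray(List<String> keys, String value) {
        for (String key : keys) {
            if (table.putIfAbsent(key, value) != null) {
                throw new IllegalStateException("duplicate key: " + key);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(insertSingle("A", "x")); // first insert succeeds
        System.out.println(insertSingle("A", "y")); // duplicate: return code, no dump
        try {
            insertArray(List.of("B", "A"), "z");    // "A" already exists
        } catch (IllegalStateException e) {
            System.out.println("terminated: " + e.getMessage());
        }
    }
}
```

This is why the dump text advises using an array insert only when you are sure none of the passed records already exists; the practical fix for the CS02 case is the SAP notes cited in the reply above.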

  • PSM-FM-Reversal of funds management document after change in acc.assignment

    Good day we recently went live with a component of Funds Management i.e. Availability Control on Cost centers and cost elements. We update BCS with the plan values and actual/commitment items in Finance/CO. We do our planning in CO.
    The solution is working reasonably well but we have encountered a problem for which we are unable to find a solution. The problem is best illustrated with an example;
    1. Create purchase requisition with Account Assignment Category "K" = Cost Center with transaction ME51N
    2. Account assignment = cost center and cost element. A funds management document is created upon saving of the transaction.
    3. Purchase Req. is released with a release strategy (transaction ME54N)
    4. After purchase requisition release the account assignment for the cost element is changed by the user. This occurs sometimes during the execution of the business process.
    5. When a purchase order is created with reference to this purchase requisition with transaction ME59N  the system references the original funds management document which means when the account assignment was changed the funds management document was not reversed and a new funds management document created for the changed account assignment.
    6. The BUDCON report transaction FMRP_RW_BUDCON thus displays the commitment under the incorrect commitment item as the change in account assignment is not reflected in Funds Management. We have a one to one relationship between cost elements/commitment items and cost centers/funds centers as per the derivation strategy.
    My question is;
    Is this normal standard SAP standard behavior or are we missing some configuration that will enable the creation of a new funds management document and if so where do we configure such?  I must mention our solution is completely SAP standard. 
    Thank you in advance.
    Best Regards
    Mike Olwagen
    Manager SAP Solution Support
    City Power (JHB) (Pty) Ltd

    Hi Mike, nice to meet you.
    I think you should do an entire test with the TRACE on, and you will find the problem and how to fix it (at FMDERIVE, of course).
    First, have a look at note [666322|https://service.sap.com/sap/support/notes/666322], go to FMDERIVE, and turn ON the trace.
    Then run the whole process, from ME51N to the end.
    I agree with Eli: you must allow overwriting of existing values in the FMDERIVE rule.
    Regards,
    Osvaldo.

  • Not flushing changes made in current transaction

    I encounter a problem that KODO doesn't flush changes made in current
    transaction. I'm using an external transaction manager (JOTM) and XA
    datasource (xapool). From the console output, I know I have an active
    transaction. According to the KODO automatic flush behaviour
    (http://solarmetric.com/Software/Documentation/3.1.2/docs/ref_guide_dbsetup_retain.html),
    given kodo.FlushBeforeQueries is true and kodo.ConnectionRetainMode is
    transaction, flush should happen before query.
    The code is something like this:
    userTransaction.begin();
    String field1 = "abc";
    long field2 = 10L;
    String field3 = "123";
    Foo foo = new Foo(); // Foo's PK is field1, field2 and field3.
    foo.setField1(field1);
    foo.setField2(field2);
    foo.setField3(field3);
    foo1(foo);
    Collection foos = foo2(field1, field2);
    System.out.println("foos.isEmpty()? : " + foos.isEmpty());
    userTransaction.commit();

    public void foo1(Foo foo) {
        PersistenceManager pm = null;
        try {
            pm = getPersistenceManager();
            System.out.println("PM.TX: " + pm.currentTransaction() + " active: " + pm.currentTransaction().isActive());
            pm.makePersistent(foo);
        } catch (ResourceException e) {
            e.printStackTrace();
        } finally {
            if (pm != null && !pm.isClosed())
                pm.close();
        }
    }

    public Collection foo2(String field1, long field2) {
        PersistenceManager pm = null;
        StringBuffer filter = new StringBuffer();
        HashMap parameters = new HashMap();
        StringBuffer paramList = new StringBuffer();
        Extent ex = null;
        Query query = null;
        Collection result = null;
        Long field2Long = new Long(field2);
        try {
            pm = getPersistenceManager();
            System.out.println("PM.TX: " + pm.currentTransaction() + " active: " + pm.currentTransaction().isActive());
            ex = pm.getExtent(Foo.class, true);
            filter.append("field1 == paramField1");
            filter.append(" && field2 == paramField2");
            paramList.append("String paramField1");
            paramList.append(" , Long paramField2");
            parameters.put("paramField1", field1);
            parameters.put("paramField2", field2Long);
            query = pm.newQuery(ex, filter.toString());
            query.declareParameters(paramList.toString());
            result = (Collection) query.executeWithMap(parameters);
        } catch (ResourceException e) {
            e.printStackTrace();
        } finally {
            if (pm != null && !pm.isClosed())
                pm.close();
        }
        return result;
    }
    From the console,
    PM.TX: kodo.runtime.PersistenceManagerImpl@b34b1 active: true
    PM.TX: kodo.runtime.PersistenceManagerImpl@b34b1 active: true
    foos.isEmpty()? true
    If the userTransaction is committed before invoking foo2(), then, the Foo
    object is created and foos.isEmpty() returns false.
    I am using KODO version: 3.1.2. Here is the kodo.properties:
    javax.jdo.PersistenceManagerFactoryClass:
    kodo.jdbc.runtime.JDBCPersistenceManagerFactory
    javax.jdo.option.Optimistic: true
    javax.jdo.option.RetainValues: true
    javax.jdo.option.NontransactionalRead: true
    javax.jdo.option.ConnectionFactoryName: jdbc/datasource
    javax.jdo.option.IgnoreCache: false
    kodo.Connection2UserName: <some user>
    kodo.Connection2Password: <some password>
    kodo.Connection2URL: jdbc:oracle:thin:@<some host>:1521:<some db>
    kodo.Connection2DriverName: oracle.jdbc.driver.OracleDriver
    kodo.jdbc.DataSourceMode: enlisted
    kodo.jdbc.ForeignKeyConstraints: true
    kodo.FlushBeforeQueries: true
    kodo.ConnectionRetainMode: transaction
    kodo.jdbc.VerticalQueryMode=base-tables
    kodo.TransactionMode: managed
    kodo.ManagedRuntime :
    invocation(TransactionManagerMethod=foo.TransactionManagerUtil.getTransactionManager)
    kodo.jdbc.DBDictionary : oracle(BatchLimit=0)
    Any ideas are appreciated.
    Regards,
    Willie Vu

    Abe White wrote:
    Do you get the same behavior with local transactions?
    I don't see anything immediately wrong with your code, but all our internal tests are passing, and no other user has reported a problem.

    In a single transaction, I'm using multiple persistence managers which are closed after usage (methods foo1() and foo2() get different persistence managers and close them before returning). I don't think I can use local transactions, can I?

  • Changing Isolation Level Mid-Transaction

    Hi,
    I have a SS bean which, within a single container managed transaction, makes numerous
    database accesses. Under high load, we start having serious contention issues
    on our MS SQL server database. In order to reduce these issues, I would like
    to reduce my isolation requirements in some of the steps of the transaction.
    To my knowledge, there are two ways to achieve this: a) specify isolation at the
    connection level, or b) use locking hints such as NOLOCK or ROWLOCK in the SQL
    statements. My questions are:
    1) If all db access is done within a single tx, can the isolation level be changed
    back and forth?
    2) Is it best to set the isolation level at the JDBC level or to use the MS SQL
    locking hints?
    Is there any other solution I'm missing?
    Thanks,
    Sebastien
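For option (a), the JDBC-level mechanism is Connection.setTransactionIsolation. Whether a mid-transaction switch actually takes effect is driver-specific (the JDBC documentation recommends changing it only when no transaction is in progress), and under container-managed transactions the container may own this setting entirely, so this is only a hedged sketch: runReportingStep and conn are hypothetical names, and the behavior should be verified against the MS SQL driver in use.

```java
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationSketch {
    // Hypothetical read-heavy step: lower the isolation level for this
    // step, then restore the original level for the rest of the work.
    // Many drivers only honor the change at a transaction boundary.
    static void runReportingStep(Connection conn) throws SQLException {
        int original = conn.getTransactionIsolation();
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
        try {
            // ... execute the contended SELECTs here ...
        } finally {
            conn.setTransactionIsolation(original); // restore for later steps
        }
    }

    public static void main(String[] args) {
        // The standard java.sql.Connection isolation constants:
        System.out.println("READ_UNCOMMITTED=" + Connection.TRANSACTION_READ_UNCOMMITTED);
        System.out.println("READ_COMMITTED=" + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("REPEATABLE_READ=" + Connection.TRANSACTION_REPEATABLE_READ);
        System.out.println("SERIALIZABLE=" + Connection.TRANSACTION_SERIALIZABLE);
    }
}
```

Option (b), locking hints such as NOLOCK, is roughly a per-statement READ_UNCOMMITTED and leaves the rest of the transaction at its declared level, which is often the less invasive choice inside a container-managed transaction.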

    Galen Boyer wrote:
    On Sun, 28 Mar 2004, [email protected] wrote:
    Galen Boyer wrote:
    On Wed, 24 Mar 2004, [email protected] wrote:
    Oracle's serializable isolation level doesn't offer what most
    customers I've seen expect it to offer. They typically expect
    that a serializable transaction will block any read-data from
    being altered during the transaction, and oracle doesn't do
    that.
    I haven't implemented WEB systems that employ anything but
    the default concurrency control, because a web transaction is
    usually very long running and therefore holding a connection
    open during its life is unscalable. But, your statement did
    make me curious. I tried a quick test case.
    IN ONE SQLPLUS SESSION:
    SQL> alter session set isolation_level = serializable;
    SQL> select * from t1;
            ID FL
    ---------- --
             1 AA
             2 BB
             3 CC
    NOW, IN ANOTHER SQLPLUS SESSION:
    SQL> update t1 set fld = 'YY' where id = 1;
    1 row updated.
    SQL> commit;
    Commit complete.
    Now, back to the previous session.
    SQL> select * from t1;
            ID FL
    ---------- --
             1 AA
             2 BB
             3 CC
    So, your statement is incorrect.
    Hi, and thank you for the diligence to explore. No, actually
    you proved my point. If you did that with SQLServer or Sybase,
    your second session's update would have blocked until you
    committed your first session's transaction.
    Yes, but this doesn't have anything to do with serializable.
    This is the weak behaviour of those systems that say writers can
    block readers.
    Weak or strong, depending on the customer point of view. It does guarantee
    that the locking tx can continue, and read the real data, and eventually change
    it, if necessary without fear of blockage by another tx etc.
    In your example, you were able to change and commit the real
    data out from under the first, serializable transaction. The
    reason why your first transaction is still able to 'see the old
    value' after the second tx committed, is not because it's
    really the truth (else why did oracle allow you to commit the
    other session?). What you're seeing in the first transaction's
    repeat read is an obsolete copy of the data that the DBMS
    made when you first read it.
    Yes, this is true.
    Oracle copied that data at that time into the per-table,
    statically defined space that Tom spoke about. Until you commit
    that first transaction, some other session could drop the whole
    table and you'd never know it.
    This is incorrect.
    Thanks. Point taken. It is true that you could have done a complete delete
    of all rows in the table though..., correct?
    That's the fast-and-loose way oracle implements
    repeatable-read! My point is that almost everyone trying to
    serialize transactions wants the real data not to
    change.
    Okay, then you have to lock whatever you read, completely.
    SELECT FOR UPDATE will do this for your customers, but
    serializable won't.
    Is this the standard definition of
    serializable or just customer expectation of it? AFAIU,
    serializable protects you from overriding already committed
    data.
    The definition of serializable is loose enough to allow
    oracle's implementation, but non-changing relevant data is
    a typically understood hope for serializable. Serializable
    transactions typically involve reading and writing *only
    already committed data*. Only DIRTY_READ allows any access to
    pre-committed data. The point is that people assume that a
    serializable transaction will not have any of it's data re
    committed, ie: altered by some other tx, during the serializable
    tx.
    Oracle's rationale for allowing your example is the semantic
    arguement that in spite of the fact that your first transaction
    started first, and could continue indefinitely assuming it was
    still reading AA, BB, CC from that table, because even though
    the second transaction started later, the two transactions *so
    far*, could have been serialized. I believe they rationalize it by saying that the state of the
    data at the time the transaction started is the state throughout
    the transaction.
    Yes, but the customer assumes that the data is the data. The customer
    typically has no interest in a copy of the data staying the same
    throughout the transaction.
    Ie: If the second tx had started after your first had
    committed, everything would have been the same. This is true!
    However, depending on what your first tx goes on to do,
    depending on what assumptions it makes about the supposedly
    still current contents of that table, it may ether be wrong, or
    eventually do something that makes the two transactions
    inconsistent so they couldn't have been serialized. It is only
    at this later point that the first long-running transaction
    will be told "Oooops. This tx could not be serialized. Please
    start all over again". Other DBMSes will completely prevent
    that from happening. Their value is that when you say 'commit',
    there is almost no possibility of the commit failing. But this isn't the argument against Oracle. The unable to
    serialize doesn't happen at commit, it happens at write of
    already changed data. You don't have to wait until issuing
    commit, you just have to wait until you update the row already
    changed. But, yes, that can be longer than you might wish it to
    be. True. Unfortunately the typical application writer logic may
    do stuff which never changes the read data directly, but makes
    changes that are implicitly valid only when the read data is
    as it was read. Sometimes the logic is conditional so it may never
    write anything, but may depend on that read data staying the same.
    The issue is that some logic wants truely serialized transactions,
    which block each other on entry to the transaction, and with
    lots of DBMSes, the serializable isolation level allows the
    serialization to start with a read. Oracle provides "FOR UPDATE"
    which can supply this. It is just that most people don't know
    they need it.
    With Oracle and serializable, 'you pay your money and take your
    chances'. You don't lose your money, but you may lose a lot of
    time because of the deferred checking of serializable
    guarantees.
    Other than that, the clunky way that oracle saves temporary
    transaction-bookkeeping data in statically- defined per-table
    space causes odd problems we have to explain, such as when a
    complicated query requires more of this memory than has been
    alloted to the table(s) the DBMS will throw an exception
    saying it can't serialize the transaction. This can occur even
    if there is only one user logged into the DBMS.
    This one I thought was probably solved by database settings,
    so I did a quick search, and Tom Kyte was the first link I
    clicked and he seems to have dealt with this issue before.
    http://tinyurl.com/3xcb7 HE WRITES: serializable will give you
    repeatable read. Make sure you test lots with this, playing
    with the initrans on the objects to avoid the "cannot
    serialize access" errors you will get otherwise (in other
    databases, you will get "deadlocks", in Oracle "cannot
    serialize access") I would bet working with some DBAs, you
    could have gotten past the issues your client was having as
    you described above.
    Oh, yes, the workaround every time this occurs with another
    customer is to have them bump up the amount of that
    statically-defined memory. Yes, this is what I'm saying.
    This could be avoided if oracle implemented a dynamically
    self-adjusting DBMS-wide pool of short-term memory, or used
    more complex actual transaction logging.
    I think you are discounting just how complex their logging is.
    Well, it's not the logging that is too complicated, but rather
    too simple. The logging is just an alternative source of memory
    to use for intra-transaction bookkeeping. I'm just criticising
    the too-simpleminded fixed-per-table scratch memory for stale-
    read-data-fake-repeatable-read stuff. Clearly they could grow and
    release memory as needed for this.
    This issue is more just a weakness in oracle, rather than a
    deception, except that the error message becomes
    laughable/puzzling that the DBMS "cannot serialize a
    transaction" when there are no other transactions going on.
    Okay, the error message isn't all that great for this situation.
    I'm sure there are all sorts of cases where other DBMS's have
    laughable error messages. Have you submitted a TAR?
    Yes. Long ago! No one was interested in splitting the current
    message into two alternative messages:
    "This transaction has just become unserializable because
    of data changes we allowed some other transaction to do"
    or
    "We ran out of a fixed amount of scratch memory we associated
    with table XYZ during your transaction. There were no other
    related transactions (or maybe even users of the DBMS) at this
    time, so all you need to do to succeed in future is to have
    your DBA reconfigure this scratch memory to accommodate as much
    as we may need for this or any future transaction."
    I am definitely not an Oracle expert. If you can describe for
    me any application design that would benefit from Oracle's
    implementation of serializable isolation level, I'd be
    grateful. There may well be such.
    As I've said, I've been doing web apps for a while now, and
    I'm not sure these lend themselves to that isolation level.
    Most web "transactions" involve client think-time which would
    mean holding a database connection, which would be the death
    of a web app.
    Oh absolutely. No transaction, even at default isolation,
    should involve human time if you want a generically scaleable
    system. But even with a to-think-time transaction, there is
    definitely cases where read-data are required to stay as-is for
    the duration. Typically DBMSes ensure this during
    repeatable-read and serializable isolation levels. For those
    demanding in-the-know customers, oracle provided the select
    "FOR UPDATE" workaround.
    Yep. I concur here. I just think you are singing the praises of
    other DBMS's, because of the way they implement serializable,
    when their implementations are really based on something that the
    Oracle corp believes is a fundamental weakness in their
    architecture, "Writers block readers". In Oracle, this never
    happens, and is probably one of the biggest reasons it is as
    world-class as it is, but then its behaviour on serializable
    makes you resort to SELECT FOR UPDATE. For me, the trade-off is
    easily accepted.
    Well, yes and no. Other DBMSes certainly have their share of faults.
    I am not critical only of oracle. If one starts with Oracle, and
    works from the start with their performance architecture, you can
    certainly do well. I am only commenting on the common assumptions
    of migrators to oracle from many other DBMSes, who typically share
    assumptions of transactional integrity of read-data, and are surprised.
    If you know Oracle, you can (mostly) do everything, and well. It is
    not fundamentally worse, just different than most others. I have had
    major beefs about the oracle approach. For years, there was TAR about
    oracle's serializable isolation level *silently allowing partial
    transactions to commit*. This had to do with tx's that inserted a row,
    then updated it, all in the one tx. If you were just lucky enough
    to have the insert cause a page split in the index, the DBMS would
    use the old pre-split page to find the newly-inserted row for the
    update, and needless to say, wouldn't find it, so the update merrily
    updated zero rows! The support guy I talked to once said the developers
    wouldn't fix it "because it'd be hard". The bug request was marked
    internally as "must fix next release" and oracle updated this record
    for 4 successive releases to set the "next release" field to the next
    release! They then 'fixed' it to throw the 'cannot serialize' exception.
    They have finally really fixed it.( bug #440317 ) in case you can
    access the history. Back in 2000, Tom Kyte reproduced it in 7.3.4,
    8.0.3, 8.0.6 and 8.1.5.
    Now my beef is with their implementation of XA and what data they
    lock for in-doubt transactions (those that have done the prepare, but
    have not yet gotten a commit). Oracle's over-simple logging/locking is
    currently locking pages instead of rows! This is almost like Sybase's
    fatal failure of page-level locking. There can be logically unrelated data
    on those pages, that is blocked indefinitely from other equally
    unrelated transactions until the in-doubt tx is resolved. Our TAR has
    gotten a "We would have to completely rewrite our locking/logging to
    fix this, so it's your fault" response. They insist that the customer
    should know to configure their tables so there is only one datarow per
    page.
    So for historical and current reasons, I believe Oracle is absolutely
    the dominant DBMS, and a winner in the market, but got there by being first,
    sold well, and by being good enough. I wish there were more real market
    competition, and user pressure. Then oracle and other DBMS vendors would
    be quicker to make the product better.
    Joe
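The SELECT ... FOR UPDATE pattern recommended throughout the discussion can be sketched in JDBC. The table t1 and columns id/fld come from the SQL*Plus example earlier in the thread; the connection wiring and the ForUpdateSketch/readThenUpdate names are assumptions for the sketch:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ForUpdateSketch {
    // Locks the selected row until commit, so no other transaction can
    // change it underneath us -- the guarantee that Oracle's serializable
    // level alone does not provide, per the discussion above.
    static final String LOCK_SQL = "SELECT id, fld FROM t1 WHERE id = ? FOR UPDATE";

    static void readThenUpdate(Connection conn, int id, String newFld) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement lock = conn.prepareStatement(LOCK_SQL)) {
            lock.setInt(1, id);
            try (ResultSet rs = lock.executeQuery()) {
                if (rs.next()) { // row is now locked; safe to decide on current data
                    try (PreparedStatement upd =
                             conn.prepareStatement("UPDATE t1 SET fld = ? WHERE id = ?")) {
                        upd.setString(1, newFld);
                        upd.setInt(2, id);
                        upd.executeUpdate();
                    }
                }
            }
        }
        conn.commit(); // releases the row lock
    }

    public static void main(String[] args) {
        System.out.println(LOCK_SQL); // no database here; just show the locking query
    }
}
```

The design point, as the thread says, is that the serialization starts at the read: the lock is taken by the SELECT, not deferred to commit time.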

  • AGIS Inbound 'Transaction Status' changes to error on clicking Approve

    Hi,
    AGIS inbound 'transaction status' changes to Error on clicking the Approve or Apply button. How can I find out what that error is, as I am not able to find any error message?
    What are the causes of this error?
    I managed to generate the outbound transaction. When the recipient has received and approved it, it shows an error under Transaction Status, and on the outbound side it shows Batch Status as Error.
    Can someone help me find a way to fix it?
    Thanks in Advance,
    Thejas

    Something might have gone wrong in the Workflow. Please check the metalink document 785167.1
    Thanks,
    John.

  • Quality management - can I change description of quality lot after UD ?

    Quality management - can I change description of quality lot after Usage Decision ?

    Hi,
    Try transaction QA12 to change the usage decision, and then try to change the description of the lot.
    Regards,
    Prashant
