Database update

Hi Gurus,
I want to know how I can see the changes made by an update in the database. For example, if I make a change in the vendor master, I want to see that change in the vendor master table. Say I put USD in the currency field of the vendor master: will that be visible via transaction SE16? And if there is an enhancement, will that also be visible in the table? The table definition only shows the fields, their descriptions, data types, etc., so I am very confused. Please advise.
Thanks,
Kumar

Hi Kumar,
Try this:
You can use the following transactions:
1. ST05
Open two sessions. In one, start ST05 and choose Activate Trace; in the other, run your transaction/function.
After the transaction is completed, go back to the ST05 session and choose Deactivate Trace,
then choose Display Trace and you will see what happened during your transaction (which tables were read, which were updated, which were written, etc.).
2. SE30
Enter the transaction you want to analyze in the transaction field and execute it. When it has finished, open SE30 again and choose Analyze.
Then choose List and filter by type DB, and you will see which tables are used in the database.
Regards
Hope it helps

Similar Messages

  • Locking issue in workflow with consecutive database update

    Dear Workflowers,
    We are in ECC 5.0 and release 6.40. We went live for SAP in February and we are currently using workflow in PLM module for DMS and ECM.
    We have been facing a locking issue that happens randomly in our production and quality systems. The error from the workflow log is "Document XXXX is locked by WF-BATCH". I have two steps in the workflow: one updates the document user (from originator to editor, with custom BO "zdraw" and new method "setuser"), and the next updates the document status (BO "zdraw" method "setstatus", which is inherited from the standard BO "draw").
    I have tried adding a step in between using "wait" (1st try), the function modules "BAPI_DOCUMENT_ENQUEUE" and "BAPI_DOCUMENT_DEQUEUE" (2nd try), and "COMMIT WORK AND WAIT" (3rd try), but the issue remains.
    The other question I had: we need to write "COMMIT WORK" when we use a BAPI to perform a database update in an ABAP program, but I don't see a "COMMIT WORK" in the BO methods that perform the database update (for example "setstatus" in the "draw" object). How does the workflow perform the DB update properly without "COMMIT WORK" when it calls the standard method?
    Could anyone please share your expertise with the issue I am facing?
    Thank you in advance,
    Merta

    Hi Merta,
    Regarding COMMITs: theoretically you should never use COMMIT statements because the Workflow runtime handles that - the transaction of executing the task is the LUW, not your method. By adding COMMIT WORK you are also committing the workflow task execution.
    In practice however there are the occasional exceptions where something just won't work without an explicit commit - but the theory remains that you should always try it without.
    Regarding your problem, the one way to be certain that a DB update is complete is to use a terminating event - either through change documents or status management.
    Failing that, you can write a wrapper method for SETSTATUS that does something like:
    DATA lv_success TYPE abap_bool VALUE abap_false.

    DO 10 TIMES.
      " Try to lock the document here (call the matching enqueue function module).
      IF sy-subrc = 0.
        " Lock obtained, so the DB update has finished - release it again (dequeue)
        " and delegate to the standard method.
        swc_call_method self 'SetStatus' container.
        lv_success = abap_true.
        EXIT.
      ELSE.
        WAIT UP TO 3 SECONDS.
      ENDIF.
    ENDDO.

    IF lv_success = abap_false.
      " Raise a temporary exception so the work item can be retried.
    ENDIF.
    Cheers,
    Mike

  • How to "roll back" the model if a database update fails

    I'd like to know if there's a way to "roll back" the model (the values in the managed beans) if a service call to a database fails during the Invoke Application phase. My understanding of the JSF lifecycle is that the values in the model are updated before a service call to a database would happen. If the database update operation fails, it seems to me that the data in the model and the database would be inconsistent.
    For example, say I have a ridiculously simple application that updates a names database. The fields are id, last name, first name. The application has a simple view that allows the user to update a name in the database. The user updates the last name field for a particular record from "Doe" to "Smith" and clicks "submit". The request goes through the lifecycle, the model values are updated from the web form during the Update Model Values phase, then the call to update the database fails during the Invoke Application phase (the database connection failed). When the Render Response phase completes, the UI would display an error message where I put <h:messages> but the last name value in the UI would show "Smith" instead of "Doe". Is there something I can do to roll the model value back to "Doe" if the database call fails?
    I know that I could redirect to a technical error page, but I'd like to avoid it unless it is the best thing to do.
    I'd appreciate any advice you can offer.
    Thanks.
    - Luke

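    One hedged way to handle this, assuming a simple backing bean (the bean and service names below are invented for illustration, not a definitive JSF pattern): keep a pristine copy of the value alongside the bound one, and restore it when the service call fails.

    import javax.faces.application.FacesMessage;
    import javax.faces.context.FacesContext;

    public class NameBean {

        /** Hypothetical service interface - stands in for whatever does the actual DB work. */
        public interface NameService { void updateLastName(String value); }

        private String lastName;          // bound to the <h:inputText> in the form
        private String lastNamePristine;  // last value known to be in the database

        private NameService service;      // injected or looked up elsewhere

        public String save() {
            try {
                service.updateLastName(lastName);   // may throw if the DB call fails
                lastNamePristine = lastName;        // model and database agree again
                return "success";
            } catch (RuntimeException e) {
                lastName = lastNamePristine;        // put "Doe" back into the model
                FacesContext.getCurrentInstance().addMessage(null,
                        new FacesMessage("Update failed - previous value restored."));
                return null;                        // re-render the same view
            }
        }

        public String getLastName() { return lastName; }
        public void setLastName(String value) { this.lastName = value; }
    }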

  • To perform database update in a module with AT EXIT-COMMAND addition

    Dear All,
    I have a function code 'EXIT' with function type 'E'. When this function code is triggered, my screen should close.
    In the module that handles this function code (defined with the AT EXIT-COMMAND addition), I will prompt the user, via the POPUP_TO_CONFIRM_STEP function module, whether he/she wants to save the data before exiting. The text message in the dialog box is "Do you want to save before exiting?".
    When the user wants to save the data, a simple UPDATE statement will be executed to write the data on screen to database.
    The problem here is that since the module is defined with the AT EXIT-COMMAND addition, the data on the screen won't be copied to the corresponding variables in the code (correct me if I'm wrong). Therefore, even though the database update is performed, the data written to the database is no different from the original.
    How to perform database update in a module with AT EXIT-COMMAND addition?
    or
    Is it even a "custom" or a "good practice" to prompt user to save data before exiting?
    Thanks in advance,
    Haris

    With an exit command, if there's anything that would be lost, I would prompt "Data will be lost, do you wish to continue?". If they do, the database is not updated, if they don't, they stay in the transaction.
    This is because the exit command runs before validation. So how can you know the data is correct?
    If you have a button to leave the transaction that isn't an exit command, then you could prompt to save instead. There the choices should be - quit without saving, save and quit, don't quit.
    Doing a database update in an exit command is not a good idea.
    matt
    Edited by: Matt on Mar 15, 2011 11:53 AM

  • Multiple database updates versus Transaction

    Hi,
    I need some help from you great minds. Here is what I am trying to accomplish:
    I have a message-driven bean which makes multiple update calls to an Oracle database. I want to commit all the db updates only at the end (after all update calls execute okay), and if any problem occurs in any of the update calls, I want to roll back all the previously successful update calls I have already made. I am using container-managed transactions via an MDB.
    I tried the UserTransaction.setRollbackOnly() call but that did not completely help: it rolled back the message altogether. All I want is, if an error occurs during a database update, to roll back just the database changes and throw away the message. Is there a way I can roll back only the database changes? Any suggestions? Please. Thanks

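    For what the question above actually asks (roll back only the database work but still consume the message), one common approach is to run the updates in their own transaction - for example a session bean method marked REQUIRES_NEW - and catch the failure inside the MDB so the message delivery still commits. A rough sketch using EJB 3 annotations (class names and queue settings are invented, not taken from the thread):

    // UpdateService.java
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.jms.Message;

    @Stateless
    public class UpdateService {
        // All Oracle updates run inside this one transaction; if the method throws,
        // the container rolls back only this REQUIRES_NEW transaction.
        @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
        public void applyUpdates(Message msg) {
            // ... multiple JDBC / entity updates go here ...
        }
    }

    // UpdateMdb.java
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.EJB;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue") })
    public class UpdateMdb implements MessageListener {

        @EJB
        private UpdateService updateService;

        @Override
        public void onMessage(Message message) {
            try {
                updateService.applyUpdates(message);
            } catch (RuntimeException e) {
                // The DB changes were already rolled back by the container.
                // Swallowing the exception lets the MDB's own transaction commit,
                // so the message is consumed instead of being redelivered.
            }
        }
    }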

  • CcBPM, Async Messages and database update performance

    We have several business processes defined and running in XI 3.0. Some are being invoked as web services.
    Our performance/scalability is extremely poor. We believe the problem is the database, which apparently is being updated at a phenomenal rate. The database disk I/O is exceptionally high, and has been tracked to writes to the database container/table files and transaction logs. The database has been carefully tuned, and we are rushing to migrate it to a higher-capacity machine.
    It would seem, based on documentation, that the database updates are a result of the asynch message interfaces we are using through the business processes. 
    The first question is are we correct in the above statement?
    The second question is are the db updates being done under a transactional scope, if so what is the isolation level of the update, and can we change the isolation level?
    Third question, if the updates are due to the asynch message interfaces, can we use synch message interfaces and reduce the database work? If so, how?
    Finally,

    I don't think you'll find a documented general number of application variables you can have on a single server. But I think you could get away with 400 variables without any trouble.
    Where you might run into trouble, though, is when you have multiple concurrent requests trying to read and change these values. You'll have to ensure that you single-thread write access to these variables using CFLOCK.
    Dave Watts, CTO, Fig Leaf Software
    http://www.figleaf.com/
    http://training.figleaf.com/
    Fig Leaf Software is a Veteran-Owned Small Business (VOSB) on
    GSA Schedule, and provides the highest caliber vendor-authorized
    instruction at our training centers, online, or onsite.

  • Constructing Database Update DML Statements

    This is a very common problem I am sure but want to know how others handle it. I am creating a web application via Servlets and JSPs. I am working with Oracle 10g.
    Here's the problem/question.
    When I present an html form to a user to update an existing record, how do I know what has changed in the record to create the dml statement? I could just update the entire record with the values supplied via the POST data on submit but that does not seem right since maybe only 1 out of 10 fields actually changed. This would also write needless audit and logging information to the database. I have thought about comparing the Request parameters sent with the post with the original java bean used to populate the form in the first place by adding it back to the request as an attribute. Is this the only way to do it or is there a better way?
    Thanks everyone,
    -Brian
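    One hedged way to do the comparison described above (bean, table and column names below are invented): keep the bean that originally populated the form in the session, diff it against the posted parameters, and build the UPDATE only from the columns that actually changed. For example:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import javax.servlet.http.HttpServletRequest;

    public class PersonUpdateHelper {

        /** Hypothetical bean that was used to populate the form in the first place. */
        public static class Person {
            public long id;
            public String firstName;
            public String lastName;
        }

        /** Collect only the columns whose posted value differs from the original bean. */
        public static Map<String, String> changedColumns(Person original, HttpServletRequest req) {
            Map<String, String> changed = new LinkedHashMap<>();
            String first = req.getParameter("firstName");
            String last = req.getParameter("lastName");
            if (first != null && !first.equals(original.firstName)) changed.put("FIRST_NAME", first);
            if (last != null && !last.equals(original.lastName)) changed.put("LAST_NAME", last);
            return changed;
        }

        /** Build and run an UPDATE for just the changed columns (no-op if nothing changed). */
        public static void update(Connection con, Person original, Map<String, String> changed)
                throws SQLException {
            if (changed.isEmpty()) return;
            StringBuilder sql = new StringBuilder("UPDATE PERSON SET ");
            int i = 0;
            for (String col : changed.keySet()) {
                if (i++ > 0) sql.append(", ");
                sql.append(col).append(" = ?");   // column names come from our own map, not user input
            }
            sql.append(" WHERE ID = ?");
            try (PreparedStatement ps = con.prepareStatement(sql.toString())) {
                int idx = 1;
                for (String value : changed.values()) ps.setString(idx++, value);
                ps.setLong(idx, original.id);
                ps.executeUpdate();
            }
        }
    }

    This also keeps the audit and logging noise down, because unchanged columns never appear in the statement.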

    What database drivers are you using? I recall there being some bug in
    sp2 that caused delayed transaction commits for a very specific
    combination of transaction settings and database drivers.
    I think it was third-party type II Oracle drivers and local
    transactions, but I'm not sure.
    In any case, I'd contact support as there is a patch available.
    David
    Gurjit wrote:
    Hi all,
    I have posted something of this sort earlier too. The problem is that
    the database is not reflecting what has been updated using queries from the
    application.
    THe structure of the application is as follows.
    DB = Oracle 8.1.6
    Web Server = iPlanet 4.0
    App Server = iPlanet 6.0 sp2
    The database is connected to via the EJB's. No Database transactions are
    being used except for Bean transactions which are container managed. The
    setAutoCommit flag is set to true. Each query is a transaction. The jsp's
    call these ejb's through wrapper classes. As stated earlier the database
    updates (Inserts, update, delete statements) are not getting reflected in
    the database immediately. The updates happen after a gap of about 7-10
    minutes. Why is this sort of behaviour coming up. This is almost like batch
    updates happening on the database. Is there a setting in IPlanet for
    removing this kind of problem.
    Regards,
    Gurjit

  • Database update failed for some organizations when installing Update Rollup 1 for Microsoft Dynamics CRM 2013 Service Pack 1

    Hi, 
    We get the following error in the logfile when we try to install the latest update for CRM: (KB2953252)
    Does anyone know how to fix this problem? 
    09:10:41|   Info| Database update install failed for orgId = 35e7ca08-43fb-4440-ba18-acfc3f42e115.  Continuing with other orgs.  Exception: System.ArgumentNullException: Value cannot be null.
    Parameter name: type
       at System.Activator.CreateInstance(Type type, Boolean nonPublic)
       at System.Activator.CreateInstance(Type type)
       at Microsoft.Crm.Setup.Database.DllMethodAction.Execute(Guid organizationId)
       at Microsoft.Crm.Setup.Database.DatabaseInstaller.ExecuteReleases(ReleaseInfo releaseInfo, Boolean isInstall)
       at Microsoft.Crm.Setup.Database.DatabaseInstaller.Install(Int32 languageCode, String configurationFilePath, Boolean upgradeDatabase, Boolean isInstall)
       at Microsoft.Crm.Setup.Database.DatabaseInstaller.InstallUpdate(String configurationFilePath, Boolean upgradeDatabase)
       at Microsoft.Crm.Setup.Common.Update.DBUpdateDatabaseInstaller.OrgInstall(ArrayList orgIdArray)


  • Multiple Database Updates

    Hi
    In development environment I have many branches(copies) of a database.
    For every change (DDL, DML) I have to log in to every database manually and execute the statements. Is there a client tool that supports multiple database updates, or has anyone ever built a customized routine for that?
    Ideally, since I have differently named branches, I would also like to define the scope of a change, i.e. execute the change only on the databases of the "abc" and "pqr" branches.
    Wishes

    If I understand correctly what you want to do, is the purpose to do some task or run some script in multiple databases on the same server?
    If so, this is done easily by listing the database (sids) in a file and reading the file in a loop statement.
    In my case, I simply create a file on the server called localsids. I keep this in /var/opt/oracle directory.
    Then, in my script, I set:
    SIDFILE='/var/opt/oracle/localsids'
    NEWPASS=`cat $HOME/.xlh/sys`
    # This loop reads through the 'sidlist' and then looks for a password
    # stored in a separate directory for each sid, but if individual
    # directories do not exist, then it uses the standard system password.
    # It then opens a sqlplus session for each sid (as it loops through the
    # sidfile and executes some sql statement(s), or executes a sql script.
    cat $SIDFILE | while read SID
    do
    ORACLE_SID=$SID
    export ORACLE_SID
    echo $SID # this is only for my own verbose purposes
    sqlplus -s system/manager@$SID <<EOF > /tmp/chg_passwd_${SID}.sql
    alter user system identified by $NEWPASS;
    alter user sys identified by $NEWPASS;
    EOF
    done
    exit
    # In the above example, i am changing the sys and system passwords for all databases listed in the localsids file.
    Hope this helps...
    ji li
    Message was edited by: ji li to simplify the example...
    I have simplified the above example to hardcode the system password into this script, however, normally I would never do this in real practice. This is just as an example to simplify how to run a loop to run a common script or sql statement in each database.
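    If you would rather drive the same kind of loop from Java/JDBC instead of sqlplus, a rough sketch follows; the URL file, the branch filter and the password location are assumptions for illustration only, not part of the original answer.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.List;

    public class MultiDbRunner {
        public static void main(String[] args) throws Exception {
            // One JDBC URL per line, e.g. jdbc:oracle:thin:@host1:1521:abc1
            List<String> urls = Files.readAllLines(Paths.get("/var/opt/oracle/localurls"));
            String ddl = "ALTER TABLE T ADD (NEW_COL VARCHAR2(30))";  // the change to roll out

            for (String url : urls) {
                // Limit the scope to the "abc" and "pqr" branches, as asked in the question.
                if (!(url.contains("abc") || url.contains("pqr"))) continue;
                try (Connection con = DriverManager.getConnection(url, "system", readPassword());
                     Statement stmt = con.createStatement()) {
                    stmt.execute(ddl);
                    System.out.println("OK: " + url);
                } catch (SQLException e) {
                    System.out.println("FAILED: " + url + " - " + e.getMessage());
                }
            }
        }

        // Placeholder for however you store the password (do not hardcode it).
        private static String readPassword() throws Exception {
            return new String(Files.readAllBytes(
                    Paths.get(System.getProperty("user.home"), ".xlh", "sys"))).trim();
        }
    }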

  • SAP database updations

    Hi all
    I just want to know whether we can do SAP database updates through a Java program. If we can, please explain how. I just want to access the data in the (SAP) database and update it whenever we need to through the Java program.
    Thank You.
    Regards
    Giri

    Giri,
    Ya... the Java app sends data to XI, and XI needs to insert the data into the database.
    Before the next round of data is sent by the application, XI should send back info on the status of the records.
    Is this what you want?
    As I pointed out earlier, this is possible without a BPM. But make the call from the sending application synchronous, and then map the JDBC response back to the calling application.
    But this update is only possible through XI.
    If I got the requirement wrong, can you let me know in more detail what it is that you are trying to do?
    Reward if solved
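    If a direct connection from Java is acceptable in other scenarios (rather than going through XI), a hedged sketch with SAP JCo 3 could look like the one below; the destination name and the update function module are placeholders, not real objects from this thread.

    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;

    public class SapUpdateExample {
        public static void main(String[] args) throws JCoException {
            // "MY_SAP_SYSTEM" is a hypothetical destination configured in a
            // MY_SAP_SYSTEM.jcoDestination properties file.
            JCoDestination dest = JCoDestinationManager.getDestination("MY_SAP_SYSTEM");

            // Look up a remote-enabled function module / BAPI in the repository.
            // "Z_UPDATE_SOMETHING" is a placeholder - use whichever BAPI/RFC fits your data.
            JCoFunction fn = dest.getRepository().getFunction("Z_UPDATE_SOMETHING");
            if (fn == null) {
                throw new IllegalStateException("Function not found in the SAP repository");
            }
            fn.getImportParameterList().setValue("SOME_PARAM", "SOME_VALUE");
            fn.execute(dest);

            // BAPIs do not commit on their own - trigger the commit explicitly.
            JCoFunction commit = dest.getRepository().getFunction("BAPI_TRANSACTION_COMMIT");
            commit.execute(dest);
        }
    }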

  • AUXILIARY database update using full backup from target database

    Hi,
    I am now facing the problem of how to keep an AUXILIARY database consistent with the target database over a certain period (a week). I do a full backup of our target database every day using RMAN. I know it is possible to use expdp to achieve this, but I want to use the current full backup to do it. Does anybody have ideas or experience with that? Thanks in advance!
    Regards,
    lik

    That's OK. Even if you don't use RMAN to clone your database, you can simply create the clone from a cold backup of the primary database.
    Important things are
    1) you must catalog all datafiles as image copy level 0 in the cloned database
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@clonedb (in host 2)
    RMAN> catalog datafilecopy
    '/oracle/oradata/CLONE/datafile/abc.dbf',
    '/oracle/oradata/CLONE/datafile/def.dbf',
    '/oracle/oradata/CLONE/datafile/ghi.dbf'
    level 0 tag 'CLONE';
    2) You need to make incremental backups of the primary database to refresh the clone database. Make sure you specify a tag for the incremental, and that the tag name is exactly the same as the one used in step (1).
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@prod (in host 3)
    RMAN> backup incremental level 1 tag 'CLONE' for recover of copy with tag 'CLONE' database format '/backup/%u';
    3) Copy the newly created incrementals (in host 3) to the clone database site (host 2). Make sure the directory is exactly the same.
    $ rcp /backup/<incr_backup> /backup/
    -- rcp <the loc of a incremental in host 3> <the loc of a incremental in host 2>
    4) Apply incrementals to update the clone database. Make sure you provide the tag you specified.
    RMAN> connect catalog rman/rman@rcvcat
    RMAN> connect target sys/manager@clone
    RMAN> recover copy of database with tag 'CLONE';
    5) After updating the clone database, delete the incremental backups and uncatalog the image copies
    RMAN> delete backup tag 'CLONE';
    RMAN> change copy like '/oracle/oradata/CLONE/datafile/%' uncatalog;
    *** As you can see, you can clone a database using any method. The key is that you have to catalog the datafile copies in the clone database whenever you refresh it, and uncatalog them after finishing.

  • Database updates statistics maintenance plan issue.

    Hi team,
    We configured a job through a maintenance plan; the job name is "database update statistics" and the database size is 280 GB. This job has been running for 13 to 15 hours and has still not finished - it just keeps running.
    When I run the same task through the script below, it completes within 2 hours.
    Use database
    Go
    Exec sp_updatestats
    What is the main problem with this maintenance plan?
    Note: there are no other jobs and no traffic on this server, only the abc_update subplan1 job.

    Hello,
    Updating stats for a whole database of 280 GB will always be problematic. It is better to run update statistics for the tables and indexes that change frequently.
    Now to your question, a few points which the sp_updatestats entry in BOL lists:
    http://technet.microsoft.com/en-us/library/ms173804.aspx
    sp_updatestats updates statistics on disabled nonclustered indexes and does not update statistics on disabled clustered indexes.
    sp_updatestats updates only the statistics that require updating based on the rowmodctr information in the sys.sysindexes catalog view, thus avoiding unnecessary updates of statistics
    on unchanged rows.
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Database Update Pending

    Hello Everyone,
    I've recently redistributed the workload for my installation of File Reporter 2.0 so that of the 60-something servers I am scanning, none of the scan workload happens on the engine server. This does seem to have improved the performance of the engine in generating scheduled reports. However, I am still having performance problems on the last phase of scanning -- the point at which the scan says "Database Update Pending". There are scans from the 15th still stuck in this phase (the last completed scans are from the 14th).
    I assume that this is because I am using the internal postgres database on the engine, and that all of the scans trying to check in (from the 15th to today) are causing the database to spin. That is a pity, because I would have thought that a product like NFR would have some nice means of letting each scan be processed by the database in batches. Or maybe it does, in which case the problem is that the database is just taking a long time for each individual dataset to be processed.
    Have any of you seen this? Any advice for kicking it along? My engine server is not underpowered -- it has a couple of processors and 12GB of RAM (I really should take that down to 6, actually ...) and right now the CPU shows as nearly idle -- the NFRENgine is using 2% and system idle is using the rest. I'd prefer not to move to an external DB (especially since utilization doesn't seem to be the issue) but I'm willing to try if a case can be made.
    Johnnie Odom
    School District of Escambia County

    Hi Johnnie,
    As Duncan already mentioned, the engine log will be interesting to look at what happens.
    NFR is able to spread the load for the agent scans; however, these agents send their scan info to the Engine, where a separate process imports it into the central database. Unfortunately NFR only handles one import at a time, so if you have a lot of scan data coming in it can take some time to get it all imported.
    Ron

  • Database update on navigation

    Dear All.
    I am stuck with a fundamental concept now.
    I update a table in a view and do not commit.
    I click a button and go to the next view. There I roll back.
    But the table update is already committed.
    I think it is the same in classic ABAP dialog programming - going from one screen to another triggers a database commit internally.
    Is it the same in Web Dynpro for ABAP, i.e. does navigating to another view cause an implicit commit?
    Thanks in adv.

    Hello Aishi,
    The specific example I was referring to is the decoupled infotype framework for updating infotypes.
    This is available for both PA and OM/PD infotypes.
    for more information I'd suggest having a search on SCN - although some of these links may help you:
    [Decoupling Infotypes - SAP Help |http://help.sap.com/erp2005_ehp_04/helpdata/EN/43/a503b963161bbfe10000000a1553f7/frameset.htm]
    also refer to the classes/methods:
    CL_HRPA_MASTERDATA_FACTORY=>GET_PLAIN_INFOTYPE_ACCESS
    CL_HRBAS_PLAIN_INFOTYPE_ACCESS (although it is not as simple to use as the PA access).
    These classes allow you to create all the data first in the buffer and then when you are ready - flush the buffer to the database.
    Although exactly how they work is probably a topic for a different forum - and I've probably already overstepped the bounds of what is reasonable for this forum by mentioning them

  • Database update statistics

    Hi
    I am in the middle of an ECC 5.0 installation (IDES). At step 29, database update statistics, the system stops with the error
    CJS-00288  Could not update database statistics. DIAGNOSIS: Command brconnect -c -f crsyn -o SAPDEV returned 128, which is not a success code. SOLUTION: See brconnect.log for details.
    brconnect log is empty.

    Hi Sandhya
    Check the Link:  [Link |http://sap.ittoolbox.com/groups/technical-functional/sap-basis/ides-r3-47-installation-error-1701550]
    and SAP notes 593582 and 145777
    Edited by: Anindya Bose on Dec 10, 2009 2:38 PM

  • Need DataBase Update Event Handling

    Hi All,
    Are there any database update event and listener classes in the Java or J2EE APIs?
    How do I generate database update events?
    Can you give me any suggestions?
    Thanks,
    Nirmal

    Try this: http://forum.java.sun.com/thread.jspa?messageID=4449127, with the main Object solution...
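    There is no generic "database update happened" event in the core JDBC API, so a common workaround is to fire your own events from the data-access layer. A small hedged sketch of that observer pattern (class and method names are invented):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class EventingDao {

        /** Listener interface for "something was updated" notifications. */
        public interface UpdateListener {
            void updated(String table, int rowsAffected);
        }

        private final List<UpdateListener> listeners = new CopyOnWriteArrayList<>();
        private final Connection connection;

        public EventingDao(Connection connection) {
            this.connection = connection;
        }

        public void addUpdateListener(UpdateListener listener) {
            listeners.add(listener);
        }

        /** Runs the update and notifies listeners only after it succeeded. */
        public int executeUpdate(String table, String sql, Object... params) throws SQLException {
            try (PreparedStatement ps = connection.prepareStatement(sql)) {
                for (int i = 0; i < params.length; i++) {
                    ps.setObject(i + 1, params[i]);
                }
                int rows = ps.executeUpdate();
                for (UpdateListener listener : listeners) {
                    listener.updated(table, rows);   // e.g. refresh a cache or write an audit entry
                }
                return rows;
            }
        }
    }

    Some databases also offer server-side change notification (for example Oracle's Database Change Notification through the Oracle JDBC driver), but that is vendor-specific rather than a standard J2EE API.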
