Capture changes in the database

Hi,
I want to capture the changes that happen in the database. What is the best way to do this?
Thanks,
GK

I have a logon trigger inserting the time and machine info into a table. Then, before the session disconnects, I pull the last SQL statement from the data dictionary and update the row I inserted into the table. This will capture a little info for you. I have another process to capture the errors for me - it just uses the system-level triggers. So far, I cannot say that it has a performance impact.
Remember - until it disconnects, all info for the session is available in the v$ views.
It all depends on how much info you need to capture... and also on what changes you need. If you specify your goal, it'll be easier to find a solution that works for you.
Hope this helps,
mj
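For anyone who wants to do the "pull the last SQL from the data dictionary" part from application code, here is a rough JDBC sketch of reading the current session's machine, logon time and previous statement from V$SESSION and V$SQL. The connection details are placeholders, the column names assume Oracle 10g or later, and the account needs SELECT access on those v$ views - treat it as an illustration of the idea mj describes, not a finished auditing tool.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Requires the Oracle JDBC driver (ojdbc) on the classpath.
    public class SessionInfo {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - replace with your own.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "audit_user", "secret")) {
                // Machine, logon time and the previous SQL statement of the
                // current session, read from the v$ views mentioned above.
                String sql =
                    "SELECT s.machine, s.logon_time, q.sql_text "
                  + "FROM v$session s LEFT JOIN v$sql q ON q.sql_id = s.prev_sql_id "
                  + "WHERE s.audsid = SYS_CONTEXT('USERENV', 'SESSIONID')";
                try (PreparedStatement ps = conn.prepareStatement(sql);
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("MACHINE") + " | "
                                + rs.getTimestamp("LOGON_TIME") + " | "
                                + rs.getString("SQL_TEXT"));
                    }
                }
            }
        }
    }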

Similar Messages

  • How to change a profile value in PL/SQL code without making a change in the database

    How to change a profile value in PL/SQL code without making a change in the database.

    I have a program where, if the profiles 'Printer' and 'Number of Copies' are set at the user level, the output is sent by default to the printer mentioned in the set-up when the report completes. What the user wants is:
    if these profiles are set for the user running this program, automatic printing should not be done.

  • Error message: "Querying or saving changes to the database failed."

    Hi all,
    I can sense I'm becoming a regular on this particular forum - that's what comes of spending 8 hours a day trying to get this software working!
    The problem
    When I try to set up a new device, or modify an existing one, I get an error message when I reach the Transcode Settings page. The error message is "Querying or saving changes to the database failed." After this I get a blank screen and can no longer see my settings.
    No amount of restarting has fixed this. And I cannot see a way to create a device through the client software and add transcode settings.
    Help?
    Thanks in advance,
    Ben

    Ah very good!
    Ok - it seems then that this works absolutely fine as a workaround. I can create a device and then assign the device to transcode settings through the client administration panel (which is indeed what I meant).
    So, there is a workaround. Far less pressing now, but I would still like to know why I'm getting the error message all the same - I'm now concerned about the integrity of the database...
    Thanks Tony!
    B

  • How to send changes in the database table periodically using Change Pointers

    Hi,
    How can I send changes in the database table to the external system periodically using Change Pointers?
    Thanks & Regards,
    Gopi.

    That depends on what table you are referring to.

  • Access 2003 - Repeated "You can't make changes to the database" error

    Hey all,
    I'm having a very strange problem with Microsoft Access 2003. We work with old databases, and when we open them up through Access 2003 we get the "you can't make changes to the database objects" message, which is expected. However, the laptop has started to repeat this message over and over, about 8 times after the first time you click on it. It's very annoying to have to keep clicking to get rid of the error message. It's a very strange problem; any ideas on how to solve this, or even an explanation of what might have happened, will be appreciated.
    Thanks in advance
    P.S. It is not an option to update the database. We work with a program that uses an old Access 97 database, and if we update the database the program will stop working.

    Hi
    Firstly, it doesn't seem that this is a Toshiba notebook fault.
    This sounds more like a Microsoft issue.
    I would recommend checking the Microsoft knowledge base articles and/or contacting Microsoft support.
    I have investigated a little bit and found these pages. Maybe you will find some useful info there.
    http://www.smartcomputing.com/Editorial/article.asp?article=articles/WebOnly/TechSupport/420w10/20w01.asp&guid
    http://support.microsoft.com/kb/824260/
    http://support.microsoft.com/?scid=kb;en-us;824278&spid=2509&sid=98
    Good luck

  • Why should I recompile a report for any change in the database?

    Hi All
    I call a report (.rep) from a form using RUN_REPORT_OBJECT, and everything is OK
    For any change in the database, for example :
    1- Export Dump file for the schema.
    2- Drop the schema
    3- Re-create the schema again
    4- Import the dump into the schema
    When I call the report again, it doesn't run. If I recompile the report, it works.
    I have read many posts about this issue; most of them talk about the command-line parameter RECURSIVE_LOAD=NO,
    v_url :=' COMPANY_NO='||:global.company_no|| ' RECURSIVE_LOAD=NO';
    SET_REPORT_OBJECT_PROPERTY(repid,REPORT_OTHER,v_url);
    But the problem is still not solved.
    Notes:
    1- The report type is .REP
    2- If I use the .JSP report it works fine, but I can't put the report's source code at the customer site.
    Please Help

    Well, first of all, this would not be the correct forum; there is a dedicated Reports forum for Reports-related questions. Secondly, this most certainly depends on your report. If you have a query like
    select * from my_table or, e.g., are using %ROWTYPE records in your report, then you most certainly will have to recompile it after you do an exp/imp.
    cheers

  • CMP bean - change to the database column not accepted

    Hi,
    I have a CMP entity bean defined within the application. I had to change the column size of one of the tables defined in the database; it was changed from 1 character to 3. But while inserting records through the application, the EJB still seems to consider the column to be 1 character and not 3. I'm not sure if there is something to be changed in the weblogic-cmp-rdbms-jar.xml file. Can someone tell me if I have to change something so that the entity bean recognizes the column size as 3 and not as 1?
    thank you very much in advance.
    -Harish

    Here is the rdbms-bean.jar.xml setting for the particular table. I do not think there are any special settings like dbms-column-type.
    <weblogic-rdbms-bean>
      <ejb-name>OrderItem</ejb-name>
      <data-source-name>Order.DataSource</data-source-name>
      <table-map>
        <table-name>ORDER_ITEM</table-name>
        <field-map>
          <cmp-field>itemId</cmp-field>
          <dbms-column>item_id</dbms-column>
        </field-map>
        <field-map>
          <cmp-field>lenderPoolCode</cmp-field>
          <dbms-column>lender_pool_code</dbms-column>
        </field-map>
      </table-map>
      <weblogic-query>

  • Inserting tuples in a ResultSet without propagating changes to the database

    Hello,
    After statement.executeQuery(A_SQL_STATEMENT) I get a ResultSet object; however, I want to insert many extra tuples into the final ResultSet before it is iterated and its content printed with println(). I have tried the following possible solutions, but all of them have problems:
    Possible solution 1:
    rs.moveToInsertRow();
    rs.updateString(...)
    rs.insertRow();
    PROBLEMS:
    insertRow() writes the new row through to the actual database table, which is exactly what I want to avoid.
    Possible solution 2:
    Extend the driver's ResultSet implementation class and override insertRow() so that it does not update the actual data.
    PROBLEMS:
    I am using the Oracle JDBC drivers; I cannot get the source code to extend the driver's ResultSet implementation.
    Possible solution 3:
    Implement the java.sql.ResultSet interface by myself.
    PROBLEMS:
    Far too much work. Is there another solution?
    Thanks for your help,
         Eric

    Hi
    I think solution 2 with a twist will do...
    Create a MyResultSet class that extends Object and implements java.sql.ResultSet.
    Give it a constructor that accepts an oracle.jdbc.driver.OracleResultSet as a parameter, and remove the default constructor.
    Map all methods to use the passed OracleResultSet object, except for insertRow(), where you provide your own implementation...
    In your code, where you currently have
    oracle.jdbc.driver.OracleResultSet rs = (oracle.jdbc.driver.OracleResultSet) statement.executeQuery("..");
    create
    MyResultSet myRS = new MyResultSet(rs);
    and use myRS for all operations...
    Though I am not sure, I think this is called delegation in OO terminology.
    Rupinder
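    A rough Java sketch of the delegation idea above - instead of hand-writing every java.sql.ResultSet method, it uses a dynamic proxy so that every call is forwarded to the driver's ResultSet except insertRow(), which becomes a no-op. This shows only the delegation mechanics (the class and method names are made up for the example); making the locally buffered rows visible while iterating still needs your own bookkeeping.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.sql.ResultSet;

    public final class NonWritingResultSet {

        // Wraps a driver ResultSet so every call is delegated to it, except
        // insertRow(), which is swallowed so nothing is written to the database.
        public static ResultSet wrap(final ResultSet delegate) {
            InvocationHandler handler = new InvocationHandler() {
                @Override
                public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                    if ("insertRow".equals(method.getName())) {
                        // Your own implementation goes here, e.g. remember the
                        // pending values in a local list instead of hitting the DB.
                        return null;
                    }
                    try {
                        return method.invoke(delegate, args);
                    } catch (InvocationTargetException e) {
                        throw e.getCause();
                    }
                }
            };
            return (ResultSet) Proxy.newProxyInstance(
                    ResultSet.class.getClassLoader(),
                    new Class<?>[] { ResultSet.class },
                    handler);
        }
    }

    Usage, roughly: ResultSet rs = NonWritingResultSet.wrap(statement.executeQuery("...")); then rs.moveToInsertRow(), rs.updateString(...) and rs.insertRow() behave as before, except that insertRow() no longer touches the database.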

  • Any major changes to the database for SCCM 2012 and Reporting?

    I'm in the process of migrating my custom reports from SCCM 2007 to SCCM 2012. I wonder if all the tables / views in SCCM 2007 are the same in SCCM 2012 for my reporting purposes? So far everything I've migrated over appears to work without a problem.
    Orange County District Attorney

    Yes, I know this is an old post, but I’m trying to clean them up. Did you solve this problem, if so what was the solution?
    The SQL schema for SCCM 2012 has been released here.
    https://technet.microsoft.com/en-us/library/dn581978.aspx
    Garth Jones | My blogs: Enhansoft and Old Blog site | Twitter: @GarthMJ

  • Trigger changes are not committing to the database

    I have 9iAS and 9i DB both on my laptop.
    I am having a problem in which code run from a WHEN-BUTTON-PRESSED trigger is not committing the changes to the database. In the trigger I have:
    1 record insert into table A.
    1 record update to table B.
    1 record insert into table C.
    1 delete from table D.
    None of the data is related.
    I have tried various combinations of the below to get the changes to commit:
    POST;
    COMMIT_FORM;
    Exit_Form(NO_COMMIT, NO_ROLLBACK);
    MESSAGE('Got past COMMIT');
    COMMIT;
    CLEAR_FORM(NO_COMMIT);
    ENTER_QUERY;     
    I am getting varying numbers of "FRM-40508: ORACLE error: unable to INSERT record" messages. Even so, many times the form acts as if the changes had been properly applied. But when I do a separate DB verification, I see that the changes are not being committed. Also, most of the time the changes are reflected in the calling form's queries, but when I exit, all changes are rolled back no matter how many COMMIT statements are in the trigger.
    I have finally gotten the form to do what I want (the 4 steps noted above), but I had to add a FORMS_DDL('COMMIT'); statement, and I am still getting FRM-40508 - at least the changes are now appearing in the DB.
    Any ideas why I am having so much trouble getting the changes to commit? I have spent a ton of hours trying "what ifs" to see what might work. Also, this trigger is the only real "code" in the forms.
    Kim

    Brett -
    You're probably right about the intention, but this is a place where people can come and share styles, ideas, and coding tricks; I don't understand why someone would say that. Additionally, I had a professor, a complete momo, who said that all the time (consequently, his lax attitude toward teaching crippled the IS program where I graduated and will most likely cause it to no longer be offered). It's a personal peeve of mine, just to let you know where I was coming from.
    Secondly, the problem I'm having may have to do with what you said, but I can't be sure. To give a better description of my scenario: I created a form that allows the user to load information about an employee by querying an SSN. Most of this information is for display only. Six fields can be updated, and I wrote a DML UPDATE statement that I placed inside a WHEN-BUTTON-PRESSED trigger. However, these changes won't be written to the DB because Forms is attempting to write my entire data block instead of just executing the specified DML statements. I'm at a loss as to why this happens, but for simplicity's sake I would welcome ideas on how to suppress this so that only my statements are used when updating the DB. If you can help, thank you; if not, thank you for your time.
    Steve

  • Changes committed to the database only after I update OracleDataAdapter twice

    Hi. I am using an OracleDataAdapter to manage the data, which is eventually displayed in a WinForms DataGridView (Visual Studio 2012).
    Everything works fine, but I have to click the "Save" button twice to see the changes in the database (Oracle Express 11g).
    Can you please advise...
    Here is my code.
    It is very simple:
    1. Obtain connection
    2. Create dataadapter
    3. Create commands
    4. Fill the datatable and dataset
    public partial class Concordance : Form
    {
        OracleDataAdapter setupAdapter;
        DataSet projDataset;
        OracleConnection conn;
        // binding sources
        BindingSource setupBindingSource = new BindingSource();
        DataTable setupTable;

        public Concordance()
        {
            InitializeComponent();
            // load tables
            loadSetup();
        }

        // setup table
        private void loadSetup()
        {
            string oradb = ConfigurationManager.ConnectionStrings["OpenU"].ConnectionString;
            conn = new OracleConnection(oradb);
            try
            {
                // using (conn = new OracleConnection(oradb))
                setupAdapter = new OracleDataAdapter("select * from ou_setup", conn);
                OracleCommandBuilder builder = new OracleCommandBuilder(setupAdapter);
                projDataset = new DataSet("Concordia");
                setupTable = new DataTable("Setup");
                projDataset.Tables.Add(setupTable);
                setupAdapter.Fill(projDataset, "Setup");
                // bind the dataGridView
                this.setupGrid.DataSource = projDataset.Tables["Setup"];
                this.setupBindingSource.DataSource = projDataset.Tables["Setup"];
                this.setupNavigator.BindingSource = this.setupBindingSource;
            }
            catch (Exception ex)
            {
                string error = ex.Message;
                MessageBox.Show(error);
            }
        }

        private void saveSetupBtn_Click(object sender, EventArgs e)
        {
            // only after the Save button is clicked a second time are the changes committed to the database
            this.setupAdapter.Update(projDataset.Tables["Setup"]);
            MessageBox.Show("saved");
        }
    }

    How are you reading in the object initially? The problem is likely that you are modifying an object from the session cache. When you then read in the object from the uow, it uses the object in the session cache as the back up. So there will not appear to be any changes to persist to the database.
    You will need to make a copy of the object for modification, or use the copy from the unitofwork to make the changes instead of working directly on the object in the session. Disabling the cache means there is no copy in the session cache to use as a back up, so the uow read has to build an object from the database.
    Best Regards,
    Chris
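    The reply above talks about the session cache and unit of work (TopLink / EclipseLink terminology) rather than the ADO.NET adapter in the question. For readers coming from that side, here is a minimal sketch of the working-copy pattern it describes, assuming the EclipseLink Session/UnitOfWork API and a hypothetical Setup entity; the Session would come from wherever the application normally acquires it.

    import org.eclipse.persistence.sessions.Session;
    import org.eclipse.persistence.sessions.UnitOfWork;

    // Hypothetical mapped entity used only for this example.
    class Setup {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public class SetupUpdater {
        // 'cached' is the instance obtained from the session (and therefore lives
        // in the session cache); never modify it directly.
        public static void rename(Session session, Setup cached, String newName) {
            UnitOfWork uow = session.acquireUnitOfWork();
            // registerObject returns a working copy backed by the cached original.
            Setup workingCopy = (Setup) uow.registerObject(cached);
            workingCopy.setName(newName);
            // commit() compares the copy to the cached original and writes only the delta.
            uow.commit();
        }
    }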

  • Change the Database Name in Essbase Studio

    Hi All,
    Happy new year to all of you.
    Is there a way to change the database name in Essbase Studio? What I can see now is that once we have registered a database in the Database Sources panel, we can't change the Database Name because it is always grayed out, but the server name, user name and password can be changed.
    Actually, I mentioned this issue to our team before starting development and they proposed the schema I could use, but our client did not agree with the one we have been using and asked us to use another schema. FYI, we are using Oracle Database as the data source.
    Can you guys share how to manage this situation?
    Thanks,
    Rudy

    For Essbase Studio, the configuration metadata are saved in CP_* tables; look for the CP tables and make the appropriate change for the database name.
    1. Make an Oracle backup of the CP tables before you change the metadata, so that you can roll back in case of error.
    2. Change the CP_CONNECTION table, NAME column.
    3. Change the CP_SOURCE table, DNAME column; for example, if it is called TBC.Sales, change it to NewDB.Sales.
    I am not sure if there are other tables that also need to be updated; in any case, when you update the DB name at the metadata level, everything should be consistent across all of the related tables. Here I assume the new DB has the same table names as the old DB; if the new DB has very different table names, recreating everything may be easier and carries less risk.
    http://hyperionexpert.blogspot.com/
    Bob
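    If you end up scripting the catalog edit described above, it is ordinary SQL against the CP tables; a rough JDBC sketch follows. The table and column names (CP_CONNECTION.NAME, CP_SOURCE.DNAME) and the TBC -> NewDB values are taken from the reply above, the connection details are placeholders, and this is only an illustration, not a supported Essbase Studio procedure - verify what is actually stored in your catalog and take the backup from step 1 before committing anything.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class RenameStudioSource {
        public static void main(String[] args) throws Exception {
            // Placeholder connection to the Essbase Studio catalog schema.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//studiohost:1521/ORCL", "studio_catalog", "secret")) {
                conn.setAutoCommit(false);

                // Step 2 above: CP_CONNECTION, NAME column (old name -> new name).
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE CP_CONNECTION SET NAME = ? WHERE NAME = ?")) {
                    ps.setString(1, "NewDB");
                    ps.setString(2, "TBC");
                    ps.executeUpdate();
                }

                // Step 3 above: CP_SOURCE, DNAME column, e.g. TBC.Sales -> NewDB.Sales.
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE CP_SOURCE SET DNAME = REPLACE(DNAME, ?, ?)")) {
                    ps.setString(1, "TBC.");
                    ps.setString(2, "NewDB.");
                    ps.executeUpdate();
                }

                conn.commit(); // only after the backup from step 1 has been taken
            }
        }
    }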

  • How to update a versioned container with changes in the DB

    Hi,
    I've reverse engineered a database schema into a container, and then I versioned it as v1.0.
    Now, that schema has changed in the database, and I'd like to update the container with those changes, and version it as v2.0.
    What is the best way to do it?
    Is it possible to capture only the changed or new items into the v1.0 container, and then version it as v2.0?
    Thanks

    Check the application in and then check it out
    This will take it to the next level of versioning.
    However the version will be like 1.1 and not 2.0.
    There is a way to make it 2.0 but I do not remember the steps.
    Once you check it out it will be identical to 1.0, but the version will be 1.1; you can then make your changes, and the next time it is checked in and out it will be set to 1.2, etc.
    At this point I have to ask if versioning is really what you want to do. Don't get me wrong, configuration management is a great thing. However, most people do not understand it or how to use it, and they usually get themselves into trouble with it. So if you know what you're doing with it, then go for it.
    Hope this helps.
    Also, once you version your repository you cannot go back. You are versioned forever.
    Michael

  • Can multiple threads write to the database?

    I am a little confused by this statement in the documentation: "Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time."
    1. Can multiple threads write to the "Simple Data Store"?
    2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    #include "stdafx.h"
    #include <stdio.h>
    #include <windows.h>
    #include <db.h>

    static DB *db = NULL;
    static DB_ENV *dbEnv = NULL;

    DWORD WINAPI th_write(LPVOID lpParam)
    {
        DBT key, data;
        char key_buff[32], data_buff[32];
        DWORD i;
        printf("thread(%s) - start\n", lpParam);
        for (i = 0; i < 200; ++i)
        {
            memset(&key, 0, sizeof(key));
            memset(&data, 0, sizeof(data));
            sprintf(key_buff, "K:%s", lpParam);
            sprintf(data_buff, "D:%s:%8d", lpParam, i);
            key.data = key_buff;
            key.size = strlen(key_buff);
            data.data = data_buff;
            data.size = strlen(data_buff);
            db->put(db, NULL, &key, &data, 0);
            Sleep(5);
        }
        printf("thread(%s) - End\n", lpParam);
        return 0;
    }

    int main()
    {
        db_env_create(&dbEnv, 0);
        dbEnv->open(dbEnv, NULL, DB_CREATE | DB_INIT_MPOOL | DB_THREAD, 0);
        db_create(&db, dbEnv, 0);
        db->open(db, NULL, "test.db", NULL, DB_BTREE, DB_CREATE, 0);
        CreateThread(NULL, 0, th_write, "A", 0, 0);
        CreateThread(NULL, 0, th_write, "B", 0, 0);
        CreateThread(NULL, 0, th_write, "B", 0, 0);
        CreateThread(NULL, 0, th_write, "C", 0, 0);
        th_write("C");
        Sleep(2000);
    }

    Here is some clarification about BDB locking and multi-threaded behavior.
    Question 1. Can multiple threads write to the "Simple Data Store"?
    Answer 1.
    Please Refer to http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    A Data Store (DS) setup - that is, not using an environment at all, or using one without any of the DB_INIT_LOCK, DB_INIT_TXN or DB_INIT_LOG environment-region flags (each corresponding to the locking, transaction and logging subsystems) - will not guard against data corruption caused by threads accessing the same database page, overwriting the same records, corrupting the internal structure of the database, etc.
    (Note that for the Btree, Hash and Recno access methods we lock at the database page level; only for the Queue access method do we lock at the record level.)
    So, if you want multiple threads in the application writing concurrently or in parallel to the same database, you need to use locking (and properly handle any potential deadlocks); otherwise you risk corrupting the data itself or the database (its internal structure).
    Of course, if you serialize access to the database at the application level, so that no more than one thread writes to the database at a time, there is no need for locking. But obviously that is likely not the behavior you want.
    Hence, you need to use either a CDS (Concurrent Data Store) or a TDS (Transactional Data Store) setup.
    See the table comparing the various setups here: http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    Berkeley DB Data Store
    The Berkeley DB Data Store product is an embeddable, high-performance data store. This product supports multiple concurrent threads of control, including multiple processes and multiple threads of control within a process. However, Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time. The Berkeley DB Data Store is intended for use in read-only applications or applications which can guarantee no more than one thread of control updates the database at a time.
    Berkeley DB Concurrent Data Store
    The Berkeley DB Concurrent Data Store product adds multiple-reader, single writer capabilities to the Berkeley DB Data Store product. This product provides built-in concurrency and locking feature. Berkeley DB Concurrent Data Store is intended for applications that need support for concurrent updates to a database that is largely used for reading.
    Berkeley DB Transactional Data Store
    The Berkeley DB Transactional Data Store product adds support for transactions and database recovery. Berkeley DB Transactional Data Store is intended for applications that require industrial-strength database services, including excellent performance under high-concurrency workloads of read and write operations, the ability to commit or roll back multiple changes to the database at a single instant, and the guarantee that in the event of a catastrophic system or hardware failure, all committed database changes are preserved.
    So, clearly DS is not a solution for this case, where multiple threads need to write simultaneously to the database.
    CDS (Concurrent Data Store) provides locking features, but only for multiple-reader/single-writer scenarios. You use CDS when you specify the DB_INIT_CDB flag when opening the BDB environment: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envopen.html#envopen_DB_INIT_CDB
    TDS (Transactional Data Store) provides locking features, adds complete ACID support for transactions and offers recoverability guarantees. You use TDS when you specify the DB_INIT_TXN and DB_INIT_LOG flags when opening the environment. To have locking support, you would need to also specify the DB_INIT_LOCK flag.
    Now, since the requirement is to have multiple writers (multi-threaded writes to the database), TDS is the way to go (CDS is useful only in single-writer scenarios, when there is no need for recoverability).
    To summarize:
    The best way to understand which setup is needed is to answer the following questions:
    - What is the data access scenario? Is it multiple writer threads? Will the writers access the database simultaneously?
    - Are recoverability/data durability, atomicity of operations and data isolation important for the application? http://docs.oracle.com/cd/E17076_02/html/programmer_reference/transapp_why.html
    If the answers are yes, then TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
    Question 2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    Answer 2.
    Definitely yes, you can see data loss and/or data corruption.
    You can check the behavior of your test case in the following way:
    1. Run your test case.
    2. After the program exits, run db_verify to verify the database (db_verify -o test.db).
    You will likely see db_verify complaining, unless the thread scheduler on Windows happens to start each thread one after the other - in other words, no two or more threads ever write to the database at the same time, which would effectively serialize the writes.
    Question 3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    Answer 3.
    In your case, TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
    Doing this, you have proper deadlock handling and proper transaction usage in place, so you are protected against potential data corruption and data loss.
    see http://docs.oracle.com/cd/E17076_02/html/gsg_txn/C/BerkeleyDB-Core-C-Txn.pdf
    Multi-threaded and Multi-process Applications
    DB is designed to support multi-threaded and multi-process applications, but their usage means you must pay careful attention to issues of concurrency. Transactions help your application's concurrency by providing various levels of isolation for your threads of control. In addition, DB provides mechanisms that allow you to detect and respond to deadlocks.
    Isolation means that database modifications made by one transaction will not normally be seen by readers from another transaction until the first commits its changes. Different threads use different transaction handles, so this mechanism is normally used to provide isolation between database operations performed by different threads.
    Note that DB supports different isolation levels. For example, you can configure your application to see uncommitted reads, which means that one transaction can see data that has been modified but not yet committed by another transaction. Doing this might mean your transaction reads data "dirtied" by another transaction, but which subsequently might change before that other transaction commits its changes. On the other hand, lowering your isolation requirements means that your application can experience improved throughput due to reduced lock contention.
    For more information on concurrency, on managing isolation levels, and on deadlock detection, see Concurrency (page 32).
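    For completeness, here is the same TDS setup sketched with the Berkeley DB Java binding (com.sleepycat.db). The poster's program is C, so treat this only as a rough illustration of the flags recommended above (allow-create, cache, locking, logging, transactions, recovery); the environment home directory and the key/data values are arbitrary.

    import java.io.File;
    import com.sleepycat.db.Database;
    import com.sleepycat.db.DatabaseConfig;
    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseType;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.db.Transaction;

    public class TdsExample {
        public static void main(String[] args) throws Exception {
            File home = new File("env_home");
            home.mkdirs(); // the environment home directory must exist

            // Java equivalent of DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
            // DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER in the C call above.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setInitializeCache(true);
            envConfig.setInitializeLocking(true);
            envConfig.setInitializeLogging(true);
            envConfig.setTransactional(true);
            envConfig.setRunRecovery(true);
            Environment env = new Environment(home, envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            dbConfig.setType(DatabaseType.BTREE);
            Database db = env.openDatabase(null, "test.db", null, dbConfig);

            // Each write happens inside a transaction, so concurrent writers are
            // protected by locking and the data is recoverable after a crash.
            Transaction txn = env.beginTransaction(null, null);
            db.put(txn, new DatabaseEntry("K:A".getBytes()), new DatabaseEntry("D:A:1".getBytes()));
            txn.commit();

            db.close();
            env.close();
        }
    }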

  • How to schedule the webi report based on data changes in the report data

    Hello,
    I want  to schedule a webi report based on data change in a column in the report.
    The scenario is something like below:
    1. If the data in a particular column changes from 2 to 3, then I would like to schedule this report and send it to the users' mailboxes.
    I know how to apply alerts, schedule a report, and use data tracking to capture changes in the report, but I don't know how to schedule the report only when the data changes.
    Has anybody done this before?
    Thanks
    Gaurav

    Hi,
    Maybe these links can help you:
    http://devnet.magicsoftware.com/en/library?book=en/iBOLT/&page=SAP_R_3_Master_Data_Distribution_Defining_Change_Pointers.htm
    SEM-BCS: Load from data stream schedule
    Attribute Change Run
