Concurrent Serializable Transaction Conflict

Hi,
We are planning to use the Serializable isolation level in the transactions. I am wondering if somebody can give me some suggestions on how to handle the following scenarios.
My assumptions are as follows ( when I use the transactions with Serializable isolation level):
If transaction A updates a particular record r of table T, and concurrently, transaction B updates or deletes the same record r of table T, then an exception is thrown at the point at which either transaction (A or B, whichever is last) is committed. But if the two transactions update or delete different records of the same table, there will be no problems.
Are these assumptions correct? If the Oracle JDBC driver throws an exception, what would the exception be? What would be the class and the error code, and how can I differentiate it from other SQLExceptions?
TIA,
Krishna Balusu

If two serializable transactions attempt to update the same row, the second transaction to attempt the update will wait for the first transaction to commit. Then it will fail with the error "ORA-08177: can't serialize access for this transaction". The error is raised by the UPDATE statement itself; it is not deferred until the second transaction attempts to commit.
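For example, a minimal JDBC sketch (my own illustration, not tested here; the table and column names are placeholders) of distinguishing this case from other SQLExceptions by checking the vendor error code, which the Oracle driver reports as 8177:
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SerializableUpdate {
    // Attempts an update inside a serializable transaction and reports
    // whether it failed because the access could not be serialized.
    static void updateRow(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("update T set col = col + 1 where id = 1"); // placeholder DML
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            if (e.getErrorCode() == 8177) {
                // ORA-08177: can't serialize access for this transaction
                System.err.println("Serialization conflict; retry the transaction");
            } else {
                throw e; // some other database error
            }
        }
    }
}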
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC

Similar Messages

  • Oracle's serializable transactions cannot be serialized

    I'm new to Oracle. Here is my understanding of Oracle's serializable transactions. Feel free to correct me if I'm wrong.
    In my opinion, the serializable transactions that Oracle supports cannot always be serialized.
    Oracle supports serializable transactions by ensuring that a serializable transaction cannot modify rows changed by other transactions after the serializable transaction began (Oracle7 Server Concepts Manual, Data Concurrency).
    However this constraint doesn't ensure two transactions can be serialized. For example, consider a table A that has two columns: rowid and num.
    A has two rows:
    rowid   num
    1       1
    2       2
    There are two oracle serializable transactions (T1 and T2) defined as follows.
    T1:
    read row 1 from table A into r1
    if (r1.num == 1) {
        update num of row 2 to 100
    }
    commit;
    T2:
    read row 2 from A into r2
    if (r2.num == 2) {
        update num of row 1 to 100
    }
    commit;
    Note: row i refers to the row in the table A that has rowid equal to i, for i=1, 2
    Consider the following scenario:
    T1: read (1, 1)
    T1: write (2, 100)
    T2: read (2, 2)
    T2: write(1, 100)
    T1: commit
    T2: commit
    In Oracle, the scenario above executes without any error.
    After T1 and T2 commit, the table A has the following rows:
    rowid   value
    1       100
    2       100
    However, if you serialize T1 and T2, the table A can only be either
    rowid   value
    1       1
    2       100
    if T1 runs before T2;
    or
    rowid   value
    1       100
    2       2
    if T2 runs before T1.
    The conclusion is that serializable transactions in Oracle cannot always be serialized.
    Is anything wrong with my example?
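    For reference, the usual workaround I know of is to force the reads to take row locks with SELECT ... FOR UPDATE. A minimal JDBC sketch (my own illustration, assuming a key column named id, since ROWID is a reserved word in Oracle):
    // T1, sketched in JDBC: the locking read serializes the two transactions.
    // If T1 and T2 run concurrently, one blocks (or they deadlock and Oracle
    // rolls one back with ORA-00060), so the anomaly above cannot commit silently.
    Statement s = conn.createStatement();
    ResultSet rs = s.executeQuery("select num from A where id = 1 for update");
    rs.next();
    if (rs.getInt(1) == 1) {
        s.executeUpdate("update A set num = 100 where id = 2");
    }
    conn.commit();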
    Thanks

    Note on the lines:
    46 if (result == null)
    47 {
    48   String MontoDocumento1 = (String) vo.getCurrentRow().getAttribute("MontoDocumento");
    49   String redondeado1 = round(MontoDocumento1);
    50   if (redondeado1.equals("Y"))
    51   { ........
    I read a number and convert it to a String:
    String MontoDocumento1 = (String) vo.getCurrentRow().getAttribute("MontoDocumento");
    and then parse it to a double in my function:
    public String round(String number)
    {
        double v_number = Double.valueOf(number).doubleValue();
        if (v_number == (Math.rint(v_number * 100) / 100))
            return "Y";
        else
            return "N";
    }
    This gives me back a String with the value Y or N.
    Thanks if you can help me.
    Edited by: 917616 on 29/02/2012 07:16 AM

  • Serializable transactions

    I am trying to figure out how to make transactions serializable, but with little success. I am experimenting with a simple example with two tables (XTable(x) and YTable(y)), each of which has a single row. One transaction tries to read x, then set y = x + 1. The other tries to read y, then set x = y * y. Unfortunately, running them simultaneously allows interleaved orders which don't preserve serializability. Initially x=1, y=10, so a serialized ordering should result in either x=100, y=101, or x=4, y=2. I wrote the code with the expectation that an exception would get thrown or a deadlock would occur, but neither happened (it terminated successfully with x=100 and y=2). I am using the Oracle 8.1 driver. I checked, and it says it does support TRANSACTION_SERIALIZABLE. Also, I tried the same experiment using separate threads for each transaction, with the same results.
    Any ideas? Are my assumptions wrong?
    Here is the relevant code snippet, if that helps:
    Connection conn1 = DriverManager.getConnection(url, user, password);
    conn1.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    conn1.setAutoCommit(false);
    Connection conn2 = DriverManager.getConnection(url, user, password);
    conn2.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    conn2.setAutoCommit(false);
    // read x value from 1st connection
    Statement s1 = conn1.createStatement();
    ResultSet resultSet1 = s1.executeQuery("Select x from XTABLE");
    resultSet1.next();
    int x = resultSet1.getInt(1);
    // read y value from 2nd connection
    Statement s2 = conn2.createStatement();
    ResultSet resultSet2 = s2.executeQuery("Select y from YTABLE");
    resultSet2.next();
    int y = resultSet2.getInt(1);
    // set x based on the information available to transaction 2
    s2.executeUpdate("update XTABLE set x=" + y * y);
    // set y based on the information available to transaction 1
    s1.executeUpdate("update YTABLE set y=" + (x + 1));
    // commit
    conn1.commit();
    conn2.commit();
    System.err.println("x=" + TransactionTester.read(conn1, 'x'));
    System.err.println("y=" + TransactionTester.read(conn1, 'y'));
    conn1.close();
    conn2.close();

    It seems to be a matter of interpretation of "SERIALIZABLE", not only as a word, but of how DBMS vendors (and programmers) understand this feature.
    I haven't been able to find an internet posting of the SQL 92 standard; however, I found what appears to be a draft of that standard: http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt. The remainder assumes that this draft is pretty close to the final standard. Section 4.28 covers isolation levels. The philosophy behind serializable is:
    "A serializable execution is defined to be an execution of the operations of concurrently executing SQL-transactions that produces the same effect as some serial execution of those same SQL-transactions. A serial execution is one in which each SQL-transaction executes to completion before the next SQL-transaction begins."
    So it is not saying that the transactions must be executed one after the other in a serial fashion, but that the result must be the same as if it had.
    The document goes on to say that in order to have serializable transactions it is necessary to prevent the three types of reads that we've been talking about: dirty, non-repeatable, and phantom. This could be summarized as providing a transaction with a 100% read consistent view of the database. Then it goes on to say:
    "The execution of a <rollback statement> may be initiated implicitly by an implementation when it detects the inability to guarantee the serializability of two or more concurrent SQL-transactions. When this error occurs, an exception condition is raised: transaction rollback-serialization failure."
    In other words, preventing the bad reads is necessary, but not sufficient for a transaction to be serializable. There must also be logic that "detects the inability to guarantee the serializability".
    On this basis and the results from my first code sample, I would say that though Oracle meets the Java API's definition of transaction_serializable, it fails to comply fully with the SQL 92 standard. Why?
    The answer can be found in the Oracle SQL manual under SET TRANSACTION:
    "The SERIALIZABLE setting specifies serializable transaction isolation mode as defined in SQL92. If a serializable transaction contains data manipulation language (DML) that attempts to update any resource that may have been updated in a transaction uncommitted at the start of the serializable transaction, then the DML statement fails."
    So Oracle's detection of "the inability to guarantee the serializability" consists of seeing if another transaction committed an update to that row after the serializable transaction began. My first code sample shows that this detection is insufficient when dealing with transactions that have read overlap, but update separate tables. I consider it a bug. I think they have to detect if they're reading data that has been updated since the transaction began as well.
    But I wonder why you prefer Oracle's implementation and consider SQL Server's implementation "unfortunate".
    My reading of the benefit of serializable is that unless the database detects that serializability cannot be guaranteed, transactions can be processed concurrently. If it detects a problem, it does a rollback and generates an exception. This seems to me to be a high-throughput OLTP solution.
    I call Microsoft's implementation unfortunate because rather than letting transactions go until it detects a problem, it seems to block anything that might potentially cause a problem. This reduces the ability to concurrently process transactions. Consider in my first code sample, that connection 1 hadn't even updated anything at the time connection 2 was blocked. Connection 1 might have had code that queried a few things and then decided not to do an update based on its findings. All the while connection 2 is kept waiting unnecessarily.
    Unless the SQL 99 standard redefined serializable, Microsoft's implementation is also unfortunate because it is not standards compliant. They don't "detect", they "prevent".
    But look at this example:
    2 persons want to book a flight ticket.
    person1 queries the number of free seats and gets the
    result: 1 seat free.
    person2 queries the number of free seats and gets the
    result: 1 seat free.
    person1 books 1 seat.
    person1 queries the number of free seats and gets the
    result: 0 seats free.
    person2 queries the number of free seats and gets the
    result: 1 seat free.
    person2 books 1 seat.
    person2 queries the number of free seats and gets the
    result: 0 seats free.
    person1 commits.
    person2 commits.
    Implemented in a single thread, the person2 booking is blocked because person1 has updated the same row and hasn't committed. I tested the following scenario instead (see code below):
    1. Connection 1 reads number of remaining seats
    2. Connection 2 reads number of remaining seats
    3. Connection 2 books a seat
    4. Connection 2 commits
    5. Connection 1 books a seat
    6. Connection 1 commits
    The first run was in default isolation mode and the second in serializable mode:
    java TestTransactionSerializable2
    Leaving connection 1 isolation level as default.
    connection 1 found 1 remaining seats
    connection 2 found 1 remaining seats
    connection 2 updated remaining seats to 0
    connection 2 committed
    connection 1 updated remaining seats to 0
    connection 1 committed
    java TestTransactionSerializable2 on
    Setting connection 1 to transaction_serializable.
    connection 1 found 0 remaining seats
    connection 2 found 0 remaining seats
    connection 2 updated remaining seats to -1
    connection 2 committed
    Exception in thread "main" java.sql.SQLException: ORA-08177: can't serialize access for this transaction
    In default mode we get the wrong result. In serializable mode, when connection 1 attempts the update it detects that the row was updated by another transaction after connection 1's transaction had begun. This shows that with transactions updating the same data, Oracle implements serializability correctly.
    What happens in SQL-Server? Does connection 2 get blocked when it does the read or when it does the update? What happens if you change the first connection 1 read in my first code sample to "select z from ztable"? The answers will give us some more clues about how Microsoft implements serializable.
    If so, why does not everybody cry it loudly around in this forum as a warning?
    I've never used it. I prefer to lock a common database resource at the start of transactions (a sketch of this appears after the code below). My guess is that nobody uses it.
    SETUP:
    create table seats( remaining number );
    insert into seats values( 1 );
    commit;
    RESET:
    update seats set remaining = 1;
    commit;
    import java.sql.*;
    public class TestTransactionSerializable2
    {
      public static void main(String[] args) throws SQLException, ClassNotFoundException
      {
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection conn1 = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");
        Connection conn2 = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");
        conn1.setAutoCommit(false);
        conn2.setAutoCommit(false);
        Statement s1 = conn1.createStatement();
        Statement s2 = conn2.createStatement();
        // Set to serializable if any argument whatsoever is passed
        if (args.length > 0)
        {
          System.out.println("Setting connection 1 to transaction_serializable.");
          conn1.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        }
        else
        {
          System.out.println("Leaving connection 1 isolation level as default.");
        }
        // 1. Connection 1 reads number of remaining seats
        ResultSet resultSet1 = s1.executeQuery("select remaining from seats");
        resultSet1.next();
        int remaining1 = resultSet1.getInt(1);
        System.out.println("connection 1 found " + remaining1 + " remaining seats");
        // 2. Connection 2 reads number of remaining seats
        ResultSet resultSet2 = s2.executeQuery("select remaining from seats");
        resultSet2.next();
        int remaining2 = resultSet2.getInt(1);
        System.out.println("connection 2 found " + remaining2 + " remaining seats");
        // 3. Connection 2 books a seat
        s2.executeUpdate("update seats set remaining=" + (remaining2 - 1));
        System.out.println("connection 2 updated remaining seats to " + (remaining2 - 1));
        // 4. Connection 2 commits
        conn2.commit();
        System.out.println("connection 2 committed");
        // 5. Connection 1 books a seat
        s1.executeUpdate("update seats set remaining=" + (remaining1 - 1));
        System.out.println("connection 1 updated remaining seats to " + (remaining1 - 1));
        // 6. Connection 1 commits
        conn1.commit();
        System.out.println("connection 1 committed");
        // 7. Close connections
        conn1.close();
        conn2.close();
      }
    }
  • Pacman -Syu -- error: failed to commit transaction (conflicting files)

    Hi! I'm upgrading my system after a long time, but I get this error:
    sudo pacman -Syu
    [sudo] password for elrengo:
    :: Synchronizing package databases...
    core 121,3 KiB 90,8K/s 00:01 [#######################################################################] 100%
    extra is up to date
    community 2,6 MiB 278K/s 00:10 [#######################################################################] 100%
    multilib is up to date
    archlinuxfr is up to date
    :: Starting full system upgrade...
    :: Replace gtk2-xfce-engine with extra/gtk-xfce-engine? [Y/n] y
    :: Replace gtk3-xfce-engine with extra/gtk-xfce-engine? [Y/n] y
    resolving dependencies...
    looking for conflicting packages...
    Packages (356) alsa-lib-1.0.29-1 alsa-plugins-1.0.29-2 alsa-utils-1.0.29-1 archlinux-keyring-20150212-1 autossh-1.4e-1 avahi-0.6.31-15 banshee-2.6.2-7 binutils-2.25-2 bluez-5.30-1
    bluez-libs-5.30-1 bluez-utils-5.30-1 ca-certificates-mozilla-3.18-3 cairo-1.14.2-1 cairo-perl-1.105-1 chromium-41.0.2272.118-1 clutter-1.20.0-4 cogl-1.20.0-1 colord-1.2.9-2
    curl-7.41.0-1 cvs-1.11.23-10 darktable-1.6.4-3 dbus-1.8.16-2 dbus-sharp-0.8.1-1 dbus-sharp-glib-0.6.0-1 dcraw-9.23.0-1 device-mapper-2.02.116-1 dhclient-4.3.2-1
    dhcpcd-6.8.1-1 dialog-1:1.2_20150225-1 djvulibre-3.5.27-1 dropbox-3.2.9-2 e2fsprogs-1.42.12-2 eclipse-4.4.2-1 elfutils-0.161-3 evince-3.14.2-1 exo-0.10.4-3
    farstream-0.1-0.1.2-5 ffmpeg-1:2.6.1-1 ffmpeg-compat-1:0.10.15-2 file-roller-3.14.2-2 filesystem-2015.02-1 filezilla-3.10.2-1 firefox-37.0.1-1 flashplugin-11.2.202.451-1
    freerdp-1:1.2.0_beta1+android9-1 garcon-0.4.0-1 gcc-4.9.2-4 gcc-libs-4.9.2-4 gd-2.1.1-1 gdk-pixbuf2-2.31.3-1 ghostscript-9.16-1 giflib-5.1.1-1 gimp-ufraw-0.21-1 git-2.3.5-1
    glib-networking-2.42.1-1 glib2-2.42.2-1 glibc-2.21-2 gmp-6.0.0-2 gnome-menus-3.13.3-1 gnome-packagekit-3.14.2-2 gnupg-2.1.2-3 gnutls-3.3.14-2 grep-2.21-2 groff-1.22.3-3
    gsfonts-20150122-1 gst-plugins-ugly-1.4.5-2 gstreamer0.10-ugly-0.10.19-14 gstreamer0.10-ugly-plugins-0.10.19-14 gtk-update-icon-cache-2.24.27-1 gtk-xfce-engine-2.10.1-1
    gtk2-2.24.27-1 gtk2-xfce-engine-3.0.1-2 [removal] gtk3-3.14.9-1 gtk3-xfce-engine-3.0.1-2 [removal] gvfs-1.22.3-2 gvfs-gphoto2-1.22.3-2 gvfs-mtp-1.22.3-2 gvfs-smb-1.22.3-2
    harfbuzz-0.9.40-1 harfbuzz-icu-0.9.40-1 imagemagick-6.9.1.0-1 inxi-2.2.19-1 iproute2-3.19.0-1 iptables-1.4.21-3 kmod-20-1 krb5-1.13.1-1 ldb-1.1.20-1 lib32-alsa-lib-1.0.29-1
    lib32-alsa-plugins-1.0.29-2 lib32-cairo-1.14.2-1 lib32-curl-7.41.0-1 lib32-elfutils-0.161-2 lib32-flashplugin-11.2.202.451-1 lib32-gcc-libs-4.9.2-4 lib32-gdk-pixbuf2-2.31.3-2
    lib32-glib2-2.42.2-1 lib32-glibc-2.21-2 lib32-gnutls-3.3.13-1 lib32-gtk2-2.24.27-1 lib32-harfbuzz-0.9.40-1 lib32-krb5-1.13.1-1 lib32-libcups-2.0.2-1 lib32-libdbus-1.8.16-1
    lib32-libdrm-2.4.60-1 lib32-libgcrypt-1.6.3-1 lib32-libgpg-error-1.18-1 lib32-libidn-1.30-1 lib32-libpciaccess-0.13.3-1 lib32-libpulse-6.0-1 lib32-libtasn1-4.3-1
    lib32-libtiff-4.0.3-3 lib32-libx11-1.6.3-1 lib32-libxdmcp-1.1.2-1 lib32-libxxf86vm-1.1.4-1 lib32-llvm-libs-3.6.0-1 lib32-mesa-10.5.2-1 lib32-mesa-libgl-10.5.2-1
    lib32-nspr-4.10.8-1 lib32-nss-3.18-1 lib32-openssl-1.0.2.a-1 lib32-p11-kit-0.23.1-1 lib32-qt4-4.8.6-4 lib32-sqlite-3.8.8.3-1 lib32-wayland-1.7.0-1 lib32-xz-5.2.1-1
    libcacard-2.2.1-2 libcanberra-0.30-5 libcanberra-pulse-0.30-5 libcups-2.0.2-3 libdatrie-0.2.8-1 libdbus-1.8.16-2 libdc1394-2.2.3-1 libdrm-2.4.60-2 libevdev-1.4-1
    libgcrypt-1.6.3-2 libgphoto2-2.5.7-1 libgpod-0.8.3-4 libgsf-1.14.32-1 libical-1.0.1-2 libidn-1.30-1 libimobiledevice-1.2.0-1 libinput-0.13.0-1 libmm-glib-1.4.6-1
    libmpc-1.0.3-1 libndp-1.4-1 libnewt-0.52.18-2 libnice-0.1.10-1 libnm-glib-1.0.0-2 libplist-1.12-1 libproxy-0.4.11-5 libpulse-6.0-1 librsvg-1:2.40.9-1 libseccomp-2.2.0-1
    libsigc++-2.4.1-1 libsystemd-218-2 libtasn1-4.4-1 libteam-1.14-2 libthai-0.1.21-1 libtool-2.4.6-1 libunistring-0.9.5-1 libunwind-1.1-2 libusbmuxd-1.0.10-1 libuser-0.61-1
    libutil-linux-2.26.1-3 libvdpau-1.1-1 libvirt-1.2.14-1 libvirt-python-1.2.14-1 libwbclient-4.2.0-1 libwebp-0.4.3-1 libwnck-2.31.0-1 libx11-1.6.3-1 libx264-2:144.20150223-1
    libxdmcp-1.1.2-1 libxfce4util-4.12.1-1 libxfcegui4-4.10.0-4 libxfont-1.5.1-1 libxml++-2.38.0-1 libxvmc-1.0.9-1 libxxf86vm-1.1.4-1 linux-3.19.3-3 linux-api-headers-3.18.5-1
    linux-firmware-20150206.17657c3-1 linux-headers-3.19.3-3 lirc-1:0.9.2.a-1 llvm-libs-3.6.0-4 logrotate-3.8.9-1 lua-5.2.4-1 lua-bitop-1.0.2-5 lvm2-2.02.116-1 lz4-128-1
    lzo-2.09-1 man-pages-3.82-1 mc-4.8.14-1 media-player-info-22-1 mesa-10.5.2-1 mesa-libgl-10.5.2-1 mono-3.12.1-1 mousepad-0.4.0-1 mpg123-1.22.0-1 mplayer-37379-1
    mutagen-1.28-1 nano-2.4.0-1 networkmanager-1.0.0-2 nspr-4.10.8-1 nss-3.18-3 ntp-4.2.8.p2-1 openconnect-1:7.06-1 openssh-6.8p1-2 openssl-1.0.2.a-1 orage-4.10.0-2
    p11-kit-0.23.1-2 p7zip-9.38.1-1 packagekit-1.0.5-3 pacman-4.2.1-1 pacman-mirrorlist-20150315-1 parted-3.2-2 patch-2.7.5-1 pavucontrol-3.0-1 perl-5.20.2-1
    perl-date-calc-6.4-1 perl-error-0.17023-1 perl-image-exiftool-9.90-1 perl-net-dbus-1.1.0-1 perl-uri-1.67-1 perl-www-mechanize-1.74-1 perl-yaml-tiny-1.66-1 pinta-1.6-1
    playonlinux-4.2.6-1 polkit-gnome-0.105-2 poppler-0.31.0-1 poppler-glib-0.31.0-1 procps-ng-3.3.10-2 protobuf-c-1.1.0-1 pulseaudio-6.0-1 putty-0.64-1 pyqt4-common-4.11.3-4
    python-3.4.3-2 python-jedi-0.8.1-2 python-psutil-2.2.1-2 python2-numpy-1.9.2-2 python2-pybluez-0.20-3 python2-pycurl-7.19.5.1-2 python2-pyqt4-4.11.3-4 python2-sip-4.16.6-1
    qemu-2.2.1-2 qt5-base-5.4.1-2 qt5-script-5.4.1-2 randrproto-1.4.1-1 remmina-1:1.1.2-1 rhythmbox-3.1-3 ristretto-0.8.0-1 rtkit-0.11-5 ruby-2.2.1-1 samba-4.2.0-1
    scummvm-tools-1.7.0-1 seahorse-3.14.1-1 shadow-4.2.1-3 shared-mime-info-1.4-1 sharutils-4.15-1 shotwell-1:0.22.0-1 sip-4.16.6-1 slang-2.3.0-1 smbclient-4.2.0-1
    smplayer-14.9.0.6690-1 speex-1.2rc2-1 speexdsp-1.2rc3-1 sqlite-3.8.8.3-1 sudo-1.8.13-1 systemd-218-2 systemd-sysvcompat-218-2 tangerine-icon-theme-0.27-3 tcpdump-4.7.3-1
    tevent-0.9.24-1 thunar-1.6.6-2 thunar-archive-plugin-0.3.1-5 thunar-media-tags-plugin-0.2.1-2 thunar-volman-0.8.1-1 tmux-1.9_a-2 totem-plparser-3.10.4-1 ttf-dejavu-2.34-2
    tumbler-0.1.31-1 tzdata-2015b-1 udisks2-2.1.5-1 unrar-1:5.2.6-1 unzip-6.0-10 upower-0.99.2-2 usbredir-0.7-1 util-linux-2.26.1-3 v4l-utils-1.6.2-1 vde2-2.3.2-7
    vim-7.4.663-2 vim-runtime-7.4.663-2 vim-surround-2.1-1 vim-syntastic-3.6.0-1 virt-install-1.1.0-6 virt-manager-1.1.0-6 virtualbox-4.3.26-2 virtualbox-host-modules-4.3.26-5
    vlc-2.2.0-2 volumeicon-0.5.1-1 wayland-1.7.0-1 wget-1.16.3-1 which-2.21-1 wildmidi-0.3.8-1 wine-1.7.40-1 x265-1.5-1 xdg-utils-1.1.0.git20150323-1 xf86-input-evdev-2.9.2-1
    xf86-input-synaptics-1.8.2-2 xf86-video-intel-2.99.917-4 xfburn-0.5.2-2 xfce4-appfinder-4.12.0-1 xfce4-battery-plugin-1.0.5-4 xfce4-clipman-plugin-1.2.6-2
    xfce4-cpufreq-plugin-1.1.1-2 xfce4-cpugraph-plugin-1.0.5-3 xfce4-datetime-plugin-0.6.2-4 xfce4-dict-0.7.1-1 xfce4-diskperf-plugin-2.5.5-1 xfce4-eyes-plugin-4.4.4-1
    xfce4-fsguard-plugin-1.0.2-4 xfce4-genmon-plugin-3.4.0-3 xfce4-mailwatch-plugin-1.2.0-5 xfce4-mixer-4.11.0-2 xfce4-mount-plugin-0.6.7-3 xfce4-mpc-plugin-0.4.5-1
    xfce4-netload-plugin-1.2.4-2 xfce4-notes-plugin-1.7.7-7 xfce4-notifyd-0.2.4-2 xfce4-panel-4.12.0-1 xfce4-power-manager-1.4.4-1 xfce4-quicklauncher-plugin-1.9.4-10
    xfce4-screenshooter-1.8.2-2 xfce4-sensors-plugin-1.2.6-2 xfce4-session-4.12.1-2 xfce4-settings-4.12.0-3 xfce4-smartbookmark-plugin-0.4.6-1 xfce4-systemload-plugin-1.1.2-2
    xfce4-terminal-0.6.3-2 xfce4-time-out-plugin-1.0.2-1 xfce4-timer-plugin-1.6.0-3 xfce4-verve-plugin-1.0.1-2 xfce4-wavelan-plugin-0.5.12-1 xfce4-weather-plugin-0.8.5-2
    xfce4-whiskermenu-plugin-1.5.0-2 xfce4-xkb-plugin-0.7.1-2 xfconf-4.12.0-1 xfdesktop-4.12.1-2 xfwm4-4.12.2-1 xorg-font-util-1.3.1-1 xorg-fonts-misc-1.0.3-3
    xorg-server-1.17.1-4 xorg-server-common-1.17.1-4 xorg-xinit-1.3.4-2 xterm-317-1 xz-5.2.1-1
    Total Download Size: 61,68 MiB
    Total Installed Size: 3268,50 MiB
    Net Upgrade Size: 49,45 MiB
    :: Proceed with installation? [Y/n] y
    :: Retrieving packages ...
    linux-3.19.3-3-x86_64 55,5 MiB 244K/s 03:53 [#######################################################################] 100%
    linux-headers-3.19.3-3-x86_64 6,2 MiB 290K/s 00:22 [#######################################################################] 100%
    (354/354) checking keys in keyring [#######################################################################] 100%
    (354/354) checking package integrity [#######################################################################] 100%
    (354/354) loading package files [#######################################################################] 100%
    (354/354) checking for file conflicts [#######################################################################] 100%
    error: failed to commit transaction (conflicting files)
    libvirt: /var/lib/libvirt/images exists in filesystem
    Errors occurred, no packages were upgraded.
    I found this in the Wiki but I do not know what I need to do:
    [~]
    [elrengo@xpsl421x]$ ls -l /var/lib/libvirt/images
    lrwxrwxrwx 1 root root 49 dic 8 00:05 /var/lib/libvirt/images -> /home/elrengo/VirtualMachines/kvm/libvirt/images/
    Thanks in advance!
    Last edited by elrengo (2015-04-08 18:12:25)

    ewaller wrote:What did pacman -Qo thePath/theFileNameInTheErrorMessage say? You need to Querry what package owns that conflicting file
    FTFY.
    Last edited by karol (2015-04-08 18:56:06)

  • Serializable transaction isolation level bugs

    Hello!
    I would like to know which versions of Oracle database are free or not free of serializable transaction isolation level bugs.
    I'm especially interested in any information about bug 440317, which is described here: Bug in Oracle's handling of transaction isolation levels?
    Thank you very much.

    If you are genuinely suffering from 440317 (and not poor design, which is the most common cause of ORA-8177, and hence why it has taken so long for this bug to get fixed - it was originally raised in 7.2.2!) then you will have to wait until 10g R2 sees the light of day.
    Cheers, APC

  • Serializable transactions and initrans parameter for version enabled tables

    Hi,
    we want to use serializable transactions with version-enabled tables, so we need to set the initrans parameter >= 3 for such tables.
    A change made during the BEGINDDL - COMMITDDL process is not applied to the LT table. I think the initrans parameter is not checked at all during the BEGINDDL-COMMITDDL process, because the skeleton table has initrans=1 even if the LT table has a different value for this parameter.
    -- table GRST_K3_LT has initrans = 1
    exec dbms_wm.beginddl('GRST_K3');
    alter table grst_k3_lts initrans 3;
    exec dbms_wm.commitddl('GRST_K3');
    -- table GRST_K3_LT has initrans = 1
    During enableversioning this parameter is not changed, so this script successfully sets initrans for versioned tables:
    -- table GRST_K3 has initrans = 1
    alter table grst_k3 initrans 3;
    exec dbms_wm.enableversioning('GRST_K3','VIEW_WO_OVERWRITE');
    -- table GRST_K3_LT has initrans = 3
    We use OWM version 10.1.0.3.
    We cannot disable versioning on these tables. I understand the change could be made after manually disabling the trigger NO_WM_ALTER.
    Are there any problems with using serializable transactions when reading data in version-enabled tables? We will not use serializable transactions for changing data in version-enabled tables.
    thanks for Your help
    Jan Velešík

    Hi,
    You are correct. We do not currently support the initrans parameter during beginDDL/commitDDL. However, as you indicated, we will maintain any value that is set before enableversioning. If this is a critical issue for you, then please file a TAR and we can look into adding support for it in a future release.
    Also, there are no known issues involving serializable transactions on versioned tables.
    Thanks,
    Ben

  • Error: failed to commit transaction (conflicting files)

    I am getting the following error while performing a system wide upgrade i.e. sudo pacman -Syu
    error: failed to commit transaction (conflicting files)
    r: /usr/lib/R/library/foreign/COPYRIGHTS exists in filesystem
    r: /usr/lib/R/library/rpart/Meta/vignette.rds exists in filesystem
    r: /usr/lib/R/library/rpart/NEWS.Rd exists in filesystem
    r: /usr/lib/R/library/rpart/doc/index.html exists in filesystem
    r: /usr/lib/R/library/rpart/doc/longintro.R exists in filesystem
    r: /usr/lib/R/library/rpart/doc/longintro.Rnw exists in filesystem
    r: /usr/lib/R/library/rpart/doc/longintro.pdf exists in filesystem
    r: /usr/lib/R/library/rpart/doc/usercode.R exists in filesystem
    r: /usr/lib/R/library/rpart/doc/usercode.Rnw exists in filesystem
    r: /usr/lib/R/library/rpart/doc/usercode.pdf exists in filesystem
    r: /usr/lib/R/library/survival/doc/sourcecode.pdf exists in filesystem
    Errors occurred, no packages were upgraded
    Thanks

    I believe it is strongly advised not to use --force, especially for a system-wide upgrade. But it could be OK to use it only for the specific package that is causing this problem.
    In this case you could use pacman -Qo to check whether each of those files belongs to a package or not.
    If you indeed installed those files from a third-party source, it may be cleaner to delete the whole directory (or directories) you copied manually instead of using --force on the new package, so that there are no leftover files.

  • Concurrency with transactions

    Hi,
    My requirement says that my program must maintain transactions and also needs concurrent read and write on the same database object. The Concurrent Data Store documentation (http://www.oracle.com/technology/documentation/berkeley-db/db/ref/cam/intro.html) clearly states that transactions are not allowed with the Concurrent Data Store. What should my approach be for this requirement?
    Regards
    Nandish

    Hi Nandish,
    I think that you are referring to "It is an error to specify any of the other DB_ENV->open subsystem or recovery configuration flags, for example, DB_INIT_LOCK, DB_INIT_TXN, or DB_RECOVER." - this means that it is an error to specify those flags when configuring the environment as a Concurrent Data Store. You can see the differences between the Berkeley DB products here: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/intro/products.html
    Also, you can see what a Transactional Data Store is about, in here: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/transapp/app.html
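    For instance, a minimal sketch (my own, using the Java API equivalents of the C flags quoted above; the environment home path is a placeholder) of opening a Transactional Data Store environment, the product that supports both transactions and concurrent read/write:
    import java.io.File;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    public class TdsOpen
    {
        public static void main(String[] args) throws Exception
        {
            EnvironmentConfig envConf = new EnvironmentConfig();
            envConf.setAllowCreate(true);        // create the environment if missing
            envConf.setTransactional(true);      // DB_INIT_TXN
            envConf.setInitializeCache(true);    // DB_INIT_MPOOL
            envConf.setInitializeLocking(true);  // DB_INIT_LOCK
            envConf.setInitializeLogging(true);  // DB_INIT_LOG
            envConf.setRunRecovery(true);        // DB_RECOVER
            Environment env = new Environment(new File("/path/to/env"), envConf);
            // ... open databases, run transactions ...
            env.close();
        }
    }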
    Please also consider the other thread that you had opened: Concurrency with multiprocess application
    Thanks,
    Bogdan Coman

  • Concurrent transaction isolation

    Hello,
    I am building a multithreaded application that uses the Semantic Jena APIs and relies on the transactions of the different threads being isolated from each other before a commit, but I'm not quite getting this behavior. Here's a simple example (the full example source is available upon request).
    Thread 1:
    Open a connection
    Get a GraphOracleSem from the connection
    call GraphOracleSem.getTransactionHandler.begin()
    Add Triple A
    Add Triple B
    Add Triple C
    call GraphOracleSem.getTransactionHandler.commit()
    Close the GraphOracleSem
    Dispose the connection
    Open a connection
    Get a GraphOracleSem from the connection
    call GraphOracleSem.getTransactionHandler.begin()
    Add Triple A
    Add Triple B
    Add Triple C
    call GraphOracleSem.getTransactionHandler.commit()
    Close the GraphOracleSem
    Dispose the connection
    Thread 2:
    Open a connection
    Get a GraphOracleSem from the connection
    call GraphOracleSem.getTransactionHandler.begin()
    CheckA = true if Triple A Exists
    CheckB = true if Triple B Exists
    CheckC = true if Triple C Exists
    Throw Exception unless CheckA == CheckB == CheckC
    call GraphOracleSem.getTransactionHandler.abort() //no write is necessary here
    Close the GraphOracleSem
    Dispose the connection
    Now if the effects of the two threads were isolated from each other, CheckA, CheckB and CheckC would always be equivalent (sometimes true, sometimes false), but this does not seem to be the case (with my code, at least...). I'm not sure if this requires a Serializable transaction isolation level to be specified, but quoting the GraphOracleSem performAdd method:
    "Adds a triple into the graph. This change to this graph object will not be persisted until the transaction is committed. However, subsequent queries (using the same Oracle connection) can see this change."
    Doesn't this mean that two connections making changes to GraphOracleSem should not see each other's changes until a commit? Or is there something I'm missing here?
    Also if this isn't the way to get something like this to work, how can it be done?
    Edited by: alexi on Nov 11, 2010 12:22 PM - Whoops, cant attach anything to this forum

    Hi,
    I am afraid you cannot use it this way.
    See this example using SQL inserts directly. Assume there are two concurrent sessions.
    Session 1:
    SQL> set transaction isolation level serializable;
    Transaction set.
    SQL> insert into basic_tpl values(sdo_rdf_triple_s('basic','<urn:a>','<urn:b>','<urn:c_123>'));
    1 row created.
    Session 2:
    SQL> set transaction isolation level serializable;
    Transaction set.
    SQL> insert into basic_tpl values(sdo_rdf_triple_s('basic','<urn:a>','<urn:b>','<urn:c_567>'));
    insert into basic_tpl values(sdo_rdf_triple_s('basic','<urn:a>','<urn:b>','<urn:c_567>'))
    ERROR at line 1:
    ORA-08177: can't serialize access for this transaction
    ORA-06512: at "MDSYS.SDO_RDF_INTERNAL", line 7538
    ORA-06512: at "MDSYS.BASIC_INS", line 37
    ORA-04088: error during execution of trigger 'MDSYS.BASIC_INS'
    SQL> rollback;
    Rollback complete.
    SQL> insert into basic_tpl values(sdo_rdf_triple_s('basic','<urn:a>','<urn:b>','<urn:c_567>'));
    insert into basic_tpl values(sdo_rdf_triple_s('basic','<urn:a>','<urn:b>','<urn:c_567>'))
    ERROR at line 1:
    ORA-55303: SDO_RDF_TRIPLE_S constructor failed: BNode-non-reuse case:
    SQLERRM=ORA-06519: active autonomous transaction detected and rolled back
    ORA-06512: at "MDSYS.MD", line 1723
    ORA-06512: at "MDSYS.MDERR", line 17
    ORA-06512: at "MDSYS.SDO_RDF_TRIPLE_S", line 64
    If you want application-level serialization, you can use the dbms_lock package to acquire a lock before
    performing updates. Another simple way is to create a simple table with one row and do a "select * from tabName for update". You can add "nowait" if you don't want your session to be blocked.
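    A rough JDBC illustration of that second approach (my sketch; app_lock is a placeholder table, and 54 is the vendor code for ORA-00054, "resource busy", raised when NOWAIT cannot obtain the lock):
    Statement s = conn.createStatement();
    try {
        // Take the application-level lock before touching the triples.
        s.executeQuery("select * from app_lock for update nowait");
        // ... perform the adds/updates ...
        conn.commit();  // commit releases the lock
    } catch (SQLException e) {
        if (e.getErrorCode() == 54) {
            // ORA-00054: resource busy -- another session holds the lock
            conn.rollback();  // back off and retry later
        } else {
            throw e;
        }
    }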
    Hope it helps,
    Zhe Wu

  • Effect of Multiversion Concurrency Control on Isolation

    Suppose I have a table defined as follows:
    create table duty
    (person char(30),
     status char(3)
    );
    And it has the following contents:
    select * from duty;
    PERSON    STATUS
    Greg      on
    Heping    on
    If I do the following in two sessions as outlined:
    *** Session 1 ***
    set transaction isolation level serializable;
    *** Session 2 ***
    set transaction isolation level serializable;
    *** Session 1 ***
    select * from duty where person = 'Greg';
    PERSON STATUS
    Greg on
    -- Since Greg is 'on' we'll set Heping 'off'.
    *** Session 2 ***
    select * from duty where person = 'Heping';
    PERSON STATUS
    Heping on
    -- Since Heping is 'on' we'll set Greg 'off'.
    *** Session 1 ***
    update duty set status = 'off' where person = 'Heping';
    *** Session 2 ***
    update duty set status = 'off' where person = 'Greg';
    *** Session 1 ***
    commit;
    *** Session 2 ***
    commit;
    Then, my table contains
    select * from duty;
    PERSON STATUS
    Greg off
    Heping off
    If these two transactions had been executed according to the SQL92 standard for transaction isolation level serializable, that is, in one order or the other, then the status of these two rows would not both be 'off' (because I would not have executed the update if I had seen the status 'off').
    I note that Sybase seems to handle these transactions correctly in serializable mode if I execute them just as shown above, in that it identifies a deadlock between the two and forces one to roll back.
    Does Oracle not implement the SQL92 Standard with respect to transaction isolation levels? Is this behavior due to Multi-Version Concurrency Control?
    Thanks,
    G.Carter

    The couple of responses are very much appreciated. I especially found the link to Tom Kyte's article, "On Transaction Isolation Levels" (http://www.oracle.com/technology/oramag/oracle/05-nov/o65asktom.html) enlightening.
    My conclusion is that different SQL database vendors may claim compliance with the SQL-92 standard despite the fact that their databases exhibit different behaviors and yield different answers under like circumstances, because the SQL-92 standard is self-inconsistent. In particular, on the one hand, the standard states that:
    [1]A serializable execution is defined to be an execution of the operations of
    concurrently executing SQL-transactions that produces the same effect as
    some serial execution of those same SQL-transactions.
    And on the other hand (in fact, in the very next paragraph), the standard states that:
    [2]The isolation level specifies the kind of phenomena ["Dirty read", "Non-repeatable read", "Phantom"]
    that can occur during the execution of concurrent SQL-transactions.
    Whereas Sybase can emphasize [1] as a justification for its behavior, Oracle can emphasize [2] to justify its behavior, and under like circumstances, those behaviors yield different results.
    Unfortunately, I (and I've got to believe that many others as well) do not have the luxury of building an application that will work with only one vendor's database.
    Thanks,
    G.Carter

  • Transactions and database locks

    Hi,
                   We use Weblogic 4.5.1 on Windows NT 4.0 with Oracle 8.0.5. Our database
              isolation is set to TRANSACTION_READ_COMMITTED. I have an entity bean with
              TX_REQUIRED & TRANSACTION_READ_COMMITTED settings. If my client creates a
              transaction, and starts calling methods on this entity bean, is the
              corresponding database row locked for the duration of the transaction? We
              have concurrent SQL*Plus sessions going on and we want to make sure there is
              no data corruption. If the row is not locked, is it OK for me to explicitly
              lock it from inside my entity bean?
              Thanks,
              Srini.
              

    Hi. This should have been posted to the EJB or JDBC group, but I'll take it.
              This is an Oracle question. If you have a transaction as you've described,
              then the behavior will be exactly as if you had multiple SQL-PLUS sessions,
              and in one of them, you did:
              SQL> -- (a transaction begins implicitly with the first statement)
              -- do what your bean would do
              SQL> COMMIT;
              You can test this there. In general, you'll find that Oracle's optimistic locking
              will allow any number of simultaneous transactions to access a given row
              at one time. Oracle does not lock the real data while a transaction is ongoing,
              instead making a copy for the client to work off of. At commit time, depending
              on the isolation level semantics, some or all of the transactions may fail when
              Oracle tries to update the real data from the per-session private data.
              I would counsel against running with SERIALIZABLE mode because there
              is a serious bug in Oracle, where serializable transactions may fail silently.
              Details on request.
              Joe

  • Transaction resources not released across program invocations using MVCC

    I'm evaluating Berkeley DB XML v2.4.13 for use in a new software product and have encountered problems with multiversion concurrency control.
    I'm using the Java API and JRE 1.6.0_07. The platform is Windows XP Professional SP3.
    When I create a transaction with snapshot isolation enabled, execute some XQueries that read/update the contents of a document, and then commit or abort it, the resources allocated to the transaction do not appear to get released. I've used EnvironmentConfig.setTxnMaxActive() to increase the number of active transactions allowed, but this just allows me to run my program more times before it eventually hangs.
    When I set EnvironmentConfig.setTxnMaxActive() to a small number, the program hangs before completing the first time I run it. When I set it to a high value, I can run my program several times before it hangs. Note that the program runs to completion and exits cleanly. I used the task manager to verify that all processes started by the program are gone when it completes. So it appears that the resources allocated to a transaction are persisted in the database across program invocations. As a result, the program eventually hangs after running it several times. The number of times it can be run before hanging is directly proportional to how many active transactions I've set with EnvironmentConfig.setTxnMaxActive(). It appears as if I'm not shutting down the environment cleanly, but I've read the documentation several times and am following the examples exactly. I've committed every transaction and have closed the XmlContainer, XmlManager and the Environment.
    When I use read-uncommitted, read-committed or serializable transaction isolation, everything works fine. I'm only seeing this problem when I use snapshot isolation. Am I doing something wrong or is this a known problem? If this is a known problem, is there a workaround?
    Here's a copy of the program source code. Thanks in advance for any help you can provide.
    package dbxml;
    import java.io.File;
    import com.sleepycat.dbxml.*;
    import com.sleepycat.db.*;
    public class txn
    {
        public static void main(String args[])
        {
            double partId = Double.parseDouble(args[0]);
            int partCnt = Integer.parseInt(args[1]);
            XmlTransaction txn = null;
            Environment myEnv = null;
            XmlManager myManager = null;
            XmlContainer parts = null;
            XmlQueryExpression qe;
            XmlResults results;
            XmlQueryContext context;
            String readQry = "collection('parts.dbxml')/part[@number=$partId]";
            String updQry = "replace value of node collection('parts.dbxml')/part[@number=$partId]/description with $desc";
            try
            {
                EnvironmentConfig envConf = new EnvironmentConfig();
                // If the environment doesn't exist, create it.
                envConf.setAllowCreate(true);
                // Turn on the transactional subsystem.
                envConf.setTransactional(true);
                // Initialize the in-memory cache
                envConf.setInitializeCache(true);
                // Initialize the locking subsystem.
                envConf.setInitializeLocking(true);
                // Initialize the logging subsystem.
                envConf.setInitializeLogging(true);
                envConf.setLockDetectMode(LockDetectMode.MINWRITE);
                envConf.setLockTimeout(5000);
                envConf.setTxnMaxActive(200);
                envConf.setCacheMax(10000);
                envConf.setMaxLocks(10000);
                envConf.setMaxLockers(10000);
                envConf.setMaxLockObjects(10000);
                File envHome = new File("./dbxml/src/dbxml/dbhome");
                myEnv = new Environment(envHome, envConf);
                XmlManagerConfig managerConfig = new XmlManagerConfig();
                managerConfig.setAdoptEnvironment(true);
                myManager = new XmlManager(myEnv, managerConfig);
                XmlContainerConfig containerConf = new XmlContainerConfig();
                containerConf.setTransactional(true);
                containerConf.setAllowCreate(true);
                // Turn on container-level multiversion concurrency control
                containerConf.setMultiversion(true);
                parts = myManager.openContainer("parts.dbxml", containerConf);
                TransactionConfig txnConfig = new TransactionConfig();
                // set the transaction isolation-level
                txnConfig.setSnapshot(true);
                for (int i = 0; i < partCnt; i++, partId++)
                {
                    // start a transaction
                    txn = myManager.createTransaction(null, txnConfig);
                    context = myManager.createQueryContext();
                    context.setVariableValue("partId", new XmlValue(partId));
                    context.setVariableValue("desc", new XmlValue(System.currentTimeMillis()));
                    qe = myManager.prepare(txn, readQry, context);
                    results = qe.execute(txn, context);
                    while (results.hasNext())
                    {
                        String part = results.next().asString();
                        System.out.println(part);
                    }
                    results.delete();
                    qe.delete();
                    // Prepare (compile) the query
                    qe = myManager.prepare(txn, updQry, context);
                    results = qe.execute(txn, context);
                    txn.commit();
                    results.delete();
                    qe.delete();
                    txn.delete();
                    txn = myManager.createTransaction(null, txnConfig);
                    qe = myManager.prepare(txn, readQry, context);
                    results = qe.execute(txn, context);
                    while (results.hasNext())
                    {
                        String part = results.next().asString();
                        System.out.println(part);
                    }
                    txn.commit();
                    results.delete();
                    context.delete();
                    qe.delete();
                    txn.delete();
                }
            }
            catch (Exception e)
            {
                try
                {
                    if (txn != null)
                        txn.abort();
                }
                catch (XmlException xe)
                {
                    xe.printStackTrace();
                }
                e.printStackTrace();
            }
            finally
            {
                try
                {
                    System.out.println("Shutting Down ...");
                    if (parts != null)
                    {
                        System.out.println("Closing Container");
                        parts.close();
                    }
                    if (myManager != null)
                    {
                        System.out.println("Closing Manager");
                        myManager.close();
                    }
                    System.out.println("Shut Down Complete");
                }
                catch (Exception xe)
                {
                    xe.printStackTrace();
                }
            }
        }
    }

    From the documentation I linked for you:
    The environment should also be configured for sufficient transactions
    using DB_ENV->set_tx_max (http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/env_set_tx_max.html). The maximum number of transactions
    needs to include all transactions executed concurrently by the
    application plus all cursors configured for snapshot isolation.
    Further, the transactions are retained until the last page they created
    is evicted from cache, so in the extreme case, an additional transaction
    may be needed for each page in the cache. Note that cache sizes under
    500MB are increased by 25%, so the calculation of number of pages needs
    to take this into account.
    As you can see, to calculate the maximum number of transactions you should configure for, you should use the following formula:
    number_of_txns = cache_size / page_size + number_of_concurrently_active_transactions
    This will give you a worst-case figure - in practice you may find that a lower number will suffice.
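    As a rough worked example (my numbers, not from this thread): a 100MB cache is bumped by 25% to 125MB; at an 8KB page size that is 16,000 pages, so with, say, 20 concurrently active transactions you would configure roughly 16,020:
    // Worst-case txn_max per the formula above (assumed figures).
    long cacheBytes = 100L * 1024 * 1024;      // 100MB cache
    long effective  = cacheBytes * 125 / 100;  // caches under 500MB grow by 25%
    long pageSize   = 8 * 1024;                // 8KB pages
    int  activeTxns = 20;                      // concurrently active transactions
    int  txnMax = (int) (effective / pageSize) + activeTxns;  // = 16020
    envConf.setTxnMaxActive(txnMax);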
    You'll also notice that transactions remain allocated "until the last page they created is evicted from the cache", so you should find that checkpointing will also reduce the number of transactions necessary at any one time.
    Using snapshot semantics only on read-only operations will give you the full value provided by MVCC. This means that the read-only (snapshot) transactions will take copies (snapshots) of the pages that are write-locked, but the transactions that write will wait on the write-lock. Using snapshot semantics this way will reduce deadlocks as intended - using snapshot semantics for write transactions just introduces a new way to get a deadlock (http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/txn_begin.html#DB_TXN_SNAPSHOT):
    The error DB_LOCK_DEADLOCK will be returned from update operations if a snapshot transaction attempts to update data which was modified after the snapshot transaction read it.
    John

  • Phantom Reads / Data Concurrency

    I am using OWB 10.2.
    I am encountering phantom reads within some of my OWB maps. These phantom reads are caused by activity on the source system while I am pulling the data from the source tables. The problem is that my map does an update and then an insert. The activity on the source occurs after the update transaction has started but before the insert transaction.
    Does anyone know of a setting in OWB that allows serializable transactions?
    I thought of using the command "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE" and changing the Commit Control to Manual for the map, but can I put the SET TRANSACTION command in a Pre-Mapping process? Or does it have to be in a SQL*Plus operator in the Process Flow?
    I cannot find any information about Data Concurrency in the OWB documentation and am surprised that no one has encountered this problem before.
    Help!

    Hi,
    Check the below links:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:27523665852829
    http://www.experts-exchange.com/Database/Oracle/Q_20932242.html
    Best regards,
    Rafi.
    http://rafioracledba.blogspot.com

  • Error with transaction

    Hi!
    An error has occurred! Why did you transfer money from my card again? I paid 9,735 HKD for everything (Mac mini, mouse, keyboard and adapter). But then you transferred money again! 738 HKD and 370 HKD - what is that for? Can you comment on this?
    Order Number: W226194711
    login: [email protected]

    The 8177 error does not seem to be a rarity when using transaction isolation serializable...
    Yes, if you elect to use serializable you can just about bet that ORA-8177 will, at some point, come along for the ride. It is just a "fact of life" when using serializable (including "false positives"). The only way to avoid it (as far as I know) is to not use serializable. If this is not possible, then the application will have to deal with it in some fashion.
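    One common way to "deal with it", sketched in JDBC (my illustration; doTransactionalWork is a placeholder for the real reads and updates), is to catch ORA-08177 and rerun the whole transaction a bounded number of times:
    int attempts = 0;
    while (true) {
        try {
            conn.setAutoCommit(false);
            doTransactionalWork(conn);  // placeholder for the real work
            conn.commit();
            break;                      // success
        } catch (SQLException e) {
            conn.rollback();
            if (e.getErrorCode() == 8177 && ++attempts < 3) {
                continue;               // serialization failure: retry from the start
            }
            throw e;                    // other errors, or too many retries
        }
    }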
    It is relatively easy to demonstrate that you can get the error using SQL*Plus. On one of my test systems, the following reliably raises ORA-8177:
    Session 1:
    SQL> create table test (a number, b number);
    Table created.
    SQL> insert into test values (1,1);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> begin
      2    -- use serializable for this test
      3    set transaction isolation level serializable;
      4
      5    -- sleep for 10 seconds to perform actions in session 2
      6    dbms_lock.sleep(10);
      7
      8    -- this will likely fail with ORA-8177
      9    update test set b = 2 where a = 1;
    10  end;
    11  /
    While the anonymous pl/sql block is executing...
    Session 2:
    SQL> insert into test values (2,2);
    1 row created.
    SQL> commit;
    Commit complete.
    Now, back in Session 1...
    begin
    ERROR at line 1:
    ORA-08177: can't serialize access for this transaction
    ORA-06512: at line 9
    I'm not sure what you mean about using serializable to push concurrency problems into the database.
    Regards,
    Mark
    [EDIT]
    Also, I believe index block splits during the serializable transaction can also result in ORA-8177...

  • Conflicts resolution methods in Oracle Lite

    Can anyone please provide the answers of the following questions?
    1_What Methods are used for Conflict detection and resolution for concurrent updates by multiple clients in Oracle lite databases?
    2_ Is there any method that extract semantic relation from the concurrent update transactions and create a global update schedule?
    3_ Does oracle lite use conflict avoidance mechanism?
    4_ What replication method is used by Oracle Lite Database?

    In terms of conflict resolution with oracle lite, which end do you mean? conflict resolution in the client database (ie: oracle lite) or on the server side when processing client uploads (this is just a standard oracle database), also not sure what you are trying to achieve
    *1_What Methods are used for Conflict detection and resolution for concurrent updates by multiple clients in Oracle lite databases?*
    I assume in the following that you are talking about dealing with uploads
    Depending on how the publication items are defined, the process is quite different.
    a) fast refresh publication items
    When the client synchronises, the upload data is uploaded as a packed binary file which is then unpacked and inserted into in queue tables in the mobileadmin repository (table names begin CFM$ followed by the publication item name). This is the only action that happens during the actual sync process.
    A second and independent process, not linked to the actual synchronisation - the MGP process, runs on the mobile server, and this has three phases - apply, process logs and compose that run one after the other. You can set the MGP to only do the apply phase, or all three.
    during the apply phase the data in the in queue tables for a particular user/transaction will be applied to the server database. Normally the MGP process is set to have three threads (this can be changed, but three is the default), and therefore three client uploads will be processed in parallel, but each of these threads is independent of the others and therefore should be seen as a separate transaction.
    It should be noted that even if you have 50 or 100 users synchronising concurrently, only three upload sets will be processed at any one time, and almost certainly a period of time after the synchronisation has completed (it may be many hours, depending on the MGP cycle time)
    As each of the apply threads is a separate transaction, there is no concept of concurrency built in, and the only conflict resolution by default is based on the server wins/client wins setting of the publication item. Where multiple users are updating the same server record with 'client wins', the first user will update the data, and then the next user will update the data (just a repeat of the previous one). NOTE also that in the case of an update, ALL columns in the record are updated; there is no column-level update.
    There are customisation options available to provide finer grained control over the apply process, look at the PLSQL callback packages registered against each publication item for beforeapply, afterapply, beforetranapply and aftertranapply, Apply DML procedures against the publication items and also the CUSTOMIZE package at the MGP level
    b) complete refresh publication items
    where the publication as a whole has a mixture of fast and complete refresh publication items, these normally work in the same way as the fast refresh described above. Where however you just have complete refresh items the data MAY be written directly to the server table on upload
    c) queue based publication items
    These work in realtime, rather than with a delay for the MGP process to pick up the data.
    When the user synchronises, the uploaded data is is written to the in queue tables in the same way, but when this is completed, a package (defined as part of the publication definition) is called, and the procedure upload_complete is run passing in the user and transaction identifiers. This package needs to be hand crafted, but you have full control over what and how all of the uploaded data is processed, but again this is a single transaction for that user. If you want to look at other sessions running, you need to find a way to implement this.
    *2_ Is there any method that extract semantic relation from the concurrent update transactions and create a global update schedule?*
    As noted above, the uploads may be processed in parallel, but they are separate transactions, so there are no built-ins.
    *3_ Does oracle lite use conflict avoidance mechanism?*
    Only the basic oracle stuff, unless you use the customisation options to write your own
    *4_ What replication method is used by Oracle Lite Database?*
    The different types of publication items select data from the server database for download in different ways
    a) fast refresh
    change logging tables and triggers are created in the server database. These are scanned during the MGP process logs phase to determine what changes have happened since the last MGP compose, and what publication items they affect. The MGP compose then runs and this uses the same three threads to process the users in alphabetical order using the change keys to populate data in out queue tables (prefixed CMP$) in the repository. These have the PK values for the data, plus a transaction types (insert/update/delete). All the MGP process does is populate these out queue tables.
    When the user synchronises, the data in the out queue tables is used as a key list to extract the data from the actual server tables into a packed binary file, and this is sent to the client.
    b) complete refresh
    there is no pre-preparation in this case, the data is streamed directly from the server database into the packed binary download file
    c) queue based items
    in real time, when the user is synchronising: after the apply has been done by the upload_complete procedure, a second procedure, download_init, is called. Within this you have to code the data extract, and MUST populate tables (you also need to create them) named CTM$<publication item name>; these are effectively out queue tables, but contain all of the data, not just the PK values. At the end of the procedure, the data is streamed from these into the binary file for download.
    Depending on the definition of your publication, you could have one or more of the above types (a VERY bad idea to mix queue-based and fast refresh unless you are very sure about what you are doing) and therefore there may be a mix of different actions happening at different times.
    In conclusion, I would say: try to send separate data to each client so that clients do not interfere with each other, and for inserts use unique keys or sequences. If you MUST send the same data to different clients for update, then the queue-based approach provides the best control, but as it is real time it is not as scalable for large data sets.
