Triggers on Snapshots
Hi
I have a system currently implemented on Oracle 7.3.4 which
involves triggers on the snapshot base tables. The snapshots are
read-only fast refresh, and the triggers perform a function
similar to the snapshot log, the main difference being that the
triggers are on the client, reducing the load on the server.
The problem I face is that triggers on read-only snapshots are
not supported by Oracle, and therefore I have no guarantee that
the mechanism will still work on Oracle 8i.
The triggers are insert/update/delete triggers on the base snapshot
table (i.e. SNAP$_STBLE) and create a row entry in a temporary
table for each modification picked up by the fast refresh
process. An Oracle procedure reads the temporary table and deletes
each entry after it has finished processing. All of this occurs on
the client, reducing the load on the transaction system.
Does anyone have any ideas how to accomplish the above without triggers
on snapshots, or failing that, how to ensure it will keep working before
going down the road of a migration?
Thanks in advance
JEB
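A minimal sketch of the kind of change-capture trigger described above. The table name SNAP$_MYTABLE, the staging table MY_CHANGE_QUEUE, and the ID column are all invented for illustration; as the post itself notes, triggers on read-only snapshot base tables are not supported by Oracle, so this pattern may break across versions.

```sql
-- Hypothetical names throughout: snap$_mytable, my_change_queue and
-- the ID column are illustration only. Triggers on read-only
-- snapshot base tables are NOT supported by Oracle.
CREATE TABLE my_change_queue (
  change_id   NUMBER,
  pk_value    NUMBER,
  operation   VARCHAR2(1),   -- 'I', 'U' or 'D'
  change_time DATE
);

CREATE SEQUENCE my_change_seq;

CREATE OR REPLACE TRIGGER trg_snap_capture
AFTER INSERT OR UPDATE OR DELETE ON snap$_mytable
FOR EACH ROW
BEGIN
  -- record one queue row per modification picked up by the fast refresh
  INSERT INTO my_change_queue
  VALUES (my_change_seq.NEXTVAL,
          NVL(:NEW.id, :OLD.id),
          CASE WHEN INSERTING THEN 'I'
               WHEN UPDATING  THEN 'U'
               ELSE 'D' END,
          SYSDATE);
END;
/
```

A client-side procedure would then read MY_CHANGE_QUEUE and delete each entry after processing, as the post describes.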
Hi,
I recommend that you contact Oracle RDBMS support directly. This
discussion forum's main expertise is migrations from non-oracle
environments to oracle.
Regards
John
Oracle Technology Network
http://technet.oracle.com
Similar Messages
-
Hi,
Here is the situation. We have existing replication with a SQL 2008 R2 publisher and a subscriber on the same edition. The database is quite big, so snapshot generation generally takes about 4 hours or more. There are additional tables on the subscriber, so a backup/restore
mode of initialization is not suitable.
This replication is working just fine. Now there is a need to add a SQL 2014 subscriber to this publisher, with the same tables as in the current publication, so we decided to simply add another push subscription for this new SQL 2014 subscriber. During the process the initial snapshot was generated and applied to the new 2014 subscriber; keep in mind we have not made any change to the publication, to avoid affecting the existing subscriber. When the snapshot was delivered and appeared in Replication Monitor, it started creating primary keys on the subscriber database. We have a few big tables where we do not send clustered and non-clustered indexes across, so that the replication snapshot goes faster (although it still takes over 4 hours). These indexes and primary keys are created with manual scripts after the snapshot is generated. During this time the log reader agent has been actively sending transactions to the existing subscriber, which tells me that as soon as the distribution agent is done with the snapshot it will start sending transactions to the new subscriber (which is quite normal).
Now the problem: after the snapshot is done I need to run manual scripts to create the PKs and indexes on the new subscriber, and to do that I have to stop the distribution agent so that the index and PK creation doesn't get blocked. After I stopped the distribution agent temporarily, the scripts completed successfully within 20 minutes, but when I restarted the distribution agent, to my surprise it started initializing the new subscriber again and sending the same snapshot that had already been applied. It puzzles me what caused the distribution agent to reinitialize, although it had already delivered the snapshot and I had the indexes created and ready to go. I tried the process again with a more careful approach and waited about one hour after the "snapshot delivered" message was received, but it did the same thing. The only explanation I can think of is that the publisher database was actively sending transactions while I was applying the snapshot: as soon as the snapshot was delivered, the distribution agent started delivering transactions, and I could see Undistributed Commands growing, telling me there were lots of transactions for the new subscriber to catch up on. But without indexes and some missing primary keys it would never catch up, so I always stop the distribution agent, run my scripts on the subscriber, and run into this situation. During all of this the existing subscriber never had any issue.
Do I need to make sure no or minimal transactions are happening on the publisher when I initialize the new subscriber?
Or should I create new separate publication on the same publisher and use that.
Any other suggestions are welcome
Thanks!
dba60
The snapshot was set to initialize immediately. No errors in the repl_errors table. The distribution agent was using the default profile.
Do you think I should be running the missing PK and indexing scripts as part of replication? Replication provides an option to run scripts before or after the snapshot is applied. Also, I am wondering if creating some of the missing PKs is triggering the snapshot to
be reapplied. And why are the primary keys for some tables not coming along during snapshot delivery?
Do you think having lots of transactions on publisher database while snapshot being applied would be affecting it too?
Also, is it possible to use a different distributors for each publication from same publisher?
My plan now is to create new publication for same publisher and database, this way I will have more control over changing article properties.
Thanks!
dba60 -
Analytical snapshots and scheduled jobs
Hello,
for extended analytical reports we need to take snapshots of our statistics data every day.
There is an option for taking snapshots of the project in Project Management, but we need snapshots of our own BOs or data sources. Is there any possibility to do this?
In case snapshots can't be done by the system, we would also implement this in code, by creating business objects with the required data and saving them. But we need to do the saving automatically (regularly).
In Project Management it is also possible to run scheduled snapshots. How can we schedule jobs (it would be enough if we could execute an action which saves our data at a scheduled time)?
We have found only "Mass Data Run Process" in this context, but MDRs can be used only for certain standard floorplans, can't they?
Best regards,
Leonid Granatstein
Hi,
currently it is not possible to create snapshots of partner BO content by a standard process.
A mass data run object, which you could schedule and in which you could implement the snapshot yourself, is also not available for partners yet. So we have no standard mechanism available to get your task done with the current implementation possibilities.
The only option I see is that you develop the snapshot activity on your own in ByD Studio.
To trigger the snapshot activity on a regular basis, I only see the option of triggering it from outside. An option would be to define a web service or an XML file upload.
You could, for example, write a small program (using PHP or .NET) on a PC which runs on a regular basis and which uploads an XML file or calls a web service. This then triggers the snapshot activity you have programmed in ByD Studio.
I hope this helps.
Regards,
Thomas -
Materialized views not updating
I'm having trouble getting a materialized view with fast refresh to update. I'm working between two schemas and have the global query rewrite privilege for both users. I've run the catrep.sql script to make sure it had been run, and I find that it leaves a lot of packages that don't compile, which I guess is expected as I'm not using Advanced Replication. I think the problem is that I can't get the dbms_snapshot package to compile, so I can't even refresh the view manually. Is there another script I need to run to make materialized views work? Some other privilege or parameter?
I've granted permissions on tables, views, materialized views, triggers, and snapshots to both users.
The log does get entries but never sends them to the views.
I have a base table and a log on the table in schema 1.
I have a materialized view in schema 2:
create materialized view log on schema1.document
tablespace app_mview
with primary key
( list columns needed );

create materialized view schema2.doc_mv
pctfree 0
tablespace app_mview
storage (initial 128k next 128k pctincrease 0)
refresh fast
start with sysdate
next sysdate + 1/1440
as select * from schema1.document;
Does anyone know where my problem might be?
thanks for any help
I have temporary parameters, not in init.ora, but echoing them shows job_queue_processes is 10 and job_queue_interval is 60.
A show errors returns no errors.
Earlier I did not get anything new in dba_errors when trying to compile. Since I've rerun catrep.sql I'm getting PLS-00201 (identifier must be declared) for a lot of objects: dbms_snap, sys.dbms_alert, dbms_reputil2, etc. (package, package body, and trigger types). That is why I think there must be another script that should be run that I don't know about.
I can't do a full refresh because the dbms_snapshot package will not compile. I believe it's possible that both full and fast refresh will work once I get the dbms_snapshot package working.
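Once the DBMS_SNAPSHOT package compiles (typically after rerunning the catalog replication scripts as SYS), a manual refresh of the view from the post can be attempted. A sketch, using the names from the post; the privileged user and the exact invalid-object pattern are assumptions:

```sql
-- Attempt a manual fast refresh of the materialized view from the post
-- (SQL*Plus; run as a suitably privileged user):
EXEC DBMS_SNAPSHOT.REFRESH('schema2.doc_mv', 'f');

-- Check which replication-related objects are still invalid:
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID'
AND    object_name LIKE 'DBMS_%';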
thanks for your help
pat -
Snapshot Reports randomly (not) triggering Subscription
Hi.
I currently have set up 5 reports that generate a snapshot once a month; additionally I have set up a subscription on each of these.
The problem I am facing is that the subscriptions seem to be triggered randomly: one month the subscriber may get 1 report, other months 5 reports (and anything in between).
The subscriptions are set up with: "When the report content is refreshed. This option is available only for report snapshots."
And when I view the subscription and snapshot history of a report that was not sent this month (at the time I am writing this), it says:
Snapshot History: 3/1/2015 12:00:10 AM (lastrun)
Subscriber: 2/1/2015 12:00 AM (last run)
I.e. last month this report had no problem, but this month it has... (This has been going on for a year now.)
Does anybody have an idea what could be causing this? (This month 4 of 5 reports got sent.)
Hi Gaute Odin,
Per my understanding, you have configured a report-specific schedule in the Snapshot Options to add snapshots to report history, and you have also set up the subscription with "When the report content is refreshed. This option is available only for report snapshots." The issue is that the subscription didn't run on the schedule shown in the snapshot history, right?
Generally, when you configure the schedule in the Snapshot Options, a snapshot is generated on that schedule and the new snapshot information appears in the Report History; this setting has no relationship with the schedule of the subscription.
It is the schedule configured under the "Render this report from a report snapshot" section of the Processing Options that the subscription runs on.
So, please check in the above screenshot to make sure that schedule is set correctly.
If you still have any problem, please feel free to ask.
Regards,
Vicky Liu
TechNet Community Support -
How to cancel email notification from snapshot and simulation
Hi,
We've activated workflow template WS28700001 so that an email notification is triggered when a task is released. Meanwhile we are using snapshots and simulations at the same time. But when saving a snapshot or simulation, tasks which have the status "released" also trigger the email notification. How can we cancel this notification for snapshots and simulations?
Regards.
Hi Ravi,
I have added a Container Element(version) in workflow template WS28700001 and set a workflow start condition as follows.
&Task.Version& = ' '
Regards
Yemi -
Problem (?) with LVM snapshots since Feb 12 updates
After the update, I'm seeing some LVM snapshot errors during boot. In context:
:: running hook [udev]
:: Triggering uevents
[ x.xxx ] device-mapper: table: 254:9: snapshot: Snapshot cow pairing for exception table handover failed
[ x.xxx ] device-mapper: table: 254:14: snapshot: Snapshot cow pairing for exception table handover failed
:: running hook [keymap]
:: performing fsck on <root LV>
arch: clean ...
:: mounting '<root LV>' on real root
:: running cleanup hook [shutdown]
:: running cleanup hook [lvm]
:: running cleanup hook [udev]
Welcome to Arch Linux!
The number of errors varies between zero and three, the latter being the number of snapshots I have.
The corresponding info in the journal is:
Feb 15 09:29:17 caddywhompus kernel: device-mapper: table: 254:9: snapshot: Snapshot cow pairing for exception table handover failed
Feb 15 09:29:17 caddywhompus kernel: device-mapper: ioctl: error adding target to table
Feb 15 09:29:17 caddywhompus kernel: device-mapper: table: 254:14: snapshot: Snapshot cow pairing for exception table handover failed
Feb 15 09:29:17 caddywhompus kernel: device-mapper: ioctl: error adding target to table
Since the errors seem to occur during initramfs, here's some info on that:
# cat /etc/mkinitcpio.conf
MODULES="nouveau ext4"
BINARIES=""
FILES=""
HOOKS="base udev autodetect block keymap lvm2 keyboard fsck shutdown"
COMPRESSION="xz"
COMPRESSION_OPTIONS="-e -9"
# lsinitcpio -a /boot/caddywhompus.img
==> Image: /boot/caddywhompus.img
==> Created with mkinitcpio 0.13.0
==> Kernel: 3.7.7-1-ARCH
==> Size: 3.29 MiB
==> Compressed with: xz
-> Uncompressed size: 12.37 MiB (.265 ratio)
-> Estimated extraction time: 0.402s
==> Included modules:
ahci dm-mirror ehci-hcd libahci pata_marvell usb-common
ata_generic dm-mod ext4 [explicit] libata scsi_mod usbcore
button dm-region-hash hid mbcache sd_mod usbhid
cdrom dm-snapshot i2c-algo-bit mxm-wmi sr_mod usb-storage
crc16 drm i2c-core nouveau [explicit] ttm video
dm-log drm_kms_helper jbd2 pata_acpi uhci-hcd wmi
==> Included binaries:
blkid cp findmnt fsck.ext4 lsblk lvmetad switch_root udevd
busybox dmsetup fsck kmod lvm mount udevadm
==> Early hook run order:
udev
lvm2
==> Hook run order:
udev
keymap
==> Cleanup hook run order:
shutdown
lvm2
udev
Initially I missed the news to enable lvm-monitoring.service. I have since enabled the service, rebooted to another install of Arch which doesn't use LVM (all lvs are prevented from being activated with empty auto_activation_volume_list), and deleted and recreated the snapshots in case they had gotten borked. FWIW, the SSs now only exist for the purpose of troubleshooting this issue, so I'm not worried about their integrity.
I've done numerous boots and seen instances where both lvm-monitoring.service and dmeventd.service start, where both fail, and where one starts but the other fails. This doesn't seem to correspond with the occurrence of the errors during boot. On my most recent boot both failed:
# systemctl status lvm-monitoring.service dmeventd.service
lvm-monitoring.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
Loaded: loaded (/usr/lib/systemd/system/lvm-monitoring.service; enabled)
Active: failed (Result: exit-code) since Fri 2013-02-15 09:49:25 EST; 2min 10s ago
Docs: man:dmeventd(8)
man:lvcreate(8)
man:lvchange(8)
man:vgchange(8)
Process: 237 ExecStart=/usr/sbin/lvm vgchange --monitor y (code=exited, status=5)
Feb 15 09:49:18 caddywhompus lvm[237]: 2 logical volume(s) in volume group "VG1" monitored
Feb 15 09:49:25 caddywhompus lvm[237]: No input from event server.
Feb 15 09:49:25 caddywhompus lvm[237]: VG0-ss_var: event registration failed: Input/output error
Feb 15 09:49:25 caddywhompus lvm[237]: VG0/snapshot1: snapshot segment monitoring function failed.
Feb 15 09:49:25 caddywhompus lvm[237]: 8 logical volume(s) in volume group "VG0" monitored
Feb 15 09:49:25 caddywhompus systemd[1]: lvm-monitoring.service: main process exited, code=exited, status=5/NOTINSSTALLED
Feb 15 09:49:25 caddywhompus systemd[1]: Failed to start Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 15 09:49:25 caddywhompus systemd[1]: Unit lvm-monitoring.service entered failed state
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
dmeventd.service - Device-mapper event daemon
Loaded: loaded (/usr/lib/systemd/system/dmeventd.service; static)
Active: failed (Result: core-dump) since Fri 2013-02-15 09:49:35 EST; 2min 0s ago
Docs: man:dmeventd(8)
Process: 469 ExecStart=/usr/sbin/dmeventd (code=exited, status=0/SUCCESS)
Main PID: 470 (code=dumped, signal=SEGV)
CGroup: name=systemd:/system/dmeventd.service
Feb 15 09:49:25 caddywhompus systemd[1]: Starting Device-mapper event daemon...
Feb 15 09:49:25 caddywhompus dmeventd[470]: dmeventd ready for processing.
Feb 15 09:49:25 caddywhompus systemd[1]: Started Device-mapper event daemon.
Feb 15 09:49:25 caddywhompus lvm[470]: Monitoring snapshot VG0-ss_var
Feb 15 09:49:25 caddywhompus lvm[470]: Monitoring snapshot VG0-ss_root
Feb 15 09:49:25 caddywhompus lvm[470]: Monitoring snapshot VG0-ss_home
Feb 15 09:49:35 caddywhompus systemd[1]: dmeventd.service: main process exited, code=dumped, status=11/SEGV
Feb 15 09:49:35 caddywhompus systemd-coredump[676]: Process 470 (dmeventd) dumped core.
Feb 15 09:49:35 caddywhompus systemd[1]: Unit dmeventd.service entered failed state
But in all cases, lvdisplay seems to indicate that the snapshots are working fine:
# lvdisplay VG0/ss_root VG0/ss_var VG0/ss_home
--- Logical volume ---
LV Path /dev/VG0/ss_root
LV Name ss_root
VG Name VG0
LV UUID jQo9aS-392r-VEX4-36ra-LNS8-BfIJ-0d8NyL
LV Write Access read/write
LV Creation host, time recover, 2013-02-14 17:37:31 -0500
LV snapshot status active destination for lv_root
LV Status available
# open 0
LV Size 5.00 GiB
Current LE 1280
COW-table size 5.00 GiB
COW-table LE 1280
Allocated to snapshot 0.01%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:5
--- Logical volume ---
LV Path /dev/VG0/ss_var
LV Name ss_var
VG Name VG0
LV UUID SPEFKY-nOYY-3B2L-GpAq-3eSe-aLtY-jlGLQJ
LV Write Access read/write
LV Creation host, time recover, 2013-02-14 17:37:32 -0500
LV snapshot status active destination for lv_var
LV Status available
# open 0
LV Size 5.71 GiB
Current LE 1462
COW-table size 5.71 GiB
COW-table LE 1462
Allocated to snapshot 0.51%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:14
--- Logical volume ---
LV Path /dev/VG0/ss_home
LV Name ss_home
VG Name VG0
LV UUID MzJVwN-i49J-1nto-6UM4-bq0P-6jD9-F21ok2
LV Write Access read/write
LV Creation host, time recover, 2013-02-14 17:37:32 -0500
LV snapshot status active destination for lv_home
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 2560
COW-table size 10.00 GiB
COW-table LE 2560
Allocated to snapshot 0.25%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:7
I made the suggested change to global_filter in lvm.conf, though I never had any problems booting; I just saw the occasional warning about fd0 and/or cdrom when running lvscan after boot, and figured it couldn't hurt.
And... I have no idea where to go from here.
Hi,
I've had the same problem for a few months now and the computer has been rebooted many times. This time an endpoint engine has been updated, so a reboot is required.
I've rebooted now and still don't see any updates for these Windows 8 machines. On some of the W8 machines I have seen working, the updates don't appear in Software Centre, but when I go to view updates in Programs and Features, they do appear there, which is also a bit strange.
All the windows 7 machines work fine.
Jaz -
Distribution Manager failed to process package triggers an update of the packages
Hi all,
we have randomly following issue:
SCCM 2012 SP1 CAS takes a snapshot of a Windows update package in the morning (not sure how the frequency of this check is determined). If for any reason it fails, SCCM automatically redistributes the package to all sites. This happened again this morning for 5
Windows update packages. You understand that this means gigabytes sent to all 66 secondary sites, a useless amount of data sent out.
From the Status messages I see
Information Milestone RC0 06.11.2014 07:12:11 SCHVSGGSC600.rccad.net SMS_DISTRIBUTION_MANAGER 2300 Distribution Manager is beginning to process package "SUP-2014.09" (package ID = RC00017B).
Then lot of updates lists with comment taking a snapshot and finally
Error Milestone RC0 06.11.2014 07:12:29 SCHVSGGSC600.rccad.net SMS_DISTRIBUTION_MANAGER 2302 Distribution Manager failed to process package "SUP-2014.09" (package ID = RC00017B). Possible cause: Distribution manager does not have access to either the package source directory or the distribution point. Solution: Verify that distribution manager can access the package source directory/distribution point. Possible cause: The package source directory contains files with long file names and the total length of the path exceeds the maximum length supported by the operating system. Solution: Reduce the number of folders defined for the package, shorten the filename, or consider bundling the files using a compression utility. Possible cause: There is not enough disk space available on the site server computer or the distribution point. Solution: Verify that there is enough free disk space available on the site server computer and on the distribution point. Possible cause: The package source directory contains files that might be in use by an active process. Solution: Close any processes that maybe using files in the source directory. If this failure persists, create an alternate copy of the source directory and update the package source to point to it.
This triggers immediately an update of all DPs
Information Milestone RC0 06.11.2014 07:43:52 SCHVSGGSC600.rccad.net SMS_DISTRIBUTION_MANAGER 2304 Distribution Manager is retrying to distribute package "RC00017B". Wait to see if the package is successfully distributed on the retry.
Any idea
How this can be avoided, since nobody changed the package and we suppose it was a temp connection issue between the CAS and the package repository server
If this check can be set up to once a week for instance or even less?
Thanks,
Marco
Hi Daniel,
thanks for the prompt answer. Actually I saw it yesterday, at least for 1 package (the last one). The error is generated by SQL Server killing a task:
Adding these contents to the package RC00010D version 347.
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:23
7796 (0x1E74)
Sleep 30 minutes... SMS_DISTRIBUTION_MANAGER
06.11.2014 07:12:23 3652 (0x0E44)
*** [40001][1205][Microsoft][SQL Server Native Client 11.0][SQL Server]Transaction (Process ID 152) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:25
5460 (0x1554)
STATMSG: ID=2302 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=SCHVSGGSC600.rccad.net SITE=RC0 PID=2144 TID=5460 GMTDATE=jeu. nov. 06 06:12:25.422 2014 ISTR0="SUP-2012.Q4" ISTR1="RC000068" ISTR2=""
ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=400 AVAL0="RC000068"
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:25
5460 (0x1554)
Failed to process package RC000068 after 0 retries, will retry 100 more times
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:25
5460 (0x1554)
Exiting package processing thread. SMS_DISTRIBUTION_MANAGER
06.11.2014 07:12:25 5460 (0x1554)
Used 4 out of 5 allowed processing threads.
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:26
3300 (0x0CE4)
Starting package processing thread, thread ID = 0x894 (2196)
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:27
3300 (0x0CE4)
Used all 5 allowed processing threads, won't process any more packages.
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:27
3300 (0x0CE4)
Sleep 1828 seconds... SMS_DISTRIBUTION_MANAGER
06.11.2014 07:12:27 3300 (0x0CE4)
STATMSG: ID=2300 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=SCHVSGGSC600.rccad.net SITE=RC0 PID=2144 TID=2196 GMTDATE=jeu. nov. 06 06:12:27.716 2014 ISTR0="SUP-2014.M05" ISTR1="RC00011D" ISTR2=""
ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=400 AVAL0="RC00011D"
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:27
2196 (0x0894)
Start updating the package RC00011D... SMS_DISTRIBUTION_MANAGER
06.11.2014 07:12:27 2196 (0x0894)
CDistributionSrcSQL::UpdateAvailableVersion PackageID=RC00011D, Version=14, Status=2300
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:27
2196 (0x0894)
*** [40001][1205][Microsoft][SQL Server Native Client 11.0][SQL Server]Transaction (Process ID 154) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:27
1424 (0x0590)
Taking package snapshot for package RC00011D
SMS_DISTRIBUTION_MANAGER 06.11.2014 07:12:27
2196 (0x0894)
Now, as I mentioned, it was probably an SQL (more likely) or network issue; however, the question was more about how to avoid that. After all, the packages were not changed, and if possible I would avoid any automatic SCCM action on packages unless an operator manually triggers it.
I didn't see any error today, so it should not be a configuration issue.
Is this possible? -
Hello,
I have been experimenting with snapshot isolation with Berkeley DB, but I find that it frequently triggers a segmentation fault when write transactions are in progress. The following test program reliably demonstrates the problem in Linux using either 5.1.29 or 6.1.19.
https://anl.app.box.com/s/3qq2yiij2676cg3vkgik
Compilation instructions are at the top of the file. The test program creates a temporary directory in /tmp, opens a new environment with the DB_MULTIVERSION flag, and spawns 8 threads. Each thread performs 100 transactional put operations using DB_TXN_SNAPSHOT. The stack trace when the program crashes generally looks like this:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff7483700 (LWP 11871)]
0x00007ffff795e190 in __memp_fput ()
from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
(gdb) where
#0 0x00007ffff795e190 in __memp_fput ()
from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
#1 0x00007ffff7883c30 in __bam_get_root ()
from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
#2 0x00007ffff7883dca in __bam_search ()
from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
#3 0x00007ffff7870246 in ?? () from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
#4 0x00007ffff787468f in ?? () from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
#5 0x00007ffff79099f4 in __dbc_iput ()
from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
#6 0x00007ffff7906c10 in __db_put ()
from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
#7 0x00007ffff79191eb in __db_put_pp ()
from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
#8 0x0000000000400f14 in thread_fn (foo=0x0)
at ../tests/transactional-osd/bdb-snapshot-write.c:154
#9 0x00007ffff7bc4182 in start_thread (arg=0x7ffff7483700)
at pthread_create.c:312
#10 0x00007ffff757f38d in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
I understand that this test program, with 8 concurrent (and deliberately conflicting) writers, is not an ideal use case for snapshot isolation, but this can be triggered in other scenarios as well.
You can disable snapshot isolation by toggling the value of the USE_SNAP #define near the top of the source, and the test program then runs fine without it.
Can someone help me to identify the problem?
many thanks,
-Phil
Hi Phil,
We have taken a look at this in more detail and there was a bug in the code. We have fixed the bug and will roll it into our next 6.1 release. If you would like an early patch on top of 6.1.19, please email me at [email protected], reference this forum post, and I can get a patch sent out to you. It will be a .diff file that you apply to the source code, after which you rebuild the library. Once again, thanks for finding the issue and providing a great test program, which tremendously helped in getting this resolved.
thanks
mike -
VCB/VADP, ESX 4.1 and NetWare vm snapshot backup issue
Hi!
If you are running Netware as a guest in VMware ESX 4.1 and are using backup
software that uses the snapshot feature to backup the guests vmdk then you
may run into an issue that causes the snapshot to fail. This was just
documented by VMware on Feb 23 (two days ago) so you may have not seen this
yet. Here is the url to the VMware kb:
http://kb.vmware.com/kb/1029749
The fix is to install the older v4.0 vmware tools into the NetWare guest.
Cheers,
Ron
Ron,
It appears that in the past few days you have not received a response to your
posting. That concerns us, and has triggered this automated reply.
Has your problem been resolved? If not, you might try one of the following options:
- Visit http://support.novell.com and search the knowledgebase and/or check all
the other self support options and support programs available.
- You could also try posting your message again. Make sure it is posted in the
correct newsgroup. (http://forums.novell.com)
Be sure to read the forum FAQ about what to expect in the way of responses:
http://forums.novell.com/faq.php
If this is a reply to a duplicate posting, please ignore and accept our apologies
and rest assured we will issue a stern reprimand to our posting bot.
Good luck!
Your Novell Product Support Forums Team
http://forums.novell.com/ -
Exporting snapshot logs... and the export terminates unsuccessfully
Hi all,
I'm getting this error when I try to export a user in Oracle 10g.
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
EXP-00008: ORACLE error 1455 encountered
ORA-01455: converting column overflows integer datatype
EXP-00000: Export terminated unsuccessfully.
I have one materialized view inside the user. The snapshot table has more than 500 rows of data.
When I drop the materialized view, I am able to export successfully.
How can I export a user that has a materialized view? Help me.
Thanks Robert... thanks for your timely reply...
I had a very rough time with this export since this morning...
Can I use this format:
expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL
In the database there's no directory object. Can I use dpump_dir1 as the directory?
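DPUMP_DIR1 only works if a directory object with that name actually exists in the database. A sketch of creating one; the file system path and the grantee are examples only, to be adjusted for the actual server and exporting user:

```sql
-- Run as a DBA. The path below is an example; it must exist on the
-- database server and be writable by the Oracle software owner.
CREATE OR REPLACE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';

-- Grant access to the user running expdp (example user name):
GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;
```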
Edited by: 887268 on Dec 6, 2011 5:06 AM -
Can anyone kindly explain what mutative triggers are in Oracle?
hi
Can anyone kindly explain what mutating triggers are in Oracle, with an example?
What is "frag" in Oracle?
Oracle raises the mutating table error to protect you from building nondeterministic software.
Let’s explain that with a very simple example. Here’s a simple EMP-table:
ENAME SAL
====== ====
SMITH 6000
JONES 4000

Let's suppose you have a business rule that dictates that the average salary is not allowed to exceed 5000. This is true for the EMP table above (the average SAL is exactly 5000).
And you have (erroneously) built after-row triggers (insert and update) to verify this business rule. In your row trigger you compute the average salary, check its value, and if it's more than 5000 you raise an application error.
Now you issue following DML-statement, to increase the salary of employees earning less than the average salary, and to decrease the salary of employees earning more than the average salary.
Update EMP
Set SAL = SAL + ((select avg(SAL) from EMP) – SAL)/2;The end result of this update is:
ENAME    SAL
======   ====
SMITH    5500
JONES    4500
Note that the business rule still holds: the average salary still doesn't exceed 5000. But what happens inside your row trigger, which queries the EMP table?
Let’s assume the rows are changed in the order I displayed them above. The first time your row trigger fires it sees this table:
ENAME    SAL
======   ====
SMITH    5500
JONES    4000
The row trigger computes the average, sees that it is not more than 5000, and so does not raise the application error. The second time your row trigger fires, it sees the end result:
ENAME    SAL
======   ====
SMITH    5500
JONES    4500
For which we already concluded that the row trigger will not raise the application error.
But what happens if Oracle, in its infinite wisdom, decides to process the rows in the other order? The first time your row trigger executes, it sees this table:
ENAME    SAL
======   ====
SMITH    6000
JONES    4500
And now the row trigger concludes that the average salary exceeds 5000, and your code raises the application error.
Presto: you have just implemented nondeterministic software. Sometimes it will work, and sometimes it will not.
Why? Because you are seeing an intermediate snapshot of the EMP table that you really should not be seeing (that is, querying).
This is why Oracle prevents you from querying a table inside row triggers while that table is being mutated (i.e. while DML is executing against it).
It's just to protect you against yourself.
(PS1: Note that this issue doesn't occur inside statement triggers.)
(PS2: This also shows that the mutating table error is really only relevant when a DML statement affects more than one row.)
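For the record, a minimal sketch of the kind of erroneous row trigger described above — the trigger name is illustrative. If you create it and run a multi-row UPDATE against EMP, Oracle raises ORA-04091 (table is mutating) the moment the trigger's query runs:

```sql
-- Illustrative only: this is exactly the trigger the text warns against.
-- EMP and the 5000 limit match the example above.
CREATE OR REPLACE TRIGGER emp_avg_sal_check
AFTER INSERT OR UPDATE OF sal ON emp
FOR EACH ROW
DECLARE
  v_avg_sal NUMBER;
BEGIN
  -- Querying EMP here, while EMP itself is being modified by the
  -- triggering statement, is what provokes ORA-04091.
  SELECT AVG(sal) INTO v_avg_sal FROM emp;
  IF v_avg_sal > 5000 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Average salary may not exceed 5000');
  END IF;
END;
/
```

The safe alternative is an after-statement trigger (or, in 11g and later, a compound trigger), which runs once after the whole statement completes and may query the table freely.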
Edited by: Toon Koppelaars on Apr 26, 2010 11:29 AM -
One RAC instance doesn't generate AWR snapshots
Hi,
I have a problem with AWR snapshots: one of the RAC instances doesn't generate AWR snapshots.
When I look through alert.log, I can't see any error message.
Does anybody have an idea about this issue?
OS Level HP-UX ITANIUM 11.23
DB version Oracle RAC 10.2.0.4
Regards
Hi JohnWatson,
Thanks for your reply.
I have run dbms_workload_repository.create_snapshot on both nodes. It finished successfully, and I saw the record in the dba_hist_snapshot view.
I haven't seen any error in the alert log, and no trace file was produced.
In other words, when I run the create-snapshot command on the working node, no snapshot for the other node appears in the dba_hist_snapshot view,
but when I run the command manually on the broken node (the one that doesn't generate AWR snapshots), the other node isn't triggered to generate a snapshot either:
SQL> exec dbms_workload_repository.create_snapshot ;
PL/SQL procedure successfully completed.
select instance_number,begin_interval_time,end_interval_time from dba_hist_snapshot where snap_id in(40274);
INSTANCE_NUMBER BEGIN_INTERVAL_TIME END_INTERVAL_TIME
1 16-JAN-13 01.00.54.450 AM 18-JAN-13 03.51.19.951 PM
2 18-JAN-13 03.43.02.430 PM 18-JAN-13 03.51.20.103 PM
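Two things worth checking in this situation — a sketch, assuming you can query the AWR and parameter views — are the database-wide snapshot settings and the statistics level on each instance, since MMON does not create automatic snapshots when STATISTICS_LEVEL is BASIC:

```sql
-- AWR snapshot interval and retention for the database.
SELECT dbid, snap_interval, retention
FROM   dba_hist_wr_control;

-- STATISTICS_LEVEL per instance; it must be TYPICAL or ALL for MMON
-- to create AWR snapshots automatically.
SELECT inst_id, value
FROM   gv$parameter
WHERE  name = 'statistics_level';
```

If the broken instance shows BASIC, that alone would explain the missing automatic snapshots on that node.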
regards,
Edited by: dataseven on 18.Jan.2013 06:19 -
Looking for OTN Article re DB triggers
Hello,
I was wondering if someone can point me to the correct location.
I am looking for an article that appeared on OTN describing how to implement certain functionality within database triggers.
Specifically it covered such things as if a parent record was updated and you wanted a child/children records to be updated at the same time, it described how this could be done using SQL/PL-SQL code.
Can anyone help? I think it has appeared within the last year.
regards,
Mohan
Avi Miller wrote:
Dude wrote:
And as previously mentioned, a snapshot requires all parent snapshots to be useful for data recovery.
What? Btrfs snapshots are independent -- they do not require parent snapshots for recovery. In fact, once a snapshot has been taken, it's literally indistinguishable from the original. You can remove/change/burn with fire the original, and the snapshot will continue to operate. That's the whole point of snapshots. :)
Doesn't btrfs use copy-on-write technology? As far as I understand, creating a snapshot does not involve any copying of data, and as such it is similar to a restore point marker. When mounting a snapshot or subvolume, any file system changes are recorded in metadata, which contains information about which data blocks were changed plus the differential data itself. Snapshots and subvolumes are independent file systems and have their own copy of a B-tree. When modifying data, such actions are again recorded and affect only the currently mounted volume. But unless data was modified, there is still only one copy of a data block on disk, and hence the snapshot won't help if such data is lost due to a bad disk block or disk error.
SAN and virtualization products use similar snapshot technologies. As far as I'm aware, this is great for undoing data changes by deleting snapshots or mounting different versions of subvolumes, but it's not a solution for full data recovery and not a suitable way to recover Oracle databases.
If this is incorrect, please let me know where I'm mistaken. Thanks! -
How to use coherence snapshot feature
Hi,
Refer - http://wiki.tangosol.com/display/COH33UG/Overview+for+Implementors
If long-term persistence of data is required, Coherence provides snapshot functionality to persist an image to disk. This is especially useful when the external data sources used to build the cache are extremely expensive. By using the snapshot, the cache can be rebuilt from an image file (rather than reloading from a very slow external datasource).
I want to take a snapshot of my cache and persist it to the file system, and I need a way to rebuild the cache from the image.
Can someone point me to detailed documentation on cache snapshots?
Regards
I raised this with Oracle support and was told that they are planning to remove this from the documentation, as the feature is not recommended.
So I guess we will need to do something ourselves.
What I'm interested in is being able to dump a node's data to disk before a shutdown, so that we can do a full shutdown of the cluster, start it back up, load the snapshot dump back in, and be back where we were. I was thinking this action could be triggered just before a release that requires a shutdown, perhaps via the invocation service.
Edited by: JTea on Jun 11, 2010 2:20 PM
Edited by: JTea on Jun 11, 2010 2:23 PM