Daily snapshot solution
Hi All,
I have a requirement something like this: I will get some open and closed items in the transaction data. Let's say the data looks like below:
no  open_date  closed_date  status  load_date  amt
1   08/26/07                O       08/27/07   100
2   08/22/07                O       08/27/07   200
3   08/26/07                O       08/27/07    10
4   08/26/07                O       08/27/07    50
2   08/22/07   08/28/07     C       08/28/07  -200
1   08/26/07   08/28/07     C       08/28/07  -100
Until an item is closed, its amount is outstanding. So if the user wants to report over a specific date range, say the outstanding amount between 08/22/07 and 08/24/07, the report should look like this:
Date      outstanding amt
08/22/07  200
08/23/07  200
08/24/07  200
There are no records for 08/23/07 and 08/24/07, but the amount is still outstanding on those days.
One way of achieving this is to run a self-update and populate entries even when nothing changed, but the volume is huge, which is why I am looking for other options.
Has anyone had to implement this kind of solution? Please advise.
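One alternative to self-updates is to expand the open items over a calendar (a "date spine") at query time instead of materializing a row per day. The carry-forward rule itself is simple; here is a minimal Python sketch of the logic, using the sample data above (a closed date of None means the item is still open):

```python
from datetime import date, timedelta

def daily_outstanding(items, start, end):
    """For each day in [start, end], sum the amounts of items that were
    opened on or before that day and not yet closed on that day.
    `items` is a list of (open_date, closed_date_or_None, amt) tuples."""
    result = {}
    day = start
    while day <= end:
        result[day] = sum(amt for opened, closed, amt in items
                          if opened <= day and (closed is None or closed > day))
        day += timedelta(days=1)
    return result

# Sample data from the thread (items 1 and 2 were closed on 08/28/07):
items = [
    (date(2007, 8, 26), date(2007, 8, 28), 100),  # item 1
    (date(2007, 8, 22), date(2007, 8, 28), 200),  # item 2
    (date(2007, 8, 26), None, 10),                # item 3, still open
    (date(2007, 8, 26), None, 50),                # item 4, still open
]
report = daily_outstanding(items, date(2007, 8, 22), date(2007, 8, 24))
for day, amt in sorted(report.items()):
    print(day, amt)   # 200 on each of the three days, as in the report above
```

In a database the same effect comes from joining a calendar table on `open_date <= calendar_date AND (closed_date IS NULL OR closed_date > calendar_date)`, which avoids storing a snapshot row per item per day.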
I also get this - it's quite annoying.
This may help:
#!/bin/sh
# Temporarily disable time-slider, release the zfs-send holds on every
# snapshot, then clear the failed plugin service and re-enable time-slider.
svcadm disable -t time-slider
for f in `zfs list -H -t snapshot -o name`; do
    zfs release -r org.opensolaris:time-slider-plugin:zfs-send $f
done
svcadm clear svc:/application/time-slider/plugin:zfs-send
svcadm enable time-slider
Similar Messages
-
HR - Daily Snapshot Issues with Future Dated Changes
The w_employee_daily_snp_f_1 table has timing issues. The effective end date on per_all_assignments_f, per_all_people_f, etc. always shows the time at midnight, for example: 08/14/2009 12:00:00 AM. When a person has a new assignment, for one day only, the extract does not pick up this person. See the example below:
Employee 123 has a supervisor change and the new supervisor is effective on 6/15/2009
Emp  Eff Start Date          Eff End Date            Supervisor
123  01/01/2009 12:00:00 AM  06/14/2009 12:00:00 AM  888
123  06/15/2009 12:00:00 AM  12/31/4712 12:00:00 AM  555
When the ETL runs on the night of 6/14/09 at 7:35:42 PM, this record does not get picked up because according to the apps.per_all_assignments_f table on the day of 6/14/09 this person does not have an active assignment. There is one day when the person (according to the system) does not have a supervisor. The next night, on 6/15/09 10:34:18 PM the record reappears and does so every night until the assignment record is again end dated. This only occurs for future dated HR changes.
This is a big problem because if the future-dated change is for a supervisor, not only does the supervisor not appear in the daily snapshot, but neither do any of his/her subordinates. Has anyone else run into this problem? How did you solve it? Did you customize all the HR mappings?
Have posted an updated version of the MST to my webspace here:
http://homepage.ntlworld.com/minkus/shockwave/shockwave115606mst2.zip
This new version fixes the following issues (changes to this version in bold):
MsiInstaller repair & errors on first run as non-admin (removes 'AppData' directory entries)
DCOM errors once Shockwave is installed (adds double quotes to the beginning of the LocalServer32 value)
Incorrect TypeLib paths in the registry (fixed to use built-in [SystemFolder] property)
Default CLSID description value missing (set to 'Shockwave ActiveX Control', as per the executable installer)
Incorrect Shockwave 10 paths in the registry (fixed to use built-in [SystemFolder] property)
Misspelt 'OptionalComponets' registry key (renamed to 'OptionalComponents')
Missing .dcr, .dir, .dxr Content Type registry values (included in the executable installer)
Fixed incorrect AppID registry key (set to use installer's [VERSION] variable)
Removed 'Launch SWDNLD.exe' shortcut created in Programs subfolder (people don't want this - see here)
I have tested this on my network and it all seems to work fine. Unfortunately my original plan to use the executable installer didn't work out as it does not include the Shockwave 10 compatibility components described here and here, so it looks like I'll have to keep this MST file up to date with the latest versions until Adobe fix the problems themselves...
If you want to see the changes that I have made, load up Adobe's 11.5.6.606 MSI file in Orca, then use Transform / Apply Transform to apply the MST, and look for the entries in green.
Please post any feedback to this thread, and I will try to respond if I can help!
Chris Hill -
Hi there,
I have some questions about CVS daily snapshots.
The problem concerns the package eo-snapshot in the AUR.
Here is the PKGBUILD
# Contributor: Sebastien Piccand <[email protected]>
pkgname=eo-snapshot
pkgver=20060411
_ver=11Apr2006
pkgrel=1
pkgdesc="EO is a templates-based, ANSI-C++ compliant evolutionary computation library"
url="http://eodev.sourceforge.net/"
license="LGPL"
depends=()
source=(http://www.lri.fr/~marc/EO/snapshot/eo_src.tgz)
md5sums=('f4e001d00405292712e5a62599749c3d')
build() {
  cd $startdir/src/eo${_ver}
  ./autogen.sh
  ./configure --prefix=/usr
  make || return 1
  make DESTDIR=$startdir/pkg install
}
I have to remove the _ver variable which is not required.
I have to use the daily snapshot because:
1. The stable version does not compile
2. I don't have access to CVS
The version(=folderOfSources=date) of the package is a bit weird and stored as:
11Apr2006.
I was asked to change the version so I moved it to 20060411 which seems to be the case for the usual snapshots.
I used the variable _ver so that one just has to modify the beginning of the PKGBUILD when a new snapshot comes. But it is definitely not really needed, so I have to remove it, which means you have to edit the build() function at each snapshot.
A workaround is to use
cd $startdir/src/eo*
instead of
cd $startdir/src/eo11Apr2006
in the build() function.
In this case now the version of the snapshot is only discriminated by:
1. the pkgver
2. the md5sum
But the package is updated daily and there should be a way to simplify the update process. The daily snapshot does not contain the date so we can't have access to the previous snapshot.
Can I, as for the CVS versions, use an empty md5sum() and just the date for the pkgver ?
Or should the package be a mess to maintain ?
Thanks
Thanks a lot.
The package is working.
I now have some questions about the provides() and conflicts() fields of some packages.
Here are 2 packages I put on AUR recently: eo-snapshot, eo-cvs
I am thinking of removing eo-snapshot but it can be good to keep it for people who don't have access to CVS.
In this case eo-snapshot and eo-cvs will conflict. Once a new, working, stable version of eo will be out, I (or anybody) will probably make a package (called eo).
So I am thinking of using the fields provides=('eo') for both packages, and conflicts=('eo-snapshot') or ('eo-cvs')
Is it the right thing to do ?
Another question:
libge can use galib if it is compiled with galib support. Should I make a libge-galib version (with galib in makedepends), or put a comment somewhere (where?) asking the user to install galib before building libge to enable galib support?
Another one:
eo-cvs(or eo-snapshot) and galib both provide a /usr/lib/libga.a file, so they conflict with one another. Is there any workaround to install both? -
ArchSnaps -- Daily Snapshots Downloader
Hi guys, I've coded a tiny new bash script that can be run from cron. It can automatically download any selected architecture by running simple commands.
See this wiki page for more details and the code itself at http://wiki.gotux.net/code/bash/archsnaps I hope this helps someone else like it does me. Thanks.
EDITED: See ArchDL instead now.
https://github.com/GoTux/Bash/blob/master/archdl.sh
Last edited by TuxLyn (2012-12-19 10:20:30)
Updated Code: v0.1.1 – fixed URL spaces issue.
@Awebb, lol well... I've coded this script to help me grab daily ISOs without actually doing all the extra work, and I can put this in cron and add an extra option to rotate the ISOs -
What happened to the daily snapshots? The last snapshot is 5 days old, from April 12th.
http://releng.archlinux.org/isos/
Also, does anyone here have the 2012-04-01 snapshot ISO? I found it the most stable.
I had random installer errors with ISOs from the 2nd to the 12th.
Last edited by TuxLyn (2012-04-18 03:16:43)
http://mailman.archlinux.org/pipermail/ … 02505.html
Last edited by the.ridikulus.rat (2012-04-18 06:20:33) -
Hello BW Experts,
Current Scenario:
There is a daily snapshot of the BW system at night for our production server. During this they have to stop any jobs/loads running at that time.
Secondly, they take a backup every weekend. They stop any jobs/loads during that time as well.
I'm wondering if you could share your experiences; that would be very great of you:
1) Is snapshot necessary every day?
2) what is the alternative to daily snapshot?
3) what is the best procedure for daily snapshot ?
4) is there a procedure not to stop the loads / jobs during snapshot ?
5) Is backup necessary every weekend?
6) what is the alternative to weekly backup?
7) what is the best procedure for weekly backup ?
8) is there a procedure not to stop the loads / jobs during backup ?
9) is there any doc / paper / link / notes for the snapshots and backups for BW production system.
10)what is the typical time taken for snapshot?
11) what is the typical time taken for backup ?
If anyone has docs, please mail them to [email protected]
Thanks,
BWer
Thanks for the valuable info, Mimosa.
In our company, what they call the snapshot is a copy of the production box to a box on the SAN network. The copy is made to disc. Here the snapshot disc is on RAID 0 and the production disc is on RAID 5. They do this every night.
In our company, backup means taking a copy to tape. They do this every weekend.
It's interesting that they do the 'split mirror online backup' daily while the system is available to all users and jobs. I'm wondering if you have any details of this backup. What is the procedure called? At what level is this copy done: the database level, network functionality, or SAP functionality?
Suggestions appreciated.
Thanks,
BWer -
Does anyone know if SAN snapshot technology can be used with Oracle databases?
The 10gR2 Backup and Recovery Advanced User Guide seems to have a chapter (Ch. 17) that covers user-managed backups, which I am guessing is where SAN snapshot technology would fit in.
What I am trying to ascertain is -
1. Can SAN snapshot be used to backup a database if it is open and online ? - do you need to put the database into backup mode ?
2. Is it better to use SAN snapshot in combination with hot backup, so you are purely snapshotting the backup itself and not directly the database?
3. What the difference between SAN snapshot of an open database and hotbackup ?
4. What are the industry common SAN Snapshot solutions used with Oracle ?
thanks,
Jim
1. Can SAN snapshot be used to backup a database if it is open and online? Do you need to put the database into backup mode?
Yes & yes.
2. Is it better to use SAN snapshot in combination with hot backup, so you are purely snapshotting the backup itself and not directly the database?
HUH?
3. What is the difference between a SAN snapshot of an open database and a hot backup?
A HOTBACKUP will allow a consistent restore to be done. A SAN snapshot is a waste of effort.
4. What are the industry-common SAN snapshot solutions used with Oracle?
It depends -
ASM backup strategies with SAN snapshots
Hello folks,
In one of our setups we have a database server (11.2) that holds only the binaries and config (not the datafiles); all ASM disks are on the SAN. The SAN takes daily snapshots, so the datafiles are backed up. My question is: how do I proceed if the database server fails and needs to be reinstalled? How do I tell the grid that we already have a diskgroup with disks on the SAN? I found md_backup and md_restore, but I am not sure how/when to proceed with md_restore.
I have searched some time, but I am quite confused with the approaches, please note that RMAN is not an option in this environment.
Thanks in advance
Alex
Hi,
O.k., that changes things. You should have mentioned at the beginning of your post that you shut down the database before doing the snapshot and are not using archivelog mode.
I would advise to also dismount the diskgroups before doing the snapshots.
However, when was the last time you tried archivelog mode? 11g increased the performance a lot.
Note: in this case the snapshot is only a point-in-time view. In case of an error, you will lose data (the hours since your last snapshot).
Now regarding your questions:
=> You will install a new system with GI and database software. You will need a new diskgroup (e.g. INFRA) for the new OCR and Voting disks (they should not be contained in the other diskgroups like DATA and FRA, so no need to snapshot the old INFRA).
=> Then simply present the LUNs from DATA and FRA diskgroup to the new server. ASM will be able to mount the diskgroups as soon as all disks are available (and have the right permission). No need for md_backup or restore. MD_BACKUP backups the contents of the ASM headers, but since you still have the disks this meta information is still intact.
=> Reregister the database and the services with the Grid Infrastructure (srvctl add database / srvctl add service etc.). Just make sure to point to the correct spfile (you can find that e.g. with ASMCMD).
=> Start database...
That is, if your snapshot did work. I just had another forum thread where a diskgroup got corrupted due to a snapshot (luckily only FRA).
And just as a reminder: a snapshot is not a backup. Depending on how the storage does the snapshot, you should take precautions to move it to separate disks and verify that it is usable.
Regards
Sebastian -
One of the features I like most about btrfs is snapshots.
What I want to do is create snapshots of /home every 5/15 minutes (I still have to test the impact on performance) and retain:
02 - Yearly snapshots
12 - Monthly snapshots
16 - Weekly snapshots
28 - Daily snapshots
48 - Hourly snapshots
60 - Minutely snapshots
The above scheme is indicative and I'm looking for suggestions to improve it.
After long Google searching I finally found a very simple but powerful bash script to manage btrfs snapshots: https://github.com/mmehnert/btrfs-snapshot-rotation
I've changed the script a bit to use my naming scheme:
#!/bin/bash
# Parse arguments:
SOURCE=$1
TARGET=$2
SNAP=$3
COUNT=$4
QUIET=$5
# Function to display usage:
usage() {
scriptname=`/usr/bin/basename $0`
cat <<EOF
$scriptname: Take and rotate snapshots on a btrfs file system
Usage:
$scriptname source target snap_name count [-q]
source: path to make snaphost of
target: snapshot directory
snap_name: Base name for snapshots, to be appended to
date "+%F--%H-%M-%S"
count: Number of snapshots in the timestamp-@snap_name format to
keep at one time for a given snap_name.
[-q]: Be quiet.
Example for crontab:
15,30,45 * * * * root /usr/local/bin/btrfs-snapshot /home /home/__snapshots quarterly 4 -q
0 * * * * root /usr/local/bin/btrfs-snapshot /home /home/__snapshots hourly 8 -q
Example for anacrontab:
1 10 daily_snap /usr/local/bin/btrfs-snapshot /home /home/__snapshots daily 8
7 30 weekly_snap /usr/local/bin/btrfs-snapshot /home /home/__snapshots weekly 5
@monthly 90 monthly_snap /usr/local/bin/btrfs-snapshot /home /home/__snapshots monthly 3
EOF
exit
}
# Basic argument checks:
if [ -z $COUNT ] ; then
echo "COUNT is not provided."
usage
fi
if [ ! -z $6 ] ; then
echo "Too many options."
usage
fi
if [ -n "$QUIET" ] && [ "x$QUIET" != "x-q" ] ; then
echo "Option 4 is either -q or empty. Given: \"$QUIET\""
usage
fi
# $max_snap is the highest number of snapshots that will be kept for $SNAP.
max_snap=$(($COUNT -1))
# $time_stamp is the date of snapshots
time_stamp=`date "+%F_%H-%M"`
# Clean up older snapshots:
for i in `ls $TARGET|sort |grep ${SNAP}|head -n -${max_snap}`; do
cmd="btrfs subvolume delete $TARGET/$i"
if [ -z $QUIET ]; then
echo $cmd
fi
$cmd >/dev/null
done
# Create new snapshot:
cmd="btrfs subvolume snapshot $SOURCE $TARGET/${SNAP}-$time_stamp"
if [ -z $QUIET ]; then
echo $cmd
fi
$cmd >/dev/null
I use fcrontab:
[root@kabuky ~]# fcrontab -l
17:32:58 listing root's fcrontab
@ 5 /usr/local/bin/btrfs-snapshot /home /home/__snapshots minutely 60 -q
@ 1h /usr/local/bin/btrfs-snapshot /home /home/__snapshots hourly 48 -q
@ 1d /usr/local/bin/btrfs-snapshot /home /home/__snapshots daily 28 -q
@ 1w /usr/local/bin/btrfs-snapshot /home /home/__snapshots weekly 16 -q
@ 1m /usr/local/bin/btrfs-snapshot /home /home/__snapshots monthly 12 -q
And this is what i get
[root@kabuky ~]# ls /home/__snapshots
[root@kabuky ~]# ls -l /home/__snapshots
total 0
drwxr-xr-x 1 root root 30 Jul 15 19:00 hourly-2011-10-31_17-00
drwxr-xr-x 1 root root 30 Jul 15 19:00 hourly-2011-10-31_17-44
drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-18
drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-20
drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-22
drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-24
drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-26
drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-32
drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-37
drwxr-xr-x 1 root root 30 Jul 15 19:00 minutely-2011-10-31_17-42
tavianator wrote:
Cool, I just wasn't sure if the "And this is what i get" was what you were expecting.
Looks good, I may take a look since I'm using btrfs. Don't confuse this for a backup though .
Sorry for the confusion, my English is not very good.
Yes, this is not a backup.
It's a sort of parachute: if you do something stupid with your files, you can come back at any time. -
Hello,
I have two Hyper-V servers running Replica feature, it is working great, but it only protects me from hardware failure.
Now I want to have snapshots enabled for quick rollbacks (and full backups for more restoring options, but that's for later).
I found this script which describes how to schedule a snapshot each 7 days: https://support.managed.com/kb/a1872/how-to-automatically-rotate-hyperv-snapshots.aspx
From what I understand, it will rotate the Snapshot each 7 days, creating a new one and deleting the previous... but I am looking for something more complicated.
What I really wanted was to have a snapshot every 7 days, just like that example, but keeping the last two weeks, and also to have a new snapshot daily. So I would always have three separate snapshot instances (2 weeks old, 1 week old, and "yesterday").
If I notice performance degradation, I will only keep 2 snapshots (1 week and 1 day old).
Does anyone know how to do it?
Thanks,
Rafael
Hi Brian,
Thank you very much for your answer.
Well, I am not thinking of snapshots as a backup. I am thinking about a fast way to do a rollback in case of need.
Recently I had a problem with a server where the last backup I had was from 5 days before, and recovery was not very fast (by very fast I mean a few minutes, at most). Snapshots are a way to cover me from that kind of problem.
Because of the quantity of servers I run, most of them have automatic updates. I had a case not long ago where one update messed up the fonts on a Windows Server 2003. I also have some servers where more than one person has access... once an employee accidentally deleted something he was not supposed to.
Beyond having snapshots, I will also have proper backups. Maybe having 2 weeks of snapshots is a little too much. But I think having at least one day and one week always recoverable can save me some headaches.
So... my question is how to make a script to always have a one week old and a one day old snapshot. The script I have for a week is:
$Days = 7 # change this value to determine the number of days to rotate
$VMs = Get-VM
foreach ($VM in $VMs) {
    $Snapshots = Get-VMSnapshot $VM
    foreach ($Snapshot in $Snapshots) {
        if ($Snapshot.CreationTime.AddDays($Days) -lt (Get-Date)) {
            Remove-VMSnapshot $Snapshot
        }
    }
    Checkpoint-VM $VM
}
How to add a daily snapshot without overwriting the weekly snapshots? -
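One common approach is to encode the schedule in each checkpoint's name (e.g. a daily- or weekly- prefix followed by an ISO date) and apply a separate retention count per prefix, so the daily rotation never touches the weekly snapshots. The selection logic could be sketched like this (Python; the names and counts are hypothetical, not from the thread):

```python
def snapshots_to_delete(names, keep):
    """Given snapshot names like 'daily-2014-01-05' and a per-prefix
    retention count, return the names falling outside the retention
    window. ISO dates make lexicographic order == chronological order."""
    to_delete = []
    for prefix, count in keep.items():
        matching = sorted(n for n in names if n.startswith(prefix + "-"))
        to_delete.extend(matching[:-count] if count else matching)
    return to_delete

names = ["daily-2014-01-03", "daily-2014-01-04", "daily-2014-01-05",
         "weekly-2013-12-29", "weekly-2014-01-05"]
# Keep 1 daily and 2 weeklies -> only the two oldest dailies are dropped:
print(snapshots_to_delete(names, {"daily": 1, "weekly": 2}))
# ['daily-2014-01-03', 'daily-2014-01-04']
```

In the PowerShell script above, the equivalent change would be to name each checkpoint with its schedule prefix and filter `Get-VMSnapshot` by that prefix before applying the age check, with a different `$Days` per schedule.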
Local Time Machine snapshots in Mountain Lion on MacBook Pro
Does Time Machine take local snapshots in Mountain Lion? When I enter Time Machine on my MacBook Pro, it shows only backups made with my external hard drive. The timeline appears white. Where do I find the local (internal) backups?
Yes, Time Machine takes local snapshots in Mountain Lion. To see them, try disconnecting your external drive and then entering Time Machine.
The following bit is from Mac Help.
charlie
About local snapshots
If you have a portable computer, Time Machine keeps a copy of everything on your computer’s internal disk, and saves hourly “snapshots” of files that have changed. Because Time Machine stores the snapshots on your computer’s internal drive, they are called “local snapshots” to distinguish them from backups stored on an external backup drive. Time Machine saves local snapshots on portable computers only, and not on desktop computers.
Local snapshots are periodically thinned to save disk space. A single daily snapshot is saved for every 24 hours, counting from the time you start or restart your computer. Similarly, a single weekly snapshot is saved for one week.
If disk space becomes very low, additional thinning is done:
When the disk’s free space is less than 20% of its total available space, Time Machine removes snapshots starting with the oldest one first, and working toward the newest one. If enough space becomes free so that the disk’s free space is more than 20% of its total available space, Time Machine stops removing snapshots.
If the disk’s free space falls below 10% of its total available space, or is less than 5 GB, the task of removing snapshots will be given high priority on your Mac. When free space is between 10%–20% of total available space, removal continues at a lower priority.
If Time Machine is unable to free up enough space to get above the 10% or 5 GB threshold, Time Machine removes all snapshots except the current one, and stops creating new snapshots. Once free space rises above the 10% / 5 GB threshold, a new snapshot is created, and the previous one is removed.
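The space-based thinning policy described above reduces to a couple of threshold checks. As a rough Python sketch (the 20%/10%/5 GB figures come from the Help text quoted above; the function is an illustration, not Apple's implementation):

```python
def thinning_action(free_bytes, total_bytes):
    """Sketch of the space-based thinning policy: return what Time
    Machine would do given the disk's current free space."""
    gb = 1024 ** 3
    free_pct = free_bytes / total_bytes * 100
    if free_pct < 10 or free_bytes < 5 * gb:
        return "remove oldest snapshots at high priority"
    if free_pct < 20:
        return "remove oldest snapshots at low priority"
    return "keep all snapshots"
```

For example, a disk with 15% free space falls in the 10%-20% band, so snapshots are removed at low priority; below 10% free (or under 5 GB) removal becomes high priority.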
If you want Time Machine to stop saving local snapshots, open Time Machine preferences and slide the switch to Off. Snapshots will resume when you turn Time Machine back on. -
Replace Materialized View with Flashback?
I'm building a Data Warehouse with the following:
1. Tables populated throughout the day using CDC from the Application DB
2. MVs on those tables to keep a daily snapshot of the tables for reporting.
3. End users access the data through views with VPD applied, on the MVs
My systems team would like the solution to use as little storage as possible and currently I effectively have a copy of the app DB in the DW tables and would need another copy in the Daily MVs. (It is an insurance DB, so it is complex with lots of data, > 1.5 TB)
One way to reduce the storage could be to use flashback to keep a static daily version of the tables, so
At midnight I'd recreate the views like:
CREATE OR REPLACE VIEW client
AS SELECT *
FROM client_tab
AS OF TIMESTAMP (CAST(TRUNC(SYSDATE) AS TIMESTAMP));
This would replace my refresh MV script. The end users would then refer to the client view in their reports.
We would obviously need enough undo to store a days worth of data to ensure the flashback views remain consistent, but this is much less than the space required for a full copy. On a busy day there would be about 1% data change.
No DDL will occur on the tables during the day
Is there anything else I should be aware of? Can you let me know if (and why) this would not be a good idea?
This will run on Oracle 11.2.0.1
Thanks,
Ben
I guess I'm having some trouble visualizing the basic data model...
In most data warehouses that I've seen in the financial industry, reporting the position/ balance/ etc. at a given date involves scanning a single daily partition of each fact table involved and then hitting dimension tables that may or may not be partitioned (slowly changing dimensions would often have effective and expiration date columns to store the range of time a row was valid for, for example). Year-over-year reporting, then, just has to scan two fact table partitions-- the one for today and the one for a year ago. You may not store every intermediate change if there are potentially hundreds of transactions per account per day, but you'd generally put the end state for a given day in a single partition.
In one of your updates, it sounded like the 1.5 TB of data was just for the data that constituted end-of-day yesterday plus the 1% of changes made today which would imply that there was at least 15 GB of UNDO generated every day that would need to be applied to make flashback query work. That quantity of UNDO would make me pretty concerned from a performance perspective.
I would also tend to wager that VPD policies applied to views that are doing flashback query would be problematic. I haven't tried it and haven't really worked through all the testing scenarios in my mind, but I would be somewhat surprised if that didn't introduce some sort of hurdle that you'd have to work through/ work around.
Justin -
Hi All,
My client wants to use reporting tools for bussiness analysis.For this I want to use a separate database which will be replica of production database.Is there any way to achieve this in R12 database(10.2.0.2)?
Kindly share your experience.
Regards
Latif
As Srini states, knowing more about your requirements would help us to help you. :-)
Do you need a daily or twice-daily snapshot of your data? If so, then you might be able to implement the solution that Srini mentioned earlier. An added benefit of this method is that it can go beyond the creation of a reporting database: it can also be leveraged to create additional clones for test and dev environments.
Do you need real-time replication of data to your reporting database? If so, then you might want to investigate Oracle Streams/Goldengate (disclaimer: I'm typing on the fly, so I'm not 100% certain that this is supported).
If you're willing to license more Oracle software to solve your problem, you might also consider Oracle Business Intelligence Applications, which has out-of-the-box integration with E-Business Suite for common reporting/business intelligence needs. There are additional costs involved, but you can weigh those against the development costs of "rolling your own" MVs and reports, and maintaining them.
Again, knowing your requirements, constraints, and points of flexibility will go a long way toward figuring out which option best suits you. :)
Regards,
John P.
http://only4left.jpiwowar.com -
Hello:
Recently I had to RESET and had to manually re-instate my
> Navigation Toolbar (the toolbar with the URL and Google search in it)
> Add-on toolbar (the skinnier toolbar at the bottom)
1) How can I backup/export/restore these toolbars?
2) How can I automate the backup of these toolbars (like bookmarks) so it happens at a schedule set by firefox or in a preference by me?
3) What is the best way to back up my ZOOM settings (the +/-) I use on my toolbar?
I use this extensively. Almost 100% of the pages I visit have a preferred zoom set. This is a critical piece of configuration for me, so I would like to understand 1) how do I back it up and 2) how do I automate the backup (like bookmarks) so it occurs automatically on a schedule (either set by Firefox or set in a preference by me)?
4) What is Firefox support's suggested procedure to back up the preference settings set in each of the add-ons? I currently have over 50 add-ons and the list seems to grow each month.
My add-ons are becoming a major component of my Firefox setup: great in functionality, dangerous when recovering from the Firefox RESET feature. I have searched help online and found nothing of value that provides clear steps for what one must do.
Thank you
''edited by a moderator to add the 3rd and 4th items which were in other questions''
(1) Toolbar customizations are stored in a file named localstore.rdf in your personal settings (AKA Firefox profile) folder. Unlike bookmarks, Firefox does not store daily snapshots, just the latest settings. You could try backing up this file periodically using an external program or add-on. This article describes how to make a copy of your entire profile folder: [[Back up and restore information in Firefox profiles]].
To restore the settings from your old profile folder (Old Firefox Settings) you could try copying it to your current profile folder. To access your current profile folder, use
Help > Troubleshooting Information > Show Folder
Firefox may overwrite the existing localstore.rdf file when shutting down, so let Firefox shut down and wait a bit to ensure that it has finished its updating before replacing the file.
(2) See #1
(3) Not sure where this is stored, but probably one of the SQLite databases in the profile folder. ??
(4) Unfortunately, I don't think there is a single universal solution for capturing all add-on data. Add-ons can store data in Firefox preferences (e.g., NoScript stores your list of approved sites in prefs.js) or their own storage, either in your Firefox profile folder or, in the case of programs installed systemwide (e.g., RoboForm or Flash), outside your Firefox profile folder. However, backups of your active profile folder presumably would help protect you against the loss of all the data stored there. -
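For points (1) and (2), a small script run from a scheduler (cron, Task Scheduler) can keep dated copies of localstore.rdf. A sketch in Python (the profile path in the example is a placeholder; substitute your own profile folder):

```python
import shutil
import time
from pathlib import Path

def backup_file(src, backup_dir, keep=10):
    """Copy `src` into `backup_dir` under a timestamped name and
    prune all but the `keep` most recent copies."""
    src = Path(src)
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)
    # Timestamped names sort chronologically, so pruning is a slice:
    copies = sorted(backup_dir.glob(f"{src.stem}-*{src.suffix}"))
    for old in copies[:-keep]:
        old.unlink()
    return dest

# Example (path is hypothetical -- use your own profile folder):
# backup_file("~/.mozilla/firefox/xxxxxxxx.default/localstore.rdf",
#             "~/toolbar-backups")
```

Run it shortly after Firefox has shut down, for the reason noted above: Firefox may rewrite localstore.rdf on exit.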
AR Open Items and Aging question
Hi
I have one question on AR Open Items aging. If Doc 100000 is posted on 4/1 with amount 25. Suppose it is cleared on 4/8. Now i get two records to BI. One will replace the existing record with Clearing Doc no and other one new posting for the clearing doc.
Delta load on 4/1/2011
Posting Date  Doc no    Amount  Clearing Doc
4/1/2011      1000000       25
Delta load on 4/8/2011
4/1/2011      1000000       25  1000025
4/8/2011      1000025      -25  1000025
My question is If I run the Aging report on 4/9/2011 with key date value 4/1/2011 will the result show 25 amount as Open item or cleared item?
If I want to show status of the document as of old date, Should I take daily snapshots of the data?
Please let me know your thoughts!
Thank you
Sree
Hi Sree,
That document was open on 4/1/2011 and was cleared on 4/8/2011.
If you execute the R/3 report FBL5N (Customer Line Item Display) with "Open at key date" 4/1/2011, the report will show the document as an open item.
What your aging report on the BI side will show depends on the design of the report, but as per my understanding, if the query is executed with key date 4/1/2011, then the report should show the document as an open item.
In that case the restricted key figure should be designed so that it compares the clearing date with the key date: if Key Date < Clearing Date, the document is treated as an open item; if Key Date >= Clearing Date, the document is displayed as a cleared item.
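The key-date comparison is just a date test; here is a minimal Python sketch of that convention (assuming an empty clearing date means the document was never cleared):

```python
from datetime import date

def is_open(key_date, clearing_date=None):
    """A document counts as open at key_date when it has no clearing
    document yet, or was cleared strictly after key_date."""
    return clearing_date is None or clearing_date > key_date

# Document from the thread: posted 4/1/2011, cleared 4/8/2011
print(is_open(date(2011, 4, 1), date(2011, 4, 8)))   # True: open at 4/1
print(is_open(date(2011, 4, 9), date(2011, 4, 8)))   # False: cleared by 4/9
```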
We are also using such a report. In fact, we have also designed it so that the report can display how old the document is (90 days, 30 days, etc.) by giving an offset value.
But frankly speaking it depends on your requirement and the design of the Query....
Regards,
Debjani......