ACS 5.3.0.40 On-demand Full Backup failed.
Hi,
I have ACS 5.3.0.40 primary and secondary authenticators, and the scheduled backup has stopped.
When I checked Monitoring Configuration > System Operations > Data Management > Removal and Backup > Incremental Backup, it had changed to OFF mode without any reason.
The same was observed earlier too.
I set Incremental Backup back to ON and initiated View Full Database Backup Now, but it wasn't successful and reported an error:
FullBackupOnDemand-Job Incremental Backup Utility System Fri Dec 28 11:56:57 IST 2012 Incremental Backup Failed: CARS_APP_BACKUP_FAILED : -404 : Application backup error Failed
Later I did an acs stop/start of "view-jobmanager" and initiated the on-demand full backup again, but no luck; the same error was reported this time too.
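For reference, the stop/start I ran looks roughly like this from the ACS admin CLI (treat this as a sketch; exact command availability may vary by version and patch level):

```
acs stop view-jobmanager
acs start view-jobmanager
```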
Has anyone faced a similar error or problem? If so, please let me know the solution.
Thanks & Regards.
One other thing: if this does end up being a disk space issue, it is worth considering patch 5.3.0.40.6 or later, since it improves database management processes.
This cumulative patch includes fixes and enhancements related to disk management to avoid the following issues:
CSCtz24314: ACS 5.x *still* runs out of disk space
and also fix for
CSCua51804: View backup fails even when there is space in disk
The following is taken from the readme for this patch:
The Monitoring and Reporting database can grow as records are collected. There are two mechanisms to reduce its size and prevent it from exceeding the maximum limit:
1. Purge: In this mechanism the data will be purged based on the configured data retention period or upon reaching the upper limit of the database.
2. Compress: This mechanism frees up unused space in the database without deleting any records.
Previously the compress option could only be run manually. ACS 5.3 Patch 6 adds enhancements so that it runs daily at a predefined time, automatically, when specific criteria are met. Similarly, by default the purge job runs every day at 4 AM. Patch 6 also provides a new option to run an on-demand purge.
1. The new solution is to perform the Monitoring and Reporting database compress automatically.
2. A new GUI option is provided to enable the Monitoring and Reporting database compress to run every day at 5 AM. This can be configured in the GUI under Monitoring And Configuration -> System Operations -> Data Management -> Removal and Backup.
3. Changed the upper and lower limits for purging Monitoring and Reporting data, to make sure that at the lower limit ACS still has enough space to take a backup. The maximum size allocated for the Monitoring and Reporting database is 42% of /opt (139 GB). The lower limit, at which ACS purges the data after taking a backup, is 60% of the maximum Monitoring and Reporting database size (83.42 GB). The upper limit, at which ACS purges the data without taking a backup, is 80% of the maximum size (111.22 GB).
4. Up to 5.3 Patch 5 the acsview-database compress operation stopped all services; now only Monitoring and Reporting related services are stopped during this operation.
5. Provided an "On demand purge" option in the Monitoring and Reporting GUI. This option does not try to take any backup; it purges the data based on the configured window size.
6. Even if the "Enable ACS View Database compress" option is not enabled in the GUI, an automatic view database compress is still triggered when the physical size of the Monitoring and Reporting database reaches its upper limit.
7. This automatic database compress takes place only when the "LogRecovery" feature is enabled; this ensures that logging which happens during the operation is recovered once it completes. ACS generates an alert when an automatic database compress is needed and when this feature needs to be enabled.
8. Before enabling the "LogRecovery" feature, configure the Logging Categories so that only mandatory data is logged to the Local Log Target and the Remote Log Target is set as Log Collector, under System Administration > ... > Configuration > Log Configuration.
The "LogRecovery" feature can recover the logs only if they are present under the local log target.
9. This automatic database compress operation is also performed only when the difference between the actual and physical size of the Monitoring and Reporting database is greater than 50 GB.
10. A new CLI command, "acsview", with the option "show-dbsize", is provided to show the actual and physical size of the Monitoring and Reporting database. It is available in "acs-config" mode.
acsview show-dbsize Show the actual and physical size of View DB and transaction log file
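As a quick sanity check, the three thresholds in point 3 follow directly from the percentages; a small illustrative calculation (the readme's 83.42 GB and 111.22 GB come from the unrounded 42% figure, so the rounded 139 GB input gives 83.4 and 111.2):

```python
# Patch 6 purge thresholds, computed from the readme's percentages.
# 139 GB is the readme's example maximum (42% of /opt); adjust for your partition.
max_db_gb = 139.0
lower_limit_gb = 0.60 * max_db_gb  # purge after taking a backup
upper_limit_gb = 0.80 * max_db_gb  # purge without taking a backup
print(round(lower_limit_gb, 2), round(upper_limit_gb, 2))  # 83.4 111.2
```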
Similar Messages
-
I have a student that is currently using PSE 5. She bought PSE 7, but before installing it and converting her PSE 5 catalog, she wanted to do a complete Backup.
When using the Backup command it fails at 29% with the error message, "Error encountered while writing file". She tried the Recover command, but the Backup command fails at the same point.
She does not want to convert the catalog to PSE 7 and propagate the problem.
Has anyone experienced similar problems with PSE 5?
By the way, I forgot to ask, but I am assuming that she is backing up to an external hard drive and that her PC is not displaying any other problems.
John Rolf Ellis - Is this the type of situation that your psedbtool program could help with?
Thanks in advance.
Don S.
Thanks, John.
I had forgotten about the switch from an Access-based database when PSE 6 came along.
I was thinking about suggesting the same thing to her but hadn't yet. Thanks for confirming that it is worth a try. I will also check whether she has a relatively recent PSE 5 full backup available in case the conversion does not go well.
P.S. My catalogs seem to be behaving, and I have gotten my photos along with the Catalog onto my second internal drive. Thanks again for your help with that.
Don S. -
DB Full Backup failed in db13 - oracle system
Dear All,
We are using an SAP ECC 6.0 Unicode system with Oracle Database 10.2.0.2. I am trying to get a valid full database backup for this system in DB13. We use disk (a shared network device) for the backup. It was working fine earlier, but now the backup is failing; I have included the action log below. Please, someone, help me with this.
BR0051I BRBACKUP 7.00 (31)
BR0055I Start of database backup: beaoucqa.fnd 2009-05-15 21.00.56
BR0484I BRBACKUP log file: H:\oracle\DEV\sapbackup\beaoucqa.fnd
BR0252W Function remove() failed for 'E:\oracle\DEV\102\database\sapDEV.ora' at location BrInitOraCreate-1
BR0253W errno 13: Permission denied
BR0252W Function remove() failed for 'E:\oracle\DEV\102\database\sapDEV.ora' at location BrInitOraCopy-7
BR0253W errno 13: Permission denied
BR0166I Parameter 'control_files' not found in file E:\oracle\DEV\102\database\initDEV.ora - default assumed
BR0165E Parameter 'control_file_record_keep_time' is not set in E:\oracle\DEV\102\database\initDEV.ora
BR0056I End of database backup: beaoucqa.fnd 2009-05-15 21.01.05
BR0280I BRBACKUP time stamp: 2009-05-15 21.01.05
BR0054I BRBACKUP terminated with errors
BR0280I BRBACKUP time stamp: 2009-05-15 21.01.05
BR0291I BRARCHIVE will be started with options '-U -jid FLLOG20090515210000 -d disk -c force -p initdev.sap -cds'
BR0280I BRBACKUP time stamp: 2009-05-15 21.01.11
BR0292I Execution of BRARCHIVE finished with return code 3
Thanks
Raj
Hi Joe/Juan/All,
I have given the permissions as per the note mentioned by Joe, but I am getting another error. I have attached the error log below. I have checked that the Oracle listener and OracleServiceDEV services are running under the administrator account. Could someone please help me with this?
BR0280I BRBACKUP time stamp: 2009-05-27 22.56.08
BR0063I 53 of 53 files processed - 115676.414 MB of 115676.414 MB done
BR0204I Percentage done: 100.00%, estimated end time: 22:56
BR0001I **************************************************
BR0280I BRBACKUP time stamp: 2009-05-27 22.56.08
BR0317I 'Alter tablespace SYSTEM end backup' successful
BR0280I BRBACKUP time stamp: 2009-05-27 22.56.12
BR0530I Cataloging backups of all database files...
BR0278E Command output of 'E:\oracle\DEV\102\BIN\rman nocatalog':
Recovery Manager: Release 10.2.0.2.0 - Production on Wed May 27 22:56:14 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN>
RMAN> connect target *
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
ORA-12640: Authentication adapter initialization failed
RMAN> *end-of-file*
RMAN>
host command complete
RMAN> 2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14> 15> 16> 17> 18> 19> 20> 21> 2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of catalog command at 05/27/2009 22:56:15
RMAN-06171: not connected to target database
RMAN>
Recovery Manager complete.
BR0280I BRBACKUP time stamp: 2009-05-27 22.56.15
BR0279E Return code from 'E:\oracle\DEV\102\BIN\rman nocatalog': 1
BR0536E RMAN call for database instance DEV failed
BR0280I BRBACKUP time stamp: 2009-05-27 22.56.15
BR0532E Cataloging backups of all database files failed
BR0056I End of database backup: bearaxbh.fnd 2009-05-27 22.56.15
BR0280I BRBACKUP time stamp: 2009-05-27 22.56.15
BR0054I BRBACKUP terminated with errors
Thanks
Raj -
ACS 5.3 - Backups fail to TFTP, work to DISK
Hi All,
I'm configuring ACS for the first time and the config is complete and working, except for backups of the view database. I've created a TFTP repository, and if I perform a manual backup or wait for a scheduled one to occur, it fails. I do get a .tar.gpg file on the TFTP server (but cannot restore from it, as it's not listed in "Restore" as a backup).
It works fine if I create and use a local disk repository: I get a .tar.gpg but also catalog.xml and repolock.cfg files (which I don't with TFTP). Looking at the logs on the TFTP server, I can see it repeatedly tries to read the catalog.xml file but fails:
Read request for file <DB/catalog.xml>. Mode netascii [15/07 16:05:52.167]
File <DB\catalog.xml> : error 2 in system call CreateFile The system cannot find the file specified. [15/07 16:05:52.167]
That seems correct; the file doesn't exist. However, it never seems to try to create it.
(I've created 4 or 5 TFTP repositories testing this, all behave the same)
Any ideas?
Paul
Paul,
TFTP will not work because the protocol doesn't support directory listing. What ACS is trying to do is determine whether a backup is currently running by looking into the repolock.cfg file. It also reads the contents of the catalog.xml file so that, when an incremental backup is triggered, it can append a line for the first full backup followed by all the incremental backups. Your best bet is to use FTP for the backup repository; that will fix the issue you are facing.
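For completeness, a minimal FTP repository can be created from the ACS CLI roughly as follows (the repository name, address, credentials, and path here are placeholders, not values from this thread):

```
configure terminal
repository ftp_backup
  url ftp://192.168.1.10/acs-backups/
  user backupuser password plain <your-password>
  exit
```

Then point the scheduled backup at the new repository; FTP supports the file reads and writes (catalog.xml, repolock.cfg) that fail over TFTP.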
thanks,
tarik Admani -
Maximum Express full backups per day per Datasource?
I have 3 DB servers with 24 databases. Some use the SIMPLE recovery model and some FULL. Express full backups are set to once a day at 8:00 PM, and synchronization is set to every 4 hours.
Now the customer wants a backup every 2 hours, so I have some queries about this.
1) It's not possible to schedule an EXPRESS full backup every 2 hours. So what should I do here? Should I set the synchronization frequency to 2 hours?
2) If I set the synchronization frequency to 2 hours, will it back up only FULL recovery model databases? Will the SIMPLE ones be backed up at 8 PM only?
3) Can incremental backup recovery points only be restored to an instance, and not copied to a network folder? When I tried to restore a 4-hour synchronization backup, it said it can't be copied; only the recovery point associated with an express backup can be copied.
4) Based on my understanding, I should change all recovery models to FULL and set synchronization to 2 hours to meet the customer's demand. That will allow me to restore a 2-hour recovery point directly to an instance or another instance (but not to a network folder or tape, in the case of incremental recovery).
Am I right? Thoughts!
Thanks in Advance
VJay
See:
My 4-hour synchronization/incremental backups are creating recovery points successfully, and I can see them on the recovery tab. But when I try to restore an incremental recovery point, it doesn't allow me to. In the image below, I am trying to restore an incremental recovery point created yesterday at 4:00 PM, but it asks me to restore yesterday's 12:20 PM point instead. The incremental backups that took place after 12:20 apparently mean nothing in terms of recovery!
As my final decision, should I choose an EXPRESS full backup every 2 hours, with synchronization just before the express backup, for these 24 databases, since they are now in the FULL model? -
Insufficient disk space to perform full backup
Hi to all
I'm trying to back up my ACS 5.2 with "backup acs repository repository application acs", but ACS gives me a critical error.
% Creating backup with timestamped filename: acs-121022-1431.tar.gpg
% Error: Insufficient disk space to perform full backup.
% Try lowering the data window on ACSView Data retention under Data Management Configuration.
% Application backup error
How do I lower the data window in ACS View?
Thanks in advance
Antero
Hi,
the problem is solved now. I opened a TAC case and the Cisco engineer told me how to solve it.
In summary, you need root access to ACS, and then you erase the acsview database and put in a clean one.
My acsview DB had grown to 175 GB; this was caused by a misconfiguration where the acsview database was not being backed up and, in this case, purging was not being done either.
So, Brian, if you have the same problem, you need to open a TAC case.
Thanks
Antero Vasconcelos -
HT1175 Time Machine wants to start with a new, full backup instead of using the existing one
Hi, my Time Machine wants to start a new, full backup instead of continuing with the existing one on the Time Capsule.
Looking at the Troubleshooting guide, I found the following piece of info, which is, as 99% of the time with these troubleshooting guides, totally useless for resolving my problem...
Message: Time Machine starts a new, full backup instead of a smaller incremental backup
There are three reasons why this may occur:
You performed a full restore.
Your Computer Name (in Sharing preferences) was changed--when this happens, Time Machine will perform a full backup on its next scheduled backup time.
If you have had a hardware repair recently, contact the Apple Authorized Service Provider or Apple Retail Store that performed your repair. In the meantime, you can still browse and recover previous backups by right-clicking or Control-clicking the Time Machine Dock icon, and choosing "Browse Other Time Machine disks" from the contextual menu.
It should be noted (hence the uselessness of the guide) that:
- I haven't performed ANY full restore
- I haven't changed my computer name
- My computer has not had ANY repair, much less a hardware repair; neither recently, nor in the 9 months I've owned it.
Which puts me back to square one, with the marginal improvement that now I have a way of browsing my old backups.
In the event I decide to start all over again, will there be a way for me to "sew" the two backup files?
Or will I always have to go the silly way around?
Anybody has any idea?
Thank you for your time (machine/capsule, your choice ;-)
Hi John,
no, that doesn't have anything to do with my problem.
Just look at the image below.
Time Machine thinks that the disk I've always used to do my backups has none of them.
If I open the disk with Finder, obviously they are where they should be.
It is even possible to navigate them as per the instruction reported above, but even following Pondini's advice #s B5 and B6 (which are more applicable to my problem) didn't solve the problem.
Actually, not being a coder, I'm always scared of doing more damage than good when I have a Terminal window open.
My question is: how is it possible that Apple hasn't thought of some kind of user command to tell the idiotic Time Machine software that a hard disk backup is a hard disk backup, no matter what?
Extremely frustrating and annoying: I'm wasting HOURS trying to find a solution to a problem that should not have arisen in the first place...
Have a nice weekend
Antonio -
Full Backup Operation not working
Hi,
I'm using MaxDB 7.8 and the Data Protector tool to take full backups. Data Protector uses the commands below to take a full backup, but it fails with the following error:
Error: ERR
-24920,ERR_BACKUPOP: backup operation was unsuccessful
The backup tool failed with 2 as sum of exit codes. The database request was canceled and ended with error -903.
qapaux26# dbmcli -d tvs -u dbm,dbm
dbmcli on tvs>user_logon dbm,dbm
OK
dbmcli on tvs>dbm_configset -raw BSI_ENV /var/opt/omni/tmp/TVS.bsi_env
OK
dbmcli on tvs>medium_put BACKDP-Data[1]/1 /var/opt/omni/tmp/TVS.BACKDP-Data[1].1 PIPE DATA 0 8 NO NO \"\" BACK
OK
dbmcli on tvs>util_connect
OK
dbmcli on tvs>backup_start BACKDP-Data[1] DATA
ERR
-24920,ERR_BACKUPOP: backup operation was unsuccessful
The backup tool failed with 2 as sum of exit codes. The database request was canceled and ended with error -903.
Help is much appreciated... could you please help me out if anybody has an idea on this?
The same commands work for MaxDB 7.7 but not for MaxDB 7.8.
Regards,
Ranganath
Hi Markus Doehr,
Thanks for your information. Could it be a Data Protector configuration issue?
dbmcli on tvs>dbm_configget all
OK
BSI_ENV = /var/opt/omni/tmp/TVS.bsi_env
set_variable_10 = OB2BACKUPAPPNAME=TVS
set_variable_11 = OB2BACKUPHOSTNAME=qapaux26.ind.hp.com
set_variable_12 = OB2OPTS=(null)
I'm new to this work and I'm learning the concepts of MaxDB as well as Data Protector while trying to fix this issue. As per your reply, I understand that DP doesn't create the medium appropriately. Can you tell me more about this?
If possible, please show me with an example how to create the medium appropriately.
I'm resending KnlMsg
0xc000016740669928, Node:'qapaux26', PID: 6152)
Thread 0x1F Task 269 2010-12-07 12:32:03 Savepoint 1: Savepoint (SaveData) started by T269
Thread 0x1B Task 284 2010-12-07 12:32:03 Pager 20003: SVP(1) Start Write Data
Thread 0x1B Task 284 2010-12-07 12:32:03 Pager 20004: SVP(1) Stop Data IO, Pages: 6 IO: 6
Thread 0x1B Task 284 2010-12-07 12:32:03 Pager 20005: SVP(2) Wait for last synchronizing task: 284
Thread 0x1B Task 284 2010-12-07 12:32:03 Pager 20006: SVP(2) Stop Wait for last synchronizing task, Pages: 0 IO: 0
Thread 0x1B Task 284 2010-12-07 12:32:03 DataCache 4: Mark data pages for savepoint (prepare phase)
Thread 0x1B Task 284 2010-12-07 12:32:03 Pager 20007: SVP(3) Start Write Data
Thread 0x1B Task 284 2010-12-07 12:32:03 Pager 20008: SVP(3) Stop Data IO, Pages: 2 IO: 2
Thread 0x1B Task 284 2010-12-07 12:32:03 Pager 20009: SVP(3) Start Write Converter
Thread 0x1B Task 284 2010-12-07 12:32:03 Pager 20011: SVP(3) Stop Converter IO, Pages: 8 IO: 8
Thread 0x1B Task 284 2010-12-07 12:32:03 DataCache 3: Savepoint with ID 10 completed
Thread 0x1F Task 268 2010-12-07 12:32:03 CONNECT 12633: Connect req. (TVS, T268, connection obj. 0xc00001674089dd38, Node:'qapaux26', PID: 6152)
Thread 0x1F Task 269 2010-12-07 12:32:03 Log 20083: New DBIdentifier set (qapaux26:TVS_20101207_123203)
Thread 0x1F Task 269 2010-12-07 12:32:03 RTEIO 112: Open of medium /sapdb/TATA/data/wrk/TVS/data_000 as number 3 for READ successfull
Thread 0x1F Task 269 2010-12-07 12:32:03 RTEIO 112: Open of medium /sapdb/TATA/data/wrk/TVS/arch_000 as number 4 for READ successfull
Thread 0x1F Task 268 2010-12-07 12:32:04 CONNECT 12651: Connection released (TVS, T268, connection obj. c00001674089dd38)
Thread 0x1B Task 300 2010-12-07 12:33:08 ERR RTEIO 113: medium /var/opt/omni/tmp/TVS.BACKDP-Data[1].1 cannot be opened for WRITE access,_FILE=RTEIO_St
reamMedium.cpp,_LINE=1261
2010-12-07 12:33:08 ERR RTEIO 65: open of medium /var/opt/omni/tmp/TVS.BACKDP-Data[1].1 was canceled,_FILE=RTEIO_StreamMedium.cp
p,_LINE=4804
Thread 0x1B Task 299 2010-12-07 12:33:08 WNG SAVE 52108: canceled by user
Thread 0x1F Task 269 2010-12-07 12:33:08 RTEIO 114: medium /sapdb/TATA/data/wrk/TVS/data_000 with number 3 was closed
Thread 0x1F Task 269 2010-12-07 12:33:08 RTEIO 114: medium /sapdb/TATA/data/wrk/TVS/arch_000 with number 4 was closed
Thread 0x1F Task 269 2010-12-07 12:33:08 ERR SAVE 52012: error occured, basis_err 3700
Thread 0x1F Task 269 2010-12-07 12:33:08 ERR Backup 3: Data backup failed,_FILE=Kernel_Administration.cpp,_LINE=1560
2010-12-07 12:33:08 SrvTasks 17: Servertask Info: because Error in backup task occured
2010-12-07 12:33:08 SrvTasks 10: Job 1 (Backup / Restore Medium Task) [executing] WaitingT269 Result=3700
2010-12-07 12:33:08 KernelComm 6: Error in backup task occured, Error code 3700 "hostfile_error"
2010-12-07 12:33:08 Backup 1: Backupmedium #1 (/var/opt/omni/tmp/TVS.BACKDP-Data[1].1) Could not open stream
2010-12-07 12:33:08 KernelComm 6: Backup error occured, Error code 3700 "hostfile_error"
Thread 0x1F Task 269 2010-12-07 12:33:08 ERR Backup 3: Data backup failed,_FILE=Kernel_Administration.cpp,_LINE=1560
2010-12-07 12:33:08 SrvTasks 17: Servertask Info: because Error in backup task occured
2010-12-07 12:33:08 SrvTasks 10: Job 1 (Backup / Restore Medium Task) [executing] WaitingT269 Result=3700
2010-12-07 12:33:08 KernelComm 6: Error in backup task occured, Error code 3700 "hostfile_error"
2010-12-07 12:33:08 Backup 1: Backupmedium #1 (/var/opt/omni/tmp/TVS.BACKDP-Data[1].1) Could not open stream
2010-12-07 12:33:08 KernelComm 6: Backup error occured, Error code 3700 "hostfile_error"
Thread 0x1F Task 269 2010-12-07 12:33:10 CONNECT 12651: Connection released (TVS, T269, connection obj. c000016740669928)
Thread 0x1F Task 269 2010-12-07 12:47:18 CONNECT 12633: Connect req. (TVS, T269, connection obj. 0xc00001674089dd38, Node:'qapaux26', PID: 5802)
Thread 0x1F Task 269 2010-12-07 12:47:34 Savepoint 1: Savepoint (SaveData) started by T269
Thread 0x1B Task 284 2010-12-07 12:47:34 Pager 20003: SVP(1) Start Write Data
Thread 0x1B Task 284 2010-12-07 12:47:34 Pager 20004: SVP(1) Stop Data IO, Pages: 0 IO: 0
Thread 0x1B Task 284 2010-12-07 12:47:34 Pager 20005: SVP(2) Wait for last synchronizing task: 284
Thread 0x1B Task 284 2010-12-07 12:47:34 Pager 20006: SVP(2) Stop Wait for last synchronizing task, Pages: 0 IO: 0
Thread 0x1B Task 284 2010-12-07 12:47:34 DataCache 4: Mark data pages for savepoint (prepare phase)
Thread 0x1B Task 284 2010-12-07 12:47:34 Pager 20007: SVP(3) Start Write Data
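Since medium_put above declared the medium as type PIPE, one hedged thing to check (using the path from the error log) is whether that pipe actually exists and is writable at the moment the backup starts; a small sketch:

```python
# Check whether the Data Protector backup medium exists and is a FIFO (pipe).
# The path is taken from the "cannot be opened for WRITE access" lines above.
import os
import stat

pipe_path = "/var/opt/omni/tmp/TVS.BACKDP-Data[1].1"

if not os.path.exists(pipe_path):
    print("medium missing - Data Protector has not created the pipe")
else:
    st = os.stat(pipe_path)
    print("is FIFO:", stat.S_ISFIFO(st.st_mode),
          "writable:", os.access(pipe_path, os.W_OK))
```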
I'm waiting for your reply.
Ranganath -
ACS 5.3 on VM - the backup fails all the time. I opened several tickets with Cisco, but still no luck.
Here is one of the log messages I got during the backup. Maybe someone can point out what the issue is.
debugd[2933]: [31965]: config:kron: cs_api.c[1142] [daemon]: occurrence occurrence_backup1 could not be deleted.
I just did a backup on my ACS 5.4 Patch 6 (running on VMware) to a Windows 2003 FTP server, and it works without any issues:
repository ftp_192.168.1.129
url ftp://192.168.1.129/
user Administrator password hash e50ffb9aabc8ccebe066f6239efeaa1ab728a16f2b2
labacs2/admin# backup labacs2 repository ftp_192.168.1.129
% Creating backup with timestamped filename: labacs2-140628-2059.tar.gpg
Calculating disk size for /opt/backup/backup-labacs2-1403989173
Total size of backup files are 27 M.
Max Size defined for backup files are 13339 M.
labacs2/admin# -
I bought a new external hard drive for backups, but Time Machine won't do a full backup.
I think it is remembering backups made to previous external hard drives, which I no longer own. How do I do a new full backup?
When I bought the new (used) iMac, I also bought an external hard drive for backups. It worked fine, but my husband stole it.
Then I bought a new external hard drive (Seagate) and it worked fine for three weeks, then died.
So I just got a new external hard drive, which was put together from an internal hard drive and a hard drive enclosure.
Time Machine did the first backup today, and it should have taken 9 hours like it did on the previous first-time full backup. Instead, it took 30 minutes. That can't be right. I want to start over and do a full backup to make sure everything gets onto my new external hard drive, but I can't figure out how to do that. Please help.
Triple-click anywhere in the line below to select it:
tmutil compare -E
Copy the selected text to the Clipboard (command-C).
Launch the Terminal application in any of the following ways:
☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
☞ Open LaunchPad. Click Utilities, then Terminal in the icon grid.
Paste into the Terminal window (command-V).
The command will take at least a few minutes to run. Eventually some lines of output will appear below what you entered.
Each line that begins with a plus sign (“+”) represents a file that has been added to the source volume since the last snapshot was taken. These files have not been backed up yet.
Each line that begins with an exclamation point (“!”) represents a file that has changed on the source volume. These files have been backed up, but not in their present state.
Each line that begins with a minus sign (“-“) represents a file that has been removed from the source volume.
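If you save the command's output to a file, those three prefixes can be tallied with a short script (the sample lines below are invented for illustration):

```python
# Tally a saved `tmutil compare -E` transcript by its + / ! / - prefixes.
sample_output = """\
+ /Users/me/Documents/new-report.pages
! /Users/me/Library/Preferences/com.apple.dock.plist
- /Users/me/Downloads/old-archive.zip
+ /Users/me/Pictures/holiday.jpg
"""

added = changed = removed = 0
for line in sample_output.splitlines():
    if line.startswith("+"):
        added += 1      # new file, not backed up yet
    elif line.startswith("!"):
        changed += 1    # changed since the last snapshot
    elif line.startswith("-"):
        removed += 1    # removed from the source volume

print(added, changed, removed)  # 2 1 1
```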
At the end of the output, you’ll get some lines like the following:
Added:
Removed:
Changed:
These lines show the total amount of data added, removed, or changed on the source(s) since the last snapshot. -
DPM Express Full Backups Causing SQL timeouts
Hi,
We run a SQL 2008 R2 failover cluster on Server 2008 R2 SP1 connected to an EqualLogic SAN. The SQL failover cluster is fully service-packed and has the latest cumulative updates installed.
We have all our databases in full recovery mode and protected by DPM 2012 SP1 with latest cumulative updates.
The protection groups for SQL are configured to synchronize every 60 minutes plus one express full backup every night. Every time the SQL express backups run, the SQL server experiences timeouts and end users report applications timing out. In the SQL Server application logs I see Event ID 833 numerous times; these occur only while the express full backups take place. Every time the express full backups run, we see the same timeouts and an event logged that I/O has taken longer than 15 seconds to complete:
Log Name: Application
Source: MSSQLSERVER
Date: 01/04/2014 04:10:10
Event ID: 833
Task Category: Server
Level: Information
Keywords: Classic
User: N/A
Computer: WDCSQL02.local
Description:
SQL Server has encountered 1 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [D:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\MSDBLog.ldf] in database [msdb] (4). The OS file handle is 0x00000000000008F8. The offset
of the latest long I/O is: 0x000000042eae00
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="MSSQLSERVER" />
<EventID Qualifiers="16384">833</EventID>
<Level>4</Level>
<Task>2</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2014-04-01T03:10:10.000000000Z" />
<EventRecordID>14165629</EventRecordID>
<Channel>Application</Channel>
<Computer>WDCSQL02.local</Computer>
<Security />
</System>
<EventData>
<Data>1</Data>
<Data>15</Data>
<Data>D:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\MSDBLog.ldf</Data>
<Data>msdb</Data>
<Data>4</Data>
<Data>00000000000008F8</Data>
<Data>0x000000042eae00</Data>
<Binary>410300000A0000000600000055004B00530051004C00000000000000</Binary>
</EventData>
</Event>
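If you need to collect many of these events, the interesting fields can be pulled out of the Event Xml programmatically; a sketch against a trimmed copy of the event above (an illustration only, not part of DPM or SQL Server):

```python
# Extract the event ID, database name, and file path from a Windows event log
# entry like the 833 above. Note the default XML namespace on <Event>.
import xml.etree.ElementTree as ET

xml_text = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="MSSQLSERVER" />
    <EventID Qualifiers="16384">833</EventID>
  </System>
  <EventData>
    <Data>1</Data>
    <Data>15</Data>
    <Data>D:\\MSSQL10_50.MSSQLSERVER\\MSSQL\\DATA\\MSDBLog.ldf</Data>
    <Data>msdb</Data>
  </EventData>
</Event>"""

ns = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}
root = ET.fromstring(xml_text)
event_id = root.find("e:System/e:EventID", ns).text
fields = [d.text for d in root.findall("e:EventData/e:Data", ns)]
print(event_id, fields[3], fields[2])  # event ID, database, affected file
```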
I was under the impression that DPM SQL backups would remain online during the backup.
Is this normal?
Any way to fix this issue?
Thanks,
Microsoft Partner
Hi,
A copy backup will not truncate the logs, but a full backup will, and it will also stamp the database once it does a full backup. -
Hello all, I could use some help with my iPhone 4, I hope this is the best place to ask!
The Problem:
I want to do a full backup of my iPhone to either iTunes or iCloud. Unfortunately the iPhone got wet and isn't working properly. The phone turns on and still works, but the touch screen no longer works. I have a passcode lock on the phone, so iTunes will not allow me to back it up until I type in the passcode (which I cannot physically do). Wi-Fi had been turned off before it got wet, so it won't connect to any known Wi-Fi networks either to back up automatically to iCloud.
What happened:
I had been travelling for 6 months and went tubing on a river with my iphone in a dry bag. Unfortunately the dry bag failed and filled with water ruining my iphone.
What I’d like to do:
It’s an old phone and I was planning on buying a new iPhone anyway. I would love to do a full backup to either iCloud or to my computer so when I get a new iPhone I don’t lose anything. I’ve already connected it to my PC and downloaded all the photos and videos off the phone so at least I haven’t lost that, but I would like to be able to recover all the text messages, WhatsApp messages and notes I had created during my 6 months of travel.
The last time it was backed up to my pc or to icloud was in April, so there are 6 months worth of very meaningful data I would love to keep to remind me of my travels!
Is there any way to force a full backup of the iPhone without typing in my passcode, or is there any way to force it to connect to a Wi-Fi network so the phone can back up to iCloud?
iTunes does recognize the phone, and I tried upgrading it to iOS 7 in the hope that it would first back up and then upgrade. Unfortunately, all that happened is the phone upgraded to iOS 7, but it didn't back up to iTunes. Now the phone is stuck at the iOS 7 welcome screen and I can't do anything else with it.
Any ideas or tips would be welcome!
NOTE: This is NOT a jailbroken iPhone.
If there is no other way around this, is there any way to completely wipe the phone then?
The Apple store agreed to give me a new iPhone 4 as a warranty replacement for $150 (I travel a lot and I have a factory unlocked phone, so this is still worth it for me). But before I trade it in, I would like to make sure all my data is backed up on the device first.
When I try and restore the phone now on iTunes, it says I have to turn off Find My iPhone on the phone first before I can restore it. Now since I can't use the screen I can't really do that.
So how do I completely wipe the iPhone???
Thanks! -
How to create a new full backup alongside existing one?
I have recently reinstalled my MBP from scratch and installed some apps from scratch as well. I still have my old TM backup in a sparsebundle on my NAS (a ReadyNAS NV+).
I now would like to create a complete new full backup alongside my existing one. Or put differently, a new folder/backup inside the Backups.backupdb inside the mounted sparsebundle. Currently I see a folder called "XYZs MacBook Pro". I'd like that to be renamed to something like "XYZs Old MacBook Pro" and create a new full backup under "XYZs MacBook Pro".
I understand you end up having multiple folders inside Backups.backupdb if you have many Macs (with different MAC addresses). I only have one Mac, but want TM to treat my old previous backup (of the same MPB, hence with the same MAC address) as if it was from another Mac (with a different name).
With such a setup, I can still access backups from my previous installation of the same hardware by selecting another volume from TM (by holding Alt when clicking the TM icon) or with tools such as Back-In-Time.
Is there any way to achieve this? My sparsebundle is large enough to hold another full backup alongside the old one.
Thanks
- Tom
Pondini wrote:
tofe1965 wrote:
But wait, you said if I erased the disk I get a new UUID. I did not erase it but I moved all files into a /Previous folder (after booting from the install DVD), then installed the system new, and then moved user data (and some Library and some application stuff) back from /Previous. So, I guess, I have no new UUID for my internal disk.
Ah, yes, that is different ("reinstalled from scratch" usually means erased). If the disk was never erased, it will still have the same UUID. But since you've moved or replaced virtually everything, it will all get backed up anyway.
I'm not sure why you did all that; the old +Erase and Install+ and +Archive and Install+ options no longer exist for Snow Leopard: there's just Install, which replaces OSX without touching anything else.
I wanted a *really clean* install, as my old system seemed to be messed up with some bad settings.
So, back to my original question: how can I make the new "XYZs MacBook Pro" appear alongside the old "XYZs MacBook Pro" and switch volumes when I start TM using the Option key?
Why would you want to do that? You can view and restore from any backup.
I figured that way I can keep all old backups from the old installation and not have TM retire old stuff. And start a new one from scratch for the new installation where things will get retired. Basically keep the two backups apart from each other and not see any of the old stuff unless explicitly mounted using the option key.
If I follow your approach, what happens to old files of the old disk/UUID? Will they get retired eventually? Would my original idea work as envisioned or would it retire old things nevertheless? If the latter is the case, it is of no use and your suggested approach is the better one.
Is there maybe a way to force TM to start a new full backup?
Back up to a different place: a different HD or partition, or erase the current one.
Guess that's my only option then. I created a full bootable backup to an external USB disk using Carbon Copy Cloner. Erasing the internal disk and copying everything back with CCC should create a new UUID then, I guess ...
I will try that in the next couple of days and report back here. Thanks a lot for all your help and explanations so far, much appreciated! -
Time Machine repeatedly does full backups, including FileVault while logged in
So let me preface this - for months I'd hoped for a way that Time Machine could backup my FileVault encrypted account without logging out. Last night Snow Leopard spontaneously started doing this. I know many may think that I'm looking a gift horse in the mouth, but read further and hopefully someone can help me make sense of this.
Last night I attached my external 1 TB USB hard drive to my 15" MBP (spring 2010 model) running OS X 10.6.7. It started doing a Time Machine backup automatically, as usual. Since I was logged into my account that uses FileVault (a massive home folder: 220 GB), I expected a fast backup of only the things outside my home folder. As I looked at the details, however, I quickly realized that it was backing up all 270 GB of data on my hard drive, including my FileVault account, while I was still logged in!
At first I thought I had just gotten lucky; I'd been craving this feature for years and it happened without my doing anything. Curious guy that I am, I logged out of my account with my USB drive still attached to see what would happen. After FileVault cleared out space in my home folder, it did _another_ backup, which took about 10 minutes to complete. This seemed strange, so I logged back in and started another backup manually. To my utter shock, it started doing the whole 270 GB over again!
I even cleared out my TM drive, erased it, reformatted it, checked permissions, etc., and did the same on my system drive, hoping it was just a matter of broken permissions. No change.
So there are two things going on here, and I'm pretty concerned about both:
1) Why is TM backing up my entire encrypted home folder while I'm still logged in??? Apple has clearly designed TM and FV to NOT work this way, for data-integrity reasons, and the fact that it's happening has me freaked out a bit over the reliability of my backups.
2) Why is TM doing full backups each time I log in to my account? If I stay logged in, it appears to do only incremental (normal) backups, but if I log out, log back in, and let a backup start on its own an hour later, it starts the whole 270 GB again.
I'm getting to the point of wiping the system and starting from scratch, but since I presently live in a country with no Apple Store and extremely poor internet access, that prospect horrifies me.
any ideas?
many thanks,
-Tim
Thanks for the input, Pondini -
I tried deleting the file mentioned in #A4, and it still had the problem. Last weekend I finally got a brand-new hard drive and reinstalled OS X 10.6.3 from scratch, then promptly applied the 10.6.7 combo update. I purposely chose NOT to import my old user from backup and instead set everything up again manually (a royal pain, but the only way I could be sure I wouldn't bring the problem with me). Today I whipped out my Time Machine drive and set it up for backups. I crossed my fingers, and it AGAIN started backing up all 250+ GB of my system from within my FileVault-protected account.
My wife's MBP is still running 10.6.4, and I'm beginning to wonder if this is an issue with 10.6.7, since I'm relatively certain I didn't experience this before that upgrade.
I've checked my wife's exclusions under Options, and her home folder is NOT in that list, yet her TM does not attempt to back up everything until she logs out of her FV account.
In response to Linc Davis: _yes_, it was backing up my info into a sparsebundle file, not just the raw unencrypted data, which is extremely strange.
If I DO add my home folder to the exclude list, will it still be backed up when I log out?
At this point, I'm thinking my next steps are:
1) Try a different TM backup drive, even though I've wiped and checked my current drive numerous times.
2) Downgrade to an older version of Snow Leopard to see if 10.6.7 is the culprit.
Any other ideas? -
Can I create a full backup on google drive by moving my user folders to Google Drive Folder?
Can I create a full backup on Google Drive? Google suggested I simply move my folders (Pictures, Music, Downloads, Documents, etc.) into the Google Drive folder. But these seem to be special folders involved in the Mac's organization... will moving them create problems for my computer? I.e., if I import new photos or download music, will my computer know where to put them if they're in Google Drive?
If this is a problem, how else could I create a full backup of my personal data on Google Drive (or any similar syncing service, for that matter)? iCloud is prohibitively expensive, and programs like CrashPlan and Carbonite run so slowly and bog down my computer so much that they are not worth it. Google Drive seems the simplest and most competitive in price, but I am worried about moving my folders.
Help?
Google Drive also installs a client (a Google Drive folder on my hard drive that is programmed to sync changes automatically to the cloud) that allows me to drag and drop. I suppose my original concern was about dragging folders from my hard drive (such as Pictures, which contains my iPhoto library) into it. Would that mess up the Mac's internal organizational system? If I drag and drop my Pictures folder, Music folder, etc. into the Google Drive client folder in Finder, will those applications still function correctly? Will my Mac still be able to figure out where to put photos I import from my camera?
I am only as tech savvy as I have to be to survive professionally as a traveling teacher, so I am looking for something pretty straightforward that will free up some of my brain waves for other activities, rather than remembering which folders I have made changes in and need to sync.
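One commonly suggested alternative to moving the folders outright is to move a folder into the Google Drive folder and leave a symlink behind at the original path, so apps that expect ~/Pictures still find it. The sketch below simulates this with temporary directories (the real paths would be ~/Pictures and ~/Google Drive); this is an assumption-laden illustration, not a tested recipe, and libraries such as the iPhoto library may not tolerate living on a synced volume, so try it with copies first.

```shell
# Simulated home and Google Drive folders; in practice these would be
# "$HOME" and "$HOME/Google Drive".
HOME_DIR="$(mktemp -d)"
DRIVE="$(mktemp -d)/Google Drive"
mkdir -p "$HOME_DIR/Pictures" "$DRIVE"

# Move the data into the synced folder, then leave a symlink at the
# original location so the old path still resolves.
mv "$HOME_DIR/Pictures" "$DRIVE/Pictures"
ln -s "$DRIVE/Pictures" "$HOME_DIR/Pictures"

ls -ld "$HOME_DIR/Pictures"
```

With this layout, anything written to the original path lands inside the Drive folder and gets synced, which addresses the "will my Mac know where to put new photos" concern, at least for apps that follow symlinks.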