OWB Backups with normal exports

Hi,
Could someone please confirm whether a normal database export will suffice for a Warehouse Builder repository backup?
That is NOT using the Design Centre (or any other OWB front-end) but a plain exp.
I want to schedule regular daily backups of the repository.
What would be the best process to follow?
Regards,
Steven.

Hi,
Thank you - this is what I expected ...
However, how can this be scheduled to run daily? Do you have to log into the OWB application to perform the backup manually, or can the export be scheduled?
What does the Data Pump export do: expdp system/manager DIRECTORY=dpump_dir FULL=y
I found this in the docs : http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/cdc.htm
Regards,
Steven
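
For the scheduling part, the export does not need the OWB front-end at all: one common approach is to wrap expdp in a small shell script and run it from cron. The sketch below is only illustrative - the ORACLE_HOME path, SID and repository schema name (OWB_REPOS) are assumptions, and the directory object and credentials are taken from the expdp command above.

#!/bin/sh
# owb_export.sh - nightly Data Pump export of the repository schema (illustrative paths and names)
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH
expdp system/manager DIRECTORY=dpump_dir SCHEMAS=OWB_REPOS \
  DUMPFILE=owb_repos_$(date +%Y%m%d).dmp LOGFILE=owb_repos_$(date +%Y%m%d).log

# crontab entry (crontab -e) - run the script every night at 02:00
# 0 2 * * * /home/oracle/owb_export.sh >> /home/oracle/owb_export.cron.log 2>&1

A FULL=y export, as in the documented command, also works and captures everything in one dump file; exporting only the repository owner's schema simply keeps the daily dumps smaller.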

Similar Messages

  • MSSQL 2008 R2 - 32bit, TSQL backup with compression, error: There is insufficient system memory in resource pool 'internal' to run this query

    Hello,
    I would like to ask you for advice.
    We have MSSQL 2008 R2, 32-bit. Memory is 4 GB, split into 2 GB for Windows and 2 GB for applications. The database uses the simple recovery model because the data is replicated to two other servers. At the moment we work with 2 servers. Max memory for MSSQL is 2048 MB.
    We set the backup as follows:
    USE MSDB
    GO
    DECLARE @JMENO_ZALOHY VARCHAR(120)
    SELECT  @JMENO_ZALOHY = 'E:\backup\BackupSQL\1 Pondeli\DAVOSAM_'+ convert( varchar(2), datepart( hh, getdate() ) ) + '00_DEN_DIFF.bak'
    SELECT  @JMENO_ZALOHY
    BACKUP DATABASE [DAVOSAM]
    TO DISK = @JMENO_ZALOHY
    WITH INIT, DIFFERENTIAL, CHECKSUM, COMPRESSION
    GO
    Every second or third day the log contains the error message 'There is insufficient system memory in resource pool 'internal' to run this query', exactly at the time of the backup. The error keeps repeating, mostly during working hours.
    Today I found out that the problem is probably the backup compression, because if I remove the word COMPRESSION the backup runs without error.
    Question: Is my hypothesis correct that the problem is the backup with compression?
    Thank you David
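
    One way to test that hypothesis without dropping compression entirely is to keep COMPRESSION but cap the memory the backup may use with MAXTRANSFERSIZE and BUFFERCOUNT. This is only a sketch, assuming a local default instance and Windows authentication; the target path and buffer values are illustrative, not a recommendation:

    REM compressed differential backup with explicitly small buffers (run from a command prompt)
    sqlcmd -S . -E -Q "BACKUP DATABASE [DAVOSAM] TO DISK = 'E:\backup\BackupSQL\test\DAVOSAM_TEST_DIFF.bak' WITH INIT, DIFFERENTIAL, CHECKSUM, COMPRESSION, MAXTRANSFERSIZE = 65536, BUFFERCOUNT = 7"

    Backup memory is roughly BUFFERCOUNT * MAXTRANSFERSIZE plus some working space for compression, so small explicit values keep the backup from exhausting the already tight non-buffer-pool memory on a 32-bit instance.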

    Hello, this evening I ran the backup command below. All is OK; probably MSSQL has freed some memory. I will make the next attempt during peak hours next week.
    Since I removed the word COMPRESSION, there has been no error in the error log.
    I have checked the memory: as soon as memory usage peaks at about 1.707 GB, MSSQL writes these messages into the log:
    2014-03-14 15:00:04.63 spid89      Memory constraints resulted reduced backup/restore buffer sizes. Proceding with 7 buffers of size 64KB.
    2014-03-14 15:00:08.74 Backup      Database differential changes were backed up. Database: DAVOSAM, creation date(time): 2014/01/12(22:03:10), pages dumped: 16142, first LSN: 1894063:1673:284, last LSN: 1894063:1792:1, full backup LSN: 1894053:15340:145, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {'E:\backup\BackupSQL\5 Patek\DAVOSAM_1500_DEN_DIFF.bak'}). This is an informational message. No user action is required.
    2014-03-14 15:00:12.79 spid72      Memory constraints resulted reduced backup/restore buffer sizes. Proceding with 7 buffers of size 64KB.
    2014-03-14 15:00:12.88 Backup      Database differential changes were backed up. Database: WEBFORM, creation date(time): 2014/02/01(05:22:47), pages dumped: 209, first LSN: 125436:653:48, last LSN: 125436:674:1, full backup LSN: 125435:689:36, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {'E:\backup\BackupSQL\5 Patek\WEBFORM_1500_DEN_DIFF.bak'}). This is an informational message. No user action is required.
    After that, MSSQL reduced memory usage to 1.692 GB.
    USE MSDB
    GO
    DECLARE @JMENO_ZALOHY VARCHAR(120)
    SELECT  @JMENO_ZALOHY = 'E:\backup\BackupSQL\6 Sobota\DAVOSAM_'+ convert( varchar(2), datepart( hh, getdate() ) ) + '00_DEN_FULL.bak'
    SELECT  @JMENO_ZALOHY
    BACKUP DATABASE [DAVOSAM]
    TO DISK = @JMENO_ZALOHY
    WITH INIT, CHECKSUM, COMPRESSION, MAXTRANSFERSIZE=65536
    GO
    E:\backup\BackupSQL\6 Sobota\DAVOSAM_2100_DEN_FULL.bak
    (1 row(s) affected)
    Processed 467240 pages for database 'DAVOSAM', file 'DavosAM_Data' on file 1.
    Processed 2 pages for database 'DAVOSAM', file 'DavosAM_Log' on file 1.
    BACKUP DATABASE successfully processed 467242 pages in 24.596 seconds (148.411 MB/sec).
    select * from sys.dm_exec_connections
    where net_packet_size > 8192
    session_id  most_recent_session_id  connect_time  net_transport  protocol_type  protocol_version  endpoint_id  encrypt_option  auth_scheme  node_affinity  num_reads  num_writes  last_read  last_write  net_packet_size  client_net_address  client_tcp_port  local_net_address  local_tcp_port  connection_id  parent_connection_id  most_recent_sql_handle
    (0 row(s) affected)
    SELECT SUM (pages_allocated_count * page_size_in_bytes)/1024 as 'KB Used', mo.type, mc.type
    FROM sys.dm_os_memory_objects mo
    join sys.dm_os_memory_clerks mc on mo.page_allocator_address=mc.page_allocator_address
    GROUP BY mo.type, mc.type
    ORDER BY 1 DESC;
    KB Used     type                                                         type
    29392       MEMOBJ_SORTTABLE                                             MEMORYCLERK_SQLSTORENG
    9392        MEMOBJ_SOSNODE                                               MEMORYCLERK_SOSNODE
    8472        MEMOBJ_SQLTRACE                                              MEMORYCLERK_SQLGENERAL
    5480        MEMOBJ_SECOLMETACACHE                                        USERSTORE_SCHEMAMGR
    5280        MEMOBJ_RESOURCE                                              MEMORYCLERK_SQLGENERAL
    5008        MEMOBJ_CACHEOBJPERM                                          USERSTORE_OBJPERM
    4320        MEMOBJ_SOSSCHEDULER                                          MEMORYCLERK_SOSNODE
    2864        MEMOBJ_PERDATABASE                                           MEMORYCLERK_SQLSTORENG
    2328        MEMOBJ_SQLCLR_CLR_EE                                         MEMORYCLERK_SQLCLR
    2288        MEMOBJ_SESCHEMAMGR                                           USERSTORE_SCHEMAMGR
    2080        MEMOBJ_SOSDEADLOCKMONITORRINGBUFFER                          MEMORYCLERK_SQLSTORENG
    2008        MEMOBJ_LOCKBLOCKS                                            OBJECTSTORE_LOCK_MANAGER
    1584        MEMOBJ_CACHESTORETOKENPERM                                   USERSTORE_TOKENPERM
    1184        MEMOBJ_LOCKOWNERS                                            OBJECTSTORE_LOCK_MANAGER
    840         MEMOBJ_SNIPACKETOBJECTSTORE                                  OBJECTSTORE_SNI_PACKET
    760         MEMOBJ_SOSDEADLOCKMONITOR                                    MEMORYCLERK_SQLSTORENG
    752         MEMOBJ_SESCHEMAMGR_PARTITIONED                               USERSTORE_SCHEMAMGR
    688         MEMOBJ_RESOURCEXACT                                          MEMORYCLERK_SQLSTORENG
    616         MEMOBJ_SOSWORKER                                             MEMORYCLERK_SOSNODE
    552         MEMOBJ_METADATADB                                            MEMORYCLERK_SQLGENERAL
    480         MEMOBJ_SRVPROC                                               MEMORYCLERK_SQLCONNECTIONPOOL
    424         MEMOBJ_SQLMGR                                                CACHESTORE_SQLCP
    400         MEMOBJ_SBOBJECTPOOLS                                         OBJECTSTORE_SERVICE_BROKER
    384         MEMOBJ_SUPERLATCH_BLOCK                                      MEMORYCLERK_SQLSTORENG
    384         MEMOBJ_RESOURCEDATASESSION                                   MEMORYCLERK_SQLGENERAL
    352         MEMOBJ_SOSSCHEDULERMEMOBJPROXY                               MEMORYCLERK_SOSNODE
    328         MEMOBJ_SBMESSAGEDISPATCHER                                   MEMORYCLERK_SQLSERVICEBROKER
    320         MEMOBJ_METADATADB                                            USERSTORE_DBMETADATA
    296         MEMOBJ_INDEXSTATSMGR                                         MEMORYCLERK_SQLOPTIMIZER
    264         MEMOBJ_LBSSCACHE                                             OBJECTSTORE_LBSS
    224         MEMOBJ_XE_ENGINE                                             MEMORYCLERK_XE
    216         MEMOBJ_GLOBALPMO                                             MEMORYCLERK_SQLGENERAL
    208         MEMOBJ_PROCESSRPC                                            USERSTORE_SXC
    200         MEMOBJ_SYSTASKSESSION                                        MEMORYCLERK_SQLCONNECTIONPOOL
    200         MEMOBJ_REPLICATION                                           MEMORYCLERK_SQLGENERAL
    192         MEMOBJ_SOSSCHEDULERTASK                                      MEMORYCLERK_SOSNODE
    176         MEMOBJ_SQLCLRHOSTING                                         MEMORYCLERK_SQLCLR
    168         MEMOBJ_SYSTEMROWSET                                          CACHESTORE_SYSTEMROWSET
    128         MEMOBJ_RESOURCESUBPROCESSDESCRIPTOR                          MEMORYCLERK_SQLGENERAL
    128         MEMOBJ_CACHESTORESQLCP                                       CACHESTORE_SQLCP
    128         MEMOBJ_RESOURCESEINTERNALTLS                                 MEMORYCLERK_SQLSTORENG
    120         MEMOBJ_BLOBHANDLEFACTORYMAIN                                 MEMORYCLERK_BHF
    120         MEMOBJ_SNI                                                   MEMORYCLERK_SNI
    88          MEMOBJ_QUERYNOTIFICATON                                      MEMORYCLERK_SQLOPTIMIZER
    72          MEMOBJ_HOST                                                  MEMORYCLERK_HOST
    72          MEMOBJ_INDEXRECMGR                                           MEMORYCLERK_SQLOPTIMIZER
    64          MEMOBJ_RULETABLEGLOBAL                                       MEMORYCLERK_SQLGENERAL
    56          MEMOBJ_SERVICEBROKER                                         MEMORYCLERK_SQLSERVICEBROKER
    56          MEMOBJ_REMOTESESSIONCACHE                                    MEMORYCLERK_SQLGENERAL
    56          MEMOBJ_PARSE                                                 CACHESTORE_PHDR
    48          MEMOBJ_CACHESTOREBROKERTBLACS                                CACHESTORE_BROKERTBLACS
    48          MEMOBJ_APPENDONLYSTORAGEUNITMGR                              MEMORYCLERK_SQLSTORENG
    40          MEMOBJ_SBASBMANAGER                                          MEMORYCLERK_SQLSERVICEBROKER
    32          MEMOBJ_OPTINFOMGR                                            MEMORYCLERK_SQLOPTIMIZER
    32          MEMOBJ_SBTRANSPORT                                           MEMORYCLERK_SQLSERVICEBROKERTRANSPORT
    32          MEMOBJ_CACHESTOREBROKERREADONLY                              CACHESTORE_BROKERREADONLY
    32          MEMOBJ_DIAGNOSTIC                                            MEMORYCLERK_SQLGENERAL
    32          MEMOBJ_UCS                                                   MEMORYCLERK_SQLSERVICEBROKER
    24          MEMOBJ_STACKSTORE                                            CACHESTORE_STACKFRAMES
    24          MEMOBJ_CACHESTORESXC                                         USERSTORE_SXC
    24          MEMOBJ_FULLTEXTGLOBAL                                        MEMORYCLERK_FULLTEXT
    24          MEMOBJ_APPLOCKLVB                                            OBJECTSTORE_LOCK_MANAGER
    24          MEMOBJ_FULLTEXTSTOPLIST                                      CACHESTORE_FULLTEXTSTOPLIST
    24          MEMOBJ_CONVPRI                                               CACHESTORE_CONVPRI
    16          MEMOBJ_SQLCLR_VMSPY                                          MEMORYCLERK_SQLCLR
    16          MEMOBJ_VIEWDEFINITIONS                                       MEMORYCLERK_SQLOPTIMIZER
    16          MEMOBJ_SBACTIVATIONMANAGER                                   MEMORYCLERK_SQLSERVICEBROKER
    16          MEMOBJ_AUDIT_EVENT_BUFFER                                    OBJECTSTORE_SECAUDIT_EVENT_BUFFER
    16          MEMOBJ_HASHGENERAL                                           MEMORYCLERK_SQLQUERYEXEC
    16          MEMOBJ_SBTIMEREVENTCACHE                                     MEMORYCLERK_SQLSERVICEBROKER
    16          MEMOBJ_ASYNCHSTATS                                           MEMORYCLERK_SQLGENERAL
    16          MEMOBJ_BADPAGELIST                                           MEMORYCLERK_SQLUTILITIES
    16          MEMOBJ_QSCANSORTNEW                                          MEMORYCLERK_SQLQUERYEXEC
    16          MEMOBJ_SCTCLEANUP                                            MEMORYCLERK_SQLGENERAL
    16          MEMOBJ_XP                                                    MEMORYCLERK_SQLXP
    8           MEMOBJ_SECURITY                                              MEMORYCLERK_SQLGENERAL
    8           MEMOBJ_CACHESTOREBROKERRSB                                   CACHESTORE_BROKERRSB
    8           MEMOBJ_EXCHANGEXID                                           MEMORYCLERK_SQLGENERAL
    8           MEMOBJ_CACHESTOREVENT                                        CACHESTORE_EVENTS
    8           MEMOBJ_CACHESTOREXPROC                                       CACHESTORE_XPROC
    8           MEMOBJ_DBMIRRORING                                           MEMORYCLERK_SQLUTILITIES
    8           MEMOBJ_SERVICEBROKERTRANSOBJ                                 CACHESTORE_BROKERTO
    8           MEMOBJ_CACHESTOREOBJCP                                       CACHESTORE_OBJCP
    8           MEMOBJ_CACHESTOREXMLDBELEMENT                                CACHESTORE_XMLDBELEMENT
    8           MEMOBJ_ENTITYVERSIONINFO                                     MEMORYCLERK_SQLSTORENG
    8           MEMOBJ_AUDIT_MGR                                             MEMORYCLERK_SQLGENERAL
    8           MEMOBJ_EXCHANGEPORTS                                         MEMORYCLERK_SQLGENERAL
    8           MEMOBJ_DEADLOCKXML                                           MEMORYCLERK_SQLSTORENG
    8           MEMOBJ_CACHESTORETEMPTABLE                                   CACHESTORE_TEMPTABLES
    8           MEMOBJ_HTTPSNICONTROLLER                                     MEMORYCLERK_SQLHTTP
    8           MEMOBJ_CACHESTOREVIEWDEFINITIONS                             CACHESTORE_VIEWDEFINITIONS
    8           MEMOBJ_CACHESTOREPHDR                                        CACHESTORE_PHDR
    8           MEMOBJ_CACHESTOREXMLDBTYPE                                   CACHESTORE_XMLDBTYPE
    8           MEMOBJ_CACHESTORE_BROKERUSERCERTLOOKUP                       CACHESTORE_BROKERUSERCERTLOOKUP
    8           MEMOBJ_EVENTSUBSYSTEM                                        MEMORYCLERK_SQLGENERAL
    8           MEMOBJ_CACHESTOREBROKERDSH                                   CACHESTORE_BROKERDSH
    8           MEMOBJ_SOSDEADLOCKMONITORXMLREPORT                           MEMORYCLERK_SQLSTORENG
    8           MEMOBJ_CACHESTOREXMLDBATTRIBUTE                              CACHESTORE_XMLDBATTRIBUTE
    8           MEMOBJ_CACHESTOREBROKERKEK                                   CACHESTORE_BROKERKEK
    8           MEMOBJ_QPMEMGRANTINFO                                        MEMORYCLERK_SQLQUERYEXEC
    8           MEMOBJ_CACHESTOREQNOTIFMGR                                   CACHESTORE_NOTIF
    (101 row(s) affected)
    David

  • Backup with Exp - can it be used in a DB with archive log mode?

    I need to know whether, after I activate archive log mode in my database, I can still use all of my earlier backups taken with Exp (full, table and user mode) for recovery.
    Any advice is welcome.
    thanks
    Jimmy

    If you are using archive log mode, you should back up your database by regularly backing up your data files. If you lose a tablespace due to a defective disk, or need to do a point-in-time recovery due to a user error, you restore your latest copy of the relevant data files (for point-in-time recovery, the latest copy taken before the user error) and then roll forward using the archived logs.
    You cannot restore your database from an export and then roll forward using the archived logs.
    HTH
    Marcus Geselle
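
    To make the "regularly backing up your data files" part concrete, RMAN is usually the easiest way to take such physical backups once archive log mode is on. A minimal sketch, assuming RMAN with the default disk channel and a configured retention policy:

    # back up the database together with the archived logs, then prune backups outside the retention policy
    rman target / <<EOF
    BACKUP DATABASE PLUS ARCHIVELOG;
    DELETE NOPROMPT OBSOLETE;
    EOF

    The exp dump remains useful as a logical copy of individual tables or schemas, but as noted above it cannot be combined with the archived logs for roll-forward recovery.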

  • Backup with DB13/BRBACKUP

    Hello everyone!
    I am trying to find out whether and how the folder names created by DB13/BRBACKUP (full and incremental backups) can be changed. With the parameter "backup_root_dir" (default: ORACLE_HOME/sapbackup) you can determine where BRBACKUP should save the database files. However, for a full or incremental backup Oracle creates an oddly named folder inside the "backup_root_dir" directory, and I could not find out how to change the name of that generated directory. I want to plan 4 full backups per month (one per week, on Sundays) named <SID>FullBackup1 through <SID>FullBackup4. The fifth full backup should overwrite the first, and so on. Every backup should be saved to disk, i.e. I am not going to use tapes (that case is an exception). For the incremental backups I have planned a similar scheme: the folders should be named <SID>InkrBackup1 through <SID>InkrBackup6; the numbering goes to 6 instead of 4 because the incremental backups are planned daily from Monday to Saturday. Do you have an idea how to do this? Thanks a lot for your help!

    Not at all :-( Unfortunately, there is no parameter (or at least I couldn't find one) that I could use to define the output directory in which all the database files are saved (in the case of a full backup you will find the control files, log files and database files there). Normally, if you make a backup with BRBACKUP, BRTOOLS or DB13, a directory is created in the path defined by the parameter "backup_root_dir" (normally "<Oracle_Home>/sapbackup"). This directory has a generated name like "bjq..." and contains all the files mentioned above. Do you at least know how to force BRBACKUP to overwrite the last full backup? Thanks for your help!
    Edited by: Attila Zavaczky on Sep 4, 2009 1:39 PM

  • Oracle backup with CA ARCserv question

    Hi
    I back up an Oracle database using CA ARCserve with the agent for Oracle, which appears to work in collaboration with RMAN.
    Just in case, I ran the RMAN 'crosscheck' command and it still listed some archived logs, even after deleting the backup data through ARCserve.
    I thought that running 'crosscheck' would show nothing, because CA ARCserve works in collaboration with RMAN and RMAN would delete all archived logs automatically after the backup.
    Is this a normal situation, or is something wrong with RMAN / ARCserve?
    Is there any way to delete all of this data from the Oracle database after a backup with ARCserve?

    Hi,
    Welcome. You don't provide enough information, and what you are doing is unclear!
    What is your Oracle version?
    Can you post your backup scripts?
    Can you post the exact command and the output you find "strange"?
    Best regards
    Phil
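
    If it turns out that the archived logs really are on the ARCserve tapes and only need to be cleaned up on the database side, the usual pattern is a crosscheck followed by a delete of logs that have already been backed up. A sketch only, assuming the ARCserve Oracle agent backs up through RMAN's sbt channel; verify the version and scripts first, as asked above:

    rman target / <<EOF
    CROSSCHECK ARCHIVELOG ALL;
    DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE sbt;
    EOF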

  • System Backup with database open

    Is it possible to perform a system backup with the database open, just leaving the f:\Oracle folder out of the system backup?

    Thanks for your reply.
    The database is in NOARCHIVELOG mode.
    My customer wants to:
    1) export the database;
    2) perform a system backup with the database open.
    The issue here is whether it is possible to back up the system without disturbing the database. If we do a system backup with the database open, the database stops.
    So if I just leave the f:\Oracle folder out of the system backup, will the database continue to work well?
    We have already tried leaving the folders
    - F:\Oracle\Oradata\db_name
    - F:\Oracle\Ora10g\dbs
    - F:\Oracle\Ora10g\database
    out of the system backup, but the database always stops and human intervention is needed to start it up again.

  • Insufficient space for importing TM backup with migration assistant

    First of all this is not for my main Mountain Lion installation on my Mac Pro, it is on a VM Ware Fusion virtual Mountain Lion installation that is running on my Mac Pro also running Mountain Lion. So this is really more a question dealing with VM Ware Fusion (or Parallels since I have the same issue with that as well) vs. a problem with Mountain Lion. Confused yet? I sure am....
    I am trying to import a Time Machine backup of my previous Snow Leopard drive into my virtual machine. I did not want to import this into my new drive because I am starting fresh and trying to clean up; Snow Leopard had 10 years' worth of cruft I am trying to get rid of, but I would still like to have my old SL system running within VMware Fusion to help with the transition. I have a Mac Pro with a dual-monitor setup, so I want it to run on my second monitor and, very slowly over the next few weeks, pull over the files I want and need instead of just importing a bunch of useless junk.
    Anyway, here is the problem I am having; I cannot seem to get past this hurdle. When I open Migration Assistant inside VMware ML and choose my Time Machine backup of Snow Leopard, everything works fine until it finishes calculating the size. I deselected all the files and folders I possibly could, which leaves 209 GB. Even with every single folder deselected it still gives an insufficient-storage error. I have already resized the VMware drive to 400 GB, which is more than enough. On my real machine, under Finder > Get Info, it shows as 400 GB, but within the virtual machine the Mac HD only shows as 40 GB, which won't allow Migration Assistant to import the backup due to insufficient storage.
    I have a Mac Pro with tons of disk space, so I have plenty of free space to allocate, but I have tried everything I can think of and came up blank. Sorry if my explanation was overly verbose or confusing, but I wanted to give as much detail as possible. I haven't really fiddled with the settings in VMware in a while, so I must be missing something.
    So, how do you import files showing as larger than 40 GB from a Time Machine backup with Migration Assistant without getting an error that there is not enough remaining space? And again, I have already resized the disk to 400 GB; it shows as 400 GB on my real machine but only as 40 GB inside the VMware Mac HD. I also have Parallels, if that is easier for what I am trying to accomplish.

    I just solved the problem! And this is actually a pretty big deal because experts at the VM Ware Fusion website and many other forums told me it was impossible.
    So here is the guide to running Mountain Lion in VM Ware Fusion or Parallels with a partition larger than 40GB and allows you to import an old user account from Tiger, Snow Leopard, Lion, etc..
    1) Make sure to shut down Mountain Lion in VM
    2) Under VM settings expand hard drive to whatever size you like
    3) You will need either a disk image or an actual DVD of Mountain Lion; mount that in your VM and choose it as the startup disk. After rebooting, open Disk Utility and partition the Mac HD. Instead of showing only the 40GB limit, it should now show the size you created in step 2. Close Disk Utility and install Mountain Lion as normal. Reboot and unmount the disk image.
    4) You now have a virtual Mountain Lion with whatever hard disk size you chose.
    5) Open migration assistant and do your time machine back up as normal from previous systems.
    The reasons you might want to do this are many. In my case I wanted to have my old Snow Leopard user account open and running on my second monitor, since I just did a fresh install on my Mac Pro. It is easy to import audio, photos and videos, but there were a lot of other things I wanted to take my time bringing over. There are also many cases where I needed to open up an app and view its settings. Now I have my old machine essentially running at the same time as my new install, side by side, which is fantastic.

  • Backup with Time Machine on 2 different hard drives

    Hello,
    I'd like to know if I can back up *all my data on 2 different external hard drives*. The aim is to have a double backup in case of a big issue.
    I'd like to proceed in this way:
    - one of my backup HDs stays at home and the other is at the office
    - when I plug in one of the backup HDs, its data is updated relative to the previous backup made with this HD (not with the other one)
    - if I back up my MBPro just before leaving the office (on the office HD) and back it up just after arriving home (on the home HD), my 2 HDs should have the same state and exactly the same backed-up data.
    Is it possible ? Otherwise, is there another way to proceed ?
    Thanks for all,
    Malphas

    malphas,
    *Using Time Machine with Multiple Destination Hard Disks*
    Yes, this is possible, as long as you understand how it works. First, Time Machine is only designed to work automatically with one backup disk. If you are going to employ multiple Time Machine disks with a single Mac, then you are going to have to manually select the new drive each time it is attached, using the Time Machine Preferences "Change Disk…" button.
    Secondly, Time Machine *+does not+* ordinarily perform file-by-file comparisons to determine what has changed and thus what needs to be backed up. Rather, Time Machine relies on FSEvents notifications. This is a log that the system uses to keep track of changes to directories. Rather than scan tens of thousands of files for changes each time, Time Machine simply looks at this log and narrows its scan to only the directories that have experienced changes since the last backup.
    Every event that FSEvents records has its own ID, which includes a time stamp. At the end of every backup, Time Machine stores the last event ID that it processed. When the next backup is initiated, Time Machine looks at this stored ID and determines that it only needs to back up changes that have occurred after the time stamp on that last event ID.
    Naturally, for hourly backups, Time Machine does not have to go back very far to find the last known event ID. However, if a Time Machine disk is only being attached for backups every few days, a week, or more, and Time Machine has to go back too far to find the last event ID, then it will give up and simply go into "deep traversal" and do the file-by-file scan on its own. There are simply too many events logged by the system for Time Machine to bother looking for the last known event ID. Consequently, expect lengthy "Preparing Backup…" sessions any time you attach a backup disk infrequently.
    Nevertheless, using Time Machine with multiple destination disks will work. If the two drives are routinely backed up, then they will always remain in relative sync.
    Finally, remember to reselect your normal Time Machine disk in the Preferences when the backup to the secondary disk is complete.
    Hope this helps clarify things a bit for you.
    Cheers!
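
    On newer systems (OS X 10.7 and later) the same disk switch can also be scripted with tmutil instead of opening the Preferences pane each time. A small sketch; the volume name OfficeBackup is an illustrative assumption:

    # point Time Machine at the disk that is currently attached, then run a backup
    sudo tmutil setdestination /Volumes/OfficeBackup
    tmutil startbackup
    # confirm which destination is currently configured
    tmutil destinationinfo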

  • Backup, Archive or Export? iPhoto question

    I don't have that much space on my hard disk and want to keep my photos on DVDs. I've read here that DVD+R is better than -R but what's the best way of storing them? Should I backup, archive or export?
    I am printing contact sheets for each DVD so that once I have them backed up, I will want to remove them from iPhoto and only re-import them when I want to use them.
    I've noticed that if I try and export it will only let me export a maximum of 60 photos, so presumably that's not the way to go? And the Share/Burn only lets me view them in iPhoto and not on any other platform (what happens if Apple replaces iPhoto in years to come??)
    Any suggestions gladly welcomed. Thanks.

    For data disks, it's a tossup between +R and -R as far as I know. If you're burning a video DVD via iDVD then you definitely want -R disks.
    What size are your image files? 60 files on a 4.5 GB disc is about 73 MB each. Are your files that big? Do you want the files to be easily used by iPhoto at a later date and keep their titles, keywords and comments intact? If so, use the Share->Burn menu item and burn either Events or Albums of photos. Then delete those photos from the library when done. The disc will be readable by iPhoto like this and you can copy photos back into the library if needed for printing or some other project.
    If you just want to store the basic image files, then export the photos to a folder on the desktop using File->Export->File Export and select the option to include the titles and keywords. They will be written to the new files and readable by iPhoto if imported back into the library at a later date.
    Do you Twango?
    TIP: For insurance against the iPhoto database corruption that many users have experienced, I recommend making a backup copy of the Library6.iPhoto database file and keeping it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backing up after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That ensures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup and it's good insurance.
    I've created an Automator workflow application (requires Tiger), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. It's compatible with iPhoto 08 libraries and Leopard. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
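
    If you prefer to make that database backup by hand rather than with the Automator workflow, a simple copy is enough. A sketch, assuming the library is in its default location (~/Pictures/iPhoto Library):

    # keep a dated spare copy of the iPhoto database file in the Pictures folder
    cp "$HOME/Pictures/iPhoto Library/Library6.iPhoto" "$HOME/Pictures/Library6.iPhoto.$(date +%Y%m%d)"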

  • Excise invoice is meant for normal exports.

    Hi Gurus,
    In deemed exports: while creating the ARE-3 it is showing the following error message:
    0000000003/2008 excise invoice is meant for normal exports.
    Message no. 4F260
    Please share your ideas on how to go ahead.
    I have used the same excise group but a different series group, and number ranges for that series group.
    Regards
    Sri

    Maintained the number ranges and in J1IIN selected 'exports under bond'.
    It's working fine.
    Best Regards
    Sri

  • Why do my phone's Android apps not work with a normal phone data connection?

    After the 10.3.1.1779 update my Q5 phone has been facing lots of problems. First, I can't download anything from the Amazon app store on a normal data connection; second, Android apps do not work on a normal mobile data connection, they always need Wi-Fi; third, the mobile search engine does not work properly, and the contact book always shows no contacts after a restart (it is fixed, but some time later it starts again). Please also give an update for Facebook. Please, BlackBerry, fix these problems.

    Can I see your /var/log/Xorg.0.log through Pastebin?

  • Many problems with the 'Export to Text' (.txt) in CR Xi

    Hi,
    I have listed many problems with the 'Export to Text' (.txt) function of CR Xi.
    These problems are related to this export format only (meaning everything works fine in the Viewer or in the 'Export to PDF')...
    - Multi-column layouts do not export as multi-column (only one column is exported);
    - Numeric values with parentheses for negative values, or with a fixed currency sign at the leftmost position, are not exported correctly;
    - Fields with a Suppress formula that is "WhilePrintingRecords" do not appear when exported;
    - Fields with 'Suppress double value' checked are not always suppressed when exported to text;
    - The 'Keep Group Together' flag is not working;
    - 'Reset Page Number After' simply does not work when exported to text;
    - 'Keep object together' on TextBox/Section is not working;
    - Whenever a group ends on the last line of a page, the following page has the same group header as the previous group with no records until the page is filled; then the page break and page header are missing but the records of the following group appear.
    I would like to know what is the status of the 'Export to Text' function (is it a deprecated function not supported anymore???).
    If still supported, when will these bugs be fixed???
    Thanks

    Hi Rene
    Export to Text is still supported to date. Crystal Reports 2008 also supports it, with 'Keep Together' working; however, when I tried a format with multiple columns, the columns did not show up in the exported text file.
    Regards
    Sourashree

  • Required to take backup without TSM on the PROD server

    Dear all,
    I need to take a backup without using TSM.
    We have an error in TSM, so the TSM-based backup is failing.
    The PROD server backup normally runs automatically through TSM.
    Now I need to take an online backup without using TSM, like on DEV and QA (manual backup).
    For the DEV and QA servers the backup is manual: I simply load the tape into the tape library and start the online backup; the same for QA.
    So my question is: is it possible to take a backup without the help of TSM on the PROD server?
    2 profiles are configured, for online backup and offline backup.
    Please check the profiles below:
    initSIDdaily.sap -> online backup
    initSIDweekly.sap -> offline backup
    Is there any parameter I need to change?
    Kindly advise
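
    In principle yes: BRBACKUP only talks to TSM when the profile's backup device type points to the backint interface, so the usual change is to switch backup_dev_type (and the tape device parameters) in the online profile and run BRBACKUP against the local tape drive, just as on DEV and QA. A sketch only; the device names and volume names below are assumptions that must match the PROD hardware:

    # relevant lines in initSIDdaily.sap (illustrative values)
    #   backup_dev_type  = tape            # instead of util_file (TSM/backint)
    #   tape_address     = /dev/rmt0.1     # no-rewind device
    #   tape_address_rew = /dev/rmt0
    #   volume_backup    = (IRPB01, IRPB02, IRPB03)

    # then start the online backup manually with that profile
    brbackup -u / -p initSIDdaily.sap -d tape -t online -m all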

    Dear all,
    I have initialised the tape IRPB01; after that I tried to take a backup and got the error below.
    BR0051I BRBACKUP 7.00 (48)
    BR0055I Start of database backup: bedqreyq.ant 2010-07-12 13.19.44
    BR0484I BRBACKUP log file: /oracle/IRP/sapbackup/bedqreyq.ant
    BR0477I Oracle pfile /oracle/IRP/102_64/dbs/initIRP.ora created from spfile /oracle/IRP/102_64/dbs/spfileIRP.ora
    BR0280I BRBACKUP time stamp: 2010-07-12 13.19.45
    BR0319I Control file copy created: /oracle/IRP/sapbackup/cntrlIRP.dbf 15122432
    BR0280I BRBACKUP time stamp: 2010-07-12 13.19.46
    BR0057I Backup of database: IRP
    BR0058I BRBACKUP action ID: bedqreyq
    BR0059I BRBACKUP function ID: ant
    BR0110I Backup mode: ALL
    BR0077I Database file for backup: /oracle/IRP/sapbackup/cntrlIRP.dbf
    BR0061I 42 files found for backup, total size 158434.742 MB
    BR0143I Backup type: online
    BR0112I Files will not be compressed
    BR0130I Backup device type: tape
    BR0102I Following backup device will be used: /dev/rmt0.1
    BR0103I Following backup volume will be used: IRPB01
    BR0280I BRBACKUP time stamp: 2010-07-12 13.19.46
    BR0256I Enter 'c[ont]' to continue, 's[top]' to cancel BRBACKUP:
    c
    BR0280I BRBACKUP time stamp: 2010-07-12 13.19.52
    BR0257I Your reply: 'c'
    BR0259I Program execution will be continued...
    BR0208I Volume with name IRPB01 required in device /dev/rmt0.1
    BR0210I Please mount BRBACKUP volume, if you have not already done so
    BR0280I BRBACKUP time stamp: 2010-07-12 13.19.52
    BR0256I Enter 'c[ont]' to continue, 's[top]' to cancel BRBACKUP:
    c
    BR0280I BRBACKUP time stamp: 2010-07-12 13.19.55
    BR0257I Your reply: 'c'
    BR0259I Program execution will be continued...
    BR0280I BRBACKUP time stamp: 2010-07-12 13.19.55
    BR0226I Rewinding tape volume in device /dev/rmt0 ...
    BR0351I Restoring /oracle/IRP/sapbackup/.tape.hdr0
    BR0355I from /dev/rmt0.1 ...
    BR0241I Checking label on volume in device /dev/rmt0.1
    BR0280I BRBACKUP time stamp: 2010-07-12 13.19.55
    BR0226I Rewinding tape volume in device /dev/rmt0 ...
    BR0202I Saving /oracle/IRP/sapbackup/.tape.hdr0
    BR0203I to /dev/rmt0.1 ...
    BR0209I Volume in device /dev/rmt0.1 has name IRPB01
    BR0202I Saving init_ora
    BR0203I to /dev/rmt0.1 ...
    BR0202I Saving /oracle/IRP/102_64/dbs/initIRPdaily.sap
    BR0203I to /dev/rmt0.1 ...
    BR0280I BRBACKUP time stamp: 2010-07-12 13.19.59
    BR0198I Profiles saved successfully
    BR0280I BRBACKUP time stamp: 2010-07-12 13.20.00
    BR0315I 'Alter tablespace PSAPSR3 begin backup' successful
    BR0202I Saving /oracle/IRP/sapdata1/sr3_1/sr3.data1
    BR0203I to /dev/rmt0.1 ...
    BR0278E Command output of 'LANG=C dd obs=64k bs=64k if=/oracle/IRP/sapdata1/sr3_1/sr3.data1 of=/dev/rmt0.1':
    dd: /oracle/IRP/sapdata1/sr3_1/sr3.data1: Invalid argument
    BR0280I BRBACKUP time stamp: 2010-07-12 13.20.00
    BR0279E Return code from 'LANG=C dd obs=64k bs=64k if=/oracle/IRP/sapdata1/sr3_1/sr3.data1 of=/dev/rmt0.1': 2
    BR0222E Copying /oracle/IRP/sapdata1/sr3_1/sr3.data1 to/from /dev/rmt0.1 failed due to previous errors
    BR0280I BRBACKUP time stamp: 2010-07-12 13.20.02
    BR0317I 'Alter tablespace PSAPSR3 end backup' successful
    BR0056I End of database backup: bedqreyq.ant 2010-07-12 13.20.00
    BR0280I BRBACKUP time stamp: 2010-07-12 13.20.02
    BR0054I BRBACKUP terminated with errors
    BR0292I Execution of BRBACKUP finished with return code 5
    BR0668I Warnings or errors occurred - you can continue to ignore them or go back to repeat the last action
    BR0280I BRTOOLS time stamp: 2010-07-12 13.20.02
    BR0670I Enter 'c[ont]' to continue, 'b[ack]' to go back, 's[top]' to abort:
    Kindly Advise

  • I have just upgraded to Mavericks and have been using Time Machine on an external disk with Snow Leopard.  Can I continue to backup with Time Machine on the same external disk or do I need a new disk since the operating system has changed?


    Hi there,
    I found that Time Machine in Mavericks will sort it all out for you. You shouldn't need to buy another backup drive, unless you have insufficient space left and can't afford to delete what's on there. It should just work fine.

  • How can I create a new TC backup with ethernet, so I don't have to wait two days for a new wireless backup?

    Several times in the last year, I've gotten a message that Time Machine needs to completely redo my backup. Using the wireless connection, this takes almost two days. Is there a way to do the backup with ethernet and then switch back to wireless? Thanks.

    May I know what is needed to make sure the MacBook is able to see Time Capsule on ethernet?
    Connect an Ethernet cable from one of the LAN <-> ports on the 2Wire gateway to the WAN port (circle of dots icon) on the Time Capsule.
    If AirPort Utility cannot "see" the Time Capsule now, you will need to perform a "hard reset" by holding in the reset button for 7-8 seconds or so and then reconfigure the Time Capsule. You can use the same Time Capsule name and password, etc. as before.
    Configure the Time Capsule to "Create a wireless network" and select the Bridge Mode option when it appears during the setup using AirPort Utility.
    Once the Time Capsule is configured, restart the entire network again. Power down everything, start the 2Wire first and then start each other device after that one at a time about a minute apart.
    Now you can connect your Mac directly to one of the LAN <-> ports on the Time Capsule for the backup using another Ethernet cable. In general, about 20-25 GB per hour will transfer.
    The Time Capsule will broadcast a much faster wireless network than the 2Wire can provide, so you might want to leave the setup "as is" for awhile after the first backup is complete. If you decide to use the Time Capsule as your main wireless access point, you would want to turn the wireless off on the 2Wire since two wireless networks in close proximity can create interference problems.
    Or, if you want to use the wireless on the 2Wire, you could turn off the wireless on the Time Capsule. Then backups will occur over the 2Wire wireless, or over Ethernet.
    I don't really recommend the "Join a wireless network" setting on the Time Capsule for most users, but you could go back to that setup as well if you want after the first backup is complete.

Maybe you are looking for

  • Bluetooth Mouse no longer recognized

    I connected a (Gigabyte) bluetooth mouse and used it for several weeks on my MBP, in both OS X and Windows 7, with no problems. Recently, it stopped connecting in both OS X and Windows 7, so I've tried several solutions that I've found (Removing Libr

  • How do I load iWork from old imac to my new one?

    I just purchased a new iMac.  I had iWork '08 on my old one.  How do I install iWork that I already own onto my new computer?  I have discs, but of course the new computer doesn't have a disc drive.

  • JSP iframe src = 'a file that is created by another JSP'  Error

    I have a JSP file that has an iframe into which, I want to load a .pdf file. Firstly, the iframe should contain nothing(and it does so), and when I press a button, another JSP is called, which creates a .pdf file and puts it in the WEB directory of t

  • Creating absence

    Hi experts, I have this error when creating an absence: "Only records of less than one day allowed for attendance/absence type 0190". Please help! Thanks

  • Please help: Is U400 the way to go? Urgent

    Hi, I currently have 3 options open for buying a new Laptop. Keep in mind that I'll be buying this from the US, but will be used in India and hence the quality of hardware right of the box is important as I mostly will not have warranty back home. 3