HANA - backup at User level

Hi..
Is it possible in SAP HANA to take a backup at the user level (i.e., back up a specific user, including schemas, objects, constraints, etc.)? Up to release SPS 07, I can only see the "full DB backup" option available.
Thanks..

Hi Virendra,
No - like most DBMSs, SAP HANA does not support user- or schema-based backup/recovery.
You may, however, perform an export/import to store a snapshot of a specific user's data (see the sketch below).
- Lars
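
For example, a schema-level export and re-import in HANA SQL might look like the following. This is a minimal sketch only: the schema name and the server-side export path are assumptions, and the path must be writable by the HANA server process.

EXPORT "MYSCHEMA"."*" AS BINARY INTO '/usr/sap/export/myschema' WITH REPLACE THREADS 4;
-- ...later, on the target system (or after dropping the schema):
IMPORT "MYSCHEMA"."*" FROM '/usr/sap/export/myschema' WITH REPLACE THREADS 4;

Note that this captures the schema's catalog objects and data, but it is not a substitute for a point-in-time recoverable database backup.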

Similar Messages

  • Ideas for Providing User Level Data Backup and Restore

    I'm looking for ideas for implementing a user level application data backup and restore in an Apex app.
    What would be great is to have a user be provided an export file and a way to import this file. A bit overkill but hopefully never needed.
    Another option that is perfectly doable is a report that simply provides a means to create an export of the data. Since I already have an import interface, I can feed such an export file back through it.
    Any thoughts?
    Hopefully I'm missing something already there for an end user to use.

    jlincoln wrote:
    "Do you mean "export" and "import" colloquially, or in the specific sense of the exp/imp/datapump utilities?": I mean as in the imp/exp Oracle utilities. Generally speaking, it would be neat to be able to export and import via an Apex application. In this hosted environment I don't have that access, but would this be a bad idea if you don't care about the existing data in the schema in which the data resides?
    I can envisage a mechanism using exp/imp, but since it requires dbms_scheduler external jobs and access to the file system it's highly unlikely to be possible in a hosted environment. (Unless you're doing the hosting?)
    Backup: Necessary for peace of mind and flexibility. I am working with a VB/Access user who does this today, to get them to the point where they can be comfortable with backups occurring regularly and being handled by the hosting site's DBA group.
    Restore: Like I said, I am working with a VB/Access user who does this today, to get them to the point where they can be comfortable with backups occurring regularly and being handled by the hosting site's DBA group. This is a very small data set. A restore would simply remove existing data and replace it with the new data.
    My opinion is that time would be better spent working on the user rather than on a redundant backup and restore feature. Involve them in a disaster recovery exercise with whoever is hosting the environment to prove that their data is safe. Normally the inclusion of data in regular, effective database backups is sold as a major feature of APEX solutions.
    "What about security/privacy when this data ends up in uncontrolled environments?": I don't understand the point of this question. The data should not end up in uncontrolled environments. Just like the data in the database or its backups.Again, having data in a central, shared location protected by multiple levels of application, database, and OS security is usually seen as a plus for APEX over VB/Access. Exporting the data in toto to a PC/laptop that can be stolen or lost, and where it can be copied to USB drives/phones/email loses this protection.
    User Level: Because the end user must have access to the backup and restore mechanisms of the application.
    Application Data: The application data - less than 10MB, very small. It can be exported as a flat file downloaded by the end user. This file can then be uploaded and imported via an existing application interface, for example.
    "I'm struggling to parse this for meaning.": When I say I have an existing interface I am referring to a program residing in the Apex application that will take data from a flat table structure (i.e. interface table), validate the data, derive data, and load into the target table structure.Other than the report export capability linked to above, there's nothing built-in to APEX that comes close to your requirement. If the data is simple enough that it can be handled in such a report, and you have a process that can read and recreate this export, then you have your backup/restore capability. If the data can't be handled in a simple report, then you'll need a more complex PL/SQL process to generate the file.

  • Command to take a user-level backup (tables and data in tables for a particular user)

    Hi,
    I need to create a SQL script to take a user-level backup, i.e. I need to export the tables, and the data in those tables, for a particular user to my PC.
    I am able to do this with a unix command, but not in a SQL script.
    Please advise.

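    One way to drive this from a SQL script is to let SQL*Plus shell out to the exp utility with HOST, and run the script from a client installation on your PC so the dump file is written locally. A minimal sketch - the TNS alias, credentials, schema name, and paths are all assumptions:

    -- export_user.sql: run as "sqlplus /nolog @export_user.sql" on the PC where exp is installed
    -- SQL*Plus cannot export data itself; it only shells out to the exp utility here.
    HOST exp system/manager@ORCL owner=SCOTT file=C:\backup\scott.dmp log=C:\backup\scott_exp.log
    EXIT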

  • Backup RMAN Incremental Level 1 without Level 0 - 10gR2

    Hi,
    I'm a bit confused after some tests.
    The question: is it mandatory to perform an incremental level 0 backup before a level 1 backup on Oracle 10gR2?
    On Oracle 11.2 it is mandatory, but on Oracle 10.2.0.5 I don't know.
    Test on 11.2: if no level 0 exists, RMAN takes care of it and automatically performs a level 0 before the level 1.
    RMAN> backup incremental level 1 database;
    Starting backup at 29-APR-11
    using channel ORA_DISK_1
    no parent backup or copy of datafile 1 found
    channel ORA_DISK_1: starting incremental level 0 datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    ... The docs say:
    Incremental backups capture only those blocks that change between backups in each datafile.
    In a typical incremental backup strategy, a level 0 incremental backup is used as a starting point. A level 0 backup captures all blocks in the datafile.
    So, on Oracle 10.2.0.5 this does not happen like it does on 11.2.
    Performing a level 1 backup without a level 0:
    RMAN> list backup;
    RMAN> backup incremental level 1 database;
    Starting backup at 29-APR-11
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting incremental level 1 datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00001 name=+DG_ORCL/db10g/datafile/system.260.749756975
    channel ORA_DISK_1: starting piece 1 at 29-APR-11
    channel ORA_DISK_1: finished piece 1 at 29-APR-11
    piece handle=+DG_FRA/db10g/backupset/2011_04_29/nnndn1_tag20110429t190340_0.260.749761421 tag=TAG20110429T190340 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
    channel ORA_DISK_1: starting incremental level 1 datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    including current control file in backupset
    including current SPFILE in backupset
    channel ORA_DISK_1: starting piece 1 at 29-APR-11
    channel ORA_DISK_1: finished piece 1 at 29-APR-11
    piece handle=+DG_FRA/db10g/backupset/2011_04_29/ncsnn1_tag20110429t190340_0.262.749761449 tag=TAG20110429T190340 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:05
    Finished backup at 29-APR-11
    RMAN> backup archivelog all delete input;
    Starting backup at 29-APR-11
    current log archived
    using channel ORA_DISK_1
    Finished backup at 29-APR-11
    RMAN> shutdown abort;
    Oracle instance shut down
    Delete all files on ASM (except the SPFILE):
    ASMCMD> cd +DG_ORCL/DB10g
    ASMCMD> ls
    PARAMETERFILE/
    spfiledb10g.ora
    So, let's perform a restore of the database:
    oracle@butao:/home/oracle> rman target /
    Recovery Manager: Release 10.2.0.5.0 - Production on Fri Apr 29 19:06:52 2011
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    connected to target database (not started)
    RMAN> startup nomount
    Oracle instance started
    Total System Global Area     293601280 bytes
    Fixed Size                     2095872 bytes
    Variable Size                 92275968 bytes
    Database Buffers             192937984 bytes
    Redo Buffers                   6291456 bytes
    RMAN> restore controlfile from '+DG_FRA/db10g/backupset/2011_04_29/ncsnn1_tag20110429t190340_0.262.749761449';
    Starting restore at 29-APR-11
    using channel ORA_DISK_1
    channel ORA_DISK_1: restoring control file
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:05
    output filename=+DG_ORCL/db10g/controlfile/current.263.749761699
    output filename=+DG_FRA/db10g/controlfile/current.263.749761699
    Finished restore at 29-APR-11
    RMAN> startup mount
    database is already started
    database mounted
    released channel: ORA_DISK_1
    RMAN> restore database;
    Starting restore at 29-APR-11
    Starting implicit crosscheck backup at 29-APR-11
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=156 devtype=DISK
    Crosschecked 1 objects
    Finished implicit crosscheck backup at 29-APR-11
    Starting implicit crosscheck copy at 29-APR-11
    using channel ORA_DISK_1
    Finished implicit crosscheck copy at 29-APR-11
    searching for all files in the recovery area
    cataloging files...
    cataloging done
    List of Cataloged Files
    =======================
    File Name: +dg_fra/DB10G/BACKUPSET/2011_04_29/ncsnn1_TAG20110429T190340_0.262.749761449
    File Name: +dg_fra/DB10G/BACKUPSET/2011_04_29/annnf0_TAG20110429T190442_0.264.749761485
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to +DG_ORCL/db10g/datafile/system.260.749756975
    restoring datafile 00002 to +DG_ORCL/db10g/datafile/undotbs1.261.749757085
    restoring datafile 00003 to +DG_ORCL/db10g/datafile/sysaux.262.749757095
    restoring datafile 00004 to +DG_ORCL/db10g/datafile/users.264.749757107
    channel ORA_DISK_1: reading from backup piece +DG_FRA/db10g/backupset/2011_04_29/nnndn1_tag20110429t190340_0.260.749761421
    channel ORA_DISK_1: restored backup piece 1
    piece handle=+DG_FRA/db10g/backupset/2011_04_29/nnndn1_tag20110429t190340_0.260.749761421 tag=TAG20110429T190340
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:26
    Finished restore at 29-APR-11
    RMAN> recover database;
    Starting recover at 29-APR-11
    using channel ORA_DISK_1
    starting media recovery
    archive log thread 1 sequence 27 is already on disk as file +DG_FRA/db10g/onlinelog/group_3.259.749756971
    archive log thread 1 sequence 28 is already on disk as file +DG_FRA/db10g/onlinelog/group_1.257.749756963
    archive log filename=+DG_FRA/db10g/onlinelog/group_3.259.749756971 thread=1 sequence=27
    archive log filename=+DG_FRA/db10g/onlinelog/group_1.257.749756963 thread=1 sequence=28
    media recovery complete, elapsed time: 00:00:02
    Finished recover at 29-APR-11
    RMAN> alter database open resetlogs;
    database opened
    RMAN>
    See, I don't need a level 0 backup in order to restore from the level 1.
    Thanks,
    Levi Pereira

    Hi Gokhan,
    Thank you for pointing this out.
    After spending some time studying this, I found the following:
    Your explanation applies only to Oracle 10gR1/R2.
    There are differences between RMAN in 10gR1/R2 and 11gR1/R2 regarding incremental level 1 backups, and this is what confused me.
    Oracle 10gR1/R2 runs only one backup (incremental level 1) even if no level 0 exists.
    Oracle 11gR1/R2 runs two incremental backups if no level 0 exists, i.e. a level 0 first and then a level 1.
    Oracle 10gR2
    If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility is >=10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN.
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup004.htm
    That means RMAN runs a level 0 but labels it level 1 (i.e. only one backup). This is confusing.
    Oracle 11gR1
    If no level 0 backup is available in either the current or parent incarnation, then the behavior varies with the compatibility mode setting. If compatibility is >=10.0.0, RMAN copies all blocks that have been changed since the file was created. Otherwise, RMAN behaves as it did in previous releases, by generating a level 0 backup.
    http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmcncpt.htm#BRADV89500
    Oracle 11gR2
    If no level 0 backup is available in either the current or parent incarnation, then the behavior varies with the compatibility mode setting. If compatibility is >=10.0.0, RMAN copies all blocks that have been changed since the file was created. Otherwise, RMAN generates a level 0 backup.
    http://download.oracle.com/docs/cd/E11882_01/backup.112/e10642/rcmcncpt.htm#BRADV89500
    That means RMAN automatically runs a level 0 first and, after the level 0 finishes, runs the level 1 backup (i.e. two backups). This seems right.
    Regards,
    Levi Pereira
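
    For reference, the usual two-step strategy looks the same on both releases; the difference discussed above only concerns what happens when the level 0 in the first line is missing. (Plain RMAN commands, no site-specific options assumed.)

    RMAN> backup incremental level 0 database;               # baseline: every used block
    RMAN> backup incremental level 1 database;               # blocks changed since the most recent level 0 or 1
    RMAN> backup incremental level 1 cumulative database;    # optional: blocks changed since the last level 0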

  • Can we create wallet at User Level to implement TDE in Oracle 10g

    Hi
    I am going to use the Oracle 10g TDE security feature for data security. I have gone through lots of documents. Everywhere, opening or closing a wallet is described at the system level (i.e. ALTER SYSTEM), which would mean that no one except the DBA can see the encrypted columns.
    But my requirement is a bit different: I want to encrypt columns based on the user.
    Let's take an example: suppose we have one table TEST with columns C1, C2, C3, C4, C5, C6 and users U1, U2, U3. I want to encrypt C1 and C3 for U1, C2 and C5 for U2, and C4 and C6 for U3, so that each of U1, U2, and U3 can see all columns except the ones encrypted for them.
    My question is: can we apply TDE at the user level rather than the system level?
    Any ideas or thoughts would be appreciated.
    Thanks in advance.
    Anwar

    The idea of TDE is to provide data protection on storage media, so when your backup tapes drop from the truck or the hard disk of a stolen laptop is sold online, encrypted data remains encrypted and can't be read by anyone.
    It seems to me that you are trying to achieve access control through encryption, which you don't need: if users have sufficient privileges or a business need to see data, then they should be granted access and see the data decrypted. Otherwise, access control mechanisms (roles, views, VPD, OLS) should kick in and hide the rows from them.
    So, for the day-to-day business of your database, the wallet needs to be open so that the database can decrypt data for users who have been granted access to credit card numbers etc.; access to the credit card numbers they are not allowed to see is then limited with other measures. There is a little hands-on for TDE and VPD here:
    http://www.oracle.com/technology/obe/10gr2_db_vmware/security/tde/tde.htm
    Hope this helps,
    Peter
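
    To make that concrete, a minimal 10gR2 sketch might look like the lines below: TDE encrypts the columns on disk, while per-user visibility is handled with ordinary views and grants (or VPD/OLS). The wallet location configured in sqlnet.ora, the password, and the object names are assumptions.

    -- One-time wallet setup (a wallet location must already be configured in sqlnet.ora)
    ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "WalletPwd1";

    -- Encrypt the sensitive columns at rest
    ALTER TABLE test MODIFY (c1 ENCRYPT USING 'AES192');
    ALTER TABLE test MODIFY (c3 ENCRYPT USING 'AES192');

    -- Per-user visibility is access control, not encryption: expose only the allowed columns
    CREATE VIEW test_for_u1 AS SELECT c2, c4, c5, c6 FROM test;
    GRANT SELECT ON test_for_u1 TO u1;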

  • Microsoft Lync Server 2013, Backup Service user store backup module detected items having pool ownership conflict during import.

    Dear Team,
    I have two Enterprise Lync 2013 pools, abcPool and abcPool1. abcPool1 has two servers, Server1 and Server2, and abcPool has one FE server named "Server3". They have pool pairing.
    Replication was fine between them when I had only one FE server in each pool. One day the FE service broke on one of the FE servers in abcPool1 and failed to start, so I had to fail over to the other pool; at that time I introduced one more FE in abcPool1, which is why there are now two FEs in that pool. The Server1 FE service was resolved by reinstalling the binaries. However, after that I am unable to get the backup service state back to normal. I tried the articles below with no luck:
    http://social.technet.microsoft.com/Forums/lync/en-US/0403621e-26b6-4cd0-bbca-8534a20de665/backup-service-pool-ownership-conflict-during-import?forum=lyncdeploy 
    http://msucmenow.blogspot.in/2013/05/troubleshooting-lync-2013-pool-pairing.html
    "Event on Server 1"
    Log Name:      Lync Server
    Source:        LS Backup Service
    Date:          1/21/2014 8:02:33 AM
    Event ID:      4073
    Task Category: (4000)
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      ABC.net
    Description:
    Microsoft Lync Server 2013, Backup Service user store backup module detected items having pool ownership conflict during import.
    Items having pool ownership conflict: 
    ItemId: 1b3be172-b121-43cf-bd4e-b3d368eae6a9, DocId: 7972, DocName: urn:hcd:[email protected]
    ItemId: 1b3be172-b121-43cf-bd4e-b3d368eae6a9, DocId: 7973, DocName: urn:lcd:[email protected]
    ItemId: 1b3be172-b121-43cf-bd4e-b3d368eae6a9, DocId: 7974, DocName: urn:upc:[email protected]
    PS C:\Users\lyncadmin> Get-CsBackupServiceStatus -PoolFqdn pool1.net | fl
    ActiveMachineFqdn   : abc1.net
    OverallExportStatus : SteadyState
    OverallImportStatus : ErrorState
    BackupModules       : {UserServices.PresenceFocus:[SteadyState,ErrorState],
                          ConfServices.DataConf:[FinalState,NormalState],
                          CentralMgmt.CMSMaster:[FinalState,NotInitialized]}
    The following error appears in the "Lync Server" logs on Server3 in abcPool:
    Log Name:      Lync Server
    Source:        LS Backup Service
    Date:          1/21/2014 9:37:47 AM
    Event ID:      4069
    Task Category: (4000)
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:     SQL1.net
    Description:
    Microsoft Lync Server 2013, Backup Service user store backup module encountered an exception that was handled gracefully when importing document batch.
    Batch file: UserServices\PresenceFocus\1-UserServices-8\Data\488bc218-9954-4caf-a5da-89efdb7b85a7_0_1562.xml.
     Exception: System.Data.SqlClient.SqlException (0x80131904): Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.Batch' directly or indirectly in database 'rtcxds' to update, delete, or
    insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
    Log Name:      Lync Server
    Source:        LS Backup Service
    Date:          1/21/2014 9:52:45 AM
    Event ID:      4064
    Task Category: (4000)
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:     SQL1.net
    Description:
    Microsoft Lync Server 2013, Backup Service user store backup module encountered an exception that was handled gracefully during export.
    Additional Message: 
     Exception: System.IO.IOException: The process cannot access the file '\\SQl1.net\LyncShare\1-BackupService-10\BackupStore\UserServices\PresenceFocus\Cookie\Cookie.zip' because it is being used by another process.
       at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
       at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath,
    Boolean checkHost)
       at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy)
    Praveen | MCSE Messaging 2003

    When you add a new FE to pool abcPool1, please check that you have run the following:
    <system drive>\Program Files\Microsoft Lync Server 2013\Deployment\Bootstrapper.exe
    For the details, check
    http://technet.microsoft.com/en-us/library/jj204773.aspx
    Lisa Zheng
    TechNet Community Support
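
    Once Bootstrapper.exe has been run on the new FE, it may also help to force a backup service resync and re-check the status. This is a sketch only: the pool FQDN is a placeholder and the cmdlet parameters are assumed from the Lync Server 2013 management shell.

    Invoke-CsBackupServiceSync -PoolFqdn <poolFqdn>          # force the backup service to resync
    Get-CsBackupServiceStatus -PoolFqdn <poolFqdn> | fl      # confirm OverallImportStatus leaves ErrorState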

  • How to refresh user level schema in apps

    Guys.
    How do I refresh a user-level schema in Oracle Apps? My DB is 10g, the Apps version is 11.5.10.2, and the OS is HP-UX. Can anyone give me a detailed explanation?
    Thanks

    I would recommend Data Pump over Export/Import because it is faster and allows you to restart the job in case it fails.
    In addition to Export/Import/Datapump, you can use Transportable Tablespaces with RMAN.
    Creating Transportable Tablespace Sets from Backup with RMAN
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/ontbltrn.htm#CACDBIHC
    Note: 371556.1 - How move tablespaces across platforms using Transportable Tablespaces with RMAN
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=371556.1
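
    A minimal Data Pump sketch for a schema-level refresh - credentials, the schema name, and the directory object are assumptions, and DATA_PUMP_DIR must point to a server-side directory:

    expdp system/<password> SCHEMAS=APPS DIRECTORY=DATA_PUMP_DIR DUMPFILE=apps.dmp LOGFILE=apps_exp.log
    impdp system/<password> SCHEMAS=APPS DIRECTORY=DATA_PUMP_DIR DUMPFILE=apps.dmp LOGFILE=apps_imp.log TABLE_EXISTS_ACTION=REPLACE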

  • Backup from db2 level hanging

    Dear experts.
    My SAP system is installed on an HP IVM (virtual server), and I have an NFS file system /backup mounted in the base OS for taking backups.
    We are using SAN.
    The issue is that when I take the backup at the db2 level, it creates a file in that location but does not write anything to it; even after 5-6 hours it is still 0 KB and the backup just hangs. My backup via Symantec NetBackup runs fine.
    I also found that whenever I execute the backup command from db2, no other db2 command works afterwards - it says the backup is not finished.
    I then have to reboot the OS; only then does it work again.
    Please suggest.
    thanks
    sadiq

    Hi,
    Could you please check whether db2<sid> user has write permission on this folder ?
    Also, could you please paste db2diag.log when you run the backup and it hangs ? Please also paste output of below command when you start the backup:
    db2 list utilities show detail
    Please also let us know which backup you are scheduling from the db2 command line.
    Thanks
    Sunny
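
    As a quick sanity check, something along these lines can be run as the instance owner. The SID, mount point, and options are assumptions; add the ONLINE keyword only if log archiving is configured, otherwise the backup is offline.

    su - db2<sid>
    touch /backup/write_test && rm /backup/write_test      # confirm the NFS mount is writable
    db2 backup database <SID> to /backup compress          # backup to the NFS share
    db2 list utilities show detail                         # in a second session: watch progress / detect a hang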

  • Make all the forms at a user level or responsibility level to be read only

    Hi,
    Please suggest how to make all forms read-only at the user level or the responsibility level, so that when a particular user logs in he gets all forms in read-only mode, or all forms under a particular responsibility are read-only and we can attach that responsibility to the user for the same purpose.
    Any ideas will be highly appreciated.

    check this blog,
    http://www.oracleappshub.com/11i/oracleapps-responsibility-vs-sap-functions/
    Re: How to change OM responsibility as read-only in oracle applications 11i
    read only responsibility-user

  • How to create a profile value at user level programmatically

    Dear all,
    I want to create a profile value at the user level programmatically. I referred to the developer guide and tried to use fnd_profile.put() to create a new value.
    But I found that the value is only set at the session level and is not inserted into the base tables.
    Does anyone know how to achieve this in PL/SQL?
    Any idea is appreciated.
    Best Regards,
    Kenny

    Check Note: 364503.1 - How to Set a System Profile Value Without Logging in to the Applications
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=364503.1
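
    The note describes FND_PROFILE.SAVE, which (unlike FND_PROFILE.PUT) writes the value to the base tables. A minimal sketch; the profile option name and user_id are assumptions, and the exact parameter list can vary slightly between releases:

    BEGIN
      IF fnd_profile.save(
           x_name        => 'XX_MY_PROFILE',  -- internal profile option name (placeholder)
           x_value       => 'Y',
           x_level_name  => 'USER',
           x_level_value => '1234')           -- FND user_id of the target user (placeholder)
      THEN
        COMMIT;                               -- SAVE does not commit by itself
      END IF;
    END;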

  • Error while performing Risk Analysis at user level for a cross system user

    Dear All,
    I am getting the error below while performing risk analysis at the user level for a cross-system (Oracle) user.
    The error is as follows:
    "ResourceException in method ConnectionFactoryImpl.getConnection(): com.sap.engine.services.connector.exceptions.BaseResourceException: Cannot get connection for 120 seconds. Possible reasons: 1) Connections are cached within SystemThread(can be any server service or any code invoked within SystemThread in the SAP J2EE Engine), 2) The pool size of adapter "SAPJ2EDB" is not enough according to the current load of the system or 3) The specified time to wait for connection is not enough according to the pool size and current load of the system. In case 1) the solution is to check for cached connections using the Connector Service list-conns command, in case 2) to increase the size of the pool and in case 3) to increase the time to wait for connection property. In case of application thread, there is an automatic mechanism which detects unclosed connections and unfinished transactions.RC:1
    Can anyone please help.
    Regards,
    Gurugobinda

    Hi,
    Check SAP Note 1121978 - Recommended settings to improve performance of risk analysis.
    Check for the following...
    CONFIGTOOL>SERVER>MANAGERS>THREADMANAGER
    ChangeThreadCountStep =50
    InitialThreadCount= 100
    MaxThreadCount =200
    MinThreadCount =50
    Regards
    Gangadhar

  • LaserJet P1505n printing slow just for user-level accounts in Win7

    I have several workstations running Win7 Pro 64-bit that have been installed as replacements for XP machines.  All of them print to one of several P1505n printers, and are using the latest drivers from HP.  Under XP there were no problems printing to these printers, but the Win7 machines have significant delays when trying to print.  The Windows test page prints instantaneously, but printing from any other application has a delay of up to a full minute before the job begins to print.  Once the job prints, it prints without issue.
    One thing that I have noticed during my testing seems to point to permissions.  If I am logged in using my admin-level account, everything prints as it should, with no delays at all.  Once I log in with a user-level account, however, the delays begin.  I found the driver files at C:\Windows\System32\spool\drivers\x64\3, but giving "everyone" full control over those files does not help.
    Is there anything else that I should be looking at?
    Thanks in advance!
    Donny

    In the end, I was able to resolve the problem by installing the Vista x64 drivers.  No playing with permissions necessary.

  • Problem in Import DMP-User Level to another tablespace

    Hi,
    I have a problem recovering a user's objects from an exp logical dump file. I am using Oracle version 8.1.7.2. I took an exp backup of user 'john' (note: all of john's objects are in the "KATTY" tablespace). I tried to import it into another database where there is no KATTY tablespace. The operation did not succeed because the tablespace does not exist in the target database. In this situation I don't want to create a tablespace called "KATTY" to make it succeed, since there is no space on my Sun box. Can someone suggest an alternate method using imp (without creating the tablespace)?

    Create your new user john in the new database prior to the import, and ensure that user john has a default tablespace in the new database. When the import runs, if it cannot find the original tablespace, it will create the objects in the default tablespace (see the sketch below).
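
    A sketch of that approach - names, passwords, and tablespaces are assumptions; the key point is that john must have a usable default tablespace and no quota elsewhere, so imp falls back to the default:

    CREATE USER john IDENTIFIED BY john
      DEFAULT TABLESPACE users
      QUOTA UNLIMITED ON users;
    GRANT CONNECT, RESOURCE TO john;
    REVOKE UNLIMITED TABLESPACE FROM john;   -- RESOURCE grants this system privilege on old releases

    -- then, from the OS:
    -- imp system/<password> fromuser=john touser=john file=john.dmp log=john_imp.log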

  • Time Machine does not backup home/user directory (on separate drive)

    I recently installed a SSD into my Mini. Due to size restrictions, my home/user directory has to be kept on another drive. I retained the stock 1TB drive that came with the Mini for this.
    OK, I installed the SSD and restored a Time Machine backup (sans user data). I used a different admin user and configured my user to use the 1TB drive for its home directory (/Volumes/1TB/home/<user>). Restart, log in as my user, all is good. All data, settings, etc. are there. Everything looks normal.
    Time Machine REFUSES to backup this directory. It will backup the 1TB drive and anyting else I create in it, but not the home directory. I tried every permission trick I could think of or found online. I even tested it further by formatting the 1TB drive fresh, adding a new user, configuring the user to use the 1TB for their home directory and it still won't back it up (this was a test of permissions the OS set, to make sure I didn't change my data perms somewhere along the way). Time Machine would not backup the new user's home directory on the 1TB drive.
    Any thoughts? I can't be the first person to have their home directory on a non-OS drive.
    If I were to create a folder/file in /Volumes/1TB/<test file> ... Time Machine gets it perfect. It just will NOT touch /Volumes/1TB/home/<anything here>
    Thanks!

    Open the Time Machine preference pane and unlock the settings, if necessary. Click the Options button. If there is one particular folder with items that are not being backed up reliably, add it to the list of excluded items. If there are many such folders, add your home folder to the list, or add a whole volume (i.e., what Apple calls a "disk.") Save the changes.
    Start a backup, or wait for one to happen automatically. When it's done, open the preference pane again and remove the exclusion(s) you made earlier. Back up again and see whether there's a change.

  • Windows Server 2012 Group Policy Block USB Storage devices @ User Level Not getting applied on a Domain Client machine with Windows Server 2008 R2. Why?

    Hello,
    I have a Windows Server 2012 R2.
    I have configured the Group Policy on it to block the usage of USB storage devices at the user level on the client machines. It works properly for my Windows 7 client machines, but it's not working on one of the machines that has Windows Server 2008 R2 installed (this machine is also a domain client in the same domain).
    I will really be thankful if anyone can suggest some solution to this issue.
    Please feel free to write back in case I have missed sharing anything obvious.
    Thanks!
    -Vinay Pugalia

    Hi,
    Any update?
    Just checking in to see if the suggestions were helpful. Please let us know if you would like further assistance.
    Best Regards,
    Andy Qi
    TechNet Subscriber Support
