Enabling deduplication on existing DPM storage

Hi,
We have a DPM 2012 SP1 server running on Windows 2012, with two locally attached disks (3.7 TB + 6.5 TB) assigned to the DPM storage pool for storing backups. Currently 1.9 TB of free space is left in the storage pool, so we are thinking of enabling data deduplication on the DPM storage. Please help me with the queries below.
1) Is it possible to enable deduplication on the DPM storage pool?
2) If yes, is there any benefit or impact?
Thanks,
Umesh.S.K

Hi Umesh, thanks for the question.
I have been looking through the TechNet articles for DPM 2012 SP1, and they only talk about the ability to back up deduplicated volumes. I know DPM 2012 and 2012 R2 can back up deduplicated volumes, but they don't offer deduplication themselves.
I have also looked at Server 2012 and 2012 R2 with regard to deduplication of DPM storage pools, and as outlined in the TechNet post below, it was not possible with either at the time of its writing.
https://social.technet.microsoft.com/Forums/en-US/ae2dd0b6-27a1-4a5c-a10a-b751dd8c0ff8/dpm-2012-r2-storage-pool-deduplication?forum=dpmstorage
However, the blog post below, from earlier this year, announces the new ability to deduplicate DPM storage pools under very specific configurations. This scenario is enabled by the combination of DPM 2012 R2 UR4 and Windows Server 2012 R2 hosts and file servers with the latest KBs applied.
Essentially, it looks to me as though allowing DPM to back up to VHDX files on a Scale-Out File Server (SOFS) just means you can use the same deduplication technology that is used for VDI VHDX files shared on a SOFS, so it is still a fairly specific configuration.
http://blogs.technet.com/b/dpm/archive/2015/01/06/deduplication-of-dpm-storage-reduce-dpm-storage-consumption.aspx
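If it helps, the general shape of that configuration on the volume hosting the DPM VHDX files looks something like the sketch below. This is only a sketch based on my reading of the blog post; "E:" is a placeholder for the SOFS volume hosting the VHDX files, and the exact tuning values should come from the blog post itself.

# Sketch only: run on the node that owns the volume hosting the DPM VHDX files.
# "E:" is a placeholder; adjust settings to match the guidance for your environment.
Enable-DedupVolume -Volume "E:"

# Let dedup process open and partially written VHDX files, and don't wait for files to age
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 0 -OptimizeInUseFiles -OptimizePartialFiles

# Check savings once an optimization job has run
Get-DedupStatus -Volume "E:" | Format-List Volume, SavedSpace, OptimizedFiles, InPolicyFiles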
A couple of people I know are hoping this changes, but unless I'm wrong, I think this is the case as it currently stands.
If this is not the case, it would be good to hear about it.
Kind regards, Michael

Similar Messages

  • DPM Storage Pool on Windows Server 2012R2 iSCSI target Storage Spaces - supported?

    Hi,
    for migration purposes we need more space for one of our DPM 2012 R2 servers,
    and we cannot buy more space for our NetApps.
    So our idea was to use a Windows Server 2012 R2 machine as an iSCSI target and use Storage Spaces.
    I remember something about using an iSCSI target on Windows 2008 not being supported.
    Is this configuration supported when using DPM 2012 R2 on Windows 2012 R2 and the target is Windows 2012 R2 as well?
    Thanks in advance
    regards
    /bkpfast
    My postings are provided "AS IS" with no warranties and confer no rights

    Hi,
    This talks about virtual DPM servers, but the same is true for physical DPM servers when it comes to using .VHD(X) files for the DPM storage pool (a quick check of a host volume against a couple of the points below is sketched after the list).
    Virtual DPM installations do not support the following:
    Windows 2012 Storage Spaces.
    Virtual hard drives built on top of storage spaces.
    Local or remote hosting of VHDX files on Windows 2012 storage spaces.
    Enabling Disk Dedupe on volumes hosting virtual hard drives.
    Using synthetic FC to connect to tape drives.
    Windows 2012 iSCSI targets (which use virtual hard drives) as a DPM storage pool.
    NTFS compression for volumes hosting VHD files used in the DPM storage pool.
    Bitlocker on volumes hosting VHD files used for the storage pool.
    A native 4K sector size of physical disks for VHDX files in the DPM storage pool.
    Virtual hard drives hosted on Windows 2008 servers.
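    For reference, a minimal sketch of checking a volume that hosts DPM VHD(X) files against the dedup and NTFS-compression points above; "F:" is just a placeholder drive letter.

    # Sketch only: "F:" is a placeholder for a volume hosting VHD(X) files used by the DPM storage pool.
    # Returns nothing if dedup is not enabled on the volume (which is what you want here):
    Get-DedupVolume -Volume "F:" -ErrorAction SilentlyContinue

    # NTFS compression should also be off (Compressed should report False):
    Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'F:'" |
        Select-Object DriveLetter, Compressed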
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Non-enabled or non-existent descriptive flexfield segment

    Hi, I'm getting this error when I'm trying to save a text field value in a DFF segment:
    A value has been provided for a non-enabled or non-existent descriptive flexfield segment. (SEGMENT=ATTRIBUTE17) (VALUE=Y)
    regards,
    Vivek

    Are you trying to use a DFF segment that is not enabled in the current context? That would also mean you have the DFF segments being used on the page.

  • Can iCloud be use for existing photo storage

    Can iCloud be used for storage of all my existing photos as long as I buy the extra storage?

    No. iCloud is not a backup service, and Photo Stream has a 1,000-photo and one-month limitation.
    You can use the Dropbox app for that.

  • Enabling a disk in T3 Storage Array

    Currently using a T3 Storage Array where one of the drive slots is showing as disabled. We have already replaced the bad disk and power-cycled the array to try to enable the drive slot, with no success. Does anyone know of a way to enable the drive slot other than deleting the volume and re-adding it?

    Actually, I was able to try it just yesterday without success. At the moment I have all of the disks previously mentioned installed, plus the new 240 GB SSD. For some reason I am unable to add the new disk to the storage pool, and as a result I am unable to remove the old 64 GB disk.
    Shaon, were you able to add a disk to a storage pool that contained a tiered-storage virtual disk consuming 100% of the storage pool space? When I try, the options are all grayed out.
    I do have backups of all of this data and I am capable of deleting the whole volume and starting over. However, this is new technology I would like to get familiar with, in case I am presented with a similar problem in the future without the luxury of backups.
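    If the Server Manager options stay grayed out, the Storage module cmdlets sometimes give more detail. A rough sketch, assuming a pool named "Pool1" (adjust the name to your own pool):

    # Sketch only: see whether the new SSD is considered poolable at all
    Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, Size, MediaType, CanPool

    # If it shows up, try adding it to the pool from PowerShell instead of the GUI
    $newDisk = Get-PhysicalDisk -CanPool $true | Where-Object MediaType -eq 'SSD'
    Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks $newDisk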

  • Enable LTE doesnt Exist

    Hi, I need help. I bought a Verizon iPhone 5 from the States, but I actually live here in the Philippines. My problem is that when I go to General > Cellular, there are only two options: Cellular Data and Enable 3G. The "Enable LTE" option does not exist.
    Kindly help me, please.

    There are no supported carriers with LTE for the iPhone in the Philippines at this time.
    http://support.apple.com/kb/ht1937

  • Running Enable-Mailbox for existing user error message

    Hey guys,
    I'm trying to enable an existing user's mailbox from a PowerShell command running on my DC and I'm getting the error message:
    Enable-Mailbox is not recognized as the name of a cmdlet.
    I have run Import-Module ActiveDirectory at the beginning of the script to import a user from a CSV file, and everything is working great except this final step.
    Am I missing something here?
    Thanks as always
    Rich

    Duplicate:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/653c0a19-ee93-4662-9084-8676c99f9024/running-the-exchange-management-shell-from-a-powershell-script-on-another-server?forum=winserverpowershell
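    For anyone landing here: Enable-Mailbox is an Exchange cmdlet, so the ActiveDirectory module won't provide it. The linked thread covers this; below is a minimal sketch of pulling the cmdlet in via a remote session (the server name, user, and database are placeholders).

    # Sketch only: exch01.contoso.com, "jdoe" and "MailboxDB01" are placeholders.
    $session = New-PSSession -ConfigurationName Microsoft.Exchange `
        -ConnectionUri "http://exch01.contoso.com/PowerShell/" -Authentication Kerberos
    Import-PSSession $session

    # Enable-Mailbox is now available to the local script
    Enable-Mailbox -Identity "jdoe" -Database "MailboxDB01"

    Remove-PSSession $session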
    Microsoft Certified Trainer
    MCSE: Desktop, Server, Private Cloud, Messaging
    Blog: http://365lab.net

  • Enable ACLs on existing volume- have you done this?

    I'm considering enabling ACLs on an existing (in-use) SAN volume. I'm still running Xsan 1.4.2. The MDCs are on 10.5.5 and soon the clients will be as well.
    Have any of you enabled ACLs 'after the fact', and if so, did you have any problems?
    I'm using local users right now, and I want to move to OD users and get away from the umask nonsense.

    Do you see a n "Old Firefox Data" folder on the desktop in case Firefox got reset?
    Do you have more than one profile folders present if you check that?
    You can use this button to go to the current Firefox profile folder:
    *Help > Troubleshooting Information > Profile Directory: Show Folder (Linux: Open Directory; Mac: Show in Finder)
    Go one level up to the Mozilla\Firefox\Profiles\ folder in case a new profile got created.
    *http://kb.mozillazine.org/Recovering_a_missing_profile

  • Enabling ACLs on existing sharepoint

    Howdy - I've read through a lot of the other posts on ACLs but this particular topic didn't seem to be covered there.
    I have a handful of "Shares" on a volume that currently does not have ACLs enabled, and like most people, I've been constantly resetting the file permissions so that people in various groups (HS Faculty, Yearbook, etc) are able to read, write, etc.
    I'd like to try using ACLs on there in the hopes that these shares become a little easier to manage, BUT the volume (an XRAID) also has users home directories on it, and I am concerned that enabling ACLs on the volume could somehow screw things up there.
    If I enable ACLs, I can use them just on my "share points" and not on my home directories, right? ACLs are activated on a per-folder level, correct?
    System is an XServe G5 (2x 2Ghz) with 2GB RAM, running OS X Server 10.4.2 - connected to single channel of XRAID with 1.09TB (RAID 0+1).
    If anyone has suggestions for me (besides re-reading Gerrit's ACL Tips posts, which I will do anyway), I'd appreciate it.
    G5 iMac   Mac OS X (10.4.4)  

    ACLs are enabled at the volume level, but only applied to the folders you choose. For example, we use a volume (Data) that contains multiple folders (Homes, Shared, and Web). We use ACLs only in the Shared directory and continue to rely on POSIX for the others. It works fine.
    One caveat: we don't use XRAID.

  • System Center 2012R2 Storage Pool Deduplication?

    I am trying to find out if it is possible to deduplicate data on a System Center 2012R2 Storage Pool on Server 2012R2.  I am aware that DPM will back up deduplicated data in a deduplicated state, but this is about deduplicating data backed
    up from multiple sources.  Searching shows me that deduplicating System Center 2012 Storage Pools on Server 2012 was not supported, but the threads I found that showed this either had screenshots or quotes of the information instead of links to the documentation. 
    Is it possible/supported to deduplicate data within a System Center 2012R2 Storage Pool on Server 2012R2?

    Hi,
    Yes, if you virtualize your DPM server, it is possible to enable deduplication of DPM storage pool disks.
    Please see the following white paper:
    Deduplicating DPM storage
    We are working on a better solution long term.
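    Not from the white paper itself, but a common companion step when deduplicating backup storage is to keep the optimization job out of the backup window; a rough sketch of a custom schedule on the host volume (the name and times are placeholders):

    # Sketch only: run dedup optimization in the morning so it does not overlap the nightly backups
    New-DedupSchedule -Name "MorningOptimization" -Type Optimization `
        -Start (Get-Date "06:00") -DurationHours 6 -Days Monday,Tuesday,Wednesday,Thursday,Friday

    # Review the configured dedup schedules
    Get-DedupSchedule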
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Are ReplicatedCache entries stored even on non-storage enabled members?

    I noticed some unexpected behavior in my program:
    * I have a ReplicatedCache defined in my application
    * I have 1 localstorage enabled JVM and 2 non-storage enabled JVMs
    * When the storage node crashes, the 2 other JVMs received a message to do some processing. Those 2 non-storage JVMs tried to invoke NamedCache.values() on the ReplicatedCache
    ** To my surprise the values() returned the correct data even though the storage node had crashed
    ** The Coherence doc describes ReplicatedCache as "A replicated cache is a clustered, fault tolerant cache where data is fully replicated to *every member* in the cluster"
    * Does "every member" include non-storage enabled members too? Or is this a ReplicatedCache bug?
    * Does a NearCache also support this "feature"?

    Hi Aswin,
    Yes, a replicated cache stores the same data on all nodes regardless of whether they are storage enabled or not.
    Near caches will only hold values that have previously been accessed via a get or getAll, etc. I am not 100% sure, but I suspect you will get an error if you try to get from a cache with near caching on a storage-disabled client when the storage-enabled server has gone, even if the data is in the near-cache (front) part.
    JK

  • Use Storsimple LUN as storage pool in DPM

    Can we use a StorSimple LUN as the storage pool and assign it to DPM?
    Is this supported? 
    Lai (My blog:- http://www.ms4u.info)

    Hi,
    DPM tracks changes to protected files at the block level and applies block-level changes to files sitting on the DPM replica. DPM does not operate at the file level, so if a StorSimple device were used in the DPM storage pool and it truncated files as part of its offload-to-cloud operation while files changed on the protected server, then when DPM tried to apply those block-level changes to the files, that would cause corruption. Also, consistency checks perform block-level comparisons of NTFS structures and file data, and again, DPM would basically see the work done by StorSimple as mismatched data and end up bringing over a full copy of the file and re-writing it to the replica volume.
    Simply put, the technologies are not compatible.
    With that said, using StorSimple as a repository for VTL media (like Firestreamer) would be supported by DPM, but I am not sure that has actually been tested by anybody at Microsoft.
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Data Protection Manager 2012 - Inconsistent when backing up Deduplicated File Server

    Protected Server
    Server 2012 File Server with Deduplication running on Data drive
    DPM Server
    Server 2012
    Data Protection Manager 2012 Service Pack 1
    We just recently upgraded our DPM server from DPM 2010 to DPM 2012, primarily because it is supposed to support data deduplication. Our primary file server, which holds our home directories etc., is limited on space and was quickly running low, so just after we got DPM 2012 in place we optimized the drive on the file server, which compressed the data by about 50%. Unfortunately, shortly after enabling deduplication, the protected shares on the deduplicated volume started getting a "Replica is inconsistent" error.
    I continually get "Replica is inconsistent" for the server that has deduplication running on it. All of the other protected servers are being protected as they should be. I have run a consistency check multiple times, probably about 10 times, and it keeps going back to "Replica is inconsistent". The replica volume shows that it is using 3.5 TB, and the actual protected volume is 4 TB in size with about 2.5 TB of data on it with deduplication enabled.
    This is the details of the error
    Affected area:   G:\
    Occurred since: 1/12/2015 4:55:14 PM
    Description:        The replica of Volume G:\ on E****.net is inconsistent with the protected data source. All protection activities for data source will fail until the replica is synchronized with
    consistency check. You can recover data from existing recovery points, but new recovery points cannot be created until the replica is consistent.
    For SharePoint farm, recovery points will continue getting created with the databases that are consistent. To backup inconsistent databases, run a consistency check on the farm. (ID 3106)
    More information
    Recommended action: 
    Synchronize with consistency check.
    Run a synchronization job with consistency check...
    Resolution:        
    To dismiss the alert, click below
    Inactivate
    Steps taken to resolve: I've spent some time doing searches and haven't found any solutions to what I am seeing. I have the Data Deduplication role installed on the DPM server, which has been the solution for many people seeing similar issues. I have also removed that role and then added it back. I have also removed the protected server and added it back to the protection group. It synchronizes and shows consistent, then after a few hours it goes back to inconsistent. When I go to recovery it shows that I have recovery points and it appears that I can restore, but because the data is inconsistent I don't feel I can trust the data in the recovery points. Both the protected server's and the DPM server's updates are managed via a WSUS server on our network.
    You may suggest I just un-optimize the drive on the protected server; however, after optimizing the drive it takes a large amount more space to un-optimize it (anyone know why that is?), and the drive isn't large enough to support un-optimization.
    If anyone has any suggestions, I would appreciate any help. Thanks in advance.
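    Since the Data Deduplication role on the DPM server keeps coming up as the usual fix, here is a quick sketch of verifying it from PowerShell on the DPM server (feature-state checks only, nothing DPM-specific):

    # Sketch only: confirm the Data Deduplication feature is actually installed on the DPM server
    Get-WindowsFeature -Name FS-Data-Deduplication

    # Install it if InstallState is not "Installed"
    Install-WindowsFeature -Name FS-Data-Deduplication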

    OK, I ran a consistency check and it completed successfully with the following message. However, after a few minutes of showing OK, it now shows "Replica is inconsistent" again.
    Type: Consistency check
    Status: Completed
    Description: The job completed successfully with the following warning:
     An unexpected error occurred while the job was running. (ID 104 Details: Cannot create a file when that file already exists (0x800700B7))
     More information
    End time: 2/3/2015 11:19:38 AM
    Start time: 2/3/2015 10:34:35 AM
    Time elapsed: 00:45:02
    Data transferred: 220.74 MB
    Cluster node -
    Source details: G:\
    Protection group members: 35
     Details
    Protection group: E*
    Items scanned: 2017709
    Items fixed: 653
    There was a log for a failed synchronization job from yesterday here are the details of that.
    Type: Synchronization
    Status: Failed
    Description: The replica of Volume G:\ on E*.net is not consistent with the protected data source. (ID 91)
     More information
    End time: 2/2/2015 10:04:01 PM
    Start time: 2/2/2015 10:04:01 PM
    Time elapsed: 00:00:00
    Data transferred: 0 MB
    Cluster node -
    Source details: G:\
    Protection group members: 35
     Details
    Protection group: E*

  • Windows 2012 R2 Deduplication Problems, 0 B SavedSpace and 0 OptimizedFiles

    Objective:
    On a Windows 2012 R2 server, enable Deduplication on a new 20 TB D: physical volume. This volume will become a target for Veeam Backup and Replication backups.
    Problem:
    Deduplication fails to deduplicate anything. Get-DedupStatus always reports 0 B SavedSpace and 0 OptimizedFiles.
    PS D:\> Get-DedupStatus
    FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
    2 TB 0 B 0 0 D:
    Troubleshooting Steps:
    Uninstalled Role/Feature > Deduplication (using GUI)
    Rebooted Server
    Deleted the D: Volume (using GUI)
    Rebooted Server
    Created a new 2 TB D: Volume (using GUI)
    Copied 500 MB of random data from a networked file server to D: (using GUI)
    Made four local copies of that same folder on D: for a total of about 2 GB of data
    Installed Role/Feature > Deduplication (using GUI)
    Did not enable or apply Deduplication to the D: drive just yet
    Ran DDPEVAL.exe D: (the Data Deduplication Savings Evaluation Tool) and received these results
    PS D:\> ddpeval.exe D:
    Data Deduplication Savings Evaluation Tool
    Copyright (c) 2013 Microsoft Corporation. All Rights Reserved.
    Evaluated folder: D:
    Evaluated folder size: 2.02 GB
    Files in evaluated folder: 451
    Processed files: 285
    Processed files size: 2.02 GB
    Optimized files size: 171.26 MB
    Space savings: 1.85 GB
    Space savings percent: 91
    Optimized files size (no compression): 328.66 MB
    Space savings (no compression): 1.69 GB
    Space savings percent (no compression): 84
    Files excluded by policy: 166
    Small files (<32KB): 166
    Files excluded by error: 0
    Status 1 of Troubleshooting:
    I uninstalled the Deduplication role, rebooted, and reinstalled the Deduplication role. Before applying Deduplication to a newly created volume, I populated that volume with about 2 GB of data. I then used the ddpeval.exe tool to determine the results that
    I should achieve if Deduplication was enabled on this volume.
    Continue Troubleshooting Steps:
    Server Manager > File and Storage Services > Volumes > D: > Configure Data Deduplication > enabled "General purpose file server" > Apply
    Used PowerShell > Get-DedupStatus to confirm D: was enabled
    Used PowerShell > Start-DedupJob D: -Type Optimization -Full to manually run Deduplication on volume D:
    Used PowerShell > Get-DedupJob every 2 seconds to monitor the Progress and State of the deduplication job
    The Start-DedupJob always runs less than 1 minute, has a Progress of either 0% or 100%, and has a State of Queued, Running, or Completed.
    Then ran PowerShell > Get-DedupStatus again to check my results
    Results remain 0 B SavedSpace, 0 OptimizedFiles
    PowerShell Results
    PS D:\> Get-DedupStatus
    FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
    2 TB 0 B 0 0 D:
    PS D:\> Start-DedupJob D: -Type Optimization -Full
    Type ScheduleType StartTime Progress State Volume
    Optimization Manual 0 % Queued D:
    PS D:\> Get-DedupJob
    Type ScheduleType StartTime Progress State Volume
    Optimization Manual 1:22 PM 0 % Running D:
    PS D:\> Get-DedupJob
    Type ScheduleType StartTime Progress State Volume
    Optimization Manual 1:22 PM 100 % Completed D:
    PS D:\> Get-DedupStatus
    FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
    2 TB 0 B 0 0 D:
    Closing:
    I don't have any problems or errors installing or running Deduplication. I simply have no results. More importantly, my Deduplication results do not match the Data Deduplication Savings Evaluation Tool's predictions. How should I further troubleshoot?
    Thank you for your time.
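    One thing the Get-DedupStatus output above hints at is InPolicyFiles = 0, i.e. dedup considers no files eligible yet. By default the optimization policy skips files younger than a few days, so freshly copied test data can legitimately produce 0 optimized files. A sketch of checking and, for testing, relaxing that policy (volume letter as in the post):

    # Sketch only: inspect the volume's dedup policy
    Get-DedupVolume -Volume "D:" | Format-List Volume, Enabled, MinimumFileAgeDays, MinimumFileSize

    # For a test run, allow files of any age to be optimized, then rerun the job
    Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 0
    Start-DedupJob -Volume "D:" -Type Optimization -Full
    Get-DedupStatus -Volume "D:"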

    Thanks for responding and introducing me to Update-DedupStatus. Sadly, no change in results. Note that I again deleted my volumes and reinstalled the deduplication role. This time I'm working on a newly created E: volume.
    Update-DedupStatus result:
    PS E:\> Get-DedupStatus
    FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
    2 TB 0 B 0 0 E:
    PS E:\> Start-DedupJob E: -Type Optimization -Full
    Type ScheduleType StartTime Progress State Volume
    Optimization Manual 0 % Queued E:
    PS E:\> Get-DedupStatus
    FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
    2 TB 0 B 0 0 E:
    PS E:\> Update-DedupStatus E:
    FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
    2 TB 0 B 0 0 E:
    PS E:\> Start-DedupJob E: -Type Optimization -Full
    Type ScheduleType StartTime Progress State Volume
    Optimization Manual 0 % Queued E:
    PS E:\> Get-DedupStatus
    FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
    2 TB 0 B 0 0 E:
    PS E:\> Update-DedupStatus E:
    FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
    2 TB 0 B 0 0 E:
    PS E:\> Get-DedupJob
    PS E:\> Get-DedupStatus
    FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
    2 TB 0 B 0 0 E:
    PS E:\>
    Connection/Drive/RAID Type:
    The 20 TB physical disk is behind an IBM ServeRAID M5110 SCSI Disk Device controller. It consists of twelve 4 TB SATA drives configured as RAID 10. All drives and the controller are direct-attached (local) in an IBM x3630 M4 server.
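    If relaxing the file-age policy still yields nothing, the deduplication event logs are worth a look; a sketch, assuming the usual operational channel name registered when the feature is installed:

    # Sketch only: look for non-informational events from recent optimization jobs
    Get-WinEvent -LogName "Microsoft-Windows-Deduplication/Operational" -MaxEvents 50 |
        Where-Object LevelDisplayName -ne "Information" |
        Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize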

  • DPM 2012 R2 upgrade fails to run dpmsync -restoredb.

    Hi,
    I'm experiencing some problems when trying to restore dpmdb.bak.
    The original setup was Windows 2012 Std. with DPM 2012 SP1 and the latest RU, on a physical HP DL 360 Gen8.
    SQL is a local 2008 R2 instance.
    The upgrade failed at the very end, saying "failed to create DPM service"; it was suggested that the probable cause was that the system was pending a reboot, which it was not!
    I followed the error message and uninstalled DPM, rebooted, and reinstalled DPM 2012 R2. During the install I had to point to an existing DPM instance/database, which I did, clicked "check and install", and went on installing without any further problems.
    I also checked "retain data" during the install.
    When trying to restore the DB using the syntax "dpmsync -restoredb -dbloc c:\temp\dpmdb.bak -instancename msdpm2012 -dpmdbname dpmdb" I get the following error:
    "Error ID: 454
    DpmSync failed to connect to the specified SQL Server instance. Make sure that y
    ou have specified a valid SQL server instance associated with DPM and you are lo
    gged on as a user with administrator privileges.
    Detailed Error: A network-related or instance-specific error occurred while esta
    blishing a connection to SQL Server. The server was not found or was not accessi
    ble. Verify that the instance name is correct and that SQL Server is configured
    to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could
    not open a connection to SQL Server)"
    I enabled Named Pipes and entered TCP/IP port 1433 in SQL Server Configuration Manager and ran the command again, with the same result.
    When checking the database using SQL Management Studio, I found that there were now two databases: the original MSDPM and a new DB named MSDPM_Servername. Confusion!
    Now, my SQL knowledge is poor, to say the least, so I'm hoping that someone has a solution or a tip to push me forward.
    Any suggestion would be highly appreciated.
    Thank you.
    Peter Åkerblom Stockholm.

    Additional information.
    We found out that the dpmsync -restoredb command failed with "access denied"; it could not overwrite the original DB.
    So we dropped the original DB, moved it out of its default folder, and ran the command again.
    This time the DPM database did in fact get restored to the default path, but with some errors.
    From event log:
    The description for Event ID 3750 from source MSDPM cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    Version 4.1.3313.0 of the DPM database, and version 4.2.1205.0 of the DPM application are not compatible.
    Problem Details:
    <DatabaseVersionMismatch><__System><ID>29</ID><Seq>0</Seq><TimeCreated>2014-10-21 07:45:57</TimeCreated><Source>d:\btvsts\21011\private\product\engine\service\dll\servicemodule.cpp</Source><Line>183</Line><HasError>True</HasError></__System><DatabaseVersion>4.1.3313.0</DatabaseVersion><BinariesVersion>4.2.1205.0</BinariesVersion></DatabaseVersionMismatch>"
    From what I understand, a database version upgrade should be taken care of during dpmsync -restoredb, but I must be mistaken.
    Any suggestions on how to upgrade the database?
    Peter.
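    A couple of hedged suggestions for the connection error earlier in the thread: the "error 40" message usually means the instance name passed to DpmSync does not match what SQL is actually exposing, so it is worth confirming connectivity under that exact name first. A sketch (instance name and paths taken from the original command; this does not address the 4.1 vs 4.2 version mismatch, which suggests the .bak was taken on the old DPM 2012 SP1 installation):

    # Sketch only: confirm the instance name resolves and lists the DPM database
    sqlcmd -S .\MSDPM2012 -E -Q "SELECT name FROM sys.databases"

    # If that connects, retry the restore and then synchronize
    DpmSync -RestoreDb -DbLoc C:\temp\dpmdb.bak -InstanceName MSDPM2012 -DpmDbName DPMDB
    DpmSync -Sync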
