Best Practices for Integrating UC-5x0s with SBS 2003/8?

Almost all of Cisco's SBCS market is in the small and medium business space. Most, if not all, of these SMBs run a Microsoft Small Business Server 2003 or 2008. For Cisco to be considered as a purchase option, it is critical that the UC-5x0 integrate well into these networks.
To that end, I see a lot of talk here about how to implement parts and pieces of this, but no guidance from Cisco: no labs, no best practices, and no other documentation. If I am wrong, please correct me.
I am currently stumbling through and validating these configurations myself. Once complete, I will post detailed recommendations. However, it would have been nice to have a lab to follow instead of having to learn from each mistake.
Some of the challenges include:
1. Where should the UC-540 be placed: as the gateway (for QoS), or behind a validated UC-5x0 router/security appliance combination?
2. Should the Microsoft Windows Small Business Server handle DHCP (as Microsoft's documentation says it must), or must the UC-540 handle DHCP to prevent loss of features? What about a DHCP relay scheme?
3. Which device should handle DNS?
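For what it's worth, the DHCP relay option in #2 could be sketched roughly as follows on the UC540, assuming the SBS at 192.168.10.10 serves DHCP for the data VLAN and the data VLAN is Vlan1 (the interface and pool names here are assumptions, not from a validated config):

```
! Remove the local data pool and relay client DHCP requests to the SBS.
! "data" is a hypothetical pool name - check your running config first.
no ip dhcp pool data
interface Vlan1
 ip helper-address 192.168.10.10
```

Relaying data-VLAN DHCP to the SBS while leaving the voice-VLAN pool on the UC540 would be one way to satisfy both Microsoft's requirement and the phones' option 150/TFTP needs, but I have not validated it end to end yet.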
My documentation (and I recommend that any Cisco lab/best-practice guidance include it as well) will assume the following real-world scenario, which applies to a majority of my SMB clients:
1. A UC-540 device utilizing SIP for the cost savings
2. High Speed Internet with 5 static routable IP addresses
3. An existing Microsoft Small Business Server 2003/8
4. An additional line-of-business application or terminal server that uses the same ports (i.e. TCP 80/443/3389) as the UC-540 and the SBS, but on separate routable IPs (making up crazy non-standard port redirections is not an option).
5. An employee who teleworks from various places that provide a seat and a network jack and are not under our control (i.e. an employee's home, a client's office, or a telework center). This teleworker should use the built-in VPN feature within the SPA or 7925G phones, because we will not have administrative access to any third party's VPN/firewall.
Your thoughts appreciated.

Progress Report:
The following changes have been made to the router in support of the previously detailed scenario. Everything appears to be working as intended.
DHCP is still on the UC540 for now. DNS is being performed by the SBS 2008.
Interestingly, the CCA still works. The NAT module even shows all the private mapped IPs, but not the corresponding public IPs. I wouldn't recommend trying to make any changes via the CCA in the NAT module.
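For reference, pointing the UC540's DHCP scope at the SBS for DNS looks roughly like this (the pool name and addressing below are assumptions based on the scenario in this thread, not my exact running config):

```
! Hand out the SBS as the DNS server for data-VLAN clients.
! "data" is a hypothetical pool name.
ip dhcp pool data
 network 192.168.10.0 255.255.255.0
 default-router 192.168.10.1
 dns-server 192.168.10.10
```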
To review, this configuration assumes the following:
1. The UC540 has a public IP address of 4.2.2.2
2. A Microsoft Small Business Server 2008 with an internal IP of 192.168.10.10 and an external IP of 4.2.2.3
3. A third line-of-business application server running WWW, HTTPS, and RDP, with an internal IP of 192.168.10.11 and an external IP of 4.2.2.4
First, back up your current configuration via the CCA.
Next, telnet into the UC540, log in, enter configuration mode, and paste the following to 1:1 NAT the two additional public IP addresses:
ip nat inside source static tcp 192.168.10.10 25 4.2.2.3 25 extendable
ip nat inside source static tcp 192.168.10.10 80 4.2.2.3 80 extendable
ip nat inside source static tcp 192.168.10.10 443 4.2.2.3 443 extendable
ip nat inside source static tcp 192.168.10.10 987 4.2.2.3 987 extendable
ip nat inside source static tcp 192.168.10.10 1723 4.2.2.3 1723 extendable
ip nat inside source static tcp 192.168.10.10 3389 4.2.2.3 3389 extendable
ip nat inside source static tcp 192.168.10.11 80 4.2.2.4 80 extendable
ip nat inside source static tcp 192.168.10.11 443 4.2.2.4 443 extendable
ip nat inside source static tcp 192.168.10.11 3389 4.2.2.4 3389 extendable
Next, you will need to amend your UC540's default ACL.
First, copy your existing entries as I have done below (in bold) and paste them into Notepad.
Then, I'm told the best practice is to delete the entire existing list first, and finally add the rules back in, along with additional rules for your SBS and LOB servers (mine in bold), as follows:
int fas 0/0
no ip access-group 104 in
no access-list 104
access-list 104 remark auto generated by SDM firewall configuration##NO_ACES_24##
access-list 104 remark SDM_ACL Category=1
access-list 104 permit tcp any host 4.2.2.3 eq 25 log
access-list 104 permit tcp any host 4.2.2.3 eq 80 log
access-list 104 permit tcp any host 4.2.2.3 eq 443 log
access-list 104 permit tcp any host 4.2.2.3 eq 987 log
access-list 104 permit tcp any host 4.2.2.3 eq 1723 log
access-list 104 permit tcp any host 4.2.2.3 eq 3389 log
access-list 104 permit tcp any host 4.2.2.4 eq 80 log
access-list 104 permit tcp any host 4.2.2.4 eq 443 log
access-list 104 permit tcp any host 4.2.2.4 eq 3389 log
access-list 104 permit udp host 116.170.98.142 eq 5060 any
access-list 104 permit udp host 116.170.98.143 any eq 5060
access-list 104 deny   ip 10.1.10.0 0.0.0.3 any
access-list 104 deny   ip 10.1.1.0 0.0.0.255 any
access-list 104 deny   ip 192.168.10.0 0.0.0.255 any
access-list 104 permit udp host 116.170.98.142 eq domain any
access-list 104 permit udp host 116.170.98.143 eq domain any
access-list 104 permit icmp any host 4.2.2.2 echo-reply
access-list 104 permit icmp any host 4.2.2.2 time-exceeded
access-list 104 permit icmp any host 4.2.2.2 unreachable
access-list 104 permit udp host 192.168.10.1 eq 5060 any
access-list 104 permit udp host 192.168.10.1 any eq 5060
access-list 104 permit udp any any range 16384 32767
access-list 104 deny   ip 10.0.0.0 0.255.255.255 any
access-list 104 deny   ip 172.16.0.0 0.15.255.255 any
access-list 104 deny   ip 192.168.0.0 0.0.255.255 any
access-list 104 deny   ip 127.0.0.0 0.255.255.255 any
access-list 104 deny   ip host 255.255.255.255 any
access-list 104 deny   ip host 0.0.0.0 any
access-list 104 deny   ip any any log
int fas 0/0
ip access-group 104 in
Lastly, save the configuration to memory:
wr mem
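Before moving on, it's worth sanity-checking the result with a few show commands (the `include` filter below assumes the 4.2.2.x addressing used in this example; substitute your own public block):

```
show ip nat translations | include 4.2.2
show access-lists 104
show run | section access-list 104
```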
One final note: if you need to use the Microsoft Windows VPN client from a workstation behind the UC540 to connect to a VPN server outside your network and you are getting Error 721 and/or Error 800, you will need to use the following commands to add a GRE permit to ACL 104:
(config)#ip access-list extended 104
(config-ext-nacl)#7 permit gre any any
I'm hoping there is a better way to allow VPN clients on the LAN with a much more specific and limited rule. I will update this post with that info if and when I discover one.
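One narrower variant, assuming you know the outside VPN server's address (203.0.113.5 below is a placeholder, not a real endpoint), would be to permit GRE inbound only from that host instead of from anywhere:

```
! Tighter alternative to "permit gre any any" - placeholder address.
ip access-list extended 104
 7 permit gre host 203.0.113.5 any
```

I have not validated this against Error 721/800, so treat it as a sketch until tested.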
Thanks to Vijay in Cisco TAC for the guidance.

Similar Messages

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating Oracle ATG with an external web service? Is it using the integration repository, or calling the web service directly from the Java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead depending on the operation you are doing. I have never used the Integration Repository for 3rd-party integration, so I am not able to comment on that.
    Calling the service directly as a Java client is an easy approach, and you can use the ATG component framework to support that by making the endpoint, security credentials, etc. configurable properties.
    Cheers
    R
    Edited by: Rajeev_R on Apr 29, 2013 3:49 AM

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security.  However, the Calendar relies on this endpoint to build the events for the calendar.  On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works.  We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with Dispatcher?
    Thanks in advance.
    Scott

    Not sure what exactly you are asking but Muse handles the different orientations nicely without having to do anything.
    Example: http://www.cariboowoodshop.com/wood-shop.html

  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
    We have just started to integrate a new 3-point metro-E WAN connection to our main school office. We are moving from point-to-point T-1s to 10 MB metro-E. At the main office we have 50 MB going out to 3 other sites at 10 MB each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006, and at the 3rd site we are trying to connect to a Catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working; we feel it is a gateway-type problem but have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote Catalyst from the main site, but I cannot reach the devices on the other side of the remote Catalyst; however, the remote Catalyst can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using encapsulation dot1q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, which we do not feel is correct. It works, but is this not creating one large broadcast domain?
    If you have any questions or need further configuration data, please let me know.
    The previous connection was a T1 through a 2620, but that is not compatible with metro-E, so we are trying to connect directly through the Catalysts.
    The other two connection points will connect through Cisco routers that are compatible with metro-E, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-E project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not adjacent in EIGRP.
    Try adding a network statement for the 171.0 link and forming a neighborship between the main and remote sites for the L3 routing to work.
    You should then be able to reach the remote-site hosts.
    HTH-Cheers,
    Swaroop

  • Best practices for 2 x DNS servers with 2 x sites

    I am curious if someone can help me with best practices for my DNS servers.  Let me give my network layout first.
    I have 1 site with 2 x Windows 2012 Servers (1 GUI - 10.0.0.7, the other Core - 10.0.0.8); the 2nd site, connected via VPN, has 2 x Windows 2012 R2 Servers (1 GUI - 10.2.0.7, the other Core - 10.2.0.8). All 4 servers are promoted to DCs and have DNS services running.
    Here goes my questions:
    Site #1
    DC-01 - NIC IP address for DNS server #1 set to 10.0.0.8, DNS server #2 set to 127.0.0.1 (should I add my 2nd sites DNS servers under Advanced as well? 10.2.0.7 & 10.2.0.8)
    DC-02 - NIC IP address for DNS server #1 set to 10.0.0.7, DNS server #2 set to 127.0.0.1 (should I add my 2nd sites DNS servers under Advanced as well? 10.2.0.7 & 10.2.0.8)
    Site #2
    DC-01 - NIC IP address for DNS server #1 set to 10.2.0.8, DNS server #2 set to 127.0.0.1 (should I add my 2nd sites DNS servers under Advanced as well? 10.0.0.7 & 10.0.0.8)
    DC-02 - NIC IP address for DNS server #1 set to 10.2.0.7, DNS server #2 set to 127.0.0.1 (should I add my 2nd sites DNS servers under Advanced as well? 10.0.0.7 & 10.0.0.8)
    Under the DNS management > Forward Lookup Zones > _msdcs.mydomain.local > properties > Name Servers, should I have all of my other DNS servers, or should I have my WAN DNS servers? In a single-server scenario I always put my WAN DNS server, but I am a bit unsure in this scenario.
    Under the DNS management > Forward Lookup Zones > _msdcs.mydomain.local > properties > General > Type, should all servers be set to Active Directory - Integrated > Primary Zone? Should any of these be set to Secondary Zone?
    Under the DNS management > Forward Lookup Zones > _msdcs.mydomain.local > properties > Zone Transfers, should I allow zone transfers?
    Would the answers be identical for the Forward Lookup Zone mydomain.local as well?

    Site1
    DC1: Primary 10.0.0.7. Secondary 10.0.0.8. Tertiary 127.0.0.1
    DC2: Primary 10.0.0.8.  Secondary 10.0.0.7. Tertiary 127.0.0.1
    Site2
    DC1: Primary 10.2.0.7.  Secondary 10.2.0.8. Tertiary 127.0.0.1
    DC2: Primary 10.2.0.8.  Secondary 10.2.0.7. Tertiary 127.0.0.1
    The DCs should automatically register in _msdcs. Do not register external DNS servers in _msdcs or it will lead to issues. Yes, I recommend setting all zones to AD-integrated. There is no need to allow zone transfers, as AD replication will take care of this for you. Same for mydomain.local.
    Hope this helps.

  • Tips n Tricks/Best Practices for integrating iPhone, iPad and MacBook Pro

    My wife just purchased an iPhone, iPad, and MacBook Pro for her non-profit consulting business, and I was wondering whether a tips-and-tricks or best-practices guide exists for efficiently and productively integrating these devices?

    http://www.apple.com/icloud/

  • What is best practice for integration with freight forwarders?

    Hello,
    We are looking into the possibilities for automatically exchanging data with one of our freight forwarders. We will send them our shipment information, and they will send back shipment status and date information, including some additional information such as the house bill. Sending the shipment data from our SAP (ECC 6) system is no issue; we have done that before. What is new to us is receiving the status updates back from the forwarder. Is there a kind of best practice for where to store this information on the shipment (or in a separate table), and which standard function module or BAdI to use for this?
    We are using ECC 6.0 sales and distribution, but no transportation management or SCM modules.
    We would like to hear the experiences of people who have done this type of integration with their forwarders.
    Regards,
    Ed

    SAP has added SAP TM 8.10 as a separate package, which is also integrated with R/3. This means a separate server is required if SAP TM is to be implemented, which will take care of your expectations. For more information on this, search in Google and you will find a couple of documents on the topic.
    G. Lakshmipathi

  • Best practice for oracle 10.2 RAC with ASM

    Did any one tried/installed Oracle 10.2 RAC with ASM and CRS ?
    What is the best practice?
    1. Separate homes for CRS, ASM, and the Oracle Database?
    2. A separate home for CRS and the same home for ASM and the Oracle Database?
    We set up the test environment with separate CRS, ASM, and Oracle Database homes, but we have tons of issues with the listener, spfile, and tnsnames.ora files. So, seeking advice from the gurus who implemented/tested the same.

    I am getting ready to install the 10gR2 database software (10gR2 Clusterware was just installed), and I want to have one home for ASM and another for the database, as you suggest. I have been told that 10gR2 was to have a smaller set of binaries that can be used for the ASM home, but I am not sure how to go about installing it. The first look at the installer does not make it obvious. Is it a custom build option?

  • Best practices for setting up RDS pool, with regards to profiles /appdata

    All,
    I'm working on a network with four physical sites and currently using a single pool of 15 RDS servers with one broker. We're having a lot of issues with the current deployment and are rethinking our strategy. I've read a lot of conflicting information on how best to deploy such a service, so I'd love some input.
    Features and concerns:
    Users connect to the pool from intranet only.
    There are four sites, each with a somewhat different local infrastructure. Many users are connecting to the RDS pool via thin clients, although some locations have workstations in place.
    Total user count that needs to be supported is ~400, but it is not evenly distributed - some sites have more than others.
    Some of the users travel from one site to another, so that would need to be accounted for with any plans that involve carving up the existing pool into smaller groups.
    We are looking for a load-balanced solution - using a different pool for each site would be acceptable as long as it takes #4 and #7,8 into account.
    User profile data needs to be consistent throughout: My Docs, Outlook, IE favorites, etc.
    Things such as cached IE passwords (for sharepoint), Outlook settings and other user customization needs to be carried over as well.
    As such, something needs to account for the information in AppData\Roaming, \LocalLow, and \Local between these RDS servers.
    Ideally the less you have to cache during each logon the better, in order to reduce login times.
    I've almost never heard anything positive about using roaming profiles, but is this one of those rare exceptions? Even if we do that, I don't believe that covers the information in <User>\AppData\* (or does it?), so what would be the best way to make sure that gets carried over between sessions inside the pool or pools?
    The current solution involves using 3rd party apps, registry hacks, GPOs and a mashup of other things and is generally considered to be a poor fit for the environment. A significant rework is expected and acceptable. Thinking outside the box is fine!
    I would relish any advice on the best solutions for deployment! Thank you!

    Hi Ben,
    Thank you for posting in Windows Server Forum.
    Please check the blogs and document below, which help explain the basic requirements and walk through setting up the new environment:
    1. Remote Desktop Services Deployment Guide (Doc)
    2. Step by Step Windows 2012 R2 Remote Desktop Services – Parts 1, 2, 3 & 4
    3. Deploying a 2012 / 2012 R2 Remote Desktop Services (RDS) farm
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Best practices for a development/production scenario with ORACLE PORTAL 10G

    Hi all,
    we'd like to know the best approach for maintaining a dual development/production portal scenario. Especially important is the process of moving from dev to prod and what it implies in terms of portal availability in the production environment.
    I suppose the best policy is to have two portal instances and move content via transport sets. Am I right? Is there any specific documentation about dev/prod scenarios? Can anybody help with some experiences? We are a little afraid of transport sets, as we have heard some horror stories about them...
    Thanks in advance and have a nice day.

    It would be OK for a pair of pages and a template.
    I meant that transport sets failed for moving an entire page group (about 100 pages, 1 GB of documents).
    But if your need only involves a few pages, I would develop directly on the production system: make a copy of the page, work on it, then change the links.
    Regards

  • Best practice for taking Site collection Backup with more than 100GB

    Hi,
    I have a site collection of more than 100 GB. Can anyone please suggest the best practice for taking a backup?
    Thanks in advance....
    Regards,
    Saya

    Hi
    I think we can do it using a PowerShell script.
    First, run this command in PowerShell to load the snap-in:
    Add-PSSnapin Microsoft.SharePoint.PowerShell
    Web application backup & restore
    Backup-SPFarm -Directory \\WebAppBackup\Development  -BackupMethod Full -Item "Web application name"
    Site Collection backup & restore
    Backup-SPSite http://1632/sites/TestSite  -Path C:\Backup\TestSite1.bak
    Restore-SPSite http://1632/sites/TestSite2  -Path C:\Backup\TestSite1.bak -Force
    Regards
    manikandan

  • Best practice for calling an AM method with parameters

    What is the best way to call an AM method with parameters from a backing bean?
    I usually use the BindingContainer to get the operation binding and then call the execute function. But when the method has parameters, how do I do it?
    Thanks

    Hi,
    same:
    operationBinding.getParamMap().put("argument1Name", argument1Value);
    operationBinding.getParamMap().put("argument2Name", argument2Value);
    operationBinding.execute();
    Frank

  • Best Strategy for Integrating Crystal/Business Objects with OpenACS Environment

    Post Author: devashanti
    CA Forum: Deployment
    I'm working for a client that uses AOL server and OpenACS for their web services/applications. I need suggestions on the best strategy to integrate a reporting solution using Business Objects XI. Ideally I'd like to send an API call from our web application's GUI to the Crystal API with report parameter values to pass into specific reports called via the API - I can get it down to one integer value being passed - or if this is not possible a way to seamlessly, from the end user perspective, move into a reporting module. We are using an Oracle backend database. I'm experienced with creating stored procedures and packages for reporting purposes.
    Although I have many years of experience integrating the Crystal ActiveX controls into n-tier client-server applications, in the past few years I have had little opportunity to work with Business Objects, the newer versions of Crystal, or web-based solutions with Crystal Reports. I signed up to try out crystalreports.com, but I doubt my client will find this solution acceptable for security reasons, as the reports are for an online invoicing system we are developing. However, we can set up a report server in-house if necessary, so that gives me some testing ground.
    Can anyone provide suggestions for a doable strategy?

    Please post this query to the Business Objects Enterprise Administration forum:
    BI Platform
    That forum is monitored by qualified technicians and you will get a faster response there. Also, all BOE queries remain in one place and thus can be easily searched in one place.
    Thank you for your understanding,
    Ludek

  • Best Practice for setting up an office with an extreme

    I am looking for some good info on setting up a business network using Comcast Business high-speed Internet. I recently purchased an AirPort Extreme and an AirPort Express as a network extender, and I am trying to understand the optimal setup for this kind of configuration. Comcast provides a modem that manages DHCP, but I am not sure whether it makes sense to use the AirPort Extreme's built-in DHCP or to set up the Extreme as a bridge to the modem and let DHCP be handled there. I am expecting anywhere from 30-60 devices on the network depending on the day. Is there any info out there that would help me better understand Apple's recommendation, or do any of you have some good info for me? Thanks for the help in advance.

    Unless the Comcast modem/router or gateway device has an available option to be configured as a simple modem.....and....this type of configuration is supported by Comcast, the decision about DHCP service has already been made for you.
    In that case, configure the AirPort Extreme in Bridge Mode to allow the Comcast modem/router to control the routing functions on the network. You will have to check with Comcast to insure that the DHCP range of the modem/router will supply an adequate number of IP addresses to meet your needs.
    That would probably mean a pool of at least 100 or more IP addresses.
    Connect the AirPort Express using an Ethernet cable to one of the LAN <-> ports on the AirPort Extreme if you want optimum bandwidth performance for the network.
    Keep in mind that all devices will share the same Internet connection bandwidth, so if you have 50 devices on the network at one time, and you have a 50 Mbps Internet connection, each device will be allowed about 1 Mbps of bandwidth.
    That may...or may not.....be adequate for your needs depending on how active the devices will be at the time.
    If 50 users are all trying to update their email simultaneously, things are going to be sluggish.

  • Best practice for managing unofficial user repo with git?

    G'day Archers,
    (I hope this is the right subforum for my question -- feel free to move.)
    I am currently playing around with setting up an unofficial repo. So far so good, except I am trying to make sure I do things "properly".
    I am currently deploying my website according to this tip I've seen recommended. I.e. on the server, set up a bare repo in say ~/src/www.git with a post-receive hook that states something like
    GIT_WORK_TREE=$HOME/www_public git checkout -f
    where ~/www_public is the apache directory root for the website.
    Doing it this way, I can push and pull changes between my desktop, the server and my laptop and the .git files don't end up visible to the public.
    Now I want to add my repo to this system, but repo-add creates ".tar.gz" files. According to github help, these are better off ignored by git.
    One solution that occurs to me is to add a pre-commit that decompresses the db into plain text and a post-receive that recompresses to a form that pacman will understand. However, I wonder if there is a simpler solution that I'm missing? I notice that Debian seems to have a tool that will do what I want, for Debian users. Does something equivalent exist for Archlinux, or do any Archers who run unofficial repos have any tips for me based on their experience/mistakes/successes of the past?

