Best practice for Oracle 10.2 RAC with ASM

Has anyone tried/installed Oracle 10.2 RAC with ASM and CRS?
What is the best practice?
1. Separate homes for CRS, ASM, and the Oracle database?
2. A separate home for CRS and a shared home for ASM and the Oracle database?
We set up a test environment with separate CRS, ASM, and Oracle database homes, but we have had a lot of issues with the listener, the spfile, and the tnsnames.ora files. So I am seeking advice from the gurus who have implemented/tested the same.

I am getting ready to install the 10gR2 database software (10gR2 Clusterware was just installed) and I want to have one home for ASM and another for the database, as you suggest. I have been told that 10gR2 ships a smaller set of binaries that can be used for the ASM home, but I am not sure how to go about installing it. A first look at the installer does not make it obvious. Is it a custom install option?
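For what it's worth, a minimal sketch of what the end state can look like with a dedicated ASM home. The paths, node name, and instance name below are placeholders of my own, not values from this thread, and the OUI choice is from memory, so verify it against the 10.2 install guide:

# In the 10gR2 OUI, "Configure Automatic Storage Management (ASM)" is offered as an
# installation option; selecting it and pointing OUI at a new home lays down the
# reduced binary set for ASM. The database software then goes into a second home.
export ASM_HOME=/u01/app/oracle/product/10.2.0/asm      # placeholder path
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1  # placeholder path

# /etc/oratab then carries one entry per home, for example:
#   +ASM1:/u01/app/oracle/product/10.2.0/asm:N
#   orcl1:/u01/app/oracle/product/10.2.0/db_1:N

# Registering and starting the ASM instance from its own home (10.2-era srvctl
# syntax, worth double-checking against your release):
$ASM_HOME/bin/srvctl add asm -n node1 -i +ASM1 -o $ASM_HOME
$ASM_HOME/bin/srvctl start asm -n node1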

Similar Messages

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security.  However, the Calendar relies on this endpoint to build the events for the calendar.  On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works.  We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with Dispatcher?
    Thanks in advance.
    Scott


  • Universe Design Best Practices for Oracle

    Hello All,
    We recently moved from XI R2 on MS SQL 2005 to XI 3.1 on Oracle. This has been a difficult move for us, as my team is new to Oracle. I'm currently working on several performance issues between BOBJ and Oracle and am looking for documentation on best practices for universe design with Oracle. I've found tidbits here and there regarding using the parameters JOIN_BY_SQL and BOUNDARY_WEIGHT_TABLE, and I'm wondering if there are other options out there that might help. We have queries taking 45+ minutes to run, and that is totally unacceptable.
    thanks
    Andrea

    I am not sure whether you are looking for optimization advice or something else; sorry about that. The following links might help, considering Oracle as the DB.
    Link:[Universe Optimization 1|http://www.bidwtoday.com/business-objects/universe-designer/business-objects-universe-optimization/]
    Link:[Universe Optimization 2|http://forums.sdn.sap.com/post!reply.jspa?messageID=8721932]
    --Kuldeep

  • Best practices for 2 x DNS servers with 2 x sites

    I am curious if someone can help me with best practices for my DNS servers. Let me give my network layout first.
    I have one site with 2 x Windows 2012 servers (one GUI - 10.0.0.7, the other Core - 10.0.0.8); the 2nd site, connected via VPN, has 2 x Windows 2012 R2 servers (one GUI - 10.2.0.7, the other Core - 10.2.0.8). All 4 servers are promoted to DCs and have DNS services running.
    Here are my questions:
    Site #1
    DC-01 - NIC IP address for DNS server #1 set to 10.0.0.8, DNS server #2 set to 127.0.0.1 (should I add my other site's DNS servers under Advanced as well? 10.2.0.7 & 10.2.0.8)
    DC-02 - NIC IP address for DNS server #1 set to 10.0.0.7, DNS server #2 set to 127.0.0.1 (should I add my other site's DNS servers under Advanced as well? 10.2.0.7 & 10.2.0.8)
    Site #2
    DC-01 - NIC IP address for DNS server #1 set to 10.2.0.8, DNS server #2 set to 127.0.0.1 (should I add my other site's DNS servers under Advanced as well? 10.0.0.7 & 10.0.0.8)
    DC-02 - NIC IP address for DNS server #1 set to 10.2.0.7, DNS server #2 set to 127.0.0.1 (should I add my other site's DNS servers under Advanced as well? 10.0.0.7 & 10.0.0.8)
    Under DNS Management > Forward Lookup Zones > _msdcs.mydomain.local > Properties > Name Servers, should I have all of my other DNS servers, or should I have my WAN DNS servers? In a single-server scenario I always put my WAN DNS server, but I am a bit unsure in this scenario.
    Under DNS Management > Forward Lookup Zones > _msdcs.mydomain.local > Properties > General > Type, should all servers be set to Active Directory-Integrated > Primary Zone? Should any of these be set to Secondary Zone?
    Under DNS Management > Forward Lookup Zones > _msdcs.mydomain.local > Properties > Zone Transfers, should I allow zone transfers?
    Would the answers be the same for the Forward Lookup Zone mydomain.local as well?

    Site1
    DC1: Primary 10.0.0.7. Secondary 10.0.0.8. Tertiary 127.0.0.1
    DC2: Primary 10.0.0.8.  Secondary 10.0.0.7. Tertiary 127.0.0.1
    Site2
    DC1: Primary 10.2.0.7.  Secondary 10.2.0.8. Tertiary 127.0.0.1
    DC2: Primary 10.2.0.8.  Secondary 10.2.0.7. Tertiary 127.0.0.1
    The DCs should automatically register in _msdcs. Do not register external DNS servers in _msdcs or it will lead to issues. Yes, I recommend that all zones be set to AD-integrated. There is no need to allow zone transfers, as AD replication will take care of this for you. The same applies to mydomain.local.
    Hope this helps.

  • Best practices for a development/production scenario with ORACLE PORTAL 10G

    Hi all,
    we'd like to know the best approach for maintaining a dual development/production portal scenario. Especially important is the process of moving from dev to prod and what it implies in terms of portal availability in the production environment.
    I suppose the best policy to achieve this is to have two portal instances and move content via transport sets. Am I right? Is there any specific documentation about dev/prod scenarios? Can anybody help with some experiences? We are a little afraid regarding transport sets, as we have heard some horror stories about them...
    Thanks in advance and have a nice day.

    It would be OK for a couple of pages and a template.
    I meant that transport sets failed for moving an entire page group (about 100 pages, 1 GB of documents).
    But if you only need to move a few pages, I would develop directly on the production system: make a copy of the page, work on it, then change the links.
    Regards

  • AIX 5.3 best practices for Oracle DB 10.2.0.4

    The customer is using AIX 5.3 (AIX PROD_aix 3 5 000CA2A2D900) and the DB version is 10.2.0.4.
    The DB is critical and is currently being migrated to AIX from Windows.
    To make sure we don't face any problems after the migration, we are implementing best practices at the OS level. So far we've implemented the following:
    1) All dbf's reside on JFS2 filesystems -- we've enabled CIO for the mount points (dbf's, redo logs).
    2) Changed AIXTHREAD_SCOPE to S; previously it was set to P.
    3) Changed minperm% to 5% and maxperm% to 20%.
    Do we need to change the memory page size to best fit the DB's needs, or do we need to use large pages? Is any document available from IBM? What is the default page size used (and how do we check it)?
    How do we check whether a NIC is 100 Mbps or 1 Gbps? What is the command?
    Thanks in advance.

    The IBM whitepaper "Oracle Architecture and Tuning on AIX" might be a good starting point. It is not just a list of "magic" settings; it has some real content as well.
    Some comments about the settings you've implemented so far:
    1) Be sure to use an allocation unit size of 512 bytes for the filesystems storing redo logs and control files; it's required for direct I/O to work correctly.
    3) Since you're using CIO, it might be a good idea to set the VMM to steal only file pages (lru_file_repage).
    As for large pages, you have quite a few options. Smaller large pages can be handled more dynamically, whereas bigger ones can provide more performance gains but can also cause problems if you over-allocate. That said, if you didn't need large pages on Windows, chances are you won't necessarily need them on AIX either.
    And as always, be careful with "best practices" and "rules of thumb"; it's far better to test different configurations thoroughly than to blindly rely on generic recommendations based on Codd knows what.
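    To illustrate the checks asked about above, a rough sketch using standard AIX commands (the adapter name ent0 is just an example; double-check the tunables against your exact AIX 5.3 level before applying anything):

    # Show the page sizes the system supports (4 KB is the base/default page size):
    pagesize -a

    # Check whether an adapter is running at 100 Mbps or 1 Gbps:
    entstat -d ent0 | grep -i "media speed"
    lsattr -El ent0 -a media_speed

    # The VMM tunables discussed above are set with vmo, for example:
    vmo -p -o minperm%=5 -o maxperm%=20 -o lru_file_repage=0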

  • Best Practices for Integrating UC-5x0's with SBS 2003/8?

    Almost all of Cisco's SBCS market is the small and medium business space. Most, if not all, of these SMBs have a Microsoft Small Business Server 2003 or 2008. In order for Cisco to be considered as a purchase option, it will be critical that the UC-5x0 integrates well into these networks.
    To that end, I see a lot of talk here about how to implement parts and pieces of this, but no guidance from Cisco, no labs, and no best practices or other documentation. If I am wrong, please correct me.
    I am currently stumbling through and validating these configurations myself. Once complete, I will post detailed recommendations. However, it would have been nice to have a lab to follow instead of having to learn from each mistake.
    Some of the challenges include:
    1. Where should the UC-540 be placed: as the gateway for QoS, or behind a validated UC-5x0 router/security appliance combination?
    2. Should the Microsoft Windows Small Business Server handle DHCP (as Microsoft's documentation says it must), or must the UC-540 handle DHCP to prevent loss of features? What about a DHCP relay scheme?
    3. Which device should handle DNS?
    My documentation (and I recommend that any Cisco lab/best-practice guidance include it as well) will assume the following real-world scenario, the same one that applies to a majority of my SMB clients:
    1. A UC-540 device utilizing SIP for the cost savings
    2. High-speed Internet with 5 static routable IP addresses
    3. An existing Microsoft Small Business Server 2003/8
    4. An additional line-of-business application or terminal server that utilizes the same ports (i.e. TCP 80/443/3389) as the UC-540 and the SBS, but on separate routable IPs (making up crazy non-standard port redirections is not an option).
    5. An employee who teleworks from various places that provide a seat and a network jack, which is not under our control (i.e. an employee's home, a client's office, or a telework center). This teleworker should use the built-in VPN feature within the SPA or 7925G phones, because we will not have administrative access to any third party's VPN/firewall.
    Your thoughts are appreciated.

    Progress Report;
    The following changes have been made to the router in support of the previously detailed scenario. Everything appears to be working as intended.
    DHCP is still on the UC540 for now. DNS is being performed by the SBS 2008.
    Interestingly, the CCA still works. The NAT module even shows all the private mapped IPs, but not the corresponding public IPs. I wouldn't recommend trying to make any changes via the CCA in the NAT module.
    To review, this configuration assumes the following;
    1. The UC540 has a public IP address of 4.2.2.2
    2. A Microsoft Small Business Server 2008 using an internal IP of 192.168.10.10 has an external IP of 4.2.2.3.
    3. A third line of business application server with www, https and RDP that has an internal IP of 192.168.10.11 and an external IP of 4.2.2.4
    First, backup your current configuration via the CCA,
    Next, telnet into the UC540, log in, enter configuration mode, and cut and paste the following to 1:1 NAT the 2 additional public IP addresses:
    ip nat inside source static tcp 192.168.10.10 25 4.2.2.3 25 extendable
    ip nat inside source static tcp 192.168.10.10 80 4.2.2.3 80 extendable
    ip nat inside source static tcp 192.168.10.10 443 4.2.2.3 443 extendable
    ip nat inside source static tcp 192.168.10.10 987 4.2.2.3 987 extendable
    ip nat inside source static tcp 192.168.10.10 1723 4.2.2.3 1723 extendable
    ip nat inside source static tcp 192.168.10.10 3389 4.2.2.3 3389 extendable
    ip nat inside source static tcp 192.168.10.11 80 4.2.2.4 80 extendable
    ip nat inside source static tcp 192.168.10.11 443 4.2.2.4 443 extendable
    ip nat inside source static tcp 192.168.10.11 3389 4.2.2.4 3389 extendable
    Next, you will need to amend your UC540's default ACL.
    First, copy what you have already (as I have done below) and paste it into Notepad.
    Then, I'm told the best practice is to delete the entire existing list first, and finally add the rules back, along with the additional rules for your SBS and LOB server, as follows:
    int fas 0/0
    no ip access-group 104 in
    no access-list 104
    access-list 104 remark auto generated by SDM firewall configuration##NO_ACES_24##
    access-list 104 remark SDM_ACL Category=1
    access-list 104 permit tcp any host 4.2.2.3 eq 25 log
    access-list 104 permit tcp any host 4.2.2.3 eq 80 log
    access-list 104 permit tcp any host 4.2.2.3 eq 443 log
    access-list 104 permit tcp any host 4.2.2.3 eq 987 log
    access-list 104 permit tcp any host 4.2.2.3 eq 1723 log
    access-list 104 permit tcp any host 4.2.2.3 eq 3389 log
    access-list 104 permit tcp any host 4.2.2.4 eq 80 log
    access-list 104 permit tcp any host 4.2.2.4 eq 443 log
    access-list 104 permit tcp any host 4.2.2.4 eq 3389 log
    access-list 104 permit udp host 116.170.98.142 eq 5060 any
    access-list 104 permit udp host 116.170.98.143 any eq 5060
    access-list 104 deny   ip 10.1.10.0 0.0.0.3 any
    access-list 104 deny   ip 10.1.1.0 0.0.0.255 any
    access-list 104 deny   ip 192.168.10.0 0.0.0.255 any
    access-list 104 permit udp host 116.170.98.142 eq domain any
    access-list 104 permit udp host 116.170.98.143 eq domain any
    access-list 104 permit icmp any host 4.2.2.2 echo-reply
    access-list 104 permit icmp any host 4.2.2.2 time-exceeded
    access-list 104 permit icmp any host 4.2.2.2 unreachable
    access-list 104 permit udp host 192.168.10.1 eq 5060 any
    access-list 104 permit udp host 192.168.10.1 any eq 5060
    access-list 104 permit udp any any range 16384 32767
    access-list 104 deny   ip 10.0.0.0 0.255.255.255 any
    access-list 104 deny   ip 172.16.0.0 0.15.255.255 any
    access-list 104 deny   ip 192.168.0.0 0.0.255.255 any
    access-list 104 deny   ip 127.0.0.0 0.255.255.255 any
    access-list 104 deny   ip host 255.255.255.255 any
    access-list 104 deny   ip host 0.0.0.0 any
    access-list 104 deny   ip any any log
    int fas 0/0
    ip access-group 104 in
    Lastly, save to memory
    wr mem
    One final note - if you need to use the Microsoft Windows VPN client from a workstation behind the UC540 to connect to a VPN server outside your network, and you were getting Error 721 and/or Error 800, you will need to use the following commands to add a rule to ACL 104:
    (config)#ip access-list extended 104
    (config-ext-nacl)#7 permit gre any any
    I'm hoping there may be a better way to allow VPN clients on the LAN with a much more specific and limited rule. I will update this post with that info if and when I discover one.
    Thanks to Vijay in Cisco TAC for the guidance.

  • Symantec antivirus best practice for an Oracle database on Windows Server 2003

    Hi all,
    I have an Oracle database server, version 10.2.0.4, on the Windows Server 2003 platform. What would be the best practice for running Symantec antivirus on that server, and which database files should be excluded from scanning?
    My server has rebooted unexpectedly many times; in the event log I see event ID 6008. What could be the cause of this?

    Normally, you don't run a virus scanner on a database server because your database server isn't vulnerable to viruses. It's behind firewalls, people aren't reading mail on it, people aren't plugging thumb drives into it, etc. If you do decide that you need to run a virus scanner on a database server, at least exclude the Oracle data files from the scan. Oracle gets very unhappy if someone else tries to open its data files (or, worse, if someone opens a data file before it gets the chance to acquire exclusive access).
    Justin

  • SAP best practice for Oracle installation on HP-UX

    Hi - I am looking for an SAP best practices document for installing ERP 6.0 on Oracle 10g on the HP-UX 11.31 platform. Is there any SAP best practices guide?
    Also, for the Dev system, I am planning to install the ABAP and Java stacks in one database. What do I need to do at the database level?
    Lastly, how am I going to configure the two stacks to communicate?
    Any reference to documents will be appreciated.
    Regards
    k

    Hi,
    Sorry, but I'm going to be a bit bold here.
    All the needed information is in the guides.
    1) If you are on Windows, you put the SAP-Oracle CD/DVD in, execute the command mentioned in the installation guide, type the two things mentioned in the guide, and press Next, Next, Next. Then you install SAP.
    2) If you are on UNIX, you start the SAP installation; it then stops and tells you to install Oracle. The guide tells you to run RUNINSTALLER (in capitals); you type the two things mentioned in the guide and press Next, Next, Next. Then you continue the SAP installation.
    Of course, those are the simplified versions.
    No response file is ever generated. The response file comes INSIDE the installation CDs; for that reason it is very important to execute the EXACT commands mentioned in the installation guides, in order to use that already existing response file.
    I do not understand this need for super-hyper-detailed step-by-step guides.
    If you get one like that, it will be obsolete after one month because there are new Oracle parameters, new Oracle patches/patch sets, or new ... that have to be done.
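    Just to make the UNIX flow above concrete, a rough sketch only -- the staging path and ora<sid> user name are placeholders of my own, not from the guide; the one command the reply itself insists on is RUNINSTALLER:

    # Sketch only -- follow the exact paths and users in your installation guide.
    su - oraerp                          # placeholder for the ora<sid> user created for the install
    cd /oracle/stage/102_64/database     # placeholder Oracle staging directory from the SAP media
    ./RUNINSTALLER                       # SAP-supplied wrapper; picks up the shipped response file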

  • Best practices for setting up RDS pool, with regards to profiles /appdata

    All,
    I'm working on a network with four physical sites and currently using a single pool of 15 RDS servers with one broker. We're having a lot of issues with the current deployment and are rethinking our strategy. I've read a lot of conflicting information on how to best deploy such a service, so I'd love some input.
    Features and concerns:
    Users connect to the pool from intranet only.
    There are four sites, each with a somewhat different local infrastructure. Many users are connecting to the RDS pool via thin clients, although some locations have workstations in place.
    Total user count that needs to be supported is ~400, but it is not evenly distributed - some sites have more than others.
    Some of the users travel from one site to another, so that would need to be accounted for with any plans that involve carving up the existing pool into smaller groups.
    We are looking for a load-balanced solution - using a different pool for each site would be acceptable as long as it takes #4 and #7,8 into account.
    User profile data needs to be consistent throughout: My Docs, Outlook, IE favorites, etc.
    Things such as cached IE passwords (for SharePoint), Outlook settings, and other user customization need to be carried over as well.
    As such, something needs to account for the information in AppData\Roaming, \LocalLow, and \Local between these RDS servers.
    Ideally, the less you have to cache during each logon the better, in order to reduce login times.
    I've almost never heard anything positive about using roaming profiles, but is this one of those rare exceptions? Even if we do that, I don't believe it covers the information in <User>\AppData\* (or does it?), so what would be the best way to make sure that gets carried over between sessions inside the pool or pools?
    The current solution involves using 3rd party apps, registry hacks, GPOs and a mashup of other things and is generally considered to be a poor fit for the environment. A significant rework is expected and acceptable. Thinking outside the box is fine!
    I would relish any advice on the best solutions for deployment! Thank you!

    Hi Ben,
    Thank you for posting in Windows Server Forum.
    Please check the blogs and document below, which help explain the basic requirements and how to set up the new environment in a properly guided manner:
    1. Remote Desktop Services Deployment Guide (Doc)
    2. Step by Step Windows 2012 R2 Remote Desktop Services – Parts 1, 2, 3 & 4
    3. Deploying a 2012 / 2012 R2 Remote Desktop Services (RDS) farm
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Best practice for taking Site collection Backup with more than 100GB

    Hi,
    I have a site collection whose data is more than 100 GB. Can anyone please suggest the best practice for taking a backup?
    Thanks in advance....
    Regards,
    Saya

    Hi,
    I think you can do this using a PowerShell script.
    First load the SharePoint snap-in in PowerShell:
    Add-PSSnapin Microsoft.SharePoint.PowerShell
    Web application backup & restore:
    Backup-SPFarm -Directory \\WebAppBackup\Development -BackupMethod Full -Item "Web application name"
    Site collection backup & restore:
    Backup-SPSite http://1632/sites/TestSite -Path C:\Backup\TestSite1.bak
    Restore-SPSite http://1632/sites/TestSite2 -Path C:\Backup\TestSite1.bak -Force
    Regards,
    Manikandan

  • Best practice for calling an AM method with parameters

    What is the best way to call an AM method with parameters from a backing bean?
    I usually use the BindingContainer to get the operation binding and then call its execute method. But when the method has parameters, how do I do it?
    Thanks

    Hi,
    the same way - you just populate the parameter map before executing. For example (assuming the method's action binding in the page definition is named "myAmMethod"):
    BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
    OperationBinding operationBinding = bindings.getOperationBinding("myAmMethod");
    operationBinding.getParamMap().put("argument1Name", argument1Value);
    operationBinding.getParamMap().put("argument2Name", argument2Value);
    operationBinding.execute();
    Frank

  • Best practices for organizing a large "project" with multiple programmers

    Should I put everyone in one application? Should everyone get their own application? My (limited) understanding is that the level of granularity for CVS is the application. It sounds to me like multiple developers checking the same application in and out would be a disaster. If every developer has their own application, what problems will I have deploying? How do I handle my configuration (navigation) diagram? Any thoughts would be appreciated.

    Have a read through these:
    http://download.oracle.com/docs/html/B25947_01/team_productivity.htm#BABBEFFF
    http://brendenanstey.blogspot.com/2006/11/tips-for-using-cvs-with-jdeveloper.html
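    For context on the granularity concern in the question above, a minimal sketch of what day-to-day CVS use looks like when each JDeveloper application is a CVS module; the repository location and module name are placeholders, not from this thread:

    # Each application maps to a CVS module; several developers can check out the
    # same module because CVS merges concurrent edits at the file level.
    cvs -d :pserver:dev1@cvshost:/cvsroot checkout MyPortalApp
    cd MyPortalApp
    # ...edit files...
    cvs update                      # pull in and merge teammates' changes first
    cvs commit -m "describe change" # then commit your own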

  • Best practice for setting up an office with an AirPort Extreme

    I am looking for some good information on setting up a business network using Comcast Business high-speed Internet. I recently purchased an AirPort Extreme and an AirPort Express as a network extender, and I am trying to understand the optimal setup for this kind of arrangement. Comcast provides a modem that manages DHCP, but I am not sure whether it makes sense to use the AirPort Extreme's built-in DHCP or to set up the Extreme as a bridge to the modem and let DHCP be handled there. I am expecting to have anywhere from 30-60 devices on the network, depending on the day. Is there any info out there that would help me better understand Apple's recommendation, or do any of you have some good info for me? Thanks for the help in advance.

    Unless the Comcast modem/router or gateway device has an option to be configured as a simple modem, and that type of configuration is supported by Comcast, the decision about DHCP service has already been made for you.
    In that case, configure the AirPort Extreme in Bridge Mode to allow the Comcast modem/router to control the routing functions on the network. You will have to check with Comcast to ensure that the DHCP range of the modem/router will supply an adequate number of IP addresses to meet your needs.
    That would probably mean a pool of at least 100 or more IP addresses.
    Connect the AirPort Express using an Ethernet cable to one of the LAN <-> ports on the AirPort Extreme if you want optimum bandwidth performance for the network.
    Keep in mind that all devices will share the same Internet connection bandwidth, so if you have 50 devices on the network at one time, and you have a 50 Mbps Internet connection, each device will be allowed about 1 Mbps of bandwidth.
    That may or may not be adequate for your needs, depending on how active the devices will be at the time.
    If 50 users are all trying to update their email simultaneously, things are going to be sluggish.

  • Best practice for managing unofficial user repo with git?

    G'day Archers,
    (I hope this is the right subforum for my question -- feel free to move.)
    I am currently playing around with setting up an unofficial repo. So far so good, except I am trying to make sure I do things "properly".
    I am currently deploying my website according to this tip I've seen recommended. I.e. on the server, set up a bare repo in say ~/src/www.git with a post-receive hook that states something like
    GIT_WORK_TREE=$HOME/www_public git checkout -f
    where ~/www_public is the apache directory root for the website.
    Doing it this way, I can push and pull changes between my desktop, the server and my laptop and the .git files don't end up visible to the public.
    Now I want to add my repo to this system, but repo-add creates ".tar.gz" files. According to GitHub help, these are better off ignored by git.
    One solution that occurs to me is to add a pre-commit hook that decompresses the db into plain text and a post-receive hook that recompresses it into a form that pacman will understand. However, I wonder if there is a simpler solution that I'm missing. I notice that Debian seems to have a tool that will do what I want, for Debian users. Does something equivalent exist for Arch Linux, or do any Archers who run unofficial repos have any tips based on their experience/mistakes/successes of the past?
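    For reference, a minimal sketch of the hook pattern described above, extended with one possible way to realise the "regenerate on the server" idea; the repo name and package path are placeholders, not taken from the post:

    #!/bin/sh
    # hooks/post-receive in the bare repo (~/src/www.git)

    # Check the pushed content out into the public web root, as in the tip above.
    GIT_WORK_TREE=$HOME/www_public git checkout -f

    # Instead of versioning the .tar.gz database, rebuild it on the server after
    # each push so pacman clients always see a db that matches the packages.
    cd "$HOME/www_public/myrepo/x86_64" || exit 1    # placeholder repo path
    repo-add myrepo.db.tar.gz *.pkg.tar.*            # placeholder repo name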

