Global contact directory for abbreviated dialling in CUCM9.1

I have recently installed a Cisco Small Business 6000 solution for a customer which is running CUCM 9.1. We have been asked to provide a system-wide directory of contacts that can be used for abbreviated dialing, accessible to all phone users. Does anyone know of a quick way this can be achieved?
Hope someone can help.                  

Other than just creating translation patterns for that, you would need to create a custom directory service for your phones containing that data.
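For the custom directory option: the phones can pull a CiscoIPPhoneDirectory XML object over HTTP from whatever directory URL you point them at, so any small web app that emits that XML will do. Below is a rough sketch of such a service as a Java servlet -- the class name, the hard-coded contacts and the extension numbers are just illustrative placeholders (this is not something Cisco ships), and in a real deployment you would pull the entries from LDAP or a database:

// Rough sketch only -- hypothetical servlet, not a Cisco-provided component.
// It returns a system-wide contact list in the Cisco IP Phone Services
// XML format (CiscoIPPhoneDirectory) so the phones can display and dial it.
import java.io.IOException;
import java.io.PrintWriter;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CompanyDirectoryServlet extends HttpServlet {

    // Example data only; a real version would read from LDAP or a database.
    private static final Map<String, String> CONTACTS = new LinkedHashMap<String, String>();
    static {
        CONTACTS.put("Front Desk", "81000");
        CONTACTS.put("Warehouse", "81010");
        CONTACTS.put("Support Hotline", "81020");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/xml");
        PrintWriter out = resp.getWriter();
        out.println("<CiscoIPPhoneDirectory>");
        out.println("  <Title>Company Directory</Title>");
        out.println("  <Prompt>Select an entry to dial</Prompt>");
        for (Map.Entry<String, String> contact : CONTACTS.entrySet()) {
            out.println("  <DirectoryEntry>");
            out.println("    <Name>" + contact.getKey() + "</Name>");
            out.println("    <Telephone>" + contact.getValue() + "</Telephone>");
            out.println("  </DirectoryEntry>");
        }
        out.println("</CiscoIPPhoneDirectory>");
    }
}

You would then point the phones at the URL where this is hosted (for example via the directory URL settings in CUCM) so the list shows up under the Directories button.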
HTH
java
if this helps, please rate
www.cisco.com/go/pdihelpdesk

Similar Messages

  • How to create global Contact list using OSX Server Contacts Service?

    My goal is to create a global contact list for the use of my department at work. We currently have 5 MBP's and a Mac Mini Server running OSX Server.
    I have read so many forum posts and so-called "solutions" that my head is completely spinning... like a record... right round... you get the idea.
    I talked with Apple Support, and they are going to walk me through the process of changing our network from "local" to a legitimate internet network. Enabling DNS and setting up Open Directory are some of the steps, and all in all I was impressed with the support I received. We did a complete walkthrough of all the steps it would take to change everything, without actually making any changes (because I could not shut down the company server at the time I called).
    All that being said, I have been reading that actually creating and sharing a global contact list (and enabling specific user access for read/write for said list) is not as simple as Apple is telling me. I have read so many horror stories about not being able to edit the list, having duplicate lists and entries, and many other problems. It seems that this "Contact Service" is not really what it's billed as.
    I'm just looking for someone who already has this set up to shed some light on my questions/concerns. I have scoured the internet for information, and either I'm looking in the wrong place or I just happen to have the WORST possible combination of OS X versions, server versions and so on. I would expect a company that charges such high prices for its products to design them to ACTUALLY WORK how they are advertised to work!
    Thanks in advance and a thousand kudos to anybody that can help!!!
    -John

    That sounds like one viable option...
    The only problem is that I'm not sure how that would behave once the client MBP's are taken off site and connect to the network over a VPN or something of that sort.
    I figure since we have purchased the Server App and it contains the Contacts Service, I should try and get it to work that way (through that specific service on the server app).
    I would really prefer to not have to install yet another piece of software just to do something that should already work (or have the ability to be configured to work). None of us here are "Power Users", and we are all learning as we go and I got the role of the "IT Guy" when it comes to setting everything up. I'm dealing with lifetime Windows users here, and I feel there is a need to keep everything literally as easy as possible to use.
    I will keep your solution in the front of my mind though, as it sounds like that may work. I am not exactly 100% familiar with the OSX Server or the Mac OS yet, and I'm not positive where you would add the users (I couldn't just sit down and do what you suggested; I unfortunately would need a step-by-step solution so I don't mess up our entire system).
    If you feel like giving a step by step, that would be awesome just for future reference or for anyone else having similar problems. If not, I don't blame you at all.
    Thanks for your reply, and if I find a clear-cut solution I will post it here!
    Thanks,
    J_Semp

  • ASM ORA-27041 'Global Resource Directory partially frozen for dirty detach'

    Hi there,
    I am running a 10gR2 RAC on two nodes with ASM. About once every three days, the following happens to the ASM instance of one of the nodes, but it's not predictable which node will be next. In other words: both ASM instances have failed so far, but never both at the same time. The least I can say is that the cluster always kept working. But what the f... happens here? Please have a look at the following alert log extract from the ASM system.
    ---------------------------------------snip---------------------------------------------
    <normal operational output until here>
    Tue Jan 17 05:33:28 2006
    WARNING: cache failed to read fn=2 blk=0 from disk(s): 1 0
    ORA-27041: unable to open file
    NOTE: cache initiating offline of disk 1 group 2
    NOTE: cache initiating offline of disk 0 group 2
    WARNING: offlining disk 1.4051955519 (AWISRACVOL_0001) with mask 0x3
    WARNING: offlining disk 0.4051955520 (AWISRACVOL_0000) with mask 0x3
    NOTE: PST update: grp = 2, dsk = 1, mode = 0x6
    NOTE: PST update: grp = 2, dsk = 0, mode = 0x6
    Tue Jan 17 05:33:28 2006
    ERROR: too many offline disks in PST (grp 2)
    Tue Jan 17 05:33:28 2006
    NOTE: PST not enabling heartbeating (grp 2): group dismounted
    Tue Jan 17 05:33:28 2006
    NOTE: halting all I/Os to diskgroup AWISRACVOL
    NOTE: active pin found: 0x0x1019bcc38
    NOTE: active pin found: 0x0x1019bcce8
    Tue Jan 17 05:33:33 2006
    ERROR: PST-initiated MANDATORY DISMOUNT of group AWISRACVOL
    NOTE: cache dismounting group 2/0x9D831FCE (AWISRACVOL)
    NOTE: dbwr not being msg'd to dismount
    Tue Jan 17 05:33:33 2006
    kjbdomdet send to node 0
    detach from dom 2, sending detach message to node 0
    Tue Jan 17 05:33:33 2006
    Dirty detach reconfiguration started (old inc 2, new inc 2)
    List of nodes:
    0 1
    Global Resource Directory partially frozen for dirty detach
    * dirty detach - domain 2 invalid = TRUE
    553 GCS resources traversed, 0 cancelled
    1067 GCS resources on freelist, 6165 on array, 6165 allocated
    Dirty Detach Reconfiguration complete
    Tue Jan 17 05:33:34 2006
    WARNING: dirty detached from domain 2
    Tue Jan 17 05:33:34 2006
    SUCCESS: diskgroup AWISRACVOL was dismounted
    Tue Jan 17 05:35:25 2006
    WARNING: cache failed to read fn=1 blk=2 from disk(s): 0 1
    ORA-27041: unable to open file
    NOTE: cache initiating offline of disk 0 group 1
    NOTE: cache initiating offline of disk 1 group 1
    WARNING: offlining disk 0.4051955518 (AWISAUXVOL_0000) with mask 0x3
    WARNING: offlining disk 1.4051955517 (AWISAUXVOL_0001) with mask 0x3
    NOTE: PST update: grp = 1, dsk = 0, mode = 0x6
    NOTE: PST update: grp = 1, dsk = 1, mode = 0x6
    Tue Jan 17 05:35:25 2006
    ERROR: too many offline disks in PST (grp 1)
    Tue Jan 17 05:35:25 2006
    NOTE: PST not enabling heartbeating (grp 1): group dismounted
    Tue Jan 17 05:35:25 2006
    NOTE: halting all I/Os to diskgroup AWISAUXVOL
    NOTE: active pin found: 0x0x1019bcc38
    NOTE: active pin found: 0x0x1019bcce8
    NOTE: active pin found: 0x0x1019bcd98
    Tue Jan 17 05:35:25 2006
    ERROR: PST-initiated MANDATORY DISMOUNT of group AWISAUXVOL
    NOTE: cache dismounting group 1/0x9D731FCD (AWISAUXVOL)
    Tue Jan 17 05:35:26 2006
    kjbdomdet send to node 0
    detach from dom 1, sending detach message to node 0
    Tue Jan 17 05:35:26 2006
    Dirty detach reconfiguration started (old inc 2, new inc 2)
    List of nodes:
    0 1
    Global Resource Directory partially frozen for dirty detach
    * dirty detach - domain 1 invalid = TRUE
    5052 GCS resources traversed, 0 cancelled
    Dirty Detach Reconfiguration complete
    Tue Jan 17 05:35:27 2006
    WARNING: dirty detached from domain 1
    Tue Jan 17 05:35:27 2006
    SUCCESS: diskgroup AWISAUXVOL was dismounted
    <log ends here>
    ---------------------------------------snap--------------------------------------------
    Needless to say, the database based on this ASM instance terminates with complaints about unreadable control files and so on.
    It was no problem to restart the ASM and DB instances after this happened, so no persistent damage was done, but I find it highly alarming. Does anybody have an idea, or can anyone explain what Oracle is doing here?
    Thanks in advance,
    Martin Klier

    Hi,
    it seems we have found the solution to this problem; it has been a whole bunch of trouble:
    - First of all, the interconnect should never be a crossover cable. Oracle says so in the official RAC FAQ, and they give this as the reason: "b) Instability. We have seen different problems e.g. ORA-29740 at configurations using crossover cable, and other errors." I knew this, but the errors only appeared weeks after the switchover to crossover cabling.
    - The ASM-used devices / raw devices MUST be owned by oracle:oinstall (or :dba depending on the setup); if they are owned by root:disk (with oracle as a member of disk), the ASM instance produces exactly the error above.
    - Having fixed this, I still got ORA-27041 errors, but without "Global Resource Directory partially frozen". The system worked fine, but on every start of ASM the error was logged. Solution: provide an ASM_DISKSTRING parameter, in order to prevent ASM from trying to use raw devices other than its own (for example, those used by CRS).
    The stuff took away three weeks of my life and a bunch of hair :)
    Martin

  • Search Contacts & Directory fails

    End users are using OWA on Exchange 2013 CU5. It doesn't matter which browser they use, or any other client-side factor. I can give myself delegate access to various mailboxes and come up with the same error.
    They compose a new message, type the name in the To... field, and the application pops up the drop-down with the magnifying glass that reads "Search Contacts & Directory". When you click it, it states in the same box "Can't connect. Please try again later."
    Troubleshooting steps, thus far:
    Recycle OWA Application Pools.
    Recycle ECP Application Pools.
    Recycle OAB Application Pools (yes, I know, this is for Outlook only).
    Restart all CAS servers.
    Nothing works. Has anyone else seen this and found a fix for it? It has to be something simple that I am missing.

    Hi,
    From your description, I would like to verify whether the email address you are searching for is listed in the Default Global Address List.
    Open EAC -> organization -> address lists -> Default Global Address List
    Best regards,
    Amy Wang
    TechNet Community Support

  • I have 2 imac computers and here are my questions: first, how to I transfer the information from my contact directory from my old imac into my new imac and once the information is transfered how can I print it? Second: I have a large music collection in m

    I have 2 iMac computers and here are my questions: first, how do I transfer the information in my contact directory from my old iMac to my new iMac?  Once the information is transferred, how can I print it? Second: I have a large music collection on my old iMac; how do I transfer this to my new computer? Also, how can I share this information with other computers at home?

    I think you may find helpful information here:
    A Basic Guide for Migrating to Intel-Macs
    The Knowledgebase article Intel-based Mac: Some migrated applications may need to be updated refers to methods of dealing with migrating from PowerPC chips to Intel with the Migration Assistant safely. The authors of this tip have not had a chance to verify this works in all instances, or that it avoids the 10.6.1 and earlier Guest Account bug that caused account information to get deleted upon use of the Migration/Setup Assistant. However, a well backed up source that includes at least two backups of all the data that are not connected to your machine will help you avoid potential issues, should they arise. In the event it does not work, follow the steps below.
    If you are migrating a PowerPC system (G3, G4, or G5) to an Intel-Mac be careful what you migrate.  Keep in mind that some items that may get transferred will not work on Intel machines and may end up causing your computer's operating system to malfunction.
    Rosetta supports "software that runs on the PowerPC G3, G4, or G5 processor that are built for Mac OS X". This excludes the items that are not universal binaries or simply will not work in Rosetta:
    Classic Environment, and subsequently any Mac OS 9 or earlier applications
    Screensavers written for the PowerPC
    System Preference add-ons
    All Unsanity Haxies
    Browser and other plug-ins
    Contextual Menu Items
    Applications which specifically require the PowerPC G5
    Kernel extensions
    Java applications with JNI (PowerPC) libraries
    See also What Can Be Translated by Rosetta.
    In addition to the above you could also have problems with migrated cache files and/or cache files containing code that is incompatible.
    If you migrate a user folder that contains any of these items, you may find that your Intel-Mac is malfunctioning. It would be wise to take care when migrating your systems from a PowerPC platform to an Intel-Mac platform to assure that you do not migrate these incompatible items.
    If you have problems with applications not working, then completely uninstall said application and reinstall it from scratch. Take great care with Java applications and Java-based Peer-to-Peer applications. Many Java apps will not work on Intel-Macs as they are currently compiled. As of this time Limewire, Cabos, and Acquisition are available as universal binaries. Do not install browser plug-ins such as Flash or Shockwave from downloaded installers unless they are universal binaries. The version of OS X installed on your Intel-Mac comes with special compatible versions of Flash and Shockwave plug-ins for use with your browser.
    The same problem will exist for any hardware drivers such as mouse software unless the drivers have been compiled as universal binaries. For third-party mice the current choices are USB Overdrive or SteerMouse. Contact the developer or manufacturer of your third-party mouse software to find out when a universal binary version will be available.
    Also be careful with some backup utilities and third-party disk repair utilities. Disk Warrior, TechTool Pro , SuperDuper , and Drive Genius  work properly on Intel-Macs with Leopard.  The same caution may apply to the many "maintenance" utilities that have not yet been converted to universal binaries.  Leopard Cache Cleaner, Onyx, TinkerTool System, and Cocktail are now compatible with Leopard.
    Before migrating or installing software on your Intel-Mac check MacFixit's Rosetta Compatibility Index.
    Additional links that will be helpful to new Intel-Mac users:
    Intel In Macs
    Apple Guide to Universal Applications
    MacInTouch List of Compatible Universal Binaries
    MacInTouch List of Rosetta Compatible Applications
    MacUpdate List of Intel-Compatible Software
    Transferring data with Setup Assistant - Migration Assistant FAQ
    Because Migration Assistant isn't the ideal way to migrate from PowerPC to Intel Macs, using Target Disk Mode, copying the critical contents to CD and DVD, an external hard drive, or networking will work better when moving from PowerPC to Intel Macs.  The initial section below discusses Target Disk Mode.  It is then followed by a section which discusses networking with Macs that lack Firewire.
    If both computers support the use of Firewire then you can use the following instructions:
    1. Repair the hard drive and permissions using Disk Utility.
    2. Backup your data.  This is vitally important in case you make a mistake or there's some other problem.
    3. Connect a Firewire cable between your old Mac and your new Intel Mac.
    4. Start up your old Mac in Target Disk Mode (see Transferring files between two computers using FireWire).
    5. Start up your new Mac for the first time, go through the setup and registration screens, but do NOT migrate data over. Get to your desktop on the new Mac without migrating any data over.
    If you are not able to use a Firewire connection (for example, you have a Late 2008 MacBook that only supports USB):
    1. Set up a local home network: Creating a small Ethernet Network.
    2. If you have a MacBook Air or Late 2008 MacBook see the following:
    MacBook (13-inch, Aluminum, Late 2008) and MacBook Pro (15-inch, Late 2008)- What to do if migration is unsuccessful;
    MacBook Air- Migration Tips and Tricks;
    MacBook Air- Remote Disc, Migration, or Remote Install Mac OS X and wireless 802.11n networks.
    Copy the following items from your old Mac to the new Mac:
    In your /Home/ folder: Documents, Movies, Music, Pictures, and Sites folders.
    In your /Home/Library/ folder:
    /Home/Library/Application Support/AddressBook (copy the whole folder)
    /Home/Library/Application Support/iCal (copy the whole folder)
    Also in /Home/Library/Application Support (copy whatever else you need including folders for any third-party applications)
    /Home/Library/Keychains (copy the whole folder)
    /Home/Library/Mail (copy the whole folder)
    /Home/Library/Preferences/ (copy the whole folder)
    /Home/Library/Calendars (copy the whole folder)
    /Home/Library/iTunes (copy the whole folder)
    /Home/Library/Safari (copy the whole folder)
    If you want cookies:
    /Home/Library/Cookies/Cookies.plist
    /Home/Library/Application Support/WebFoundation/HTTPCookies.plist
    For Entourage users:
    Entourage is in /Home/Documents/Microsoft User Data. Also in /Home/Library/Preferences/Microsoft.
    Credit goes to Macjack for this information.
    If you need to transfer data for other applications please ask the vendor or ask in the  Discussions where specific applications store their data.
    5. Once you have transferred what you need, restart the new Mac and test to make sure the contents are there for each of the applications.
    Written by Kappy with additional contributions from a brody. Revised 5/21/2011

  • I ordered the IX500 Scan Snap through Amazon and received the incorrect XI Standard disc in Windows format instead of for my Mac.  I contacted Amazon and they suggested I contact Scan Snap who suggests I contact Adobe for the correct disc.  Please help.

    I ordered the IX500 Scan Snap through Amazon and received the incorrect Adobe Acrobat XI Standard disc in Windows format instead of for my Mac.  I contacted Amazon and they suggested I contact Scan Snap who suggests I contact Adobe for the correct disc.  Please help.

    Fujitsu delivers only Adobe Acrobat for Windows, not Mac.
    http://www.fujitsu.com/global/services/computing/peripheral/scanners/product/ix500/

  • How to work with the check box in Suppliers - Contact Directory

    EBS R 12.
    I am in need of finding a solution for the following:
    Payables Manager -> Suppliers -> Inquiry -> Select a supplier -> Go to "Contact Directory" -> Select a contact
    Under User Account section there is a check box.
    When I check the check box, the following action should happen:
    1. The "Username" field must contain the value of the "Email".
    2. Under "Responsibilities" "Sourcing Supplier" must be checked.
    I don't know where to start in solving this problem.
    All help will be appreciated. Thanks.

    Hi
    SPRO - SAP IMG- Material management - Inventory management and physical inventory - Goods receipt - create purchase order automatically - activate auto Po creation for movement type.
    Then activate the auto PO creation in Vendor master - Purchasing view
    Check it out.
    Regards,
    raman

  • Creation of groups in contacts directory

    Hello,
    Operating system 10 doesn't allow you to create groups in the contacts directory, which is a real pity.
    It would be very interesting to be able to create groups (family, work, friends, unknown) and to associate a specific ringtone with every group.
    This feature should be added in the next software update release.
    Thank you for your attention.
    Best regards

    Such Groups would be a very welcome and awaited addition, along with notifications of those Groups!
    Same for SMS.

  • Contact detail for support in South Africa

    I am looking for contact details for support in South Africa. I need assistance with a "maximum activations exceeded" error; since the computer died, I do not have access to the old one to deactivate the licence. The website is not much help either, seeing as it is an endless loop....

    Try serial number and activation chat support (non-CC)
    http://helpx.adobe.com/x-productkb/global/service1.html ( http://adobe.ly/1aYjbSC )

  • Public Folders \ Global Contacts causes appcrash in Outlook 2010

    Placing this under Exchange 2013 because the error occurs regardless of what PC I test from.
    I have a client we did a major overhaul for.  New 2012 server running Exchange 2013, migrated from 2010.  They have several global contacts set up in categories.  There are a handful of these that cause Outlook 2010 to crash with an appcrash when attempting to send an email from certain categories.  After selecting the category, I select new email from the menu. The crash is immediate.  The PC OS is Win 7 Pro, both 32 and 64 bit.
    From Event Viewer:
    Fault bucket , type 0
    Event Name: APPCRASH
    Response: Not available
    Cab Id: 0
    Problem signature:
    P1: OUTLOOK.EXE
    P2: 14.0.7105.5000
    P3: 51e84e55
    P4: StackHash_2179
    P5: 6.1.7601.18229
    P6: 51fb1072
    P7: c0000374
    P8: 000ce753
    P9:
    P10:
    Fault bucket 3820818482, type 5
    Event Name: FaultTolerantHeap
    Response: Not available
    Cab Id: 0
    Problem signature:
    P1: OUTLOOK.EXE
    P2: 14.0.7105.5000
    P3: 51E84E55
    P4: ffffbaad
    P5:
    P6:
    P7:
    P8:
    P9:
    P10:
    Faulting application name: OUTLOOK.EXE, version: 14.0.7105.5000, time stamp: 0x51e84e55
    Faulting module name: ntdll.dll, version: 6.1.7601.18229, time stamp: 0x51fb1072
    Exception code: 0xc0000374
    Fault offset: 0x000ce753
    Faulting process id: 0xaa4
    Faulting application start time: 0x01cebeda86fa4dfd
    Faulting application path: C:\Program Files (x86)\Microsoft Office\Office14\OUTLOOK.EXE
    Faulting module path: C:\Windows\SysWOW64\ntdll.dll
    I've tried removing the category and recreating.  Created a new category called Test with only my email address and it still crashes.  However, there are several existing ones that do not cause a crash. 
    I am truly stumped.
    Thanks in advance for any assistance.

    Hi,
    I tested the issue on Windows Server 2008 R2 and found that the frequent contacts appeared on the To-Do bar if I opened Outlook 2010 first and then opened the Lync 2013 client.
    So please make sure Outlook 2010 and the Lync 2013 client are updated to the latest versions and then test again.
    Best Regards,
    Eason Huang
    TechNet Community Support

  • Global-Cache-Manager for Multi-Environment Applications

    Hi,
    Within our server implementation we provide a "multi-project" environment. Each project is fully isolated from the rest of the server, e.g. in terms of file-system usage, backups and other resources. As one might expect, the way to go is using a single VM with multiple BDB environments.
    Obviously each JE environment uses its own cache. Within our environment, with a dynamic number of active projects, this causes a problem because the optimal cache configuration within a given memory frame depends on the JE environments in use, BUT there is no way to define a global JE cache for ALL JE environments.
    Our "plan of attack" is to implement a Global-Cache-Manager that dynamically configures the cache sizes of all active BDB environments depending on the given global cache size.
    As Federico proposed, the starting point for determining the optimal cache setting at load time will be a modification to the DbCacheSize utility so that the return value can be picked up easily, rather than printed to stdout. After that, EnvironmentMutableConfig.setCacheSize will be used to set the cache size. If there is enough cache RAM available we could even set a larger cache, but I do not know if that really makes sense.
    If cache memory is getting tight, loading another BDB environment means decreasing the cache sizes of the already loaded environments. This is also done via EnvironmentMutableConfig.setCacheSize. Are there any timing conditions one should obey before assuming the memory is really available? To determine whether there are any BDB environments that do not use their cache, one could query each cache's utilization using EnvironmentStats.getCacheDataBytes() and getCacheTotalBytes().
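    To make that plan a little more concrete, here is a minimal sketch of such a Global-Cache-Manager. It assumes one fixed global budget and a naive even split across the open environments -- a real implementation would weight the shares with the DbCacheSize estimates and the observed utilization mentioned above:

    // Minimal sketch of the proposed Global-Cache-Manager (assumptions:
    // one fixed global budget, naive even-split policy).
    import java.util.ArrayList;
    import java.util.List;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentMutableConfig;
    import com.sleepycat.je.EnvironmentStats;

    public class GlobalCacheManager {

        private final long globalCacheBytes;
        private final List<Environment> environments = new ArrayList<Environment>();

        public GlobalCacheManager(long globalCacheBytes) {
            this.globalCacheBytes = globalCacheBytes;
        }

        /** Register a newly opened environment and rebalance all caches. */
        public synchronized void register(Environment env) throws DatabaseException {
            environments.add(env);
            rebalance();
        }

        /** Unregister an environment that is about to be closed. */
        public synchronized void unregister(Environment env) throws DatabaseException {
            environments.remove(env);
            rebalance();
        }

        /** Naive policy: split the global budget evenly across open environments. */
        private void rebalance() throws DatabaseException {
            if (environments.isEmpty()) {
                return;
            }
            long share = globalCacheBytes / environments.size();
            for (Environment env : environments) {
                EnvironmentMutableConfig config = env.getMutableConfig();
                config.setCacheSize(share);
                env.setMutableConfig(config);
            }
        }

        /** Report how much of its share each environment actually uses. */
        public synchronized void logUtilization() throws DatabaseException {
            for (Environment env : environments) {
                EnvironmentStats stats = env.getStats(null);
                System.out.println(env.getHome() + ": total="
                    + stats.getCacheTotalBytes()
                    + " data=" + stats.getCacheDataBytes());
            }
        }
    }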
    Are there any comments to this plan? Is there perhaps a better solution or even an implementation?
    Do you think a global cache manager is something worth back-donating?
    Related Postings: Multiple envs in one process?
    Stefan Walgenbach

    Here is the updated DbCacheSize.java to allow calling it with an API.
    Charles Lamb
    /*
    * See the file LICENSE for redistribution information.
    * Copyright (c) 2005-2006
    *      Oracle Corporation.  All rights reserved.
    * $Id: DbCacheSize.java,v 1.8 2006/09/12 19:16:59 cwl Exp $
    */
    package com.sleepycat.je.util;
    import java.io.File;
    import java.io.PrintStream;
    import java.math.BigInteger;
    import java.text.NumberFormat;
    import java.util.Random;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.dbi.MemoryBudget;
    import com.sleepycat.je.utilint.CmdUtil;
    /**
    * Estimating JE in-memory sizes as a function of key and data size is not
    * straightforward for two reasons. There is some fixed overhead for each btree
    * internal node, so tree fanout and degree of node sparseness impacts memory
    * consumption. In addition, JE compresses some of the internal nodes where
    * possible, but compression depends on on-disk layouts.
    * DbCacheSize is an aid for estimating cache sizes. To get an estimate of the
    * in-memory footprint for a given database, specify the number of records and
    * record characteristics and DbCacheSize will return a minimum and maximum
    * estimate of the cache size required for holding the database in memory.
    * If the user specifies the record's data size, the utility will return both
    * values for holding just the internal nodes of the btree, and for holding the
    * entire database in cache.
    * Note that "cache size" is a percentage more than "btree size", to cover
    * general environment resources like log buffers. Each invocation of the
    * utility returns an estimate for a single database in an environment.  For an
    * environment with multiple databases, run the utility for each database, add
    * up the btree sizes, and then add 10 percent.
    * Note that the utility does not yet cover duplicate records and the API is
    * subject to change release to release.
    * The only required parameters are the number of records and key size.
    * Data size, non-tree cache overhead, btree fanout, and other parameters
    * can also be provided. For example:
    * $ java DbCacheSize -records 554719 -key 16 -data 100
    * Inputs: records=554719 keySize=16 dataSize=100 nodeMax=128 density=80%
    * overhead=10%
    *    Cache Size      Btree Size  Description
    *    30,547,440      27,492,696  Minimum, internal nodes only
    *    41,460,720      37,314,648  Maximum, internal nodes only
    *   114,371,644     102,934,480  Minimum, internal nodes and leaf nodes
    *   125,284,924     112,756,432  Maximum, internal nodes and leaf nodes
    * Btree levels: 3
    * This says that the minimum cache size to hold only the internal nodes of the
    * btree in cache is approximately 30MB. The maximum size to hold the entire
    * database in cache, both internal nodes and data records, is 125MB.
    */
    public class DbCacheSize {
        private static final NumberFormat INT_FORMAT =
            NumberFormat.getIntegerInstance();
        private static final String HEADER =
            "    Cache Size      Btree Size  Description";
        //   12345678901234  12345678901234
        //                 12
        private static final int COLUMN_WIDTH = 14;
        private static final int COLUMN_SEPARATOR = 2;
        private long records;
        private int keySize;
        private int dataSize;
        private int nodeMax;
        private int density;
        private long overhead;
        private long minInBtreeSize;
        private long maxInBtreeSize;
        private long minInCacheSize;
        private long maxInCacheSize;
        private long maxInBtreeSizeWithData;
        private long maxInCacheSizeWithData;
        private long minInBtreeSizeWithData;
        private long minInCacheSizeWithData;
        private int nLevels = 1;
        public DbCacheSize (long records,
                   int keySize,
                   int dataSize,
                   int nodeMax,
                   int density,
                   long overhead) {
         this.records = records;
         this.keySize = keySize;
         this.dataSize = dataSize;
         this.nodeMax = nodeMax;
         this.density = density;
         this.overhead = overhead;
        public long getMinCacheSizeInternalNodesOnly() {
         return minInCacheSize;
        public long getMaxCacheSizeInternalNodesOnly() {
         return maxInCacheSize;
        public long getMinBtreeSizeInternalNodesOnly() {
         return minInBtreeSize;
        public long getMaxBtreeSizeInternalNodesOnly() {
         return maxInBtreeSize;
        public long getMinCacheSizeWithData() {
         return minInCacheSizeWithData;
        public long getMaxCacheSizeWithData() {
         return maxInCacheSizeWithData;
        public long getMinBtreeSizeWithData() {
         return minInBtreeSizeWithData;
        public long getMaxBtreeSizeWithData() {
         return maxInBtreeSizeWithData;
        public int getNLevels() {
         return nLevels;
        public static void main(String[] args) {
            try {
                long records = 0;
                int keySize = 0;
                int dataSize = 0;
                int nodeMax = 128;
                int density = 80;
                long overhead = 0;
                File measureDir = null;
                boolean measureRandom = false;
                for (int i = 0; i < args.length; i += 1) {
                    String name = args[i];
    String val = null;
    if (i < args.length - 1 && !args[i + 1].startsWith("-")) {
    i += 1;
    val = args[i];
    if (name.equals("-records")) {
    if (val == null) {
    usage("No value after -records");
    try {
    records = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (records <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-key")) {
    if (val == null) {
    usage("No value after -key");
    try {
    keySize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (keySize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-data")) {
    if (val == null) {
    usage("No value after -data");
    try {
    dataSize = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (dataSize <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-nodemax")) {
    if (val == null) {
    usage("No value after -nodemax");
    try {
    nodeMax = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (nodeMax <= 0) {
    usage(val + " is not a positive integer");
    } else if (name.equals("-density")) {
    if (val == null) {
    usage("No value after -density");
    try {
    density = Integer.parseInt(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (density < 1 || density > 100) {
    usage(val + " is not betwen 1 and 100");
    } else if (name.equals("-overhead")) {
    if (val == null) {
    usage("No value after -overhead");
    try {
    overhead = Long.parseLong(val);
    } catch (NumberFormatException e) {
    usage(val + " is not a number");
    if (overhead < 0) {
    usage(val + " is not a non-negative integer");
    } else if (name.equals("-measure")) {
    if (val == null) {
    usage("No value after -measure");
    measureDir = new File(val);
    } else if (name.equals("-measurerandom")) {
    measureRandom = true;
    } else {
    usage("Unknown arg: " + name);
    if (records == 0) {
    usage("-records not specified");
    if (keySize == 0) {
    usage("-key not specified");
         DbCacheSize dbCacheSize = new DbCacheSize
              (records, keySize, dataSize, nodeMax, density, overhead);
         dbCacheSize.caclulateCacheSizes();
         dbCacheSize.printCacheSizes(System.out);
    if (measureDir != null) {
    measure(System.out, measureDir, records, keySize, dataSize,
    nodeMax, measureRandom);
    } catch (Throwable e) {
    e.printStackTrace(System.out);
    private static void usage(String msg) {
    if (msg != null) {
    System.out.println(msg);
    System.out.println
    ("usage:" +
    "\njava " + CmdUtil.getJavaCommand(DbCacheSize.class) +
    "\n -records <count>" +
    "\n # Total records (key/data pairs); required" +
    "\n -key <bytes> " +
    "\n # Average key bytes per record; required" +
    "\n [-data <bytes>]" +
    "\n # Average data bytes per record; if omitted no leaf" +
    "\n # node sizes are included in the output" +
    "\n [-nodemax <entries>]" +
    "\n # Number of entries per Btree node; default: 128" +
    "\n [-density <percentage>]" +
    "\n # Percentage of node entries occupied; default: 80" +
    "\n [-overhead <bytes>]" +
    "\n # Overhead of non-Btree objects (log buffers, locks," +
    "\n # etc); default: 10% of total cache size" +
    "\n [-measure <environmentHomeDirectory>]" +
    "\n # An empty directory used to write a database to find" +
    "\n # the actual cache size; default: do not measure" +
    "\n [-measurerandom" +
    "\n # With -measure insert randomly generated keys;" +
    "\n # default: insert sequential keys");
    System.exit(2);
    private void caclulateCacheSizes() {
    int nodeAvg = (nodeMax * density) / 100;
    long nBinEntries = (records * nodeMax) / nodeAvg;
    long nBinNodes = (nBinEntries + nodeMax - 1) / nodeMax;
    long nInNodes = 0;
         long lnSize = 0;
    for (long n = nBinNodes; n > 0; n /= nodeMax) {
    nInNodes += n;
    nLevels += 1;
    minInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, true);
    maxInBtreeSize = nInNodes *
         calcInSize(nodeMax, nodeAvg, keySize, false);
         minInCacheSize = calculateOverhead(minInBtreeSize, overhead);
         maxInCacheSize = calculateOverhead(maxInBtreeSize, overhead);
    if (dataSize > 0) {
    lnSize = records * calcLnSize(dataSize);
         maxInBtreeSizeWithData = maxInBtreeSize + lnSize;
         maxInCacheSizeWithData = calculateOverhead(maxInBtreeSizeWithData,
                                  overhead);
         minInBtreeSizeWithData = minInBtreeSize + lnSize;
         minInCacheSizeWithData = calculateOverhead(minInBtreeSizeWithData,
                                  overhead);
    private void printCacheSizes(PrintStream out) {
    out.println("Inputs:" +
    " records=" + records +
    " keySize=" + keySize +
    " dataSize=" + dataSize +
    " nodeMax=" + nodeMax +
    " density=" + density + '%' +
    " overhead=" + ((overhead > 0) ? overhead : 10) + "%");
    out.println();
    out.println(HEADER);
    out.println(line(minInBtreeSize, minInCacheSize,
                   "Minimum, internal nodes only"));
    out.println(line(maxInBtreeSize, maxInCacheSize,
                   "Maximum, internal nodes only"));
    if (dataSize > 0) {
    out.println(line(minInBtreeSizeWithData,
                   minInCacheSizeWithData,
                   "Minimum, internal nodes and leaf nodes"));
    out.println(line(maxInBtreeSizeWithData,
                   maxInCacheSizeWithData,
    "Maximum, internal nodes and leaf nodes"));
    } else {
    out.println("\nTo get leaf node sizing specify -data");
    out.println("\nBtree levels: " + nLevels);
    private int calcInSize(int nodeMax,
                   int nodeAvg,
                   int keySize,
                   boolean lsnCompression) {
    /* Fixed overhead */
    int size = MemoryBudget.IN_FIXED_OVERHEAD;
    /* Byte state array plus keys and nodes arrays */
    size += MemoryBudget.byteArraySize(nodeMax) +
    (nodeMax * (2 * MemoryBudget.ARRAY_ITEM_OVERHEAD));
    /* LSN array */
         if (lsnCompression) {
         size += MemoryBudget.byteArraySize(nodeMax * 2);
         } else {
         size += MemoryBudget.BYTE_ARRAY_OVERHEAD +
    (nodeMax * MemoryBudget.LONG_OVERHEAD);
    /* Keys for populated entries plus the identifier key */
    size += (nodeAvg + 1) * MemoryBudget.byteArraySize(keySize);
    return size;
    private int calcLnSize(int dataSize) {
    return MemoryBudget.LN_OVERHEAD +
    MemoryBudget.byteArraySize(dataSize);
    private long calculateOverhead(long btreeSize, long overhead) {
    long cacheSize;
    if (overhead == 0) {
    cacheSize = (100 * btreeSize) / 90;
    } else {
    cacheSize = btreeSize + overhead;
         return cacheSize;
    private String line(long btreeSize,
                   long cacheSize,
                   String comment) {
    StringBuffer buf = new StringBuffer(100);
    column(buf, INT_FORMAT.format(cacheSize));
    column(buf, INT_FORMAT.format(btreeSize));
    column(buf, comment);
    return buf.toString();
    private void column(StringBuffer buf, String str) {
    int start = buf.length();
    while (buf.length() - start + str.length() < COLUMN_WIDTH) {
    buf.append(' ');
    buf.append(str);
    for (int i = 0; i < COLUMN_SEPARATOR; i += 1) {
    buf.append(' ');
    private static void measure(PrintStream out,
    File dir,
    long records,
    int keySize,
    int dataSize,
    int nodeMax,
    boolean randomKeys)
    throws DatabaseException {
    String[] fileNames = dir.list();
    if (fileNames != null && fileNames.length > 0) {
    usage("Directory is not empty: " + dir);
    Environment env = openEnvironment(dir, true);
    Database db = openDatabase(env, nodeMax, true);
    try {
    out.println("\nMeasuring with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    insertRecords(out, env, db, records, keySize, dataSize, randomKeys);
    printStats(out, env,
    "Stats for internal and leaf nodes (after insert)");
    db.close();
    env.close();
    env = openEnvironment(dir, false);
    db = openDatabase(env, nodeMax, false);
    out.println("\nPreloading with cache size: " +
    INT_FORMAT.format(env.getConfig().getCacheSize()));
    preloadRecords(out, db);
    printStats(out, env,
    "Stats for internal nodes only (after preload)");
    } finally {
    try {
    db.close();
    env.close();
    } catch (Exception e) {
    out.println("During close: " + e);
    private static Environment openEnvironment(File dir, boolean allowCreate)
    throws DatabaseException {
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(allowCreate);
    envConfig.setCachePercent(90);
    return new Environment(dir, envConfig);
    private static Database openDatabase(Environment env, int nodeMax,
    boolean allowCreate)
    throws DatabaseException {
    DatabaseConfig dbConfig = new DatabaseConfig();
    dbConfig.setAllowCreate(allowCreate);
    dbConfig.setNodeMaxEntries(nodeMax);
    return env.openDatabase(null, "foo", dbConfig);
    private static void insertRecords(PrintStream out,
    Environment env,
    Database db,
    long records,
    int keySize,
    int dataSize,
    boolean randomKeys)
    throws DatabaseException {
    DatabaseEntry key = new DatabaseEntry();
    DatabaseEntry data = new DatabaseEntry(new byte[dataSize]);
    BigInteger bigInt = BigInteger.ZERO;
    Random rnd = new Random(123);
    for (int i = 0; i < records; i += 1) {
    if (randomKeys) {
    byte[] a = new byte[keySize];
    rnd.nextBytes(a);
    key.setData(a);
    } else {
    bigInt = bigInt.add(BigInteger.ONE);
    byte[] a = bigInt.toByteArray();
    if (a.length < keySize) {
    byte[] a2 = new byte[keySize];
    System.arraycopy(a, 0, a2, a2.length - a.length, a.length);
    a = a2;
    } else if (a.length > keySize) {
    out.println("*** Key doesn't fit value=" + bigInt +
    " byte length=" + a.length);
    return;
    key.setData(a);
    OperationStatus status = db.putNoOverwrite(null, key, data);
    if (status == OperationStatus.KEYEXIST && randomKeys) {
    i -= 1;
    out.println("Random key already exists -- retrying");
    continue;
    if (status != OperationStatus.SUCCESS) {
    out.println("*** " + status);
    return;
    if (i % 10000 == 0) {
    EnvironmentStats stats = env.getStats(null);
    if (stats.getNNodesScanned() > 0) {
    out.println("*** Ran out of cache memory at record " + i +
    " -- try increasing the Java heap size ***");
    return;
    out.print(".");
    out.flush();
    private static void preloadRecords(final PrintStream out,
    final Database db)
    throws DatabaseException {
    Thread thread = new Thread() {
    public void run() {
    while (true) {
    try {
    out.print(".");
    out.flush();
    Thread.sleep(5 * 1000);
    } catch (InterruptedException e) {
    break;
    thread.start();
    db.preload(0);
    thread.interrupt();
    try {
    thread.join();
    } catch (InterruptedException e) {
    e.printStackTrace(out);
    private static void printStats(PrintStream out,
    Environment env,
    String msg)
    throws DatabaseException {
    out.println();
    out.println(msg + ':');
    EnvironmentStats stats = env.getStats(null);
    out.println("CacheSize=" +
    INT_FORMAT.format(stats.getCacheTotalBytes()) +
    " BtreeSize=" +
    INT_FORMAT.format(stats.getCacheDataBytes()));
    if (stats.getNNodesScanned() > 0) {
    out.println("*** All records did not fit in the cache ***");

  • Global contact list?

    Hello All,
    Is there a way to make a global contact list to be used by all users in the collaboration launch pad iView?
    I have not been able to do so, and I also tried to make one user's contacts available to all, without any success.
    What I want to use it for is an address book for all employees.
    Does anybody know how to do this or if it's even possible?
    Regards,
    Lennart

    Hi Lennart,
    actually the Who's Who iView allows you to find users with a search request and display more information, such as the user ID and telephone number, from the UME. In the list you can click on the name to get to the business card of the user (like in the collaboration launch pad).
    The big disadvantage of this iView is that you have to start with a search and cannot display just a global list of (selected) users.
    Another alternative that seems very suitable for your scenario is the Collaboration Room Part "SAP Contacts". Normally this requires the use of Collaboration Rooms; it is designed for centrally saving a list of selected users that can be searched from the UME store (like in the collaboration launch pad), and it also shows the link to the business card.
    If you would like to use the advantages of the "SAP Contacts" Room Part outside of Collaboration Rooms, and maybe also integrate non-UME contacts as well as UME contacts with more KM properties than just the link to the business card, then my SDN article might help you a lot in achieving this:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/events/sdn-meets-labs-walldorf-05/creating contact lists juggling with ume attributes and km properties.pdf
    If you have further questions, I'd be glad to help you in implementing this scenario.
    Hope this helps,
    Robert

  • Trying to create a contact sheet for passport photos....HELP!

    Aperture 3 and iMac are both new to me and after a lot of research, I finally figured out how to make a contact sheet for my passport photo so that I don't have to pay 10.00 for a measly 2x2 photo at Walgreen's. I went to file>print>selected the 8x10 borderless size and adjusted the picture so that I would be able to print 20 2x2 photos on one sheet. PERFECT....but....I don't really want to print it myself ( I don't have any photo paper right now )...I want to SAVE the image I created and upload it to Walgreen's so that they can print it. Then, I will cut out the 2x2 photo that I need. The only problem right now is that I cannot figure out how to SAVE the image to my computer, either in iPhoto or Aperture 3. If anyone knows, can you please help me out with this. OR if there is another way to create the image in another area of Aperture 3 I would like to know how! 
    Thanks for any help you can provide

    Once in Print, after you have the layout you want, select the Print button. This will bring up the system print window (it does not actually print the image yet).
    At the bottom of the new window is a PDF button; select that. From that pulldown you can elect to save the print image as a PDF, or better yet for your needs, you can save it as a JPG (or TIFF).
    Once you have it saved as a JPG you can upload it to Walgreen's.

  • [RESOLVED] Unable to move dev directory for Apache, keep getting 403

    Hello,
    I'd like to move my default /srv/http directory for my PHP pages to a different location (somewhere in my home directory) for convenience.  This is exclusively for local development.  However, after making the same change in two places, I restarted Apache and keep getting a 403 error.
    Here is my httpd.conf:
    # This is the main Apache HTTP server configuration file. It contains the
    # configuration directives that give the server its instructions.
    # See <URL:http://httpd.apache.org/docs/2.2> for detailed information.
    # In particular, see
    # <URL:http://httpd.apache.org/docs/2.2/mod/directives.html>
    # for a discussion of each configuration directive.
    # Do NOT simply read the instructions in here without understanding
    # what they do. They're here only as hints or reminders. If you are unsure
    # consult the online docs. You have been warned.
    # Configuration and logfile names: If the filenames you specify for many
    # of the server's control files begin with "/" (or "drive:/" for Win32), the
    # server will use that explicit path. If the filenames do *not* begin
    # with "/", the value of ServerRoot is prepended -- so 'log/access_log'
    # with ServerRoot set to '/www' will be interpreted by the
    # server as '/www/log/access_log', where as '/log/access_log' will be
    # interpreted as '/log/access_log'.
    # ServerRoot: The top of the directory tree under which the server's
    # configuration, error, and log files are kept.
    # Do not add a slash at the end of the directory path. If you point
    # ServerRoot at a non-local disk, be sure to point the LockFile directive
    # at a local disk. If you wish to share the same ServerRoot for multiple
    # httpd daemons, you will need to change at least LockFile and PidFile.
    ServerRoot "/etc/httpd"
    # Listen: Allows you to bind Apache to specific IP addresses and/or
    # ports, instead of the default. See also the <VirtualHost>
    # directive.
    # Change this to Listen on specific IP addresses as shown below to
    # prevent Apache from glomming onto all bound IP addresses.
    #Listen 12.34.56.78:80
    Listen 127.0.0.1:80
    # Dynamic Shared Object (DSO) Support
    # To be able to use the functionality of a module which was built as a DSO you
    # have to place corresponding `LoadModule' lines at this location so the
    # directives contained in it are actually available _before_ they are used.
    # Statically compiled modules (those listed by `httpd -l') do not need
    # to be loaded here.
    # Example:
    # LoadModule foo_module modules/mod_foo.so
    LoadModule authn_file_module modules/mod_authn_file.so
    LoadModule authn_dbm_module modules/mod_authn_dbm.so
    LoadModule authn_anon_module modules/mod_authn_anon.so
    LoadModule authn_dbd_module modules/mod_authn_dbd.so
    LoadModule authn_default_module modules/mod_authn_default.so
    LoadModule authz_host_module modules/mod_authz_host.so
    LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
    LoadModule authz_user_module modules/mod_authz_user.so
    LoadModule authz_dbm_module modules/mod_authz_dbm.so
    LoadModule authz_owner_module modules/mod_authz_owner.so
    LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
    LoadModule authz_default_module modules/mod_authz_default.so
    LoadModule auth_basic_module modules/mod_auth_basic.so
    LoadModule auth_digest_module modules/mod_auth_digest.so
    LoadModule file_cache_module modules/mod_file_cache.so
    LoadModule cache_module modules/mod_cache.so
    LoadModule disk_cache_module modules/mod_disk_cache.so
    LoadModule mem_cache_module modules/mod_mem_cache.so
    LoadModule dbd_module modules/mod_dbd.so
    LoadModule dumpio_module modules/mod_dumpio.so
    LoadModule reqtimeout_module modules/mod_reqtimeout.so
    LoadModule ext_filter_module modules/mod_ext_filter.so
    LoadModule include_module modules/mod_include.so
    LoadModule filter_module modules/mod_filter.so
    LoadModule substitute_module modules/mod_substitute.so
    LoadModule deflate_module modules/mod_deflate.so
    LoadModule ldap_module modules/mod_ldap.so
    LoadModule log_config_module modules/mod_log_config.so
    LoadModule log_forensic_module modules/mod_log_forensic.so
    LoadModule logio_module modules/mod_logio.so
    LoadModule env_module modules/mod_env.so
    LoadModule mime_magic_module modules/mod_mime_magic.so
    LoadModule cern_meta_module modules/mod_cern_meta.so
    LoadModule expires_module modules/mod_expires.so
    LoadModule headers_module modules/mod_headers.so
    LoadModule ident_module modules/mod_ident.so
    LoadModule usertrack_module modules/mod_usertrack.so
    LoadModule unique_id_module modules/mod_unique_id.so
    LoadModule setenvif_module modules/mod_setenvif.so
    LoadModule version_module modules/mod_version.so
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_connect_module modules/mod_proxy_connect.so
    LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_scgi_module modules/mod_proxy_scgi.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule ssl_module modules/mod_ssl.so
    LoadModule mime_module modules/mod_mime.so
    LoadModule dav_module modules/mod_dav.so
    LoadModule status_module modules/mod_status.so
    LoadModule autoindex_module modules/mod_autoindex.so
    LoadModule asis_module modules/mod_asis.so
    LoadModule info_module modules/mod_info.so
    LoadModule suexec_module modules/mod_suexec.so
    LoadModule cgi_module modules/mod_cgi.so
    LoadModule cgid_module modules/mod_cgid.so
    LoadModule dav_fs_module modules/mod_dav_fs.so
    LoadModule vhost_alias_module modules/mod_vhost_alias.so
    LoadModule negotiation_module modules/mod_negotiation.so
    LoadModule dir_module modules/mod_dir.so
    # enable PHP.
    LoadModule php5_module modules/libphp5.so
    LoadModule imagemap_module modules/mod_imagemap.so
    LoadModule actions_module modules/mod_actions.so
    LoadModule speling_module modules/mod_speling.so
    LoadModule userdir_module modules/mod_userdir.so
    LoadModule alias_module modules/mod_alias.so
    LoadModule rewrite_module modules/mod_rewrite.so
    <IfModule !mpm_netware_module>
    <IfModule !mpm_winnt_module>
    # If you wish httpd to run as a different user or group, you must run
    # httpd as root initially and it will switch.
    # User/Group: The name (or #number) of the user/group to run httpd as.
    # It is usually good practice to create a dedicated user and group for
    # running httpd, as with most system services.
    User http
    Group http
    </IfModule>
    </IfModule>
    # 'Main' server configuration
    # The directives in this section set up the values used by the 'main'
    # server, which responds to any requests that aren't handled by a
    # <VirtualHost> definition. These values also provide defaults for
    # any <VirtualHost> containers you may define later in the file.
    # All of these directives may appear inside <VirtualHost> containers,
    # in which case these default settings will be overridden for the
    # virtual host being defined.
    # ServerAdmin: Your address, where problems with the server should be
    # e-mailed. This address appears on some server-generated pages, such
    # as error documents. e.g. [email protected]
    ServerAdmin [email protected]
    # ServerName gives the name and port that the server uses to identify itself.
    # This can often be determined automatically, but we recommend you specify
    # it explicitly to prevent problems during startup.
    # If your host doesn't have a registered DNS name, enter its IP address here.
    #ServerName www.example.com:80
    # DocumentRoot: The directory out of which you will serve your
    # documents. By default, all requests are taken from this directory, but
    # symbolic links and aliases may be used to point to other locations.
    DocumentRoot "/home/local_user/Documents/development/php"
    # Each directory to which Apache has access can be configured with respect
    # to which services and features are allowed and/or disabled in that
    # directory (and its subdirectories).
    # First, we configure the "default" to be a very restrictive set of
    # features.
    <Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
    </Directory>
    # Note that from this point forward you must specifically allow
    # particular features to be enabled - so if something's not working as
    # you might expect, make sure that you have specifically enabled it
    # below.
    # This should be changed to whatever you set DocumentRoot to.
    <Directory "/home/local_user/Documents/development/php">
    # Possible values for the Options directive are "None", "All",
    # or any combination of:
    # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
    # Note that "MultiViews" must be named *explicitly* --- "Options All"
    # doesn't give it to you.
    # The Options directive is both complicated and important. Please see
    # http://httpd.apache.org/docs/2.2/mod/core.html#options
    # for more information.
    Options Indexes FollowSymLinks
    # AllowOverride controls what directives may be placed in .htaccess files.
    # It can be "All", "None", or any combination of the keywords:
    # Options FileInfo AuthConfig Limit
    AllowOverride None
    # Controls who can get stuff from this server.
    Order allow,deny
    Allow from all
    </Directory>
    # DirectoryIndex: sets the file that Apache will serve if a directory
    # is requested.
    <IfModule dir_module>
    DirectoryIndex index.html
    </IfModule>
    # The following lines prevent .htaccess and .htpasswd files from being
    # viewed by Web clients.
    <FilesMatch "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy All
    </FilesMatch>
    # ErrorLog: The location of the error log file.
    # If you do not specify an ErrorLog directive within a <VirtualHost>
    # container, error messages relating to that virtual host will be
    # logged here. If you *do* define an error logfile for a <VirtualHost>
    # container, that host's errors will be logged there and not here.
    ErrorLog "/var/log/httpd/error_log"
    # LogLevel: Control the number of messages logged to the error_log.
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    <IfModule log_config_module>
    # The following directives define some format nicknames for use with
    # a CustomLog directive (see below).
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    <IfModule logio_module>
    # You need to enable mod_logio.c to use %I and %O
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>
    # The location and format of the access logfile (Common Logfile Format).
    # If you do not define any access logfiles within a <VirtualHost>
    # container, they will be logged here. Contrariwise, if you *do*
    # define per-<VirtualHost> access logfiles, transactions will be
    # logged therein and *not* in this file.
    CustomLog "/var/log/httpd/access_log" common
    # If you prefer a logfile with access, agent, and referer information
    # (Combined Logfile Format) you can use the following directive.
    #CustomLog "/var/log/httpd/access_log" combined
    </IfModule>
    <IfModule alias_module>
    # Redirect: Allows you to tell clients about documents that used to
    # exist in your server's namespace, but do not anymore. The client
    # will make a new request for the document at its new location.
    # Example:
    # Redirect permanent /foo http://www.example.com/bar
    # Alias: Maps web paths into filesystem paths and is used to
    # access content that does not live under the DocumentRoot.
    # Example:
    # Alias /webpath /full/filesystem/path
    # If you include a trailing / on /webpath then the server will
    # require it to be present in the URL. You will also likely
    # need to provide a <Directory> section to allow access to
    # the filesystem path.
    # ScriptAlias: This controls which directories contain server scripts.
    # ScriptAliases are essentially the same as Aliases, except that
    # documents in the target directory are treated as applications and
    # run by the server when requested rather than as documents sent to the
    # client. The same rules about trailing "/" apply to ScriptAlias
    # directives as to Alias.
    ScriptAlias /cgi-bin/ "/srv/http/cgi-bin/"
    </IfModule>
    <IfModule cgid_module>
    # ScriptSock: On threaded servers, designate the path to the UNIX
    # socket used to communicate with the CGI daemon of mod_cgid.
    #Scriptsock /run/httpd/cgisock
    </IfModule>
    # "/srv/http/cgi-bin" should be changed to whatever your ScriptAliased
    # CGI directory exists, if you have that configured.
    <Directory "/srv/http/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
    </Directory>
    # DefaultType: the default MIME type the server will use for a document
    # if it cannot otherwise determine one, such as from filename extensions.
    # If your server contains mostly text or HTML documents, "text/plain" is
    # a good value. If most of your content is binary, such as applications
    # or images, you may want to use "application/octet-stream" instead to
    # keep browsers from trying to display binary files as though they are
    # text.
    DefaultType text/plain
    <IfModule mime_module>
    # TypesConfig points to the file containing the list of mappings from
    # filename extension to MIME-type.
    TypesConfig conf/mime.types
    # AddType allows you to add to or override the MIME configuration
    # file specified in TypesConfig for specific file types.
    #AddType application/x-gzip .tgz
    # AddEncoding allows you to have certain browsers uncompress
    # information on the fly. Note: Not all browsers support this.
    #AddEncoding x-compress .Z
    #AddEncoding x-gzip .gz .tgz
    # If the AddEncoding directives above are commented-out, then you
    # probably should define those extensions to indicate media types:
    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz
    # AddHandler allows you to map certain file extensions to "handlers":
    # actions unrelated to filetype. These can be either built into the server
    # or added with the Action directive (see below)
    # To use CGI scripts outside of ScriptAliased directories:
    # (You will also need to add "ExecCGI" to the "Options" directive.)
    #AddHandler cgi-script .cgi
    # For type maps (negotiated resources):
    #AddHandler type-map var
    # Filters allow you to process content before it is sent to the client.
    # To parse .shtml files for server-side includes (SSI):
    # (You will also need to add "Includes" to the "Options" directive.)
    #AddType text/html .shtml
    #AddOutputFilter INCLUDES .shtml
    MIMEMagicFile conf/magic
    </IfModule>
    # The mod_mime_magic module allows the server to use various hints from the
    # contents of the file itself to determine its type. The MIMEMagicFile
    # directive tells the module where the hint definitions are located.
    #MIMEMagicFile conf/magic
    # Customizable error responses come in three flavors:
    # 1) plain text 2) local redirects 3) external redirects
    # Some examples:
    #ErrorDocument 500 "The server made a boo boo."
    #ErrorDocument 404 /missing.html
    #ErrorDocument 404 "/cgi-bin/missing_handler.pl"
    #ErrorDocument 402 http://www.example.com/subscription_info.html
    # MaxRanges: Maximum number of Ranges in a request before
    # returning the entire resource, or one of the special
    # values 'default', 'none' or 'unlimited'.
    # Default setting is to accept 200 Ranges.
    #MaxRanges unlimited
    # EnableMMAP and EnableSendfile: On systems that support it,
    # memory-mapping or the sendfile syscall is used to deliver
    # files. This usually improves server performance, but must
    # be turned off when serving from networked-mounted
    # filesystems or if support for these functions is otherwise
    # broken on your system.
    #EnableMMAP off
    #EnableSendfile off
    # Supplemental configuration
    # The configuration files in the conf/extra/ directory can be
    # included to add extra features or to modify the default configuration of
    # the server, or you may simply copy their contents here and change as
    # necessary.
    # Server-pool management (MPM specific)
    #Include conf/extra/httpd-mpm.conf
    # Multi-language error messages
    Include conf/extra/httpd-multilang-errordoc.conf
    # Fancy directory listings
    Include conf/extra/httpd-autoindex.conf
    # Language settings
    Include conf/extra/httpd-languages.conf
    # User home directories
    Include conf/extra/httpd-userdir.conf
    # Real-time info on requests and configuration
    #Include conf/extra/httpd-info.conf
    # Virtual hosts
    #Include conf/extra/httpd-vhosts.conf
    # Local access to the Apache HTTP Server Manual
    #Include conf/extra/httpd-manual.conf
    # Distributed authoring and versioning (WebDAV)
    #Include conf/extra/httpd-dav.conf
    # Various default settings
    Include conf/extra/httpd-default.conf
    # enable the PHP5 module.
    Include conf/extra/php5_module.conf
    # Secure (SSL/TLS) connections
    #Include conf/extra/httpd-ssl.conf
    # Note: The following must be present to support
    # starting without SSL on platforms with no /dev/random equivalent
    # but a statically compiled-in mod_ssl.
    <IfModule ssl_module>
    SSLRandomSeed startup builtin
    SSLRandomSeed connect builtin
    </IfModule>

    WorMzy wrote:
    Switch to the http user and see if you can manually browse to the directory.
    # su -s /bin/bash - http
    Find out where the user can't get to, and fix the permissions.
    You're right.  I can't get there.
    I tried to make Apache start up as my own user, but it refused to start that way. I think I have a possible solution for this: I'll add my user to the http group and then work from /srv/http.
    I'd love to get some feedback if someone has a different approach. I always like to learn from the experience of others.
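    For reference, here is a minimal sketch of both routes (fixing traversal permissions on the existing DocumentRoot, or moving everything to /srv/http), assuming the paths and the local_user/http accounts shown in the config above; the setfacl line also assumes the acl package and filesystem ACL support are available, and all commands are run as root:

    # Route 1: keep DocumentRoot under /home.
    # List the permissions of every path component and spot where the
    # http user loses traverse (x) permission:
    namei -l /home/local_user/Documents/development/php
    # Grant the http user search permission on the blocking directory,
    # e.g. the home directory itself (ACL variant; chmod o+x also works):
    setfacl -m u:http:x /home/local_user
    # Verify access as the http user:
    sudo -u http ls /home/local_user/Documents/development/php

    # Route 2: serve from /srv/http and let your own account write there.
    usermod -aG http local_user   # re-log-in for the new group to take effect
    chgrp -R http /srv/http       # hand the web root to the http group
    chmod -R g+rwX /srv/http      # group read/write, traverse on directories

    Route 1 leaves the files where they are; Route 2 matches the plan above and avoids opening up home-directory permissions.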

  • How can I maintain more than one contact person for a supplier

    Hi
    As per our business requirement, how can I maintain more than one contact person for a single supplier in SRM?
    How are these contact persons supposed to log in to the portal with different user IDs and passwords and access the bids for the same supplier?
    We are on SRM 7, and currently we create contact persons through a customised BDC.
    Please advise.
    Thanks & Regards
    NITIN

    Dear Yaniv,
    Thanks for your information,
    I am able to create users, but my problem is how these contact persons are supposed to log in to the portal with different user IDs and passwords and access the bids for the same supplier. Is there any kind of mapping available from the user/contact person to the supplier number?
    Thanks & Regards
    NITIN
