Best practice: Give customers on Windows access to your iCal Server

Our iCal Server has been working great - most of our external people and customers are using Macs anyway, but some are using XP or Vista.
CalDAV is an open standard, but I am struggling to find information on how Outlook or Windows Calendar users can access their accounts on our server. I got around this by asking them to use Google Calendar with Google Sync and then subscribing to their calendars, but there must be a better solution.
How do you do it?

Thought I'd pass on some helpful hints to those having trouble with the basic setup of Sunbird.
First, you need to know what calendars to add to Sunbird. In iCal, doing a Get Info on the calendar will get you started, but not the whole way. It may show /calendars/_uids_/27AA8776-D9DF-4AE1-A2D0-2DEA7832034F/calendar/ or it may show /calendars/_uids_/27AA8776-D9DF-4AE1-A2D0-2DEA7832034F/27C9FC5A-AF32-43A
When you add a calendar in Sunbird, you need to enter its full path:
http://<servername>:<calendarport>/calendars/users/<username>/<calendar name>
OR
http://<servername>:<calendarport>/calendars/_uids_/<guid>/<calendar name>
The calendar port is 8008 by default.
Either one will work. The trick is to make sure you know the full calendar name. The first (default) calendar set up for each user is called 'calendar'; each additional one is given a unique ID. iCal may not show you the full name! In the above example, iCal gave me 27C9FC5A-AF32-43A, but what I really needed was 27AA8776-D9DF-4AE1-A2D0-2BAA608D4074.
So in my case, what I'd enter in Sunbird would be:
(initial calendar)
http://myserver.com:8008/calendars/users/joe/calendar
or
http://myserver.com:8008/calendars/_uids_/27AA8776-D9DF-4AE1-A2D0-2DEA7832034F/calendar/
(additional calendar)
http://myserver.com:8008/calendars/users/joe/27AA8776-D9DF-4AE1-A2D0-2BAA608D4074/
or
http://myserver.com:8008/calendars/_uids_/27AA8776-D9DF-4AE1-A2D0-2DEA7832034F/27AA8776-D9DF-4AE1-A2D0-2BAA608D4074/
You can use the iCal Get Info as a reference, then find the full name by browsing your server at http://myserver.com:8008, or by mounting the volume via Go > Connect to Server with http://myserver.com:8008. In the /calendars/users/<username> section you'll see 'calendar' plus any other calendars for that user, such as 27AA8776-D9DF-4AE1-A2D0-2BAA608D4074 in my case. Selecting a calendar shows its entire URL in the link toolbar.
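If you'd rather script this discovery than click around, note that CalDAV is just WebDAV underneath, so a PROPFIND request with a Depth: 1 header against the user's calendar home lists every collection, including the full GUID names. Here's a minimal sketch in Java (the host, port, and user are the example values from above, and a real server will also want credentials, e.g. a Basic Authorization header):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ListCalendars {
        public static void main(String[] args) throws Exception {
            // Example values from this post; substitute your own server and user.
            String home = "http://myserver.com:8008/calendars/users/joe/";
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest req = HttpRequest.newBuilder(URI.create(home))
                    .method("PROPFIND", HttpRequest.BodyPublishers.noBody())
                    .header("Depth", "1") // one level down: the calendar collections
                    .build();
            HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
            // The 207 multistatus XML contains one <response> per collection,
            // with the full GUID calendar paths in the <href> elements.
            System.out.println(resp.body());
        }
    }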
Set up your calendars in iCal and get their reference numbers, naming them there for other Mac users to see (if shared). Set up your delegates in iCal, etc. Then use the above to get things set up in Sunbird. Hope this helps.
Scott Morabito
Tech Superpowers, Boston MA

Similar Messages

  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not easy to segment those databases/tables/records out from the existing ones (and put separate database(s) in the DMZ where our web server is). Our current solution is to have a port 1433 hole from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole somewhere. Is there any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd worry more about using such a less-trodden path, since that is exactly the area where new security problems tend to be discovered. I don't know your specific design issues, so there might be even more ways to mitigate the risk, but in general a DMZ is a decent way to mitigate it. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards two separate deployment shares, just so I don't have to keep typing (x86) and (x64) for every application I import, and to make it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and an x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining some method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention the doubled storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but it's not bad once you get a system. Handling app installs intelligently is a large part of that. We have one folder per app install, with a wrapper VBScript that handles OS detection; if there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it (a sketch of the idea follows below). It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
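    For illustration only, here is the shape of that wrapper in Java rather than our actual VBScript (the x86/x64 subfolders and setup.exe names are hypothetical; the point is a single entry point that detects the OS architecture and dispatches to the right binaries):

        public class Install {
            public static void main(String[] args) throws Exception {
                // A 32-bit process on 64-bit Windows sees PROCESSOR_ARCHITEW6432;
                // otherwise PROCESSOR_ARCHITECTURE holds the real architecture.
                String arch = System.getenv("PROCESSOR_ARCHITEW6432");
                if (arch == null) arch = System.getenv("PROCESSOR_ARCHITECTURE");
                String folder = "AMD64".equalsIgnoreCase(arch) ? "x64" : "x86";
                // Run the matching installer and hand its exit code back to the caller.
                Process p = new ProcessBuilder(folder + "\\setup.exe", "/quiet")
                        .inheritIO()
                        .start();
                System.exit(p.waitFor());
            }
        }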
    Others handle x86 and x64 apps separately, and use the MDT app details to select which platform the app is meant for. I've done that, but since we have a template for the VBScript wrapper and it's a standard process, I believe ours is easier. YMMV.
    Once you get your apps into MDT, create bundles: a core build bundle, a core deploy bundle, a laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle, and when you replace one app in the bundle, all task sequences are updated automatically. It's kind of the same mentality as Active Directory: users, groups, and resources = apps, bundles, and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share, and use a selection profile to upload only your deploy side to production. In fact, I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder: when I replace an app, task sequence, OS, etcetera, I move it to the retired folder and prepend "RETIRED - " so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • Best Practice - Securing Schema from User Access

    Scenario:
    User A requires access to a schema called BLAH.
    User A is a developer who built an application using this schema in a separate development environment, but has the same privileges mirrored to production (same roles etc., as required for the application he built to operate).
    This means the user has roles granting SELECT, UPDATE, etc. on the schema/tables in order to use (and maintain) the applications.
    How can we restrict access to the BLAH schema in PRODUCTION, enforcing that it is only accessible via the middle tier / application (proxy authentication?)?
    We've looked at using proxy authentication; however, it's not possible to grant roles and rights to the proxy account and NOT have them granted to the user (so they can dive straight in using development tooling and hit prod).
    We've tried granting rights on a per-session basis using proxy authentication (i.e. user A connects via the proxy, and we ENABLE a disabled role on the user based on this connection); however, it causes performance issues.
    Are we tackling this the wrong way? What's the best practice for securing Oracle schemas (and objects in general) for user access, where the users actually get an Oracle user account (or even use SSO) for day-to-day business as usual?
    To me this feels like a common scenario, especially where SSO comes into play...

    What about situations where we have legacy Oracle Forms applications? In those cases the user must be granted SELECT etc. rights on particular objects, as Forms can't connect via a middle tier.
    The problem we have is that our existing middle-tier implementation is built expecting the user credentials to be passed to it during initial authentication; it does not use a proxy or super-user style account. Historically, we have been 100% reliant on Oracle rights and controls to validate and restrict access to our underlying data. From what you are saying, we should start looking at proxy or super-user access and move this control process further up, i.e. into code or packages? If so, does this mean there is no specific way to restrict schema access to given proxy accounts and then let normal user accounts connect through them (a kind of delegated access scenario) without using disabled roles?
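    For what it's worth, here is a minimal sketch of what that proxy-connect flow can look like from a Java middle tier, using the Oracle JDBC driver's proxy-session support. The account names, URL, and table are hypothetical, and the key part is server-side: the end user is altered with GRANT CONNECT THROUGH the proxy account, optionally limited to specific roles, so the powerful role is only usable on proxied connections.

        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.Properties;
        import oracle.jdbc.OracleConnection;
        import oracle.jdbc.pool.OracleDataSource;

        public class ProxyDemo {
            public static void main(String[] args) throws Exception {
                // The middle tier authenticates as its own lightweight account...
                OracleDataSource ds = new OracleDataSource();
                ds.setURL("jdbc:oracle:thin:@//dbhost:1521/PROD"); // hypothetical
                ds.setUser("app_proxy");
                ds.setPassword("secret");
                OracleConnection conn = (OracleConnection) ds.getConnection();

                // ...then opens a proxy session as the real end user. On the DB side:
                //   ALTER USER user_a GRANT CONNECT THROUGH app_proxy WITH ROLE blah_rw;
                Properties prop = new Properties();
                prop.setProperty(OracleConnection.PROXY_USER_NAME, "user_a");
                conn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, prop);

                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM blah.t")) {
                    while (rs.next()) {
                        System.out.println(rs.getLong(1));
                    }
                }
                conn.close(OracleConnection.PROXY_SESSION); // ends only the proxy session
            }
        }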

  • Any best practice to apply role-based access control?

    Hi,
    I am starting to apply access permissions for new users, as set by the admin. I have chosen Role Based Access Control for this task.
    Can you please share best practices, or any built-in feature in JSF, to achieve my goal?
    Regards,
    Faysi

    Hi,
    The macro pattern is my work. I've received a lot of help from forums like this one and from the Java developer community in general, and I am very happy to help others and share my work.
    Regarding the architect's responsibility for defining the pages according to the roles that have access to them: there is the enterprise.software infrastructure.facade Java package.
    Here I implemented the Facade GoF software design pattern in the GroupsAndRolesAccessFacade Java class. Thus, this is the only class the developer uses in order to define groups and roles of users, and to define their access per page.
    This is according to the Java EE 6 tutorial, section VII Security, page 471.
    A group, role or user is created with an Identity Management application or by a custom application.
    Pages of the application and their sections are defined or modified together with the group, role or user who has access to them.
    For this you can use the createActiveGroup and createActiveRole methods of the GroupsAndRolesAccessFacade class.
    I've been in situations where end users are very strict about the functionality of the application.
    If you try to abstract web development, you can think of writing to the database, reading from the database, and modifying the database as actions.
    Each of these actions should have a suggester, an approver, and an implementor.
    Thus you can't call the createActiveGroup method, for example, without first calling the requestActiveGroupCreationHelper method and then the approveOrDeclineActiveGroupCreationHelper method.
    After the pages a group has access to have been defined with the createActiveGroup method, a developer can find out the pages and sections a group has access to by calling the getMinimumInformationAboutGroup method.
    Furthermore, if the application is very strict, that is, if every action which involves writing to the database must be recorded, this concept of suggester, approver and implementor is available through the recordActiveGroupAction method.
    For example, in a web shop, managers can change the prices of the products, but the boss will want to know who dared to lower prices.
    This action of lowering prices modifies information in the database, and you can save in the database who suggested it, who approved it, and who implemented it.
    Now that I write about the functionality of the macro pattern, I realise that some methods should have more appropriate names. I also haven't had time to write documentation for the API, but this will be complete when I add the web pages for the architect to use to define access control, and for the end users to view who is doing what with their application.
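    To make that request/approve/create gating concrete, here is a stripped-down sketch of the flow (my own illustration of the idea described above, not the actual GroupsAndRolesAccessFacade code; the method names are borrowed from the post and the in-memory state is a simplification):

        import java.util.HashMap;
        import java.util.Map;

        public class GroupFacadeSketch {
            private enum State { REQUESTED, APPROVED, DECLINED }
            private final Map<String, State> requests = new HashMap<>();

            // Suggester step: records that someone asked for the group.
            public void requestActiveGroupCreationHelper(String group, String suggester) {
                requests.put(group, State.REQUESTED);
            }

            // Approver step: only a pending request can be decided.
            public void approveOrDeclineActiveGroupCreationHelper(String group, String approver, boolean approve) {
                if (requests.get(group) == State.REQUESTED) {
                    requests.put(group, approve ? State.APPROVED : State.DECLINED);
                }
            }

            // Implementor step: refuses to run unless the request was approved.
            public void createActiveGroup(String group, String implementor) {
                if (requests.get(group) != State.APPROVED) {
                    throw new IllegalStateException(
                            "Group '" + group + "' has not been requested and approved first");
                }
                // ...persist the group and its page access here
            }
        }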

  • Best Practices for FMS on Windows 2003/Apache

    We're experiencing problems where the Flash videos being delivered from our 4 GB Windows 2003 SP2/Apache machine hang or don't play at all. Does anyone have any best practices as far as configuration goes? Maybe we need to tweak some settings to get some stability here.

    Hi Chandra,
    Windows 2003 or Solaris.
    Firstly, I would like to clarify a few points.
    1. Your server/box might have 256 GB of RAM, but an Essbase application has a usage limitation; more does not always help. You need sufficient RAM.
    2. You mentioned that a huge database in terabytes sits on a server with 256 GB of RAM. I would like to clarify that the actual terabytes of database storage reside on network drives, i.e. a SAN or NAS; conventionally, it won't reside on the server itself.
    3. As the data size sounds mammoth, it ought to be on a SAN or NAS, so the speed of the SAN disks and the RAID configuration also play an important role in the overall Essbase experience.
    Now, coming to Windows or Solaris: I haven't heard of exceptionally bad experiences with either of them. The OS is one more parameter that determines performance.
    Hope this adds value.
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • Best Practices for AD and Windows Environment

    Hello Everyone,
    I need to create a document containing best practices for AD: best practices for DNS, DHCP, AD structure, Group Policy, trusts, etc.
    I just need the best practices, irrespective of what is implemented in our company.
    For now I just need to create a document for analysis. I searched the internet but could not find much, so I would ask you all to share suggestions on where I can find this material.
    If anyone could send it to me or point me to a link, that would help. I am pretty new to the technology, so I need your help.
    Thanks in Advance

    I have an article where I shared the best practices to follow in order to avoid known AD/DNS issues: http://www.ahmedmalek.com/web/fr/articles.asp?artid=23
    However, you first need to identify your requirements; based on these requirements, you can identify what should be implemented in your environment and how to manage it. The basics: you need at least two DC/DNS/GC servers per AD domain for high availability, and you need to take a system state backup of at least one DC/DNS/GC server in your domain. As for DHCP, you can use the 50/50 or 80/20 DHCP rule depending on your setup (i.e. split each scope's address range between two DHCP servers in that ratio, so one server can still hand out leases if the other is down).
    You can also refer to that: https://technet.microsoft.com/en-us/library/cc754678%28v=ws.10%29.aspx
    This posting is provided AS IS with no warranties or guarantees, and confers no rights.
    Ahmed MALEK
    My Website Link
    My Linkedin Profile
    My MVP Profile

  • Best practices for reusable methods that access the DBTransaction object

    Hi All,
    In our application there are reusable methods that take a DBTransaction object as a parameter, e.g.:
    public static String postingToGL(DBTransaction dbTran, String pProceName, Number pDocId)
    This method could be called both from an Entity Object and from an Application Module (AM).
    I have two options for where to implement it:
    (1) put it in a Utils/helper class, so that I can call it from both the entity class and the AM, but then I have to pass the DBTransaction object as a parameter (I am not comfortable with this)
    or...
    (2) put it in the Entity Object base class, but then the problem is: what if the Application Module also needs to call that method?
    What is the best practice in this situation?
    Thank you,
    xtanto

    Hi,
    what about putting it into an AM base class? I don't know what your worries are about passing the DBTransaction around.
    Frank
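    A minimal sketch of that AM base class idea, assuming plain ADF Business Components (the class name and the procedure-call details are illustrative, not the poster's actual code): the helper lives on a common ApplicationModuleImpl ancestor, obtains the transaction via getDBTransaction() instead of taking it as a parameter, and entity code can still reach it through its root application module.

        import java.sql.CallableStatement;
        import java.sql.SQLException;
        import oracle.jbo.JboException;
        import oracle.jbo.domain.Number;
        import oracle.jbo.server.ApplicationModuleImpl;
        import oracle.jbo.server.DBTransaction;

        public class BaseAppModuleImpl extends ApplicationModuleImpl {

            // Same helper as in the question, minus the DBTransaction parameter:
            // the application module already owns the transaction.
            public String postingToGL(String procName, Number docId) {
                DBTransaction txn = getDBTransaction();
                CallableStatement cs =
                        txn.createCallableStatement("begin " + procName + "(?); end;", 0);
                try {
                    cs.setBigDecimal(1, docId.bigDecimalValue());
                    cs.execute();
                    return "SUCCESS";
                } catch (SQLException e) {
                    throw new JboException(e);
                } finally {
                    try { cs.close(); } catch (SQLException ignore) { /* best effort */ }
                }
            }
        }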

  • Best practices for installing Win 10 under Hyper-V on Server 2012R2 host

    Yeah, yeah, I know I could probably get my answers after spending 10 hours reading hundreds of isolated threads here. I've already put in about 2 hours, and I'm exhausted. Plus, this site does not have a very sophisticated search function.
    I want to install Win10 as a VM on my Server 2012 R2 machine. I am not currently hosting any other VMs, so my first decision was whether to try Hyper-V or VirtualBox. I started with VirtualBox, but I ran into two problems: networking and video. VirtualBox itself also seems to have some issues with failing to install the extension pack. So now I think I'll give Hyper-V a shot.
    I found some blog posts from last year providing guidance on setting up Hyper-V for Win10, but given the rate of change of this beta OS, I expect there are many new "features" that can be mitigated by specific settings on the VM.
    Some specific questions:
    1. Generation 1 or Generation 2 in the Hyper-V setup? The blogs I've seen say to use Gen 1, but provide no justification. Perhaps because they are using Win8 as the host? I am using 2012 R2.
    2. Does the Win10 ISO file need to be continually available to the VM, or is it only used during the initial installation?
    3. How do I get the VM to use the GPU card, which has lots of memory, instead of the useless onboard video chip, which has only 8 MB and no 3D instruction set? This was a dealbreaker with VirtualBox.
    4. I anticipate many issues with networking, but I'll start with this: I have dual onboard NICs going into a managed switch. Should I just give one physical NIC to the VM and let the host have the other? I think I'm going to have some issues with DHCP IP address assignment, but we'll see. Any best practices here would be helpful.
    Thanks.

    >1. I'd use Gen 1 (that is, a BIOS-type boot), but that's just because I've had less trouble with them than with Gen 2 VMs.
    >2. Only during install, refresh, reset, or SFC.
    >3. No virtualization solution does this easily, but there is RemoteFX, if you can get a Windows 10 client to use it. I've never tried. http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
    >4. That's what I would do (assign one NIC to the VM and one to the host). If both are receiving an IP address right now from DHCP, they will continue to do so in the new setup, unless you have a managed switch configured to prevent additional IP addresses. It's hard to tell...
    Bob Comer

  • Best practices to share 4 printers on a small network running Server 2008 R2 Standard (Service Pack 1)

    Hello, 
    I'm the new IT admin at a small company (10-12 PCs running Windows 7 or 8) which has 4 printers. I'd like to install the printers either connected to the server or as wireless printers (one is old enough to require a USB connection to a PC, with no network capability), such that every PC has access to each printer.
    Don't worry about the USB printer - I know it's not the best way to share a printer, but it's not a critical one; I just want it available when its PC is on.
    I've read a lot about the best way to set up printers, including material on Group Policy and print servers, but I am not a network administrator and I don't really understand any of it. I'd just like to install the drivers on the server or something, and then share the printers. Right now the printers are all set up a little differently: one is on a WSD port, two have a little "shared" icon, and one has that icon plus a "network" icon... it's very confusing.
    Can anyone help me with a basic setup that I can follow for each printer?
    P.S. They all have reserved IP addresses.
    Thanks,
    Laura

    You may need to set up a print server... these links may be helpful:
    http://www.techiwarehouse.com/engine/9aa10a93/How-to-Share-Printer-in-Windows-Server-2008-R2
    http://blogs.technet.com/b/yongrhee/archive/2009/09/14/best-practices-on-deploying-a-microsoft-windows-server-2008-windows-server-2008-r2-print-server.aspx
    http://joeit.wordpress.com/2011/06/08/how-do-i-share-a-printer-from-ws2008-r2-to-x86-clients-or-all-printers-should-die-in-a-fire/
    Best,
    Howtodo

  • Best Practice for SAP PI installation: sharing a database server with other applications

    Hi All,
    We are going for a PI three-tier installation, but now I need a best-practice document on whether the PI installation should share a database server with other, non-SAP applications or not. I have never seen SAP PI installed on a database server that other applications share. I do not know what the best practice is, but I am sure that sharing a database server with other non-SAP applications doesn't look like clean architecture, so I need an SAP document on best practice to get this approved by management. If somebody has a document link, please let me know.
    With regards
    Sunil

    You should not mix different apps in one database.
    If you have a standard database license provided by SAP, then this is not allowed. See these SAP notes for details:
    [581312 - Oracle database: licensing restrictions|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=581312]
    [105047 - Support for Oracle functions in the SAP environment|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=105047] -> number 23
          23. External data in the SAP database
    Must be covered by an acquired database license (Note 581312).
    Permitted for administration tools and monitoring tools.
    In addition, we do not recommend using an SAP database with non-SAP software, since this constellation has considerable disadvantages.
    Regards, Michael

  • What is "best practice" to set up and configure a Mac Mini server with dual 1 TB drives, using RAID 1?

    I have been handed a new, out-of-the-box Mac Mini server. It has two 1 TB drives in it. The contractor suggested RAID 1 for the setup. I have done some research and found out that creating the software RAID takes away the recovery partition, so I have been reading up on how to create a recovery "disk" using a thumb drive. That part of the operation I am comfortable with, but I have other issues/concerns.
    Basically, what is the "best practice" way to set up the Mini, configure the RAID, and then start the server? I am assuming the steps would be something like this:
    1) start up the Mini and run through the normal Mavericks setup/config - keep it plain and vanilla
    2) grab a copy of the Server app and store it offline in a safe place
    3) perform the RAID configuration / reinstall of OS X Mavericks using the recovery tools
    4) copy the Server app back down and start it
    This might be considered a very simplified version of this article (http://support.apple.com/kb/HT4886 - Mac mini server (Late 2012 and Mid 2011): How to install OS X Server on a software RAID volume), with the biggest difference being that I grab a copy of the Server app off the Mini before I reinstall, since I did not purchase it from the App Store; it came with the Mini.
    Is there a best-practice / how-to tutorial somewhere that I can follow and learn from? Am I on the right track, or headed for a train wreck?
    thanks in advance

    I think this article will answer your question. Hope this helps: http://wisebyte.blogspot.com/2014/01/best-configuration-for-mac-mini-server.html

  • Best practice for creating a new email address on Exchange Server 2010 for a SharePoint library

    Hi,
    Please advise whether there is any best practice for the above issue.
    Thanks 
    srabon

    Hi Srabon,
    Hope these are what you want.
    Use a cmdlet to Create a User account and Mailbox in Exchange 2010
    http://technet.microsoft.com/en-us/magazine/ff381465.aspx
    Create a Mailbox for an Existing User
    http://technet.microsoft.com/en-us/library/aa998319(v=exchg.141).aspx
    Thanks
    Mavis
    Mavis Huang
    TechNet Community Support

  • SAP Best Practices for Data Migration: repositories only on MS SQL Server?

    Hi,
    I'm implementing the "SAP Best Practices for Data Migration" (see https://websmp109.sap-ag.de/bp-datamigration).
    As part of the installation you have to install MS SQL Server Express Edition, and the installation guide contains detailed steps to do this. According to the installation guide, all repositories for Data Services should run on SQL Server.
    The customer I'm now working for does not want to use SQL Server; their company standard is DB2.
    So I use DB2 for the local and profiler repositories.
    I notice, however, that the web application http://localhost:8080/MigrationServices does not support DB2: the only database type you can select in the configuration area is MS SQL Server.
    Is this a limitation, or by design?

    Hans,
    The current release of SAP Best Practices for Data Migration, v1.32, supports only MS SQL Server. The intent when developing the DM content was to make it quick to set up a temporary, standardized data migration environment using tools that are available to everyone. SQL Server Express was chosen to host the repositories because it is easy to set up and can be downloaded for free. Some users have successfully deployed the content on Oracle XE but, as you have found, the MigrationServices web application works only with SQL Server.
    The next release, including the web app, will support SQL Server and Oracle, but not DB2.
    Paul

  • Accessing directories of the iCal server

    Hello
    I'm trying to get a CRM program to interact with the iCal server, and hope to be able to write and read directly from the directories containing the information. So, in theory: drop a webcal object in, read a webcal object out.
    However, even as an administrator, access to these directories seems to be barred.
    Does anyone know how to get into them?
    Thanks
    Peter

    I may be solving my own problem here.
    Copying the folder in the Finder changed the ownership, of course, to root.
    After changing the owner and group back to _calendar, everything seems to work fine using the new location.
    Anyone have any thoughts? Is what I have done reasonable, or will I see problems down the line with the calendars or the service itself?
    thanks
