DLU or DSfW for single server setup using NOWS for Small Bus

I'm planning to install a single server in a 5 user office, plus a few remote users.
OES 11 SP1, ZCM 11, GW
As part of this I want to have single sign on to Windows workstations (Win 7), and could use DLU, but I also wonder about DSfW.
They both would achieve single sign on but in vastly different ways.
Which way to go? DLU or DSfW?

Originally Posted by drops
5 users? In my opinion everything is overkill then.
The 5 users is expected to grow to 10 within 3-4 months.
Overkill or not, no end user likes to have two different passwords when they log on, and most don't comprehend the difference between a local logon and the network (OES) logon. If there is a password change policy in place then the problem is compounded.
So the goal is single sign on with unique profiles.
Originally Posted by drops
if it's only about desktop access with a local user, you have several options
1. DLU
A good candidate, though I haven't used it for some time. Satisfies all criteria.
Originally Posted by drops
2. pgina LDAP auth - local gateway and novell client passthrough auth
Never used this... Is it easy to set up and reliable?
Originally Posted by drops
3. use a generic windows user
I've used this on a small Samba setup once, and it is workable, except for no unique user profile, and not ideal from a security perspective.
Originally Posted by drops
4. DSfw or AD (synchronized passwords)
I've never used DSfW, hence my post here. My gut feeling is that it is more complex and difficult to troubleshoot than DLU, so I'm leaning towards DLU. That said, I'm also interested in your comments on any pGina implementations you've done, and whether it should be considered.

Similar Messages

  • Connection Broker/RDWeb Single Server Setup.

     
Hello, I have a quick question about how this configuration we just set up works. I want to understand whether I am driving this the right way and whether we configured it properly.
    So..
    Server 1 - RDWeb / Connection Broker
    Server 2 - RDWeb / Connection Broker
    Server 3 - Terminal Server 1
    Server 4 - Terminal Server 2
Server 5 - Terminal Server 3
This was set up, and I am asking the team to build these servers to make a highly available environment so that we can grow and move applications here in the future. I would like the team to configure the CB for High Availability and then use our F5 to balance the Web Services. This should yield a very fault-tolerant model as I see it laid out; if you have suggestions or see faults, don't hesitate to voice them.
However, I am getting resistance and I am unable to understand where my logic is flawed. Does the High Availability on the broker balance the web services? I am being told that it does, but I don't know how, and I want to ensure I am operating off the correct information.
    Thank you for any insight that you can provide for me on this.

    Hi Jessica,
    Thank you for your posting in Windows Server Forum.
As you want to configure RDCB high availability, you must have a SQL Server instance that the RDCB servers can use to store their data. You also need to create a folder to store the SQL database files, and the SQL Server Native Client must be installed on all RD
Connection Broker servers. Static IP addresses must be assigned to all RD Connection Broker servers, and DNS resource records with a single DNS name must be created for all RD Connection Broker servers that will be part of the deployment.
Please refer to the articles below for more information:
    1.  RD Connection Broker High Availability in Windows Server 2012
    2.  Deploying RD Connection Broker High Availability in Windows Server 2012
    3.  Step by Step Windows 2012 R2 Remote Desktop Services – Part 2
    Hope it helps! 
    Thanks,
    Dharmesh
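A configured RDCB HA deployment ultimately points every broker at that shared SQL database with an ODBC-style connection string. Purely as an illustration (the server and database names here are placeholders for your environment), it typically looks like:

```
DRIVER=SQL Server Native Client 11.0;SERVER=sql01.contoso.local;Trusted_Connection=Yes;APP=Remote Desktop Services Connection Broker;DATABASE=RDCB
```

You can see Dharmesh's prerequisites reflected in it: the Native Client driver installed on each broker, the SQL server reachable by name, and the shared database.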

  • Hyper-v 2012 R2 server core recommendations for single server setup

    Hi All,
    Next weekend I will be installing a brand new Dell 720 server for a client of mine.
    I would like to hear some recommendations on how to install this hypervisor.
    Let me state that I have had some experience with virtualization technology  in the past with VMWARE ESXi.
This is a small client with under 10 users, but I have learned that with the 2012 product offering you cannot get Small Business Server any more. I know Microsoft has Windows Server 2012 Essentials, but then you would have to have Exchange Server in the cloud, which
my client does not want. He wants to run his Exchange server in-house.
    Also he needs Remote Desktop Services (Terminal Server) because his accounting application requires it.
    He has an existing Windows 2003 domain controller that eventually has to be decommissioned because it is too old and slow.
So I thought I would install Hyper-V 2012 R2 Core on the new server and then create 2 VMs:
    1) Windows 2012 STD to run as a secondary domain controller.
    2) Windows 2012 STD member server with Exchange server 2013 and Remote Desktop Services role.  
    My questions here are these:
    1) Do I install the Hypervisor in workgroup or domain mode?
    2) When I am finished with the old server do I transfer the AD FSMO roles onto the VM that is acting as a domain controller?
    The new server has 2 CPUs. How does Microsoft licensing go here? I know that Windows 2012 STD gives me 2 instances with the same product key on the same server for 1 CPU. What happens if you have 2?
    Can someone give me some insight on how to go about this? I remember that I found this easier with VMWARE Esxi.
Now with Hyper-V I also need to get a management workstation with Windows 8 Professional on it to manage the Hyper-V Core server.
    Thanks and regards
    Alfred

    In regard to the VM's: 
    1.) DC: If you're going to be installing a second domain controller that is 2012 R2, you might consider reading this guide first.
    http://msmvps.com/blogs/mweber/archive/2012/07/30/upgrading-an-active-directory-domain-from-windows-server-2003-or-windows-server-2003-r2-to-windows-server-2012.aspx
    2.) Exchange: Here's the guide for installing Exchange 2013 on Server 2012.  I don't believe R2 is supported yet for 2013.
    http://social.technet.microsoft.com/wiki/contents/articles/14506.how-to-install-exchange-2013-on-windows-server-2012.aspx
    3.) Host: You'd install the host in domain mode....but that point may be moot after you read #5.
    4.) FSMO: I would verify the health of the 2012 DC before moving the roles.  All too often I'll see a new DC get stood up, sysvol won't be published, the engineer will be in a rush to move the FSMO's, and things get a little sideways.  Ultimately
    you don't HAVE to move the FSMO's until you are ready to decommission the 2003 box.
    5.) Licensing:  No licensing rights convey with Hyper-V Core, so this may not be best for your scenario.  See the queen mother of all Hyper-V licensing posts here -
    http://www.aidanfinn.com/?p=13090
    For what it's worth, plenty of SMB's are going to Office 365 to avoid the on-prem administration headache, but your client wants what your client wants. Sorry if this isn't the news you wanted but I hope it helps.
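For point 4, the health check can be done with in-box tools from an elevated prompt on the new DC before moving any roles. This is a sketch, not an exhaustive checklist:

```
rem Verify overall DC health (including advertising and the SYSVOL/NETLOGON shares)
dcdiag /v

rem Summarise replication status across all domain controllers
repadmin /replsummary

rem Confirm where the FSMO roles currently live
netdom query fsmo
```

Only once dcdiag is clean and replication shows no failures would I start transferring the FSMO roles.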

  • DR Server Setup using GLVM/DB2 HADR and HACMP/XD.

We have the CI/DB of SAP ERP SR3 and SAP BI 7.0 SR3, based on AIX 6.1 / DB2 9.5, installed on two separate LPARs (one LPAR has the ABAP+Java ERP CI/DB, the other the ABAP+Java BI CI/DB) in a P570 system at the main site.
The DR site is 20 km away from the main site. IBM has proposed to set up async GLVM for the SAP instances and async DB2 HADR for the databases, in an HACMP/XD-based cluster, for the above-mentioned ERP and BI systems on two separate LPARs in a P550 system at the DR site.
    We have Monthly DB growth of approximately 8GB on ERP server and 12 GB on BI server.
The DR site is connected to the main site via an 8 Mbps dedicated leased line.
I would like to know if this proposal is workable with the above-mentioned DB growth over the limited bandwidth we have, and with the possibility of data loss using an async setup.
I would also appreciate it if anyone could forward me SAP best practices, or recommend the best way forward for our environment; we intend to fail over the SAP and DB2 instances to the DR-site system 20 kilometres from the main site.
    Best Regards
    Mohammed Ashraf
    Edited by: Mohammed Ashraf on Mar 15, 2010 10:21 AM

Since GLVM and HADR both replicate data (the log buffer, in DB2's case) over the TCP/IP network, you need to compare the daily (or hourly, or even more granular) data-volume averages and peaks with the bandwidth of the pipe (8 Mbps in your case). Theoretically, growth of 12 GB per month is not very high and should not really stress the bandwidth of your TCP/IP connection, but your network administrators will be able to give you a more detailed answer once you provide them with the amount of volume you expect to see flowing between the primary and the DR site.
Since your replication is async, you should not really experience any issues due to workload peaks (provided there is enough bandwidth). We replicate one of our bolt-on (non-SAP) systems using HADR and "rsync" for the application files without any issues. The log volume and growth of that system is about 200 GB per month, with about 2 GB of data moving across the wire every hour, without any latency or performance problems.
    Hope this helps!
    - Sameer
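A back-of-envelope check of Sameer's point is easy to do. The figures below are the ones from the post (12 GB/month of BI growth over an 8 Mbit/s line); the log volume would be computed the same way once you know it:

```shell
# Rough feasibility check: monthly replication volume vs. link capacity.
growth_gb_per_month=12
link_mbit=8

awk -v g="$growth_gb_per_month" -v l="$link_mbit" 'BEGIN {
    bits  = g * 1024 * 1024 * 1024 * 8      # monthly volume in bits
    hours = bits / (l * 1000 * 1000) / 3600 # hours to push it at full line rate
    printf "transfer time at line rate: %.1f hours/month\n", hours
    printf "average link utilisation: %.2f%%\n", hours / 730 * 100
}'
```

At about half a percent average utilisation the link is nowhere near stressed by data growth alone; as Sameer says, the real sizing inputs are the log volume and its peaks.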

  • [FAQ] Twonky DLNA Media Server Setup & Use

Some Frequently Asked Questions Regarding Twonky, And Some Answers
The MyCloud includes a Twonky DLNA Media Server.  Sadly it doesn't seem very well integrated into the MyCloud, and the two fight for supremacy.  I've spent a long time trying to understand how it works, and reading the forum, I see many other people also have trouble with it. So I thought that, rather than trying to answer these questions individually, I'd try to write an FAQ covering the things I've found, and the problems I or others have encountered, and the solutions.
    I'm not a member of WD staff.  I'm not a MyCloud, Twonky or Linux expert (but I'm moderately competent).  But I am stubborn, and determined to find out how to make the thing work the way I want it to.  If more expert users find fault in this FAQ, or have other insights to offer, then feel free to post comments.  I'll edit this first post and make corrections and add any FAQ&A that are added.
    Where I am unsure of something, I'll enclose it in [brackets].  Feel free to investigate and confirm or disprove my observations... Where I mention control settings, I use a | to indicate a level of hierarchy in the menu system.  This is to distinguish control settings and file system paths.
Ideally, we'd shame WD into doing a better job of integrating Twonky into the MyCloud, so it's not such a struggle, and doesn't require Linux administrator-level skills to make it work properly.
Starting the Media Server and setting up a Media Library
Q. What is a Media Server?
    Q. How do I start the Twonky Server?
    Q. How do I control the Twonky server properly?
    Q. Where should I put my media so Twonky finds it?
    Q. Why doesn't Twonky find my media?
    Q. Can I stop Twonky searching in certain folders in my media storage folders?
    Q. What are the 'Shared Media' folders for?
    Q. Why can people using DLNA clients see media in my private share?
Q. How do I control when Twonky rescans the media store?
Optimising operation of the Media Server
    Q: How does the Twonky service work on the MyCloud?
    Q. Where is the Twonky database stored?
    Q. Where are the Twonky settings stored?
    Q. What do all the settings in Twonky's configuration file do?
    Q. How do I clear Twonky's database and start again?
    Q. Where can I find debug log files for Twonky?
    Q. How do I stop MyCloud breaking the Twonky Server?
Q. Can I change the location of the 'Shared Media' folders?
    Q. Why does setting a Twonky access control password prevent the MyCloud from sleeping?
    Q. Can I replace Twonky with another Media Server?
    Optimising behaviour for Media Clients
    Q. Why does my media player not show the right track information?
    Q. How can I get Twonky to provide full size artwork to clients, rather than crude thumbnails?
    Q. How can I change the media 'views' I see in my DLNA client?
    Q. Can I add my own category to the Music/Photos/Videos categories?
    Some Prerequisites
    You'll need to know how to use the WD Dashboard browser interface to control the MyCloud:
    http://wdc.custhelp.com/app/answers/detail/a_id/10420
    You'll need to know how to use the Twonky UI browser interface to control Twonky:
    http://wdc.custhelp.com/app/answers/detail/a_id/3299
    You'll need to know how to use SSH to log in as root to the MyCloud:
    http://wdc.custhelp.com/app/answers/detail/a_id/10435
    Note the big warning signs about warranty.  IMHO, it's a bit off to make dire threats like this when you often NEED to SSH login to get the advertised services to work properly.
    Some Unix experience will help, but I'll try to explain all the commands you'll need.  Experience with a Unix text editor such as vi or nano is assumed.

    Q. How does the Twonky service work on the MyCloud?
    A: [Twonky is installed as a Linux service.  Like other such services, there is a service control file in the /etc/init.d directory]:
    /etc/init.d/twonky
    This is called when the twonky service is invoked, and there are a number of options for the control of Twonky using an SSH root login:
    service twonky start
    service twonky stop
    service twonky restart
service twonky status
If you execute the start or stop commands with Media Streaming turned on in the Dashboard, you are likely to confuse the Dashboard; you may get an 'Error 400162'.
    'start' calls writeTwonkyContentDir.sh, which, as we'll see later (Q: How do I stop MyCloud breaking the Twonky Server?), reads the /etc/contentdir file and modifies the twonkyserver.ini file to set the media search paths and types, and then starts the Twonky server.
[It is assumed that the MyCloud Dashboard uses these Twonky service calls to start and stop the Twonky server].
Q. Where is the Twonky database stored?
    A: Since Twonky is a service running on MyCloud, it is not visible in the shared space, so you have to log in via SSH, and navigate around the MyCloud's Linux file system.
    Once you have logged in, use the cd command to change directory to Twonky's area:
cd /CacheVolume/twonkymedia
Now see what's there with a directory list:
ls -l
Q. Where are the Twonky settings stored?
    A: One of the files listed above is the Twonky configuration file:
twonkyserver.ini
This file contains all the settings that control how Twonky works.
    Now, this is where some of the fighting occurs, because both MyCloud and Twonky can modify this file...
    When you are happy that Twonky is working correctly, I'd recommend making a copy of the file in your Public area, where it won't be mangled by MyCloud firmware upgrades or restarts.  For instance:
    cp /CacheVolume/twonkymedia/twonkyserver.ini /shares/Public/twonkyserver.ini
If things go wrong, you can reinstate this file by swapping the source and destination.
Q. What do all the settings in Twonky's configuration file do?
A: [Most of them are provided with comments that explain their purpose].
    [There are some 'magic numbers' embedded in the configuration file that seem to cause problems, and I don't know what changes those; I suspect that a firmware upgrade does it.  If anyone can monitor these values before and after a MyCloud firmware upgrade and report any changes, I'd be obliged:
    # UserID Please Do NOT change it manually
    userid=<redacted>
    # twonky info for Media Feeds Please Do NOT change it manually
twonkyinfo=<redacted>]
Q. How do I clear Twonky's database and start again?
    A: If Twonky doesn't seem to be behaving correctly (not finding media correctly, or partially, or finding unwanted media), then it may be useful to clean up the database. There are a number of escalating actions you can take.  Starting with the mildest, and increasing in severity, these are as follows:
    1. Initiate a library rescan, using either Settings|Media|DLNA Database|Rescan in the Dashboard, or Settings|Advanced|Server Maintenance|Rescan Content Folders in the Twonky UI.
2. Initiate a library rebuild, using either Settings|Media|DLNA Database|Rebuild in the Dashboard, or Settings|Advanced|Server Maintenance|Restart Server in the Twonky UI.
3. Finally, you can completely clear out Twonky's working area. [This can be useful if you notice the '[Error] - LOG_SYSTEM: Error: 2 No such file or directory' report in the Twonky logfile.  I don't know what causes this error, or whether it has serious consequences, but I have found it to be associated with periods of difficult behaviour.] Turn Twonky off using the Settings|Media|DLNA Media Server|Media Streaming control in the Dashboard.  Then SSH root login, and hide the entire Twonky working area:
cd /CacheVolume
mv twonkymedia twonkymedia_bak
Restart the Twonky Server using the Dashboard; it will re-create Twonky's working area with a clean version with default settings.  You can then use the Twonky UI to put your settings back in place.  You might try doing this one-by-one, saving the settings and restarting the Twonky server from the Twonky UI each time, then checking the logfile after the Twonky UI has re-appeared. Once you're happy with the operation of the clean startup, you can delete the old Twonky working area:
    rm -f -R /CacheVolume/twonkymedia_bak
    The -f -R flags mean 'delete EVERYTHING from here down... yes, I mean it'.  nb. it will only delete the Twonky database; it won't delete any of your media.  But do be careful that you type the command in correctly; rm -f -R is a powerful command (-f means force, -R means recursive), and could do a lot of damage if you get it wrong...
    Q. Where can I find debug log files for Twonky?
    A: You can enable activity logging using the Settings|Advanced|Logging control in the Twonky UI.  You can also open the logfile from there.
    Alternatively, the logfile is stored in Twonky's area:
    /CacheVolume/twonkymedia/twonkymedia-log.txt
    There's also a MyCloud system logfile that records actions on the Twonky server, such as start and stop:
/CacheVolume/update.log
Q. How do I stop MyCloud breaking the Twonky Server?
A: Firstly, go to the Settings|Firmware page in the Dashboard, and disable Auto Update: firmware upgrades completely destroy the /CacheVolume/twonkymedia area and replace it with a new, default version, so you'll be back to square one.
    Secondly, we can stop the problem of Media Streaming restarts overwriting the Twonky UI settings.  Using an SSH root login, enter the following commands:
    cd /usr/local/sbin
    mv writeTwonkyContentDir.sh writeTwonkyContentDir.sh.old
    This disables the script that MyCloud calls when it starts Twonky, which overwrites your settings.
    Another way of fixing the settings is to leave this script alone, but change the file it uses as the source of the settings.  Admittedly, this will still override any changes made via the Twonky UI.  The file is found here:
    /etc/contentdir
    Replace this with a single control line, with no line terminator, e.g.
+M|/Public/Music,+P|/Public/Pictures,+V|/Public/Videos
This will select my earlier settings.
    The format is a comma-separated list of shares and media search and aggregation control flags:
    + enable media searching on the share
    - disable media searching on the share
    A look for all media types
    M look for music
    P look for pictures
    V look for videos
    lower case letters enable aggregation of these media types.
The | terminates the media type string [you can search for multiple types in a share].
Q: How can I change the location of the 'Shared Media' folders?
A: This brings us deep into the guts of the twonkyserver.ini file...  There are a number of settings that control where Twonky stores uploaded files and 'servermanaged' media, [which are used for aggregation], and there is no control setting for them in the Twonky UI.  To change the location of the folders we have to do an SSH root login and edit the twonkyserver.ini file.  The lines in question are as follows:
    uploadmusicdir=/shares/Public/Shared Music
    uploadpicturedir=/shares/Public/Shared Pictures
    uploadvideodir=/shares/Public/Shared Videos
    servermanagedmusicdir=/shares/Public/Shared Music
    servermanagedpicturedir=/shares/Public/Shared Pictures
    servermanagedvideodir=/shares/Public/Shared Videos
    You can change these locations to suit your desired file system.  For instance, you could create a single 'Shared' folder in the Public area, with sub folders for the different media types, e.g.
    uploadmusicdir=/shares/Public/Shared/Music
    Or you could create a Shared sub folder within each of your main media folders, e.g.
    uploadmusicdir=/shares/Public/Music/Shared
However, if you choose that option, you will need to stop Twonky searching there, by adding 'Shared' to the list of ignored directories discussed above.
Q. Why does setting a Twonky access control password prevent the MyCloud from sleeping?
A: It has been reported that if a username and password are set using the Settings|Advanced|Secured Server Settings control in the Twonky UI, the MyCloud then never sleeps.  [It appears that the MyCloud continues to interrogate the drive in some way, hoping to get the status of the Media Server from Twonky, but Twonky won't talk to it because it doesn't have the required permission.]
I don't think this is very important, since the MyCloud is mainly intended for home use, the Twonky control UI isn't visible outside your local network, and all users on your local network must be trusted*, since you've given them the network access code [and they have access to the Public area, and could wreak havoc there...]. If you need to make your music library available to visitors who cannot be trusted, create a private share and put all your media under that, then enable media serving on that share, and get Twonky to search for media in that private share. DLNA ignores access control, so DLNA clients will be able to see the media on your private share, but visitors will not have access to your private share via network file access.
[However, if you feel that secured access to the Twonky UI is necessary, then I would suggest that you disable the Twonky server using the Settings|Media|Media Streaming control, then SSH login as root, and start Twonky using the command:
service twonky start
You won't get the media scan status in the Dashboard, but the Twonky Server and UI will be running.]
    * Breaches of this trust should be punished in an appropriate manner...
Q. Can I replace Twonky with another Media Server?
A: Yes; forum users hvalentim and Nazar78 have posted good threads about this: MiniDLNA for V3 firmware, and MiniDLNA for V4 firmware. Kudos to them :-)
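To round off the /etc/contentdir discussion above: the flag format is simple enough to check with a few lines of shell. This is just an illustrative parser for the example line given earlier, not anything Twonky itself runs:

```shell
# Pretty-print an /etc/contentdir line: comma-separated entries, each
# '<flags>|<share path>', where flags start with + (search) or - (don't),
# followed by media-type letters (A/M/P/V; lower case = aggregate).
line='+M|/Public/Music,+P|/Public/Pictures,+V|/Public/Videos'

echo "$line" | tr ',' '\n' | while IFS='|' read -r flags share; do
    case $flags in
        +*) state=enabled ;;
        -*) state=disabled ;;
         *) state=unknown ;;
    esac
    types=${flags#[+-]}   # strip the leading + or -
    echo "$share: searching $state, types: $types"
done
```

Running it prints one summary line per share, which makes it easy to spot a typo in the flag string before restarting Twonky.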

  • Looking for quicken-like app for small bus

    Before switching to mac I used an old version of quicken.  Basically just a check register, and invoicing.
    I am a sole proprietor, no employees.  Need some simple monthly, quarterly and annual reports. 
    Anyone using anything besides quicken for mac?

Quicken for Mac was released several weeks ago. You can find it on the Mac App Store. I usually use it with Numeric Notes for simple invoicing and calculations, which is also available on the Mac App Store. Hope one of these apps will be useful for you.

  • One powerful single server or distributed setup for EPM 11 (Planning)?

    Hi,
    If you have an option of choosing a single (but powerful) server vs distributed server, what would you recommend? This is for appx 100 concurrent users.
    Single Server setup
    Web & App Server
    - 8 core CPU
    - 16 GB RAM
    Distributed Server setup
    Server 1 (Web)
    - 4 core CPU
    - 8 GB RAM
    Server 2 (App)
    - 4 core CPU
    - 8 GB RAM
Both setups will have a separate dedicated Essbase & RDBMS.
    Any thoughts will be greatly appreciated.
    Kind regards,
    Lou
    Edited by: user8640150 on 27/01/2010 15:53

Personally I would go for a distributed architecture to split out the load instead of putting it all on one server.
I have seen many times an architecture with Essbase on one server and Planning, Foundation, and Reporting & Analysis split between two or more servers.
Though I would recommend getting a consultant in to discuss what products are going to be installed and how they are going to be used before making any decisions.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Single server solution for RDS / TS / RDP using Windows Server 2012 R2

    Planning on setting up a small single server and  need this functionality:
* 3 local users running Windows 7 Home Premium need to access files on the server
    * The same 3 users should also be able to connect from home (PC, Mac, iPhone) and run an application on the server. (Session-Based Remote Desktop).
    We want to use Windows Server 2012, and found out that Essentials does not support RDP, so that leaves Foundation and Standard versions.
However, I also found out that in WS 2012 the RDP role cannot be on the same server as the Domain Controller, and we therefore need to run 2 server instances on our hardware. I think this starts to look way too complicated for what we want to do, but I found out
that WS 2012 R2 allows a single server to run RDP (see TechNet article 2833839).
    So we will go for Windows Server 2012 R2, either Foundation or Standard to set up our RDP.
So now the question: will that solution work with our local machines running Windows 7 Home Premium, as they cannot connect to a domain? Can we set up some kind of simple file share or Workgroup to access files locally while still keeping the RDP
functionality on the server?
    And, will WS 2012 Foundation R2 do this as well as WS 2012 Standard R2?
(I have been asking several local MS representatives to find a solution to our needs, but no one seems to know how this works... Of course we could just get 2 WS 2012 Standard server instances, run one as DC and one as RDCB, and upgrade all our clients to Win
7 Pro, but we would like a solution with minimal investment in time and money.)
    Rgds
    Petter

    Hi Ryan, 
    and thanks for the answer! I do not know how to do "multiple quote" in this forum so I do it this way:
    "have you considered virtualisation, as you can run multiple virtual machines under one licence. I think this would be the cheapest and most efficient use of your money. Upgrading your clients to Windows 7 pro would allow you to have domain control
    Single Sign On SSO. "
    This is the "official" solution I think: Upgrade all clients to Win 7 Pro and run two instances of Win Server 2012 Standard on the server.
However, I was hoping to get away with something a bit more Quick & Dirty... ;-) We do not have big security issues and will have a good backup system, and I think for 3 users only it will be more work trying to centralise administration (updating,
backups etc.) than to just go to each machine and do what is needed.
We are good with computers/Windows but have no server experience. A server guy will help us get started, but I don't want him around after that, so it must be a very simple solution.
Also, installing 2 instances of WS 2012, upgrading all 3 clients to Win Pro, and then installing all software and settings on the clients into the new domain user accounts is quite a lot of work. So I was hoping to keep only the existing local
users on the client machines and just have some kind of file share going on with the server disks that we need to access. So perhaps use a Workgroup instead of a domain, if that works with the RDS setup?
    "Option 1
    2 virtual machines 1x DC and 1x RDS server."
So, if we set up RDS this way (so we can log in remotely and run our application session-based on the server), can we keep the local clients running Windows Home Premium with our current local user logins (i.e. no domain user accounts created on the client machines,
as this is impossible in Home versions) and still access the server disks somehow, or is it impossible?
    Another question is if it is stupid/a really bad solution...but I still want to know if it is possible....;-)
    "Option 2 
    2 virtual machines 1x DC and 1x RDS server.
    You can configure your RDS solution as a domain joined platform and will still be able to access resources from the local device as you can map local drives to the session host. http://www.serverintellect.com/support/techfaq/drive-rdp/
    Your users would have two sets of credentials, one for the local client and one for the domain."
    I do not want to access files over VPN or RDP, we only want to run an application on the server from remote (Session-Based Remote Desktop). However when we use the local clients we want to access files on the server, and then we access huge image and film files
    on fast RAID drives, so local network speed must be top speed. Also if possible we would like to not upgrade to Win Pro, and then joining a domain is not possible.
    "Option 3
    1x Server
The second option would be to manually deploy the session host role and licensing role to a workgroup server. This would limit access to RDP only and you would lose web access functionality."
I think this is what I was hoping for. It seems that the new R2 release of WS 2012 allows you to run the RDP and Domain Controller roles on the SAME instance of the server. That sounds nice; it limits what we need to keep track of and minimises the load on the
server, which needs to act as a very fast file server locally.
However, can we do this and still keep file access with only Windows Home (no domain) on the local clients (same question as above under "Option 1")?
    Rgds
    Petter
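For the workgroup file-access part of the question: Windows 7 Home Premium cannot join a domain, but it can still map a share on the server using a local account defined on that server. A hedged sketch (server name, share and account below are placeholders):

```
net use Z: \\SERVER01\Projects /user:SERVER01\petter *
```

The trailing * prompts for the password of the server-local account; creating matching local usernames and passwords on the server and clients avoids even that prompt. This only covers file access, and says nothing about the RDS role and licensing questions above.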

  • Exchange Server 2013 SP1 - Optimize for Single Server Use

    Hello,
I've set up Exchange 2013 SP1 with the latest CU6 - Get-ExchangeServer says:
    AdminDisplayVersion             : Version 15.0 (Build 995.29)
    ExchangeVersion                 : 0.1 (8.0.535.0)              - - - funny number ;-) (is this an early version???)
Yes, the annoying 16028 events appear in the application log every five minutes, but that's not the main question.
    Which services are needed in a SINGLE-SERVER Environment, without DAG and unified messaging?
Can the self-checking, health monitoring and self-probing be shut down?
These services are set to start automatically:
       Microsoft Exchange Active Directory Topology
       Microsoft Exchange Anti-spam Update
       Microsoft Exchange DAG Management
       Microsoft Exchange Diagnostics
       Microsoft Exchange EdgeSync
       Microsoft Exchange Frontend Transport
       Microsoft Exchange Health Manager
       Microsoft Exchange Mailbox Assistants
       Microsoft Exchange Mailbox Replication
       Microsoft Exchange Mailbox Transport Delivery
       Microsoft Exchange Mailbox Transport Submission
       Microsoft Exchange Migration Workflow
       Microsoft Exchange Replication
       Microsoft Exchange RPC Client Access
       Microsoft Exchange Search
       Microsoft Exchange Search Host Controller
       Microsoft Exchange Service Host
       Microsoft Exchange Throttling
       Microsoft Exchange Transport
       Microsoft Exchange Transport Log Search
       Microsoft Exchange Unified Messaging
       Microsoft Exchange Unified Messaging Call Router
   Microsoft Exchange-Informationsspeicher (Information Store)
       Microsoft Filtering Management Service
       Microsoft Online Services Sign-in Assistant
    And with C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\data\Queue\mail.que at about 512 MB in size (which may be the designed default size)...
    Have set up about 6 users mailboxes, 3 public folder in a mailbox for public folders. Only for testing purposes sent some (small) mails, no external mails received (only testing receive with 'popcon' for mail delivery to the internal boxes).
    The server (2012 R2) is running only in the internal network (yes, with internet connection for updates).
    No virus scanners, no third-party receive or send connectors; only the 5 default receive connectors (3 Frontend, 2 Hub) and one send connector to our mail provider are set up, and there is no user activity (only occasional testing by me).
    It seems like the server is busy only with itself, with log files above 11 GB (in 4 weeks) full of probing, health checking, internal testing, internal updating etc.
    Can this logging be set to a lower level, and can services be deactivated (for a single-server configuration)?
    The logging volume should bear a useful relation to the mail use of the server; a message for every self-test, internal health check etc. in the log files normally isn't useful - it's only needed when problems occur...
    Thanks in advance
    Andreas

    Between Exchange protocol-based log files, PerfMon .blg files used by Managed Availability, IIS logs, etc., the total footprint can add up quickly, and these logs are not purged automatically. You can take a look at the site below and use the PowerShell script to clean them up.
    http://www.c7solutions.com/2013/04/removing-old-exchange-2013-log-files-html
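    The linked script is PowerShell; purely as an illustration of the same idea (delete log files older than a cutoff under a given directory - the Exchange path in the comment is an example, adjust to your install), a minimal Python sketch:

```python
import os
import time

def purge_old_logs(root, max_age_days=30, suffixes=(".log", ".blg")):
    """Delete files under root older than max_age_days; return deleted paths."""
    cutoff = time.time() - max_age_days * 86400
    deleted = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Only touch known log suffixes, and only files past the cutoff.
            if name.lower().endswith(suffixes) and os.path.getmtime(path) < cutoff:
                os.remove(path)
                deleted.append(path)
    return deleted

# Example (hypothetical Exchange logging root):
# purge_old_logs(r"C:\Program Files\Microsoft\Exchange Server\V15\Logging", 30)
```

    Test it against a scratch directory first; deleting logs Exchange currently holds open will fail (and on a live server you may prefer the linked PowerShell script, which knows the Exchange-specific paths).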

  • DSfW, Groupwise, Zenworks on single server

    Hi
    I am thinking of installing Novell Open Workgroup suite for Small Business, and putting DSfW, Groupwise, Zenworks on a single server.
    Is this sane?
    Has anyone done this before and can vouch that it would work?
    I even wonder if GW and ZCM would co-exist on the same server. The ZCM docs only recommend (not require) a dedicated server, so it should still work with GW, but I'm not so sure once DSfW is thrown into the mix.
    My main reason for DSfW is single sign-on for the 5 Windows workstations, though maybe I am better off sticking with the Novell client, ZENworks and DLU.
    - Gordon

    Originally Posted by W_Prindl
    GW and ZCM can coexist in a small environment quite well. I have this
    running and do not see performance problems at all.
    Good to know!
    Originally Posted by W_Prindl
    Regarding your USB passthrough VMware ESXi performance problem I have no
    experience - but I see USB performance problems very often on real
    hardware, too. Is there such a big difference between USB unvirtualized
    and virtualized?
    Not too sure, but for the few days I was running backups over USB it seemed extra slow. I should have tested the throughput, but it was only temporary until a backup server was set up. I might re-run some real tests on our ESX server and see what the actual difference is. Also, per the VMware KB article "Supported USB device models for passthrough from an ESX or ESXi host to a virtual machine", there is very limited support for USB devices, which surprises me a bit given the prevalence of USB on server hardware. Though as VMware's core market is enterprise, they wouldn't see too much demand for local USB backup... Bring on USB 3!
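    For what it's worth, a rough way to quantify "extra slow" is a sequential-write test against the backup target. A minimal Python sketch (the path is whatever file on the USB disk you point it at; this is an illustrative measurement, not a proper benchmark):

```python
import os
import time

def write_throughput_mb_s(path, total_mb=64):
    """Write total_mb of zeros to path, fsync, delete it, return MB/s."""
    block = b"\0" * (1024 * 1024)  # 1 MB block
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so we measure the device, not the cache
    elapsed = time.monotonic() - start
    os.remove(path)
    return total_mb / max(elapsed, 1e-9)

# e.g. compare write_throughput_mb_s("/Volumes/USBBackup/test.bin")
# against a local path to estimate the virtualization penalty.
```

    Run it a few times and against both the passed-through USB disk and a local datastore path; the ratio between the two is the number worth reporting.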
    Originally Posted by W_Prindl
    For virtualization of only one guest you could use VMware Workstation
    or Xen (but I don't know for now whether DSfW is supported as a Xen
    guest) instead of ESXi; that way you get native USB performance for
    your host OS.
    I haven't tried Xen of late, but from testing it 18 months ago it was a lot more complicated than ESXi. I think it was lacking nice management tools, and at the time VMware Workstation and ESXi were intuitive and easy enough to use, so I have stuck with them ever since. I might have to give Xen another test. How do you find working with it?

  • New, Single Server - DNS, Web, Wiki, Mail Setup Issues

    I'm having some issues properly setting up 10.7.3 to host internal DNS and external Web, Wiki and Mail.  I'm having issues with the web and wiki hosting.  Since those are the most important right now, I haven't really had a chance to fully test the other features.  I was able to do some testing of the mail and iCal but it was limited.
    Long read below but I thought the specifics would be helpful...
    My goals and configuration are:
    ***GOALS***
    Primary:
    1) Host a public website: example.org and www.example.org
    2) Host a public wiki: main.example.org and www.main.example.org
    3) Host a public mail server: [email protected]
    4) Host a public, group calendar
    4a) Read only to majority - Read/Write to a group
    5) Host a global address book for authenticated users
    Secondary:
    6) Allow anonymous public access to a file share (read only)
    7) Allow authenticated access to the same file share (read/write)
    8) Do as much of this via GUIs as possible.
    ***SETUP AND CONFIGURATION***
    Physical:
    1) Business class Internet (no blocked ports)
    2) A single, public and static IP address
    3) Domain name and public DNS via GoDaddy
    4) Wildcard Cert: *.example.org from GoDaddy
    5) Late 2011 (bought in Jan 2012) MacMini Lion Server (the $1,000 one).
    5a) Upgraded the RAM to 16GB (needed for VMware Windows clients)
    5b) Added two USB to Ethernet adapters.
    6) Using a new model AirPort Extreme Base Station (bought w/ the MM) as the main router.
    Initial Configuration:
    7) Set up MAC address reservations for the main and two USB Ethernet ports, along with the wireless.
    7a) Main port = 10.0.1.5 / Others are .6, .7 and .10
    8) During the setup, I chose the Host on the Internet (third) option and named my server: main.example.org
    9) After the setup completed, I upgraded the OS & Admin Tool to 10.7.3 from a clean install (on #5 now)
    DNS Config
    10) I used the admin tool to open DNS and change:
    11) "Primary Zone Name" from main.example.org to example.org.
    12) In the "Nameservers:" block, I changed the zone name there but left the nameserver name alone (zone: example.org /// Nameserver Hostname: main.example.org).
    13) The Machine Name and Reverse Zone was left alone.  RZ resolves to main.example.org.  sudo changeip -checkhostname is good.  dig on the example.org and main.example.org are good to go (NOERROR).
    OD Config
    14) From the server app, I clicked Manage/Network Accounts and set up the OD - No issues.
    SSL
    15) From the server app, I created a self-signed cert, generated a CSR, got a public cert, then replaced the self-signed cert with the public one - No issues.
    16) Changed any service using the self-signed cert to the public one - No issues.
    17) Changed the cert in the OD to the public cert from Server Admin - No issues.
    In order: File Sharing, Mail, AB, iCal, Web, Wiki, Profile Manager, Network Groups, Network Users
    18) File Sharing was set up using the server app
    19) Set up Mail using the server app to start it and the Server Admin app to configure it - No issues there (I think...)
    20) AB - Flipped the switch to on
    21) iCal - Flipped the switch to on - I setup the e-mail address to use after I added the network accounts.
    22) Web - Flipped the switch to on - Default site worked (main.example.org)
    23) Wiki - Flipped the switch to on - Default wiki worked. (main.example.org)
    24) PM - Checked the "sign configuration profiles" option and enabled device management. I then flipped the switch to on - Default settings and pages worked.
    ***MY PROBLEMS***
    Website:
    Adding a website for example.org gave me the red dot in the server app.  To fix that, I added a Machine Name record to my primary zone (PZ = example.org Machine Name = example.org).  I first tried using the same 10.0.1.5 IP as the main.example.org and left the reverse mapping alone (still resolved to the NS of main.example.org).
    That gave me the green light in the server app when trying to add the website again.  From there, I changed the "Store Site Files In" to the location of my website files (and confirmed "Everyone" has Read Access in the folder's security settings).  I left the other info alone (all defaults accepted) and clicked done.
    Access to the website works on the server, but external access doesn't (Network Error/timed out tcp_error). I checked the AirPort settings using the AirPort Utility (version 5.5.3), and the Port Mapping (under the "Advanced" icon) shows several services all pointing to 10.0.1.5. Thinking it could be DNS, I tried main.example.org externally and it failed the same way.
    I ran the changeip command (good to go) and dig on example.org and main.example.org and they both resolved to 10.0.1.5 correctly.
    I removed the example.org Machine Record from the zone and it now looks like:
    PZ=example.org / ZONE=example.org / NS=main.example.org
    Machine Record=main.example.org / IP=10.0.1.5
    RM=10.0.1.5 / Resolves=main.example.org
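    For reference, the zone as described corresponds roughly to this BIND-style zone data (Lion Server's DNS service is BIND underneath; names and IPs are taken from the post, and the apex A record is the one that was added and later removed):

```
; example.org zone as described above (sketch)
$ORIGIN example.org.
@       IN  NS   main.example.org.
@       IN  A    10.0.1.5        ; apex record for example.org (added, later removed)
main    IN  A    10.0.1.5

; reverse zone: 10.0.1.5 -> main.example.org
5.1.0.10.in-addr.arpa.  IN  PTR  main.example.org.
```

    Note this only covers internal DNS; external visitors resolve example.org via GoDaddy's public DNS, so the public A record there must point at the static WAN IP, not 10.0.1.5.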
    PLEASE HELP!

    The amount of users (if relevant):
    On site - 1 (Me)
    Off site - 16 (Windows clients - some have iOS devices too)
    Web site traffic - fewer than 50 regular visits per day (avg of 15) with a peak of ~125 once a month.
    This is for a 501c3 public nonprofit made of all unpaid volunteers (including the officers and directors).  All of us have paying day jobs and I just so happen to be the guy that knows just enough to get myself in trouble here.

  • Help with Proper DNS Setup for Leopard Standard Server Setup

    Hello All,
    Problem Description-
    I was reviewing some training today on DNS setup, checking for a proper configuration with the sudo changeip -checkhostname tool, and I seem to have an incorrectly configured DNS setup. So I need some help correcting it. When I go to the "Server Preferences" tool I cannot log in using apple.ourdomainname.com; instead, in order to use the tool, I have to input localhost as the server name. I first thought the system was broken or something, but with the help of my training I now see it's a DNS problem. I thought I had everything right since I followed the steps of creating proper DNS/rDNS entries with my ISP. Now I am stuck wondering what else isn't working properly due to the DNS issue. Thanks in advance.
    Technical Info-
    My ISP provides us with 5 static IPs, and we have asked them to create entries and verified that apple.ourdomainname.com = x.x.x.x, which is one of our public IPs, currently assigned to the WAN port of our Apple AirPort Extreme. We have also had them create a PTR record, which is present, verified and functional. Our Mac mini running 10.5.5 is connected directly to one of the Ethernet ports on the AirPort Extreme, which is our NAT/firewall for the LAN. During the setup of the Standard Server install, the OS configured the AirPort with the required ports for chat/web/VPN. Mobile Macs can VPN in and gain folder access, and web works fine too. We don't use the e-mail portion, so I can't say how that works. The server uses the DNS of 10.0.200.1, which is the IP of the AirPort, and the AirPort is programmed with the OpenDNS servers 208.67.222.222 and 208.67.220.220. The reason for this whole long spiel is that I want to give as much technical background as possible for the best possible help.
    Thanks
    DM
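    For what it's worth, changeip -checkhostname essentially verifies that the host's forward and reverse DNS records agree. A minimal Python sketch of the same check (an illustration of the idea, not the actual tool):

```python
import socket

def check_hostname(hostname):
    """Return (ip, reverse_name, match): forward lookup, PTR lookup, agreement."""
    ip = socket.gethostbyname(hostname)            # forward (A) lookup
    reverse_name, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
    return ip, reverse_name, reverse_name.lower() == hostname.lower()
```

    If the final value is False, the forward and reverse records disagree, which is the same condition changeip complains about. Note that inside the LAN this will test whatever resolver the server is using (here the AirPort at 10.0.200.1), not the ISP's public records.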

    What happens when you use 'Localhost' instead of 'localhost' (i.e. capitalizing the 'L')?

  • One Search service application for multiple web applications in a single server

    We are planning to host 17 web applications on a single server. Do I need to create a Search service application for each web application, or can I create one Search service application, create a content source for each web application, and create a result source for filtering? Which is the best approach, and which approach uses more RAM?
    In my application I am using the Search web part plus the "Recently Changed Items" and "Popular Items" web parts. When I created only one Search service application for all web applications and used result sources, I did not get any results. What could be the problem?

    Hi,
    One SSA is ok, but you should think about access rights. If the access is clear cut between all the web apps you should be ok with one SSA. Multiple result sources limiting on content source also works, but could easily be bypassed.
    Multiple SSA's will eat up RAM/CPU like a mother :)
    As for Popular Items etc., it could be due to how those result sources are set up, but I haven't investigated or tested this much.
    Thanks,
    Mikael
    Search Enthusiast - SharePoint MVP/MCT/MCPD - If you find an answer useful, please up-vote it.
    http://techmikael.blogspot.com/
    Author of Working with FAST Search Server 2010 for SharePoint

  • How do you setup a server to use multiple DNS servers that are not connect to each other?

    Is there a way to setup a server that connects to two different domains to use the proper DNS server for name resolution?
    Let's say there are two DCs: serverA.subdomaina.domain.com and serverB.subdomainb.domain.com.  The domains are independent and not connected.  Now you need a common server that is connected to both and needs to resolve names from both
    domains.
    Is this possible?
    I have set up a server in a workgroup.  One NIC has the subdomaina.domain.com connection-specific suffix and the other NIC has subdomainb.domain.com.  Each NIC has the DNS server listed for the domain it is connected to.
    This configuration will resolve FQDNs of one domain but not the other. I believe this is because the server only queries one DNS server and doesn't try the other.
    Is there any way to make the server try another DNS server if the first one doesn't have the entry?
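    The behaviour being asked for is ordered fallback across resolvers: try the first, and only on a miss ask the next. A minimal Python sketch of that logic (the stub lookup tables stand in for the two domains' real DNS servers; Windows itself does not fall back like this on a negative answer, which is why forwarders are the usual fix):

```python
def resolve(name, resolvers):
    """Ask each resolver in order; return the first answer found, else None."""
    for lookup in resolvers:
        ip = lookup(name)
        if ip is not None:  # a miss returns None, so we fall through to the next
            return ip
    return None

# Stub resolvers standing in for the two domains' DNS servers:
dns_a = {"serverA.subdomaina.domain.com": "10.1.0.5"}.get
dns_b = {"serverB.subdomainb.domain.com": "10.2.0.5"}.get
```

    With conditional forwarders you get the same effect server-side: one DNS server answers everything, forwarding subdomainb queries to the other domain's server instead of the client having to try both.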

    Hi,
    Thank you for posting in Windows Server Forum.
    Adding to what Tim said: a forwarder is a DNS server on a network used to forward DNS queries for external DNS names to DNS servers outside that network. You can also forward queries for specific domain names using conditional forwarders.
    A DNS server on a network is designated as a forwarder by having the other DNS servers in the network forward the queries they cannot resolve locally to it. You can find information about forwarders and how to configure them at the links beneath.
    Understanding forwarders
    http://technet.microsoft.com/en-us/library/cc782142(v=ws.10).aspx
    Configure a DNS Server to Use Forwarders
    http://technet.microsoft.com/en-us/library/cc754941.aspx
    Hope it helps!
    Regards.

  • Server Setup advice for a video production house

    Hello Forum,
    I recently started working for a small video production company, I need advice on the type of server and other hardware and services that will allow my co-workers to work efficiently. Here are some details:
    We are about to buy a new server, and I am not sure what we should look at getting.
    We currently use a Mac mini running OS X Server and have 6 iMacs that the editors and other staff use.
    We also have a 15 TB Drobo network storage device attached to the Mac mini server we are going to replace.
    The production team uses Final Cut and Motion to build documentaries.
    The other issue is that we need a way to connect the office in Baltimore to the office in Washington DC and ideally share files between the locations; currently Baltimore cannot access the network and has to keep a copy of the unedited footage on its own Drobo.
    What type of server setup do you recommend? Quad core? speed?
    Should we set up a VPN to connect offices or does someone have a better idea, if so what applications do we need?
    I know Windows networking pretty well, but Mac is totally new to me. Currently, editing files from the server is really slow, so files are usually pulled to the iMacs for editing. Burning DVDs from files on the server hardly ever works - they pull the files local, then burn the DVD, and it works.
    I would love to hear suggestions to help us get up and running.
    Does anyone know of a good website for server setup? Since there are no domains in OS X that I know of, how can we secure the network? Links would be great.
    Thanks in advance for any help and suggestions.

    Well, if you're looking for performance, you could get a Promise VTrak E-Class RAID with 32 TB of storage; later on you can add more chassis to the RAID for more space. The RAID can be expanded up to 160 TB (80 drive bays, each drive 2 TB).
    Using Fibre Channel you could attach the RAID to an Xserve or Mac Pro (running Mac OS X Server). You'll probably want at least 8 or 16 GB of RAM on the server.
    The server can run a copy of Final Cut Server, which makes it easier to work as a group. Mac OS X Server, when properly configured, can also be used to create a VPN between both locations.
    Final Cut Server will let editors check in/out specific parts of the documentaries, so the project lives on the server instead of scattered over everyone's computers. Part of this is that you can pull down thumbnail versions of the video to work from; only when you do the final render do you download the HD version.
    If you want even better performance on the editing stations, you could also upgrade to Mac Pros. Mac Pros have expansion slots which you can use to add Fibre Channel networking; you could also use them to add a Blackmagic real-time HD capture card.
    Or, if you want to keep the iMacs, you might want to hook the server to a switch by fibre and have the iMacs connected to the same switch by 1000BASE-T.
    If you call Apple, I'm sure they'd be happy to help you figure this all out.
    For info on Promise: http://www.promise.com/apple/
    For more info on Final Cut Server: http://www.apple.com/finalcutserver/
