Compatibility questions re: OS X Server and new Serial ATA HD

The good news: We just got a 500 GB Serial ATA HD with some end-of-year money, just as we were running out of space on our 250 GB drive.
The bad news: We have an Xserve G5 running OS X Server 10.3.7, and the hard drive's specs say it requires OS X Server 10.4.7 or later.
I checked the Apple Store for Serial ATA drives compatible with our version of OS X Server, but the only 500 GB Serial ATA module I found is no longer available. Looking at the 750 GB Serial ATA drive, I can't find any specs on software compatibility.
Suggestions? I really don't want to spend $1000 to upgrade to OS X 10.4 right now.
Thanks in advance for the input.
Xserve G5, OS X Server 10.3.7

Hi PrintTech-
I think the hardware will be fine. You may want to search this forum as I seem to remember this question coming up before and it turned out that all was well.
Assuming that this is an Apple-branded drive in an Apple drive sled, you should be fine hardware-wise.
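If you want to sanity-check the drive once it's installed, a couple of stock Terminal commands will show whether the Xserve sees the full 500 GB; these have shipped with OS X for a long time, so they should be available on 10.3.7 (the disk identifier below is only an example - use whichever one the new drive actually gets):
diskutil list
# lists every attached disk, its partitions, and their sizes
diskutil info disk1
# shows the bus, capacity, and partition scheme for that disk
If the full capacity shows up there, the OS is seeing the hardware fine.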
Luck-
-DaddyPaycheck

Similar Messages

  • How to copy Restore database in new server and new domain.

    Hi all,
    I'm trying to back up a database from one SQL Server and restore it to another (both are SQL 2008 R2), but in a different domain.
    When I try to run a few queries, the output is empty. Please advise.
    noor hafizah

    Hi Noor,
    Filtered views use the Principal Object Access data to determine the records shown to the current user (the one executing the query). You will not see any results if that account is not defined inside CRM.
    Please try to assign the correct security role to the user account.
    If you have more questions related to CRM, you can post in the Microsoft Dynamics CRM Forum; more experts there will assist you.
    Regards,
    Fanny Liu
    TechNet Community Support

  • Question about HP MediaSmart Server and Time Machine

    I just got an EX490 MediaSmart server to use for backups and as a media server. I set up the server to serve as a Time Machine backup device, and it seems to work fine. However, when the server is connected and appears as a Share in the Finder, if I use the Finder to navigate to the "Mac" folder on the server, I don't see any files inside. Actually I searched through all visible files on the server and didn't see any files that might be a backup file. I tried turning on Invisible file viewing but that didn't make any difference. I'm assuming that the Time Machine backup file on the server is somehow hidden from viewing.
    Does anyone know why the backup file isn't appearing?
    Thanks.

    I have been looking at a similar HP NAS model, which now appears to have become a prior model -
    HP EX485 MediaSmart Home Server
    http://www.amazon.com/gp/product/B001OI2ZG4/
    The prices have gone up slightly, too. The reviews on the above EX485 read almost like mini-reviews and were informative. I assume you've created accounts and perhaps have (need?) a PC to configure Windows Home Server R2 fully? I was waiting for full Windows 7 support, which is still in beta.
    From what I read, one of the "Missing Bible"-type books for WHS helps to get the most out of it and helps with setup (though generally that just takes 15 minutes if you know how).

  • Final Cut Server and Compressor / Qmaster serial conflict

    On my Final Cut Server machine, I have Final Cut Server version 1.1.1. On my Mac Pro I have Final Cut Studio 7. I have installed Compressor and Qmaster on my Final Cut Server machine from the Final Cut Pro DVD so I can use Qmaster (the version of Qmaster must match exactly on all machines). But the two serials conflict, and Final Cut Server will not start due to the following problem:
    "Either the current serial number is not valid or there is another server using the same serial number.
    Check the serial number and then stop and start the server using the Final Cut Server Preferences panel."
    Of course both serials are completely legit. Is it really true that I either need to downgrade Compressor and Qmaster (FCS will work, but Qmaster will not) or buy FCS 1.5? I have paid a lot of money for this setup already; it can't be true that I am forced to upgrade to 1.5.
    Please, Apple, a solution?

    I have a similar case. Maybe someone knows how to solve it:
    We installed Final Cut Server, and on the same Mac Pro we installed Final Cut Studio.
    After a while we dedicated this machine to Final Cut Server and used the Final Cut Studio license on another machine.
    Now, if I want to change or add a transcode setting in Compressor, I get an error message along the lines of 'conflicting serial number'.
    So Compressor thinks it's still part of the Final Cut Studio installation. But I don't quite understand, because Final Cut Server has its own Compressor, doesn't it?
    How can I solve this without losing the custom transcode settings I've made in Compressor?

  • Mix 10.5.8 Server and new 10.6 - Advice?

    We are currently using a Leopard 10.5.8 Xserve with DansGuardian and a Squid proxy to provide web access and limited wiki and calendar use for about 200 users (not all online at once). I would like to investigate serving podcasts daily to the users using the necessary services in Mac OS X Server. There is also a possibility of streaming broadcasts once a week. Workgroups have been set up and Podcast Producer works fine.
    My question has to do with the load on the Xserve. While testing, there does not seem to be a significant impact on web access and the server. However, going forward, am I better off deploying Podcast Producer, Xgrid, and QTSS on another server, perhaps a new 10.6 mini? If so, has anyone had experience integrating a 10.6 server with a 10.5? Are there any issues to be concerned about?


  • Big Project:  Upgrade to Snow Leopard Server and new drive

    Hi guys, with my early-2009 Mac Pro (with maxed RAM) I need some advice. I am a guy who sticks with things for a long time, is cautious about change, and likes consistency.
    So, I have a copy of Snow Leopard SERVER I want to install from scratch. I also have four WD 1 TB Caviar Black drives I want to install, replacing the current 640 GB startup drive. How would I best back up my current data on this smaller drive and then restore it onto the new 1 TB drive?
    I give you the following objectives I have for the use of these 1tb drives on the system and would appreciate any other advice you might have to offer.
    1) I envision using two of the drives as primary, and the other two as backups using TimeMachine.
    2) I would like to have the following partitions on the startup drive (#1).
    a) one boot partition for installation of the Server OS and application programs.
    b) one boot partition for future Mac OS installations.
    c) one data partition to house the /Users filesystem - 4 named users in the household who share two MacBooks.
    d) one or more partitions for separate deployments of guest OSes for XP, Windows 7, etc. using Parallels.
    3) As far as partition sizes, I am thinking 50 GB for each of the Mac OS bootables and apps, and 100 GB for the guest OSes, leaving about 500 or 600 GB for /Users, including an iTunes library and the wife's scrapbooking.
    4) OK, drive #2. I want this drive dedicated to the demanding storage needs of video.
    5) Drive #3: Time Machine backup for Drive #1.
    6) Drive #4: Time Machine backup for Drive #2.
    Thoughts? I appreciate your questions and critique on my outline here.
    Thanks much!

    Offload /Users to another hard drive.
    Don't use internal drives for Time Machine, especially both. It's safer to be external. And if internal, use extra drive sleds.
    Depending on what you use it for, the 640 GB might be fine. Maybe use it as a bootable clone, though.

  • Newbie question about Oracle Parallel Server and Real Application Cluster

    I am trying to find out what kind of storage system is supported by 9i Real Application Clusters. I have looked at 8i Oracle Parallel Server, which requires raw partitions, so NAS (network-attached storage) that provides an interface at the file level will not work. Does anyone know if 9i Real Application Clusters has a similar requirement for raw partitions? Any suggestions as to whether SAN or another technology would be suitable? Pointers to more information are appreciated.
    Robert

    Hi Derik,
    I know this is a really broad question. No, it happens all the time! Here is a similar issue:
    http://blog.tuningknife.com/2008/09/26/oracle-11g-performance-issue/
    +"In the end, nothing I tried could dissuade 11g from emitting the “PARSING IN CURSOR #d+” message for each insert statement. I filed a service request with Oracle Support on this issue and they ultimately referred the issue to development as a bug. Note that Support couldn’t reproduce the slowness I was seeing, but their trace files did reflect the parsing messages I observed."+
    I would:
    1 - Start by examining historical SQL execution plans (stats$sql_plan or dba_hist_sql_plan). Try to isolate the exact nature of the decreased performance.
    Are different indexes being used? Are you getting more full-table scans?
    2 - Migrate-in your old 10g CBO statistics
    3 - Confirm that all init.ora parms are identical
    4 - Drill into each SQL with a different execution plan . . .
    RAID 5: Don't believe that crap that all RAID-5 is evil . . .
    http://www.dba-oracle.com/t_raid5_acceptable_oracle.htm
    But at the same time, consider using the Oracle standard, RAID-10 . . .
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/t_oracle_tuning_book.htm
    "Time flies like an arrow; Fruit flies like a banana".

  • Help With MAC Mini Server and New SSD

    Hey Guys,
    I just got a Mac mini Server. I swapped out one HD for a new SSD, a Crucial one.
    I am booting in recovery mode and it only sees one drive, which it says is 1.5 TB (the 1 TB stock drive plus the new 512 GB SSD).
    I just want to run OS X on the SSD. Help!

    I'm unsure of the proper procedure as I'm not experienced dealing with Fusion Drive setups. If Disk Utility shows you have a 1.5 TB drive, then what do you see when you boot the computer and check disk drive space using Get Info? I know you can revert the volume you have back to standard format if you don't mind using the Terminal in your Utilities folder:
    Open the Terminal and paste or enter the following at the prompt:
    diskutil list
    Press RETURN. This will list for you the device information for your HDD. Here is an example of the output:
    /dev/disk1
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:      GUID_partition_scheme                        *120.0 GB   disk1
       1:                        EFI EFI                     209.7 MB   disk1s1
       2:                  Apple_HFS Yosemite                119.2 GB   disk1s2
       3:                 Apple_Boot Recovery HD             650.0 MB   disk1s3
    You want what you see at the top left - /dev/diskn, where n is the integer number. Then enter or paste at the prompt:
    sudo diskutil cs revert /dev/diskn ; replace the n with the integer number found above.
    Press RETURN and enter your admin password when prompted. It will not echo to the screen, so be careful typing. Press RETURN again. Wait until it completes the process which is when the prompt returns.
    Given the potential danger when using the Terminal in this way, please be sure you have first made a reliable backup. Better safe than sorry.
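    One extra check that may help before reverting anything (just a sketch - it only applies if the two internal drives really were joined as a Fusion/CoreStorage set):
    diskutil cs list
    # lists any CoreStorage logical volume groups; if the 1 TB HDD and the 512 GB SSD
    # both appear under a single logical volume group, the drives are fused, and the
    # UUIDs shown here are what a revert (or a split) would operate on
    If nothing is listed, there is no Fusion set to undo and the drives can simply be partitioned and formatted individually in Disk Utility.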

  • Move appset to new server and new version

    Hi
    I am currently on 5.1, SQL 2005 and Windows 2003, where we have some performance issues. To see the effect of an upgrade, I would like to move my current appset to a new server with BPC 7.5, SQL 2008 and Windows 2008.
    How should I do this?
    Can I make a clean installation of BPC 7.5 on SQL 2008 and then restore the 5.1 appset into 7.5?
    What is the recommended way to do this?

    Hi,
    There is nothing to worry about.
    You need to take the backup from the 5.1 system and restore it in 7.5 MS. After the restoration, please make sure that all the components are working fine. You might have to look at the security once again, as well as the reports and the input schedules. There might be small alterations / modifications required; however, there won't be any big development. There is only one area which needs a little more attention - the SSIS packages. You might have to rebuild the SSIS packages in SQL 2008. Another thing is that if you have any macros or VB in your templates, you might have to revisit them.
    As a whole, I don't see much challenge in this whole process.
    Hope this helps.

  • Don't see my post - how long does it take? Question on non-wireless printer and new FiOS network

    I have a Canon iP5000, a non-wireless printer. We just installed our FiOS network with 2 Mac laptops and an iMac desktop. We had also been using an AirPort Extreme and an AirPort Express (as an extender) with our Cox cable modem.
    We cannot seem to hook up the Canon to the network. We ended up plugging the printer into the AirPort Extreme and the AirPort Extreme into the router. However, the AirPort Extreme now appears as a separate network - you can see it in the list of networks at the top of the screen - so you have to toggle between our FiOS network and the AirPort Extreme network in order to print.
    There's got to be a way to connect the printer to the FiOS router?
    Logging into the FiOS router, you can see the print server is on the list - however it says inactive - and never changes status.
    Sorry if this is a duplicate of my previous post - I'm new here and not sure how long it takes for these messages to appear.

    Actually ... go into the AirPort and turn on "bridge mode". Then go into the FiOS router and turn off the wireless. Connect the WAN port on the AirPort to one of the open ports on the FiOS router. This will allow you to continue to use your AirPort as the wireless access point, and continue to have your printer attached as you have had it.
    It may be necessary to change the IP address of the AirPort in this configuration (I think bridge mode will suggest one for you - but essentially an address somewhere in the 192.168.1.x range that's not used - I suggest 192.168.1.254).

  • Few more questions (sorry I'm 22 and new)

    Let me just say you guys are amazing - this entire community and how you support each other. I've strolled through just about every post in the rebuilding section for the most recent 20 pages and must say, everyone here is absolutely glorious. However, I have a few questions about my own personal roadmap if anyone could provide insight. I'm not a complete noob, but so far I've just been reading and taken no action.
    I have a CO account with CITI, which should be paid in full today or tomorrow or whenever the check clears. The CA was working, I assume, on behalf of CITI, because both checks were made for the full amount to the OC and the balance was the balance, not 0. I had perfect payment for 2.5 years, then was delinquent for 8 months, and paid the 4.1k in full in two installments. When is it not greedy to ask them for some love and peace and kindness with a GW letter? I have a valid reason as to why I stopped (2 car accidents, April and July) and why I paid them back (accident settlement). How likely are they to help me out? I can provide them anything their heart desires with regards to documentation - medical, college related, police reports, absolutely anything. I was making maybe $300 a month during that time and literally borrowed $33 from my neighbor to buy gas. Paying them seemed so huge at the time I just closed my eyes.
    I also have a natural gas account on my report that is negative in this manner: JUL-OK__AUG-OK__SEPT-ND__OCT-120. Is this possible to be real? 120 is 4 months. If I still don't believe it, who do I call first, the gas company or Experian? If I call the gas company, do I say "you're wrong, give me statements", or do I nicely ask them to change it off the bat without disputing? If they give me statements and they are in fact wrong, do I call back or write a letter? Certified or not? Do I send said statements to the bureaus? When?
    Finally, was it a mistake to take out and pay off a Discover student loan in two years? It was for 4k, and I applied again and was denied. Did they not make enough to bother with me? I sent them a recon letter, saying I'm loyal, please love me. I hope to hear back because I loved them much more than Sallie. Any insight as to where I should have sent such a letter (email)? I sent it to application status questions on their site. Thoughts/suggestions?

    Yes, I would start with the GW letters now. Explain about the accidents and let them know that you were perfect until that point. In my experience, sometimes one GW letter will do it and other times you send one once a month. As far as the gas company goes, yes, I would call them and ask to see the statements if you don't have any. If they are indeed wrong, I would ask them to correct it. Do you still have service with them? If you do, then I would think they are wrong; otherwise I'm sure they would not still be providing the service. If not, then yes, I would dispute it with the CB. I know nothing about Discover student loans, so I will let someone else get to that one. Overall it is nice to have one installment loan going. When you paid it off you probably had a small drop in your FICO score.

  • Question about udev,hal, hotplug and new Xorg 1.8

    From what I understand the hal .fdi files now are ignored since the last upgrade of Xorg to version 1.8. Before that, I used to modify .fdi files and then restarting hal to get the modifications to work. I did this to configure my joystick and synaptics for example.
    How am I supposed to do that now with this new system? I've been trying with HAL and the .fdi files, but they are totally ignored (as expected), and making changes to /etc/X11/xorg.conf.d/10-name-here.conf works, but I have to restart X.
    According to this, the configuration for input devices should be done via .conf files in xorg.conf.d/ or udev rules.
    The Xorg input hotplugging page on the wiki is kind of outdated too, because it's all about the deprecated HAL, so I don't understand how I'm supposed to configure and work with the hotplugging...
    I hope someone can make this a little bit clear :) (or tell me that I'm really lost)

    Not what I'm looking for, really, because if I change XkbLayout for example (or if I create a new file), the change only works after restarting X. My joystick, for example, doesn't work, and creating a new InputClass only takes effect after restarting X.
    I know that I can change the keyboard layout some other way while using it with setxkbmap, but still, I'd like to know how the hotplugging is supposed to be used.
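    For what it's worth, a minimal keyboard snippet under /etc/X11/xorg.conf.d/ looks something like the following (the file name and layout are only examples); as you have found, the file is read when the X server starts, so edits to it still need an X restart:
    # /etc/X11/xorg.conf.d/10-keyboard.conf (example name)
    Section "InputClass"
        Identifier "keyboard defaults"
        MatchIsKeyboard "on"
        Option "XkbLayout" "us"
    EndSection
    For truly on-the-fly changes, setxkbmap (for keyboards) or xinput set-prop (for other input devices) are the runtime tools; the .conf files only set the defaults applied when a matching device is plugged in.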

  • Newbie: Mail server and running other services

    We have a small office network of 6 Macs that connect to a Panther server; this server provides DNS and file sharing, plus FileMaker Server and Retrospect Server, and that's about it. It doesn't suffer from heavy use.
    I have been using a separate Mac to run QuickMail server 1 (OS 9), and I need to upgrade it as some of the mail protocols are out of date.
    We have a static IP address assigned to our mail gateway by our service provider.
    My question, or request for advice:
    Should I just start using OS X Server to run mail services,
    or
    upgrade QuickMail and continue running it separately on a new Mac mini (or similar)?
    My concern is that, at the moment, any local problem with email can be solved pretty much without affecting the other server or the network.
    Thanks

    The basic setup is pretty simple...
    Replace following with your own equivalents...
    Domain name: woopee.com (the domain name after the "@" in your emails)
    Host name: mail.woopee.com (the hostname your MX record points to. Does not need to match server hostname. This will be the hostname mail server uses when communicating with other servers)
    Local Host Aliases: woopee.com (a list of the domains you want to accept mail for. Probably just same as Domain name?)
    Local network: 192.168.10.0/24 (LAN IP range for local users. Used to bypass authentication when they send mail out)
    Server Admin-> Mail-> General...
    Tick:Enable POP
    Tick:Enable IMAP
    Tick:Enable SMTP, Allow incoming mail, Enter Domain name & Host name (from above).
    Mail-> Relay
    Tick: Accept SMTP relays... Enter localhost IP: 127.0.0.1/32 and Local network (from above).
    Tick: Use these junk mail rejection servers. Add: zen.spamhaus.org
    Mail->Filters
    Tick: scan for junk mail. Minimum score: 5 (can be reduced later)
    Junk mail should be: Delivered (will just tag and forward to recipient)
    Tick: Attach subject tag: * Junkmail *
    Tick: Scan email for viruses
    Infected messages should be: Deleted
    Tick: update junk mail & virus database: 1 time per day
    Mail->Advanced->Security
    SMTP: none (this prevents smtp authentication from anyone outside your Local network)
    IMAP: Tick: Clear, Plain, Cram-md5 (or leave all unticked if only using pop accounts)
    POP: Tick: APOP
    Mail->Advanced->Hosting
    Local Host Aliases: Add: localhost & woopee.com (separate entries, see Local host aliases, above)
    That's it (I think ...although I cannot guarantee I have not missed something). There will be no problem setting this up and seeing it going whilst still using the existing mail server. Set up client accounts to send and receive from new server and you can send mail around internally to test. Last thing would be to change your firewall port-forwarding for SMTP from existing server to new one.
    Watch the mail.log in Console for any errors & do plenty tests.
    Ensure users have mail enabled in Workgroup Manager.
    There are plenty of mods available beyond this. Have a good read through the mail services manual (I know it's a bit confusing at times) and you should see where the above settings fit in.
    There is lots of stuff on the forum here which you can search for. Spam filtering in particular can be made far more effective, but that requires editing the underlying Unix configuration files - again, plenty of previous discussions about that on the forum. Meantime, the zen.spamhaus.org RBL will filter out a great many spammers.
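    As a rough smoke test once SMTP is enabled, you can talk to the new server by hand from a machine on the LAN before touching any firewall rules (hostnames and addresses below are the placeholder ones from above - substitute your own):
    telnet mail.woopee.com 25
    # then type the following one line at a time and watch for 250 responses:
    EHLO client.woopee.com
    MAIL FROM:<test@woopee.com>
    RCPT TO:<someuser@woopee.com>
    QUIT
    If RCPT TO is accepted for a local user, incoming delivery is basically working; anything rejected will also show up in mail.log.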
    -david

  • New server and/or CA certificate for connection from custom authentication

    We are running Access Manager version 72005Q4 in the Sun ONE Web Server 6.1SP5 B06/23/2005 container with java build 1.5.0_07-b03. I run a custom authentication module which checks sessions against our university single sign on system which is CAS (from Yale/Jasig). The checks are essentially https calls. All this has been working well for us for the last couple of years.
    I would like to migrate the certificate used on the university CAS system from a Verisign certificate to a wildcard certificate issued by the IPS CA in Spain -- these are in most browsers but are not in the standard batch of cacerts CAs -- and are free for .edu domains.
    My other java based authentication plugins (Blackboard, custom apps etc) have worked fine once I import the certificate into the cacerts for the java container, but I'm missing something (obvious probably) about importing this certificate so that my amserver custom authentication module can connect to the CAS server once the CAS server is using the new certificate.
    Could anyone provide guidance on where I need to import this server certificate (or preferably the IPS CA) in order to allow the custom authentication module to work properly? I assume this same problem has been solved by people wishing to connect from the amserver to services with self signed certificates. For some reason I'm finding the debugging unexpectedly difficult, I'll outline some of those details below.
    Relevant things I've tried so far:
    Import both the server cert and the IPS CA into the cacerts of the Java container identified in the web server's server.xml (/usr/jdk/entsys-j2se); a keytool sketch of this import is below.
    Import the IPS CA into the web server's cert8-style DB via the web admin server.
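    For anyone wanting the specifics, a cacerts import of that sort looks roughly like this (the file name and alias are placeholders, and the exact cacerts path will depend on your JDK layout; the default keystore password is changeit):
    keytool -import -trustcacerts -alias ipsca \
        -file IPS_CA.crt \
        -keystore /usr/jdk/entsys-j2se/jre/lib/security/cacerts -storepass changeit
    keytool -list -alias ipsca \
        -keystore /usr/jdk/entsys-j2se/jre/lib/security/cacerts -storepass changeit
    # the second command just confirms the CA actually landed in that keystore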
    The debugging has surprised me a bit, as I'm not getting an error that is explicitly SSL-related. It almost seems like the URLConnection object ends up using an HttpURLConnection rather than an HttpsURLConnection and never gives me a cert error, but rather a connection refused, since there is no non-SSL service running on CAS. The same code pointed at the server running the Verisign cert works as expected.
    Part of the stack:
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: java.net.ConnectException: Connection refused
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.socketConnect(Native Method)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.Socket.connect(Socket.java:516)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.Socket.connect(Socket.java:466)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.<init>(HttpClient.java:214)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.New(HttpClient.java:287)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.New(HttpClient.java:311)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.setNewClient(HttpURLConnection.java:489)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.setNewClient(HttpURLConnection.java:477)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.writeRequests(HttpURLConnection.java:422)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:937)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at edu.yale.its.tp.cas.util.SecureURL.retrieve(Unknown Source)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at edu.yale.its.tp.cas.client.ServiceTicketValidator.validate(Unknown Source)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at edu.fsu.ucs.authentication.providers.CASAMLoginModule.process(CASAMLoginModule.java:86)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at com.sun.identity.authentication.spi.AMLoginModule.wrapProcess(AMLoginModule.java:729)
    The relevant bit of code from SecureURL.retrieve looks as follows:
    URL u = new URL(url);
    if (!u.getProtocol().equals("https"))
        throw new IOException("only 'https' URLs are valid for this method");
    URLConnection uc = u.openConnection();
    uc.setRequestProperty("Connection", "close");
    r = new BufferedReader(new InputStreamReader(uc.getInputStream()));
    String line;
    StringBuffer buf = new StringBuffer();
    while ((line = r.readLine()) != null)
        buf.append(line + "\n");
    return buf.toString();
    } finally { ...
    The fact that this same code in other authentication modules running outside the amserver (in other web containers as well, tomcat and resin for example) running java 1.5 works fine with the new CA, as well as with self signed certs that I've imported into the appropriate cacerts file leads me to believe that I'm either importing the certificate into the wrong store, or that there is some additional step needed for the amserver in the Sun Web container.
    Thank you very much for any insights and help,
    Ethan

    I thought since this has had a fair number of views I would give an update.
    I have been able to confirm that the custom authentication module is using the cert8 db defined in the AMConfig property com.iplanet.am.admin.cli.certdb.dir as documented. I do seem to have a problem using the certificate to make outgoing connections, even though the certificate verifies correctly for use as a server certificate. This is likely a question for a different forum, but just to show what I'm looking at:
    root@jbc1 providers#/usr/sfw/bin/certutil -V -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -u V
    certutil: certificate is valid
    root@jbc1 providers#/usr/sfw/bin/certutil -V -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -u C
    certutil: certificate is invalid: Certificate type not approved for application.
    root@jbc1 providers#/usr/sfw/bin/certutil -M -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -t uP,uP,uP
    root@jbc1 providers#/usr/sfw/bin/certutil -V -l -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -u C
    FSU Wildcard Certificate : Certificate type not approved for application.
    So it could be that I don't understand how to use certutil to get the permissions I want, or it could be that using the same certificate for both server and client functions is not supported - though you can see why this would be a common case with wildcard certificates.
    BTW, for those interested, it did seem to be the case that when the certificate failure occurred, the URLConnection then attempted to bind to port 80 in cleartext even though the URL was clearly https. I'm sure this was just an attempt to help out with a malformed URL, but it seemed that the URLConnection implementation in the amserver would have swapped traffic over to cleartext if that port had been open on the server I was making the https connection to; that seems dangerous to me, and I would not have wanted it to quietly work that way, exposing sensitive information to the network.
    This was why I was getting back a connection refused instead of a certificate exception. The URLConnection implementation used by the amserver is defined by the java.protocol.handler.pkgs=com.iplanet.services.comm argument passed to the JVM, and I imagine this is done because the amserver pre-dates the inclusion of the sun.net.www.protocol handlers, but I don't know; there may be reasons why the amserver wants its own handler. I only noticed that this is what was going on when I was casting the HttpsURLConnection objects to other types while trying to diagnose the certificate problem. I would be interested in hearing if anyone knows of a reason not to use sun.net.www.protocol with the amserver.
    After switching to the sun.net.www.protocol handler I was able to get my certificate errors rather than the "Connection Refused", which is what led me to the above questions about certutil.
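    For anyone hitting the same thing: once the right protocol handler is in play, a quick external check of which chain the CAS host actually presents can be done with openssl (the hostname here is a placeholder):
    # show the full certificate chain sent during the TLS handshake
    openssl s_client -connect cas.example.edu:443 -showcerts < /dev/null
    # print just the subject and issuer of the server certificate
    openssl s_client -connect cas.example.edu:443 < /dev/null 2>/dev/null | \
        openssl x509 -noout -subject -issuer
    If the issuer shown is the IPS CA, then any remaining failure is purely a trust/keystore question on the client side.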

  • Exchange 2010 Migration - Decommissioning Multi Role Server and Splitting Roles to 2 new servers - Certificate Query

    Hi,
    I have been tasked with decommissioning our single Multi Role Server (CAS/HT/MB) and assigning the roles to 2 new servers. 1 server will be dedicated to CAS and the other new server will be dedicated to HT & MB roles.
    I think I'm OK with moving the HT and MB roles from our current server to the new HT/MB server by following "Ed Crowley's Method for Moving Exchange Servers"; my focus is on the migration of the CAS role from the current server to the new one, as this has the potential to kill our mail flow if I don't move the role correctly.
    The actual introduction of the new CAS server is fairly straight forward but the moving of the certificate is where I need some clarification.
    Our current multi role server has a 3rd Party Certificate with the following information:
    Subject: OWA.DOMAIN.COM.AU
    SANs: internalservername.domain.local
              autodiscover.domain.com.au
    The issue here is the SAN entry "internalservername.domain.local", which will need to be removed in order for the certificate to be used on the new CAS server - firstly because the CAS server has a different name, and secondly because internal FQDNs will no longer be allowed in public certificates from 2015 onwards. So I will need to revoke this certificate and issue a new certificate with our vendor, Thawte.
    This presents me with an opportunity to simplify our certificate and make changes to the URLs using a new certificate name, so I have proposed the following:
    New Certificate:
    Subject: mail.domain.com.au
    SANs: autodiscover.domain.com.au
              OWA.DOMAIN.COM.AU
    I would then configure the URLs using PowerShell:
    Set-ClientAccessServer -Identity NEWCASNAME -AutoDiscoverServiceInternalUri https://mail.domain.com.au/autodiscover/autodiscover.xml
    Set-WebServicesVirtualDirectory -Identity "NEWCASNAME\EWS (Default Web Site)" -InternalUrl https://mail.domain.com.au/ews/exchange.asmx
    Set-OABVirtualDirectory -Identity "NEWCASNAME\oab (Default Web Site)" -InternalUrl https://mail.domain.com.au/oab
    Set-OWAVirtualDirectory -Identity "NEWCASNAME\owa (Default Web Site)" -InternalUrl https://mail.domain.com.au/owa
    I would also then set up split DNS on our internal DNS server creating a new zone called "mail.domain.com.au" and add an host A record with the internal IP address of the new CAS server.
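    As a quick check once that zone and A record are in place, an internal client should resolve the name to the new CAS server's internal address rather than the public one (the name below is the one proposed above):
    nslookup mail.domain.com.au
    # expected answer: the internal IP of the new CAS server, not the public/external IP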
    Now I know I haven't asked a question yet and the only real question I have is to ask if this line of thinking and my theory is correct.
    Have I missed anything or is there anything I should be wary of that has the potential to blow up in my face?
    Thanks guys, I really appreciate any insights and input you have on this.

    Hi Ed,
    Thanks for your reply, it all makes perfect sense. I guess I was being optimistic by shutting down the old server and then resubscribing the Edge and testing with mailboxes on the new mailbox server.
    I will make sure to move all of the mailboxes over before removing the old server via "Add/Remove Programs". Will I have to move the arbitration mailboxes on the old server across to the new mailbox server? Will having the arbitration mailboxes on the old server stop me from completely removing Exchange?
    Also, the InternalURL & ExternalURL properties are as follows:
    Autodiscover:
    New CAS - InternalURL: https://svwwmxcas01.pharmacare.local/Autodiscover/Autodiscover.xml
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/autodiscover/autodiscover.xml
    WebServices:
    New CAS - InternalURL: https://svwwmxcas01.pharmacare.local/EWS/Exchange.asmx
    New CAS - ExternalURL: https://owa.pharmacare.com.au/EWS/Exchange.asmx
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/ews/exchange.asmx
    Old CAS - ExternalURL: https://owa.pharmacare.com.au/EWS/Exchange.asmx
    OAB:
    New CAS - InternalURL: http://svwwmxcas01.pharmacare.local/OAB
    New CAS - ExternalURL: https://owa.pharmacare.com.au/OAB
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/oab
    Old CAS - ExternalURL: https://owa.pharmacare.com.au/OAB
    OWA:
    New CAS - InternalURL: https://svwwmxcas01.pharmacare.local/owa
    New CAS - ExternalURL: https://owa.pharmacare.com.au/
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/owa
    Old CAS - ExternalURL: https://owa.pharmacare.com.au/
    ECP:
    New CAS - InternalURL: https://svwwmxcas01.pharmacare.local/ecp
    New CAS - ExternalURL: https://owa.pharmacare.com.au/ecp
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/ecp
    Old CAS - ExternalURL: https://owa.pharmacare.com.au/ecp
    Our Public Certificate has the following details:
    Name: OWA.PHARMACARE.COM.AU
    SAN/s: autodiscover.pharmacare.com.au, svwwmx001.pharmacare.local
    From your previous communications you mentioned that this certificate would not need to change - that it could be exported from the old server and imported to the new one, which I have done. With the InternalURL and ExternalURL information that you see here, can you please confirm that your original recommendation of keeping our public certificate and importing it into the new CAS is correct? Will we forever get the certificate warning on all of our Outlook clients when we cut over from the old to the new until we get a new certificate with the SAN of "svwwmx001.pharmacare.local" removed?
    Also, I am toying with the idea of implementing a CAS Array as I thought that implementing the CAS Array would resolve some of the issues I was having on Saturday. I have followed the steps from this website, http://exchangeserverpro.com/how-to-install-an-exchange-server-2010-client-access-server-array/,
    and I have got all the way to the step of creating the CAS array in the Exchange Management Shell, but I have not completed this step for fear of breaking connectivity for all of my Outlook clients. By following all of the preceding steps I have created a Windows
    NLB with dedicated NICs on both the old CAS and the new CAS servers (with separate IP addresses on each NIC and a new internal IP address for the dedicated CAS array) and given it the name of "casarray.pharmacare.local" as per the instructions on
    the website, the questions I have on adding the CAS array are:
    1. Do you recommend adding the CAS array using this configuration?
    2. Will this break Outlook connectivity altogether?
    3. Will I have to generate a new Public Certificate with an external FQDN of "casarray.pharmacare.com.au" pointing back to a public IP or is it not required?
    4. If this configuration is correct, and I add the CAS Array as configured, when the time comes to remove the old server is it just as simple as removing the NLB member in the array and everything works smoothly?
    So, with all of the information at hand my steps for complete and successful migration would be as follows:
    1. Move all mailboxes from old server to new server;
    2. Move arbitration mailboxes if required;
    3. Implement CAS Array and ensure that all Outlook clients connect successfully;
    4. Remove old server;
    5. Shut down old server;
    6. Re-subscribe Edge from new Hub Transport server;
    7. Test internal & external comms;
    We also have internal DNS entries that would need changing:
    1. We have split DNS with a FLZ of "owa.pharmacare.com.au" that has a Host A record going to the old server, this would need changing from "svwwmx001.pharmacare.local" to "svwwmxcas01.pharmacare.local";
    2. The _autodiscover entry that sits under _TCP currently has the IP address of the old server, this would need to be changed to the IP address of the new CAS;
    3. The CNAME that sits in our FLZ for "pharmacare.local" would need to be changed from "svwwmx001.pharmacare.local" to "svwwmxcas01.pharmacare.local".
    4. Or rather than using the FQDN of the server where applicable in the DNS changes would I be using the FQDN of the CAS Array instead? Please confirm.
    Would you agree that the migration path and DNS change plan is correct?
    Sorry for the long post, I just need to make sure that everything goes right and I don't have egg on my face. I appreciate your help and input.
    Thanks again.
    Regards,
    Jamie
