Best practices regarding internal and external access to SIM

Currently we have two separate Active Directories, one internal and one in the DMZ, and we plan to have a single SIM on a segmented network, giving our internal users direct access to the SIM UI and external users access through portlets that talk to SIM.
The external AD also hosts some internal users who need access to the DMZ applications, so a single SIM would save us the effort of managing two separate SIM environments (development, testing, upgrades, unique UIDs, etc.).
What are the best practices on the market: is a single SIM the preferred choice, or one SIM internally and one SIM in the DMZ hosting suppliers, customers, and so on?
With a single SIM environment, do you allow internal users accessing SIM from the Internet to change their internal AD password, or have you restricted that functionality in some way?
What about challenge-response questions: do you allow users to have the same set both internally and externally, or do you set up different questions for different user interfaces?
Is anyone willing to share how their environment is set up for internal and external access?

Yes, for handling access to SIM we probably need to look into some kind of access management solution to make this work securely.
The question is a bit complex, with many different factors controlling the outcome of the SIM implementation, but I hope this thread will give me some ideas on how we can solve it.
The question still remains whether it is common to have one or two SIMs, and what internal users are allowed to do in SIM from the Internet.
For example, are internal users allowed to change their internal Active Directory password through SIM from the Internet, or what have others done to limit the functionality?

Similar Messages

  • Use Same URL for Internal and External Access for CRM 2015 IFD

    I have set up a CRM 2015 server for IFD access.
    ADFS and CRM are on separate servers: the CRM server hosts all roles, and ADFS runs on an ADFS 2.0 server.
    Using the internal URL I am able to access CRM without entering my details (as expected)
    Using the external URL I am authenticated by ADFS as expected and can sign in.
    We have an internal domain domain.local
    We have an external domain domain.com (the certificate is for *.domain.com)
    We have a DNS zone created internally for domain.com.
    CRM URLs
    internal : internalcrm.domain.com
    External : externalcrm.domain.com
    I would like all users to use the same link regardless of whether they are internal or external, but I would also like any user who is on the domain to be logged in automatically without entering their username and password. What is the best way to do this?
    I have tried creating a CNAME record in the internal domain.com zone pointing externalcrm.domain.com to internalcrm.domain.com, but that didn't work; I still get the ADFS sign-in page.
    Thanks
    Thanks

    So fair warning, what you're asking for isn't really a supported deployment method of CRM.
    That said, you should be able to do some DNS trickery internal to your network that points your "crm.domain.com" to "crm.domain.local" and then hopefully CRM will treat the connection as if it came from an internal network.
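    As a rough illustration of that DNS trickery, a hedged sketch with the Windows DnsServer PowerShell module, run on an internal DNS server (the names follow the reply above, and 10.0.0.10 is a placeholder for the CRM server's internal IP; whether CRM then treats the traffic as internal is the same "hopefully" as above):

    # In the internal copy of the domain.com zone, pin the external CRM
    # name to the CRM server's internal address (placeholder IP).
    Add-DnsServerResourceRecordA -ZoneName "domain.com" -Name "crm" -IPv4Address "10.0.0.10"
    # Check what internal clients now resolve.
    Resolve-DnsName "crm.domain.com"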
    Otherwise, you're likely going to have to accept that everyone gets the ADFS login page internal and external to your network.

  • Single URL for internal and external CRM access when using IFD

    Hello,
    At one of our client sites I have set up IFD on CRM 2011. This IFD is behind TMG. My client is a big corporation, therefore all CRM components, including CRM, ADFS and SQL, are on separate servers.
    I have configured IFD using the single URL https://orgname.contoso.com. Their IT staff want to know why they can't use a single URL for internal and external access where internal users are not prompted for authentication when logging on to the CRM server. I know you can do URL rewriting in ADFS, but they want to know the reason "why internal users can't use the same IFD URL and don't get prompted for their credentials". The text below is from their IT staff.

    There are several approaches to your question. You need to set up both an internal and an external relying party trust. If you use the external URL, it will always direct you to the sign-in page; if you use the internal URL, it will resolve single sign-on.
    I've configured IFD for CRM multiple times, and this is how it works. CRM looks at the URL. If you use the external URL (org.domain.com), it will prompt for credentials. So what you are asking for, a single URL that resolves single sign-on internally and prompts externally, really isn't possible.
    What I recommend is:
    1. Make the external URL available internally (see the DNS sketch after this reply).
    2. Configure all Outlook clients against the external URL; that way you won't have to reconfigure when someone moves between internal and external.
    3. Have users who are primarily internal use the internal URL for the web client, which will resolve single sign-on.
    4. Have users who are primarily external use the external URL for the web client.
    For #1, since you only need to enter the credentials when you first configure CRM, it is for all practical purposes single sign-on.
    One thing I haven't tried that may work is using an IIS redirect internally to redirect the external URL to the internal URL. There is also a PowerShell script in the IFD guide that you can use to make the Outlook client switch between the internal and external URLs, but nothing that will give you a single URL that works as the internal relying party trust when internal and the external relying party trust when you are external.
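    For recommendation #1, a minimal sketch of making the external URL resolvable internally with a pinpoint DNS zone (DnsServer PowerShell module; the IFD host name comes from this thread, and 10.0.0.20 is a placeholder for the internal CRM address):

    # Pinpoint zone: a zone named exactly like the IFD host, so only this
    # one name is overridden for internal clients.
    Add-DnsServerPrimaryZone -Name "orgname.contoso.com" -ReplicationScope "Forest"
    # "@" places the record at the zone apex, i.e. orgname.contoso.com itself.
    Add-DnsServerResourceRecordA -ZoneName "orgname.contoso.com" -Name "@" -IPv4Address "10.0.0.20"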

  • Leopard.  Did a full restore from Time Machine.  Now I can't access my other internal and external drives.  I get the following error: The folder "Capture Video" can't be opened because you don't have permission to see its contents.

    Leopard. Did a full restore from Time Machine. Now I can't access my other internal and external drives. I get the following error: The folder "Capture Video" can't be opened because you don't have permission to see its contents. I have repaired permissions on the main hard drive. When I try to click on a disk I get the previously stated error. I can't even open the Get Info window to see what permissions/access there are. It simply will not let me see the contents. It shows the content of my main hard drive when I have clicked the other hard drive's name.

    Solved:
    sudo chflags 0 /Volumes/"FCP Time Machine BU"    # clear any locked/immutable flags on the volume
    sudo chown 0:80 /Volumes/"FCP Time Machine BU"   # reset owner to root, group to admin (gid 80)
    sudo chmod 775 /Volumes/"FCP Time Machine BU"    # owner/group read-write, everyone else read-only
    sudo chmod -N /Volumes/"FCP Time Machine BU"     # strip any ACL entries from the volume

  • All my hard drives (internal and external) have a small lock in the lower left corner of the icon and I don't have permissions to access. Permissions are set to 'Custom' in the get info window and I can't change them.

    All my hard drives (internal and external) have a small lock in the lower left corner of the icon and I don't have permissions to access. I have 3 user accounts set up and I cannot access any of them. Permissions are set to 'Custom' in the get info window and I can't change them. Originally I had Snow Leopard installed on one hard drive and 10.5.8 installed on another. I started to have some problems accessing data between them and so I tried changing the permissions on ONE hard drive partition. The next thing I know, all my drives are locked (except the ones with the systems on them), the small lock appeared in the lower left corner of the drive icons and I don't have permissions to access any of them. In the get info window, permissions are set to 'Custom' and I can't change them.

    There is suddenly a lock icon on my external backup drive!
    Custom Permissions

  • Setup internal and external DNS namespaces best practice

    Can an external namespace (e.g. companydomain.com) and an internal namespace (e.g. corp.companydomain.com or companydomain.local) run on the same DNS server (using Microsoft Windows DNS servers)?
    MS says it is highly recommended to use a subdomain for the internal namespace, say corp.companydomain.com if the external namespace is companydomain.com. How should this be set up? Shall I create my ADDS domain as corp.companydomain.com directly, or as companydomain.com and then create a subdomain corp?
    Thanks in advance.
    William Lee
    Hong Kong

    Can an external namespace (e.g. companydomain.com) and an internal namespace (e.g. corp.companydomain.com or companydomain.local) run on the same DNS server (using Microsoft Windows DNS servers)?
    Yes, it is technically feasible. You can have both of them running on the same DNS server(s). Just make sure that only your public DNS zone is published for external resolution.
    MS says it is highly recommended to use a subdomain for the internal namespace, say corp.companydomain.com if the external namespace is companydomain.com. How should this be set up? Shall I create my ADDS domain as corp.companydomain.com directly, or as companydomain.com and then create a subdomain corp?
    What is recommended is to avoid a split-DNS setup (where your internal and external DNS names are the same), because it introduces extra complexity and confusion when managing it.
    My own recommendation is to use .local for the internal zone and .com for the external one.
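    If you go with the recommended subdomain, a minimal sketch of creating the ADDS domain directly as corp.companydomain.com on the first domain controller (ADDSDeployment module; everything beyond the two names is left at its default, so treat this as a hedged starting point rather than a complete promotion script):

    # Promote the first DC of a new forest whose AD namespace is a
    # subdomain of the public name.
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName "corp.companydomain.com" -DomainNetbiosName "CORP"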

  • Backup internal and external hard drives-TC and offsite

    I now have Mavericks 10.9.3 on my iMac with a 2TB (1.25 TB used) internal hard drive.
    I also have some external drives attached to my iMac with older iPhoto libraries and other files (total file sizes 940 GB and 505 GB).  I primarily use Aperture now on my internal hard drive, but still access those iPhoto libraries on my external drives on occasion.  I have Time Machine backup my iMac internal hard drive on my 2TB Time Capsule regularly.
    My primary question is in regards to getting another copy onto a larger external drive that covers my internal and external drives so I can have a backup off-site.  Online backup services seem to always exclude external drives.  So a physical drive I can have offsite seems to be the best option.
    A year ago I changed the destination of my Time Machine backup to be on a 3TB external drive (backed up iMac internal hard drive and the two externals).  However, when I changed the destination of the backup back to the Time Capsule, the TM initiated a brand new backup (it did not recall that I had backed up prior to that on my Time Capsule).
    I want to backup monthly, if not quarterly for my off-site storage.  But, if every time I change the destination drive for TM, a new backup profile is created, it will overwork my drives unnecessarily.
    Is there a backup program or a process on "disk utility" I could run parallel to TM that I just use quarterly capturing only the changes/additions in those few months for both the internal and external hard drives?  Also, is there a way to add an external drive to my Time Capsule that is solely used to wirelessly backup the two externals on a regular basis (i.e. keep the internal 2TB drive backing up to the Time Capsule; and the external hard drive attached to Time Capsule via USB used as the backup drive for the external hard drives)?
    Summary:  I need to backup regularly to the local Time Capsule/additional external hard drive.  The data will come from my internal hard drive and my two external hard drives.  I also want to do quarterly backups of the additions/changes to all three drives to have on an offsite external drive that I manually backup to quarterly.  Any help is greatly appreciated.

    Carbon Copy Cloner is not on the App Store.
    Correct.. it is not approved because CCC (and most likely SuperDuper), the most popular backup software for the Mac, makes a bootable clone, and Apple will never approve of that. But let me assure you that is the genius of it. If the internal disk fails, you simply boot from the external. It is $40 but you can use it on all the computers in your home.
    CCC is a clone.. ie when it does the backup, any changes on the drive are changed on the clone. It does not work like Time Machine, which simply piles up incrementals until the drive fills up. The idea of CCC is a backup of the drive as it exists at any point in time. TM btw is also not a reliable archive, ie it thins backups constantly.. so you should never rely on it to archive old versions.. but in the middle of a project it does a good job of keeping various versions of your files. That is why I specifically said in my last post do not stop using it.
    and you can keep using the TC just for the internal drive.
    Keep TM running to the TC.. that will then keep a current hourly incremental of your drive. You can set CCC in a way which is a lot more flexible. ie backup just at the end of the day. There is no need for constant hourly backups. So to answer the second question.. you are still using your TC and TM.. but I suggest you only backup the internal drive.
    Please read a bit from forum expert Pondini on the value of clones and TM.
    http://pondini.org/TM/Clones.html
    That's why many folks use both Time Machine and a bootable clone, to have two separate, independent backups, with the advantages of both.  If one fails, the other remains.
    Now the ports issue.
    You can of course continue to use USB2. Just that moving large volumes around will be slow.. as doubtless you already know.
    On your particular Mac, since you missed out on USB3, which is a pain.. you can buy a Thunderbolt to USB3 adapter like the Belkin.
    http://www.belkin.com/au/p/P-F4U055/
    I suggested the Thunderbolt to eSATA hub (it is an older interface and the adapter is rather cheaper but, I hear, more reliable.. check reviews for both).
    http://store.apple.com/au/product/H8875ZM/A/lacie-esata-hub-thunderbolt-series
    SATA is the interface of the hard disk.. eSATA just means external SATA.. so it is native and without conversion.
    I was merely suggesting ways to speed things up. But as long as you don't run CCC on more than a daily basis then I think you will be fine just with USB 2. It will take a while on the day you do the swap over for the archive volume, but if you turn off the power saving on the Mac and leave it to run overnight it should be able to do most of it.. you need to realise it will have to deep-scan both disks.. to compare files.. but CCC is based on rsync and it is extremely fast and efficient. I am just not sure how long it will take. Anyway.. there is a plan.. tweak and adapt as you see fit for your needs.

  • How to Setup RDS custom property when internal and external domain name space is different

    Hi All
    I am setting up RDS for a customer.
    My internal domain name is domain.local and my external domain is domain.com.
    I came across the PowerShell cmdlet below on some blogs, because my internal and external namespaces are different:
    Set-RDSessionCollectionConfiguration -CollectionName QuickSessionCollection -CustomRdpProperty "use redirection server name:i:1 `n alternate full address:s:remote.domain.com"
    In the above command, which host does remote.domain.com point to?
    Is it pointing to the RD Session Broker,
    OR
    pointing to the RD Session Host servers?
    I am not sure what the above command will do exactly.
    Any help will be highly appreciated
    Thanks, Best Regards, Mahesh

    Hi,
    It all depends who is accessing the RDS Solution.
    If you have a large BYOD population or a large number of external users, it would be better to use a public certificate.
    Have a look at the following script, which simplifies the configuration of the RDSH hosts with certificates:
    http://ryanmangansitblog.com/2014/05/20/rds-2012-rdsh-certificate-deployment-script/
    You can use a custom RDP property to hide the Session host names.
    Have a look at the following article on configuring certificates:
    http://ryanmangansitblog.com/2013/03/10/configuring-rds-2012-certificates-and-sso/
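    As a rough sketch of what that certificate configuration boils down to (RemoteDesktop module; the broker name and PFX path are hypothetical placeholders, and the linked script automates the same idea):

    # Apply one public wildcard certificate to each RDS role.
    # "rdcb.domain.local" and the PFX path are placeholder values.
    $pfxPassword = Read-Host -AsSecureString -Prompt "PFX password"
    foreach ($role in "RDGateway", "RDWebAccess", "RDRedirector", "RDPublishing") {
        Set-RDCertificate -Role $role -ImportPath "C:\certs\wildcard.pfx" -Password $pfxPassword -ConnectionBroker "rdcb.domain.local" -Force
    }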
    Ryan Mangan | Ryanmangansitblog.wordpress.com

  • SharePoint 2013 - Office Web Apps - Internal and External Use

    I have successfully installed SharePoint 2013 and Office Web Apps on Azure VMs inside an Azure Virtual Network (IaaS model). Everything is working well. However, my testing has shown that external users and internal users can't use Office Web Apps at the same time.
    Office Web Apps, installed on its own VM, accommodates an external and internal URL quite well. However, SharePoint 2013 appears to only allow one setting for the WOPI zone, either internal or external but not both. I've set the WOPI zone to Internal-HTTPS (Set-SPWOPIZone -Zone "internal-https"). OWA works just fine if accessed from inside the Azure Virtual Network. However, if I try to access it from outside the Virtual Network, from the Internet, Office Web Apps fails. The exact opposite is also true: I can set the WOPI zone to External-HTTPS and accessing from the Internet works fine, but accessing inside the Virtual Network fails.
    Am I missing something? I, obviously, want Office Web Apps to function properly for both internal and external users simultaneously.
    I appreciate any help anyone can provide here.
    Glenn

    Hi Glenn,
    To have both Internet and internal use available to your end-users, you first need to configure the AAM settings. Open Central Administration > Application Management > Configure alternate access mappings. Let's say there is an existing web application named http://sharepoint and my end-users on the local network are able to access it using the URL http://sharepoint (root site collection). Here you need to add the Internet URL by selecting the web application and clicking Edit Public URLs. Add the Internet domain to the web application, e.g. http://sharepoint.abc.com. You don't necessarily have to edit the binding settings in IIS. Before continuing with the next steps, make sure you are able to access http://sharepoint.abc.com from the Internet while still being able to access http://sharepoint from the local network (aka internal).
    On the machine where Office Web Apps (OWA) Server 2013 is installed, open PowerShell to add the OWA module and use the following command to re-create a new OWA server farm if you've completed configuring it previously.
    New-OfficeWebAppsFarm -InternalUrl "http://owa" -ExternalUrl "http://owa.abc.com" -EditingEnabled
    In this case, I'm not using an SSL certificate to encrypt data over the Internet. You can use the Internet-public IP of the OWA server, like -ExternalUrl "http://198.xxx.xxx.xx". Add the CertificateName parameter if you want to use either a CA-issued certificate or a self-signed certificate.
    On your SharePoint machine, you need to re-bind all WFE machines to the WAC farm using the cmdlet New-SPWOPIBinding (see the sketch below). Next, you need to set the WOPI zone for both internal and external.
    Set-SPWOPIZone -zone "external-http"
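    A minimal sketch of that re-bind from the SharePoint Management Shell (the farm name owa.abc.com comes from the example above, and -AllowHTTP matches the non-SSL setup described here):

    # Bind the SharePoint farm to the OWA farm, then choose the zone.
    New-SPWOPIBinding -ServerName "owa.abc.com" -AllowHTTP
    Set-SPWOPIZone -Zone "external-http"
    Get-SPWOPIZone   # verify the zone SharePoint will hand out to clients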
    Note: I'm not using certificates at all in my guidance, but configuring them is just a matter of adding more parameters.
    I've recently successfully deployed an OWA multi-server farm for both internal and Internet use for two big clients. In a real-world scenario, OWA should ideally be published through a firewall (Forefront UAG, TMG, F5, etc.). Please let me know if you still have issues after following my steps. My email: [email protected]
    Regards,
    -T.s
    Thuan Soldier

  • Cisco ISE with both internal and External RADIUS Server

    Hi
    I have ISE 1.2. I configured it as management, monitor and PSN and it works fine.
    I would like to know if I can integrate an external RADIUS server and work with both the internal and external RADIUS server simultaneously.
    So some computers (groupe_A in Active Directory) will continue to do RADIUS authentication against the ISE internal RADIUS server, and other computers (groupe_B in Active Directory) will do RADIUS authentication against an external RADIUS server.
    I would like to know if this is possible to configure, and how I can do it.
    Thanks in advance for your help
    Regards
    Blaise

    Cisco ISE can function both as a RADIUS server and as a RADIUS proxy server. When it acts as a proxy server, Cisco ISE receives authentication and accounting requests from the network access server (NAS) and forwards them to the external RADIUS server. Cisco ISE accepts the results of the requests and returns them to the NAS.
    Cisco ISE can simultaneously act as a proxy server to multiple external RADIUS servers. You can use the external RADIUS servers that you configure here in RADIUS server sequences. The External RADIUS Server page lists all the external RADIUS servers that you have defined in Cisco ISE. You can use the filter option to search for specific RADIUS servers based on the name or description, or both. In both simple and rule-based authentication policies, you can use the RADIUS server sequences to proxy the requests to a RADIUS server.
    The RADIUS server sequence strips the domain name from the RADIUS-Username attribute for RADIUS authentications. This domain stripping is not applicable for EAP authentications, which use the EAP-Identity attribute. The RADIUS proxy server obtains the username from the RADIUS-Username attribute and strips it from the character that you specify when you configure the RADIUS server sequence. For EAP authentications, the RADIUS proxy server obtains the username from the EAP-Identity attribute. EAP authentications that use the RADIUS server sequence will succeed only if the EAP-Identity and RADIUS-Username values are the same.

  • Internal and external facing applications on same infrastructure

    I'm looking for suggestions on the best way to architect an APEX production environment where you may have two or three apps open to the public and 10 or more for internal access only. All of the apps (regardless of public or private) run on the same APEX instance, DB, app tier and web tier.
    We are using the APEX Listener on Weblogic for the app tier with an OHS webserver and Load Balancer in front of everything.
    The Load Balancer houses all of our certificates and has the ability to perform iRules to make more friendly urls.
    Our approach is to assign each app (i.e. https://someurl.com/apex/f?p=APPID) a static IP from the load balancer and then firewall public/private based on APPID to prevent internal-only apps from being reached outside the network.
    Unfortunately the iRule friendly-URL rewrite isn't able to mask the APPID in the URL (https://someurl.com/apex/f?p=200), which currently allows anyone to change the APPID parameter of the URL and cycle through all the apps regardless of the firewall rule in place to prevent them from being publicly accessible.
    For example, if we have the following apps deployed and the only one allowed open to the Internet is app 100, the URL rewrite isn't able to mask the APPID of 100 (or the app alias, if used).
    Publicly accessible:
    https://someurl.com/apex/f?p=100 (192.168.25.100)
    Internal only access:
    https://somedifferenturl.com/apex/f?p=200 (192.168.25.200)
    https://anotherurl.com/apex/f?p=250 (192.168.25.250)
    https://subdomain.someurl.com/apex/f?p=300 (192.168.25.300)
    I could navigate to the publicly accessible URL https://someurl.com/apex/f?p=100, change the APPID to one of (200, 250, 300), and still access those apps, which should not be open to the Internet.
    From the Internet, browsing directly to https://somedifferenturl.com/apex/f?p=200 or https://anotherurl.com/apex/f?p=250 or https://subdomain.someurl.com/apex/f?p=300 would all result in a page-not-found error, since their IPs are not accessible directly from the Internet.
    What is the best practice to overcome the above scenario and utilize shared infrastructure for internal and external facing applications? Is mod_rewrite my only other option to accomplish this setup and bypass the load balancer?

    Hi Jeff,
    I'm not sure if this is the ideal recommendation, but I know of a way you could block the "internal-only" applications from being accessed externally.
    1) Create a function which inspects the CGI environment variables, e.g., HTTP_HOST, HTTP_PORT, etc. Using this information, you determine if the request is emanating from an internal server name or an external server name.
    2) Create an authorization scheme which returns FALSE if the host/port/other CGI isn't what you expect.
    3) Apply this authorization scheme to every application you wish to keep from an external site.
    I know this isn't ideal, as you have to add this to every "internal-only" application. And if you forget an application, then this application suddenly becomes available on the Internet. But it's one way. If all of the applications are in the same workspace, you could define this authorization scheme in one application and subscribe to it from the other applications.
    Joel
    P.S. From SQL Commands, you can see all of the CGI environment variables at your disposal using:
    begin
      -- Dump every CGI environment variable (HTTP_HOST, SERVER_PORT, ...) so you
      -- can see which ones distinguish internal from external requests.
      owa_util.print_cgi_env;
    end;

  • Exchange 2013 DNS for internal and external domain

    Hi All,
    I have been assigned a task to implement Microsoft Exchange Server 2013. I need some help in setting up DNS namespaces and designing a strategy to have the same internal and external names. Let me share some details here.
    We have an Active Directory domain myinternaldomain.net, we have a public domain mypublicdomain.com, and we have set up an email address policy with mypublicdomain.com as the SMTP domain for all users. We have created another DNS zone in Active Directory integrated DNS and created A records for mail.mypublicdomain.com and autodiscover.mypublicdomain.com which point to the CAS NLB IP. We have 2 CAS servers and 2 MBX servers; we have configured a DAG for MBX high availability and are planning to implement WNLB for CAS, as a hardware LB is out of scope due to budget constraints.
    We want to have the same URLs for OWA, Autodiscover, ECP and other services from the internal network as well as from the public network. Users should not be bothered to remember two URLs, one for internal and another for public networks. I also want to confirm: with this setup in place, do I need to have myinternaldomain.net and the server names in the SAN certificate?
    Thanks

    Hi Sccmnb,
    You can easily achieve this using split DNS.
    The internal DNS host name "mail.mypublicdomain.com" will point to your internal CAS NLB IP, and the external public DNS host name "mail.mypublicdomain.com" will point to the network device or reverse proxy server IP.
    Depending on the user's access location (internal/external) the IPs will vary, and they should be able to access the site with the same name.
    The names that you would require on the certificate (use the EAC or PowerShell to raise the request) for client connectivity would be:
    SN= mail.mypublicdomain.com
    SAN= autodiscover.mypublicdomain.com
    You don't need to have the active directory domain name present in the certificate.
    In addition to this, you need to update the Autodiscover service URI for all servers, and the InternalURL and ExternalURL fields of the OWA, ECP and Autodiscover virtual directories, with the appropriate public names (see the sketch below).
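    A minimal, hedged sketch of that step in the Exchange Management Shell, using the mail.mypublicdomain.com namespace from this thread ("EX01" is a placeholder server name, and the same pattern applies to the remaining virtual directories):

    # Point Autodiscover and the OWA/ECP virtual directories at the shared namespace.
    Set-ClientAccessServer -Identity "EX01" -AutoDiscoverServiceInternalUri "https://autodiscover.mypublicdomain.com/Autodiscover/Autodiscover.xml"
    Set-OwaVirtualDirectory -Identity "EX01\owa (Default Web Site)" -InternalUrl "https://mail.mypublicdomain.com/owa" -ExternalUrl "https://mail.mypublicdomain.com/owa"
    Set-EcpVirtualDirectory -Identity "EX01\ecp (Default Web Site)" -InternalUrl "https://mail.mypublicdomain.com/ecp" -ExternalUrl "https://mail.mypublicdomain.com/ecp"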
    Some additional Info:
    *Internal vs. External Namespaces
    Since the release of Exchange 2007, the recommendation is to deploy a split-brain DNS infrastructure for the Internet-based client namespaces. A split-brain DNS infrastructure enables different IP addresses to be returned for a given namespace based on where the client resides: if the client is within the internal network, the IP address of the internal load balancer is returned; if the client is external, the IP address of the external gateway/firewall is returned.
    This approach simplifies the end-user experience: users only have to know a single namespace (e.g., mail.contoso.com) to access their data, regardless of where they are connecting. A split-brain DNS infrastructure also simplifies the configuration of Client Access server virtual directories, as the InternalURL and ExternalURL values within the environment can be the same value.
    *Managing Certificates in Exchange Server 2013 (Part 2)
    *A nice step-by-step article: Designing a simple namespace for Exchange 2013
    Regards,
    Satyajit

  • Best Practices regarding AIA and CDP extensions

    Based on the guide "AD CS Step by Step Guide: Two Tier PKI Hierarchy Deployment", I'll have both internal and external users (with a CDP in the DMZ), so I have a few questions regarding the configuration of AIA/CDP.
    From here: http://technet.microsoft.com/en-us/library/cc780454(v=ws.10).aspx
    A root CA certificate should have an empty CRL distribution point because the CRL distribution point is defined by the certificate issuer. Since the root's certificate issuer is the root CA, there is no value in including a CRL distribution point for the root CA. In addition, some applications may detect an invalid certificate chain if the root certificate has a CRL distribution point extension set.
    To have an empty CDP do I have to add these lines to the CAPolicy.inf of the Offline Root CA:
    [CRLDistributionPoint]
    Empty = true
    What about the AIA? Should it be empty for the root CA?
    Using only HTTP CDPs seems to be the best practice, but what about the AIA? Should I only use HTTP there as well?
    Since I'll be using only HTTP CDPs, should I use LDAP publishing? What is the benefit of using it, and what is the best practice regarding this?
    If I don't want to use LDAP publishing, should I omit the commands:
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Thank you,

    Is there any reason why you specified a '2' for the HTTP CDP ("2:http://pki.fabrikam.com/CertEnroll/%1_%3%4.crt")? This will be my only CDP/AIA extension, so isn't it supposed to be '1' in priority?
    I tested the setup of the offline root CA, but after the installation the AIA/CDP extensions were already pre-populated with the default URLs. I removed all of them.
    The root certificate and CRL were already created after the ADCS installation in C:\Windows\System32\CertSrv\CertEnroll\ with the default naming convention including the server name (%1_%3%4.crt).
    I guess I could rename it without impact? If someday I have to revoke the root CA certificate or the certificate has expired, how will I update the root CRL since I have no CDP?
    Based on this guide: http://social.technet.microsoft.com/wiki/contents/articles/15037.ad-cs-step-by-step-guide-two-tier-pki-hierarchy-deployment.aspx,
    the root certificate and CRL are published in Active Directory:
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Is it really necessary to publish the Root CRL in my case?
    Instead of using dspublish, isn't it better to deploy the certificates (Root/Intermediate) through GPO, like in the Default Domain Policy?
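    On the GPO alternative: a hedged sketch of what -dspublish accomplishes, done manually on a single machine with the PKI PowerShell module (the A:\ path is the one from the guide; a GPO's Trusted Root Certification Authorities setting distributes the same store entry centrally):

    # Import the offline root into this machine's trusted root store,
    # the per-machine equivalent of publishing it in AD or via GPO.
    Import-Certificate -FilePath "A:\CA01_Fabrikam Root CA.crt" -CertStoreLocation Cert:\LocalMachine\Root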

  • Internal and external mail setting

    I have 20 WinXP users connecting to my Xserve.
    I want to set up Mail so that only 3 users can access mail both internally and externally from the office; the others I want to allow internal mail to each other but not to send/receive externally. Is this possible? If so, how do I configure this behaviour?
    Regards
    Tony

    Can anyone assist with my previous posting?
    Can the functionality I want be achieved via the server, or do I have to look for alternative methods?
    Can anyone point me in the direction of info that will let me solve this issue?
    Regards
    Tony

  • Exchange 2013 not receiving internal and external emails ..

    I have a coexistence of Exchange 2007 and Exchange 2013. The 2013 mailboxes were able to receive and send mail (internal and external), but suddenly the mail flow has stopped.
    Mail flow status
    2013 to 2007 = OK
    2013 to internet = OK
    2013 to 2013 = OK
    2007 to 2013 = FAIL
    Internet to 2013 = FAIL 
    Incoming Internet mail returns the NDR below:
    Diagnostic information for administrators:
    Generating server: mydomain.com
    [email protected]
    Remote Server returned '< #4.4.7 smtp;400 4.4.7 Message delayed>'
    What could be a possible reason for this? 
    Cheers guys ..

    Hi Richard,
    Thank you for your question.
    When there is a coexistence of Exchange 2007 and Exchange 2013, external email will be sent and received by Exchange 2013.
    4.4.7 means the message expired: the message wait time in the queue exceeded the limit, potentially because the remote server (your Exchange server) was unavailable.
    First, check that your organization has the correct MX record at your ISP. You can refer to the following link to check whether the MX record is correct:
    http://technet.microsoft.com/en-us/library/aa998082(v=exchg.65).aspx
    Then check whether you can reach the Exchange server with the following command: telnet mail.domain.com 25
    If there is no receive connector on Exchange 2013 to receive Internet email, you can create one to receive messages from the Internet by following this link (a sketch follows below):
    http://technet.microsoft.com/en-us/library/jj657447(v=exchg.150).aspx
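    For reference, a minimal sketch of such a connector in the Exchange Management Shell ("EX2013" is a placeholder server name; with -Usage Internet the connector accepts anonymous SMTP):

    # Frontend receive connector listening on port 25 for Internet mail.
    New-ReceiveConnector -Name "Internet Receive" -Server "EX2013" -TransportRole FrontendTransport -Usage Internet -Bindings "0.0.0.0:25"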
    If there are any questions regarding this issue, please feel free to let me know.
    Best Regards,
    Jim
