DFS Random Namespace Disappearing

We have been fighting for over a week now with DFS namespaces that disappear from our domain controllers. Basically, if you go to \\domain.int\share it will only show the Users folder there. This happens on almost all of the domain controllers. Even going
to \\domaincontroller\share results in the same issue.
Sometimes going to \\domain\share will work and other times it won't.
Topology
2 Domain controllers in DC
2 Domain controllers in Nashville
1 Domain Controller in Ohio
2 Domain Controllers in Boca Raton
In Sites and Services we have it set up so that Nashville and Ohio are considered one site, DC is a site, and Boca Raton is a site.
We have followed some guides from Microsoft, such as changing the latency to 32000 for \\domain.int\share, but that still has not helped.
Any help on where to start looking would be appreciated, because we are out of ideas at this point and pretty much aggravated and frustrated beyond belief. If there is any other data that you might need to help solve this, please ask. I am sure that in my haste I have left
out some important details that could help us.

Hi,
Sorry for the long delay in providing a reply.
From your description you mentioned "\\domain.int\share". First I would like to know whether the DFS namespace structure is correct.
In your case there should be a root like \\domain.int\share, and then links (folders) created under it that actually point to shared folders on other servers, such as:
\\domain.int\share                 (namespace root)
    \folder1  ---> points to folder targets such as
                   \\Nashville&Ohio\share, \\DC\share, \\BocaRaton\share
Similar to the second picture in following article:
http://blogs.technet.com/b/josebda/archive/2009/03/10/the-basics-of-the-windows-server-2008-distributed-file-system-dfs.aspx
If there is no namespace root as displayed above, the configuration is incorrect.
So here are my questions:
1. How many namespace (root) servers are set up here? You can add a second namespace server for failover.
2. If there is more than one folder target, then when the issue occurs, please check whether all folder targets can still be accessed correctly by going to \\servername\share directly instead of through the domain-based namespace.
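If it helps, both points can be checked roughly like this from PowerShell on a namespace server or a management workstation (a hedged sketch: the DFSN cmdlets assume the DFS management tools are installed, and the paths are taken from your description):

# List the namespace root and every folder with its targets
Get-DfsnRoot -Path '\\domain.int\share'
Get-DfsnFolder -Path '\\domain.int\share\*' | ForEach-Object { Get-DfsnFolderTarget -Path $_.Path }

# Verify the DCs and the referrals the namespace hands out
dfsdiag /testdcs /domain:domain.int
dfsdiag /testreferral /dfspath:\\domain.int\share

# On an affected client, inspect and then flush the cached referrals before re-testing
dfsutil /pktinfo
dfsutil /pktflush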
If you have any feedback on our support, please send to [email protected]

Similar Messages

  • Is It Possible to Add a Fileserver to a DFS Replication Group Without Connectivity to FSMO Roles Holder DC But Connectivity to Site DC???

    I apologize in advance for the rambling novella, but I tried to include as many details ahead of time as I could.
    I guess like most issues, this one's been evolving for a while, it started out with us trying to add a new member 
    to a replication group that's on a subnet without connectivity to the FSMO roles holder. I'll try to describe the 
    layout as best as I can up front.
    The AD only has one domain & both the forest & domain are at 2008R2 function level. We've got two sites defined in 
    Sites & Services, Site A is an off-site datacenter with one associated subnet & Site B with 6 associated subnets, A-F. 
    The two sites are connected by a WAN link from a cable provider. Subnets E & F at Site B have no connectivity to Site A 
    across that WAN, only what's available through the front side of the datacenter through the public Internet. The network 
    engineering group involved refuses to route that WAN traffic to those two subnets & we've got no recourse against that 
    decision; so I'm trying to find a way to accomplish this without that if possible.
    The FSMO roles holder is located at Site A. I know that I can define a Site C, add Subnets E & F to that site, & then 
    configure an SMTP site link between Sites A & C, but that only handles AD replication, correct? That still wouldn't allow me, for example, 
    to enumerate DFS namespaces from subnets E & F, or to add a fileserver on either of those subnets as a member to an existing
    DFS replication group, right? Also, root scalability is enabled on all the namespace shares.
    Is there a way to accomplish both of these things without transferring the FSMO roles from the original DC at Site A to, say, 
    the bridgehead DC at Site B? 
    When the infrastructure was originally setup by a former analyst, the topology was much more simple & everything was left
    under the Default First Site & no sites/subnets were setup until fairly recently to resolve authentication issues on 
    Subnets E & F... I bring this up just to say, the FSMO roles holder has held them throughout the build out & addition of 
    all sorts of systems & I'm honestly not sure what, if anything, the transfer of those roles will break. 
    I definitely don't claim to be an expert in any of this, I'll be the first to say that I'm a work-in-progress on this AD design stuff, 
    I'm all for R'ing the FM, but frankly I'm dragging bottom at this point in finding the right FM. I've been digging around
    on Google, forums, & TechNet for the past week or so as this has evolved, but no resolution yet. 
    On VMs & machines on subnets E & F when I go to DFS Management -> Namespace -> Add Namespaces to Display..., none show up 
    automatically & when I click Show Namespaces, after a few seconds I get "The namespaces on DOMAIN cannot be enumerated. The 
    specified domain either does not exist or could not be contacted". If I run a dfsutil /pktinfo, nothing shows except \sysvol 
    but I can access the domain-based DFS shares through Windows Explorer with the UNC path \\DOMAIN-FQDN\Share-Name then when 
    I run a dfsutil /pktinfo it shows all the shares that I've accessed so far.
    So either I'm doing something wrong, or, for some random large, multinational company, every subnet & fileserver one wants 
    to add to a DFS namespace has to be able to contact the FSMO roles holder? Or are those ADs broken down with a child domain 
    for each site & a FSMO roles holder for that child domain located in each site?

    Hi,
    A DC in Site B should be helpful. I still have not seen any article stating that a DFS client has to connect to the PDC every time it tries to access a domain-based DFS namespace.
    Please see following article. I pasted a part of it below:
    http://technet.microsoft.com/en-us/library/cc782417(v=ws.10).aspx
    Domain controllers play numerous roles in DFS:
    Domain controllers store DFS metadata in Active Directory about domain-based namespaces. DFS metadata consists of information about the entire namespace, including the root, root targets, links, link targets, and settings. By default, root servers
    that host domain-based namespaces periodically poll the domain controller acting as the primary domain controller (PDC) emulator master to obtain an updated version of the DFS metadata and store this metadata in memory.
    So the other DCs need to contact the PDC for updated metadata.
    Whenever an administrator makes a change to a domain-based namespace, the
    change is made on the domain controller acting as the PDC emulator master and is then replicated (via Active Directory replication) to other domain controllers in the domain.
    Domain Name Referral Cache
    A domain name referral contains the NetBIOS and DNS names of the local domain, all trusted domains in the forest, and domains in trusted forests. A
    DFS client requests a domain name referral from a domain controller to determine the domains in which the clients can access domain-based namespaces.
    Domain Controller Referral Cache
    A domain controller referral contains the NetBIOS and DNS names of the domain controllers for the list of domains it has cached. A DFS client requests a domain controller referral from a domain controller (in the client’s domain)
    to determine which domain controllers can provide a referral for a domain-based namespace.
    Domain-based Root Referral Cache
    The domain-based root referrals in this memory cache do not store targets in any particular order. The targets are sorted according to the target selection method only when requested from the client. Also, these referrals are based on DFS metadata stored
    on the local domain controller, not the PDC emulator master.
    Thus it seems acceptable to have a short disconnect between the sites while the cache is still working in Site B.
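    To see this in practice on a client in Site B (a hedged sketch; the namespace path below is only a placeholder), you can inspect which referrals it has cached, clear them, and list the namespace servers a referral can return:

    # Show the cached referrals, including the active target and the remaining TTL
    dfsutil /pktinfo

    # Clear the referral cache and the domain/DC cache so the client asks again
    dfsutil /pktflush
    dfsutil /spcflush

    # On a namespace server, list the namespace servers for the root
    Get-DfsnRootTarget -Path '\\corp.example.com\public'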
    If you have any feedback on our support, please send to [email protected].

  • DFS and Windows 7 x64 strange behavior when trying to access a DFS link through mapped drive

    I've manually mapped a network drive (Q: drive) to a DFS location. Whenever I go into "My Computer" and open the Q: drive, it shows the DFS links, but when I double-click one of the links it randomly takes me back to the "My Computer" starting point showing my standard drive letters. If I click through the Q: drive and the DFS several times, it all of a sudden works. Sometimes this circle of clicking can go on for 5-10 times before it works properly.
    I'm running the x64 edition of Windows 7. Any suggestions on how to make this work properly? It's very annoying.

    A GPO-mapped namespace where you can't connect through the mapped drive letter but can connect through the DFS UNC namespace? Do you use access-based enumeration?
    I personally think it has something to do with network bandwidth, the security token and offline files. =)
    You can access the namespace itself but not any of the linked shares (check the ACLs on the shares and you get permission denied, but you still see them, i.e. they are listed).
    I found a post some time ago about corrupted/truncated security tokens: if the member was part of too many AD groups, the token was truncated and corrupted. That was going to be my next move: sniff the traffic and see what actually happens when the issue occurs,
    if there's something to be learnt there. Since it works correctly through the UNC address but not through the drive letter, you ought to be able to see what is different between the two requests.
    http://blogs.technet.com/b/askds/archive/2008/05/14/troubleshooting-kerberos-authentication-problems-name-resolution-issues.aspx
    One thing I noted was that it only happened to remotely connected computers on slow 3G connections (we use DirectAccess), never on LAN-connected computers with GB access or remote computers with fast (>15 Mbps) access. We also use folder redirection,
    which I think could be part of the problem, i.e. the share never goes online. At least for us the issue was intermittent; it never happened all of the time, just from time to time. And if I disconnected and reconnected it could fix the issue for that particular
    session; usually it didn't, but occasionally it did (just pulling the network cable and putting it back). Check the offline/online status: the folders may show up as offline even though they are online.
    Enough of my ramblings. Sorry to hear you still have the problem and I hope you find a way to solve it.
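    If you want to test the truncated-token theory above (a hedged sketch; whoami and dfsutil are standard tools, but the point at which a token becomes too large depends on the MaxTokenSize setting), comparing a working and a failing client might show something:

    # Count how many group SIDs the current user's token carries
    (whoami /groups /fo csv | ConvertFrom-Csv).Count

    # Inspect, then clear, the DFS referral cache before re-testing the mapped drive
    dfsutil /pktinfo
    dfsutil /pktflush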

  • Server 2008 R2 DFS Drop Outs

    I have a problem with a Server 2008 R2 machine running DFS: 3 out of 4 folders drop out, and I just can't figure out what on earth is going on.
    It is a domain-based DFS; the namespace is called 'dfs', e.g.
    \\domain\dfs. I have 4 folders called "MyDocs", "Home", "Shared" and "Software", all of which have a cache duration of 1800 seconds.
    "MyDocs" is used with My Documents Folder Redirection, Home, Shared and Software are used with Group Policy Preference based drive mappings.
    The problem I have is the Home, Shared and Software Folders simply disappear, thus mapped drives become unavailable. The problem can occur as frequently as every 5 minutes or if I'm lucky, I'll get half an hour of solid reliable DFS usage.
    I half suspect I'm not having a problem with the MyDocs Folder because it's linked to Offline files and Windows 7 Offline Files uses background sync.
    One thing I will point out: the problem only surfaced after I upgraded my Lenovo laptop to Windows 7 Ultimate x64 SP1; it did not exist when I was running Windows 7 Professional x64 (no SP). Another laptop on the network (Sony VAIO with a different
    WLAN NIC) is still running Windows 7 x64 Professional (no SP), uses the same WLAN, and never has a problem.
    I have spent months investigating the potential for this to be a WLAN card (Intel Centrino 6200) and I have noticed other people reporting similar mapped network drive issues on the Lenovo forums (also using these Centrino model WLAN
    cards).
    The problem occurs almost immediately if I try to transfer a large amount of data over the WLAN; e.g., don't bother trying to copy an ISO, as it will cause the DFS folders (Home/Software/Shared) to drop out almost immediately and the copy breaks. The problem
    does not occur if I transfer the same ISO over wired LAN Ethernet.
    I can't get past the fact that this problem didn't occur when running Win 7 Pro with no SP1, so I'm not fully convinced this is a hardware or WLAN card issue.
    I have messed around with WLAN drivers, power management settings, etc., and I'm sick of that; it's achieved nothing.
    I have installed KB983620 but this has had no effect.
    A few things I have noticed:
    * When the DFS Folders come back online, the Application Experience service on the Windows 7 client has literally just (re)started. As a test, sometimes I'll manually restart it and the Folders will reappear almost immediately after (doesn't
    work every time).
    * Restarting the TCP/IP NetBIOS Helper Service can help force the Folders to come back online (also doesn't work every time)
    * Repeatedly browsing the still functional MyDocs DFS Folder (which never drops out) can help prompt the other DFS Folders to reappear (also doesn't work every time)
    * I have noticed when the 3 DFS Folders are not available via
    \\domain.local\dfs\, they will be available via
    \\domain\dfs and vice versa
    * If I run dfsutil /pktinfo - when the 3 Folders are unavailable, the TTL for Home, Shared, Software is sitting at 0 (MyDocs on the other hand is not, it's still counting down). When things are functioning properly, the TTL is counting down for all
    of them from 1800, when it reaches 0, it starts from 1800 again.
    Any ideas?
    Ben

    Hello Ben,
    From your explanation it seems it is a wireless issue? Can you look into the event logs of the affected machine and see what event ID is being reported? Also, a possibility might be DNS, because you said it works with
    \\domain\dfs but fails with
    \\domain.local\dfs?
    Can you check your wireless settings to see what the maximum data size is that can go through, if there are such restrictions?
    Isaac Oben MCITP:EA, MCSE,MCC
    View my MCP Certifications
    Hi Isaac
    Thanks for the reply.
    At first, and over the past few weeks, I've been thoroughly investigating the possibility that this is a wireless issue, messing around with drivers (Lenovo, Intel, versions, settings; I've tried it all). But there are some factors I can't ignore that lead
    me to think maybe this is not specifically a wireless issue.
    The problem only started after I upgraded to Windows 7 Ultimate with Service Pack 1; the problem never occurred with Windows 7 Professional (no SP, both x64).
    When I first started troubleshooting I was using
    \\domain\dfs, but then I realised that when I was having the problem I could still access the folders via
    \\domain.local\dfs. Now I've realised it's a case of: when the FQDN path is working, the non-FQDN path might not be, and when the non-FQDN path is working, the FQDN path might not be, and sometimes neither is.
    No specific events are being generated. Any suggestions?
    Any idea where I check max data rate?
    Thanks,
    ben
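    For what it's worth, a hedged way to watch and work around the TTL behaviour described above (the namespace and folder names are the ones from this thread; Set-DfsnFolder needs the DFS Namespaces PowerShell module on the server side, so treat that line as an assumption):

    # On the client: show the referral cache and the remaining TTL per folder
    dfsutil /pktinfo

    # Clear the cached referrals so the client requests fresh ones
    dfsutil /pktflush

    # On a namespace server: raise the folder referral cache duration, e.g. from 1800 to 3600 seconds
    Set-DfsnFolder -Path '\\domain.local\dfs\Home' -TimeToLiveSec 3600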

  • How can we setup a prefix namespace for a custom library in FB 4.7

    Using 3.6, we are creating a custom library.
    We need to be able to use a prefix such as "MyLib" when adding library components to an MXML file in an application.
    Right now we get <ns:MyButton ...> and we need <myLib:MyButton ...>. Any other library used gets <ns1:..., <ns2:..., etc. Not very useful for identifying the source of the component.
    In the library, we have the manifest.xml and design.xml. The library is functional except for the prefix.
    manifest.xml
    <?xml version="1.0" encoding="utf-8"?>
    <componentPackage>
              <component id="MyButton" class="com.test.MyButton"/>
    </componentPackage>
    design.xml
    <?xml version="1.0"?>
    <design version="2">
              <namespaces>
                        <namespace prefix="myLib" uri="library://flex/myLib"/>
              </namespaces>
              <categories>
                        <category id="MyLib" defaultExpand="true"/>
              </categories>
              <components>
                        <component name="MyButton" namespace="myLIb" category="MyLib" displayName="MyButton"/>
              </components>
    </design>
    When building the SWC, the assets includes the manifest.xml and design.xml files. The Namespace URL library://flex/myLib and manifest file location are also set. The compiler arguments contains -include-namespaces library://flex/myLib
    Is there a fix for this?
    I understand that the "design" part has been removed from 4.7, but why would it affect the MXML prefix?

    Any feedback from Adobe people who worked on FB 4.7 and the removal of design mode?
    Are they related? Any suggestions?
    Having random namespace prefixes generated is very annoying.

  • Namespace shenanigans

    Hello,
    I'm trying to load some XML without it being modified, but from what I see, no matter which way I load it (.setNamespaceAware = true|false), Java is making modifications to my XML.
    Eg:
    If I have source XML
    <root xmlns:ns="http://someURI.net">
    <ns:tag1>some value</ns:tag1>
    <ns:tag2>
    <tag3>some value for tag 3</tag3>
    </ns:tag2>
    </root>
    And I load it using a DocumentBuilderFactory with .setNamespaceAware=true I get:
    <root xmlns:ns="http://someURI.net">
    <ns:tag1>some value</ns:tag1>
    <ns:tag2>
    <ns5:tag3 xmlns="http://test.net" xmlns:ns5="http://test.net">some value for tag 3</ns5:tag3>
    </ns:tag2>
    </root>
    And if I load it using .setNamespaceAware=false I get:
    <root xmlns:ns="http://someURI.net">
    <ns:tag1 xmlns:ns="">some value</ns:tag1>
    <ns:tag2 xmlns:ns="">
    <tag3>some value for tag 3</tag3>
    </ns:tag2>
    </root>
    So basically what I'm asking is: how do I load my source XML as it is, without having Java modify it? I have tags that don't have namespace information, and that is by design, but if I load the XML and let Java assign a random namespace to it, it's no longer the same XML. And if I tell the document builder to not be namespace aware, then it starts redefining namespaces to "".

    My code is doing:
    // Requires: javax.xml.parsers.*, org.w3c.dom.*, org.xml.sax.InputSource, java.io.StringReader
    try {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(false);   // toggling this gives the two behaviours shown above
        factory.setValidating(false);
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document document = builder.parse(new InputSource(new StringReader(xml)));
        return document.getDocumentElement();
    } catch (Exception e) {
        throw new RuntimeException("Failed to parse XML", e);
    }
    Where "xml" is a String.

  • Migrate DFS to Windows 2012

    We are currently running Windows 2003 R2 as our DFS server. This server has three network drives for the department folders and user folders. Some of the department folders are replicating to remote servers. The remote servers are running Windows 2008 R2.
    I want to migrate the Windows 2003 DFS server to Windows 2012 in a virtual environment. Please advise how I can migrate to the new server with minimum downtime for the users. I am thinking of splitting the department and user folders across multiple DFS servers. Please also
    advise how I can get fault tolerance and high availability on the DFS servers in a VMware environment.
    Please advise.

    Hi,
    To minimize downtime and reduce impact to users, plan your data migration to occur during off-peak hours. Use the “net share” command to list all shared folders on the source server.
    File and Storage Services: Prepare to Migrate
    http://technet.microsoft.com/en-us/library/jj863563.aspx
    Migrate File and Storage Services to Windows Server 2012
    http://technet.microsoft.com/en-us/library/jj863566.aspx
    For fault tolerance and high availability on the DFS servers, you could refer to the article below:
    How many DFS-N namespace servers do you need?
    http://blogs.technet.com/b/josebda/archive/2009/06/26/how-many-dfs-n-namespaces-servers-do-you-need.aspx
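    As a rough sketch of the namespace side of the migration (hedged: the server and namespace names below are placeholders, it assumes a domain-based namespace, and the DFSN cmdlet assumes the namespace is managed from a Windows Server 2012 machine):

    # Inventory the shares on the source 2003 server
    net share

    # Export the existing namespace configuration for reference or later re-import
    dfsutil root export \\domain.local\dfsroot C:\dfs-namespace-backup.xml

    # Once the 2012 server is ready, add it as an additional namespace server
    # so the namespace stays available while targets are moved over
    New-DfsnRootTarget -Path '\\domain.local\dfsroot' -TargetPath '\\NEW2012SRV\dfsroot'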
    Regards,
    Mandy

  • DFS - ABE Issue HELP!!!

    Used dfsutil to export stand-alone DFS on Windows 2003 then imported to Windows 2012 R2 DFS (domain namespace)
    The problem I'm having is I've enabled Access Based Enumeration (ABE) but each link needs "Set explicit view permissions on the DFS folder" modified.  Is there an easy way to modify each DFS link to match the group used for share/NTFS permissions?
     I have 456 links and would prefer NOT to modify each manually.
    Thanks
    Ben

    Hi Ben,
    Do you mean you would like to enable ABE on each DFS link?
    ABE is applied at the DFS root, so do you have many DFS roots as well?
    You can run the following command line:
    dfsutil property abe enable \\<namespace_root>
    If there are multiple DFS roots, add a command line for each of them and save them as a BAT file to run in one batch.
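    If the goal is rather to set the explicit view permissions on all 456 links in one pass, here is a hedged PowerShell sketch (the namespace path and the GRP-<FolderName> naming convention are assumptions; Get-DfsnFolder and Grant-DfsnAccess ship with the DFS Namespaces module on 2012 R2):

    # Enumerate every folder (link) in the namespace
    $links = Get-DfsnFolder -Path '\\contoso.com\public\*'

    foreach ($link in $links) {
        # Derive the matching group from the folder name, e.g. "GRP-Finance" for \public\Finance
        $group = 'CONTOSO\GRP-' + $link.Path.Split('\')[-1]

        # Grant explicit view (enumeration) permission on the DFS folder
        Grant-DfsnAccess -Path $link.Path -AccountName $group | Out-Null
    }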

  • DFS folder visibility and group membership

    Hello
    I have a forest with multiple domains.
    I have activated ABE on DFS.
    My design is:
    \\contoso.com\DFS
    -Site1 -> \\site1.contoso.com\DFS (explicit permission: DL.folder1.site1)
    \\site1.contoso.com\DFS
    -Folder1 -> \\fileserver.site1.contoso.com\Folder1 (explicit permission: DL.folder1.site1)
    I have set the explicit authorization with domain local groups (the domain local groups contain global groups, which contain the users).
    When my users from site 1 connect to \\site1.contoso.com\dfs it works: they can see Folder1 only if they are in DL.folder1.site1.
    But when they connect to \\contoso.com\DFS they don't see the Site1 folder, and they can't access it even if they enter the full path (\\contoso.com\DFS\Site1).

    Hi,
    Do you mean that you have a domain local group named DL.folder1.site1 and you give that group explicit permission to access the DFS share Folder1? You have enabled ABE. The user in the group can see Folder1 using the DFS path
    \\site1.contoso.com\dfs, but the user cannot see Folder1 using the DFS path
    \\contoso.com\DFS and cannot access it using the full path
    \\contoso.com\DFS\Folder1?
    Is the DFS namespace created in the domain or in the forest root? If it is in the forest root, I think it is by design; as the DFS domain namespace is domain-based, we cannot access it using the forest name.
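    If the namespace is domain-based with ABE enabled, one thing worth checking (a hedged sketch; the paths come from your description, while the domain prefix on the group is an assumption) is which accounts have enumeration access on the Site1 folder of the root namespace:

    # Show who is allowed to see the Site1 folder under the root namespace
    Get-DfsnAccess -Path '\\contoso.com\DFS\Site1'

    # If the domain local group is missing there, grant it explicitly
    Grant-DfsnAccess -Path '\\contoso.com\DFS\Site1' -AccountName 'SITE1\DL.folder1.site1'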
    Regards,
    Mandy

  • Is it Possible to Promote DC on a Subnet With Connectivity to a Site DC But Not DC with FSMO Roles???

    (The question body here is identical to the thread above, "Is It Possible to Add a Fileserver to a DFS Replication Group Without Connectivity to FSMO Roles Holder DC But Connectivity to Site DC???": the same single-domain 2008 R2 forest with Sites A and B, subnets E & F that cannot reach the FSMO roles holder at Site A, failing DFS namespace enumeration from those subnets, and the question of whether the FSMO roles need to be transferred.)

    Hi Matthew,
    Unfortunately a lot of the intricacies of DFS leave my head as soon as I’m done with a particular design or troubleshooting situation but from memory, having direct connectivity to the PDC emulator for a particular domain is the key to managing domain based
    DFS.
    Have a read of this article for the differences between “Optimize for consistency” vs “Optimize for scalability”:
    http://technet.microsoft.com/en-us/library/cc737400(v=ws.10).aspx
    In brief, I’d say they mean:
    In consistency mode the namespace servers always poll the PDCe for the latest and greatest information on the namespaces they are hosting.
    In scalability mode the namespace servers should poll the closest DC for information on the namespaces they are hosting.
    The key piece of information in that article about scalability mode is: “Updates are still made to the namespace object in Active Directory on the PDC emulator, but namespace servers do not discover those changes until the updated namespace object replicates
    (using Active Directory replication) to the closest domain controller for each namespace server.”
    I read that as saying you can have a server running DFS-N as long as it has connectivity to a DC but if you want to make changes, do them from a box that has direct connectivity to the PDCe. Then let AD replication float those changes out to your other DCs
    where the remote DFS-N server will eventually pick them up. Give it a try and see how you get on.
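    A hedged way to check and switch that behaviour from PowerShell (the namespace path is a placeholder, and Set-DfsnRoot with -EnableRootScalability assumes the 2012-era DFS Namespaces module):

    # Show the namespace configuration, including its current flags
    Get-DfsnRoot -Path '\\corp.example.com\dfs' | Format-List *

    # Let namespace servers poll their closest DC instead of the PDC emulator;
    # administrative changes are still written on the PDCe and replicate via AD
    Set-DfsnRoot -Path '\\corp.example.com\dfs' -EnableRootScalability $true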
    That being said, you may want to double check that you have configured the most appropriate FSMO role placement in your environment's AD design:
    http://technet.microsoft.com/en-us/library/cc754889(v=ws.10).aspx
    And a DFS response probably wouldn’t be complete without an AskDS link:
    http://blogs.technet.com/b/askds/archive/2012/07/24/common-dfsn-configuration-mistakes-and-oversights.aspx
    These links may also help:
    http://blogs.technet.com/b/filecab/archive/2012/08/26/dfs-namespace-scalability-considerations.aspx
    http://blogs.technet.com/b/josebda/archive/2009/12/30/windows-server-dfs-namespaces-reference.aspx
    http://blogs.technet.com/b/josebda/archive/2009/06/26/how-many-dfs-n-namespaces-servers-do-you-need.aspx
    I hope this helps,
    Mark

  • Learning Windows server 2012 R2 & 2012 core

    Hi,
    How do I configure a fast and standard solution with one domain (Windows
    Server 2012 R2) and one subdomain (Windows Server 2012 Core), implemented with a web server and security for DNS?
    Thanks

    Hi
    Maybe this can help,
    Nslookup test:
    cmd => nslookup => set type=mx => host.net.
    Organizational unit: .be
    Open Active Directory Users and Computers => right-click the domain name => New => create an Organizational Unit => untick Protection
    Manually add a computer to the domain:
    1) Change the DNS to 192.168.1.1, that of the domain itself
    2) Add-Computer -domainname host -cred administrator@host -passthru -verbose
    GPO to install Chrome:
    1) Group Policy Management => in the OU PCs => create a new policy
    2) Right-click the policy and click Edit
    3) Under Computer => Software => New Package => enter the path where you put the Chrome MSI file => \\S1\netlogon\msi\chrome.msi
    4) Restart the client and log on with a domain user => PowerShell => Restart-Computer
    5) On the folder containing the MSI => Security => add the domain controller (user) with full control
    GPO to block Chrome in the browser:
    3) Block listed URLs..
    4) Run gpupdate on the client
    Failed login events:
    1) Set a group policy on the OU Servers: Computer Configuration\Windows Settings\Security Settings\Local Policies\Audit Policy ==> enable auditing of failed logins
    2) gpupdate /force
    1) powershell
    2) get-windowsfeature => install-windowsfeature SMTP-Server
    3) Internet Information Services => S1 => right-click Domain => Properties => Access tab => Relay => Add => Group of computers => IP: 192.168.1.1 subnet 255.255.255.0 => OK => OK
    3b) Log off and log on once with a wrong password so that an audit failure entry is written to the Security log of Event Viewer
    4) Event Viewer Security log => right-click the failed audit entry => Attach Task => give it a different name => Next => Next => Start a program => program: powershell.exe =>
    tick "Open the Properties dialog"
    5) Tick "Run whether user is logged on or not" => Conditions tab: untick "Start the task only if the computer is on AC power"! => OK => enter the administrator password
    6) PowerShell: get-executionpolicy => the result must be RemoteSigned => View tab => enable the script pane =>
    Give it the script: $smtpServer = "smtp2.school.be"
    $msg = New-Object Net.Mail.MailMessage
    $smtp = New-Object Net.Mail.SmtpClient($smtpServer)
    $msg.From = "[email protected]"
    $msg.ReplyTo = "[email protected]"
    $msg.To.Add("[email protected]")
    $msg.subject = "hacking attempt?"
    $msg.body = "login/pwd failure on S1."
    $smtp.Send($msg)
    7) Save the script in a folder on the C: drive => in PowerShell cd to the folder with the script => ls command
    8) Open Task Scheduler => go to Event Viewer Tasks => login => right-click Properties => Actions => edit powershell.exe => add arguments: -command "C:\Script\login.ps1" => OK => enter the admin password
    9) Test
    * How can you check your MX records with NSLOOKUP:
    cmd => nslookup => set type=mx => host.net.
    * PowerShell command to add a client to the domain:
    Add-Computer -domainname host -cred administrator@host -passthru -verbose
    Best Practices Analyzer:
    1) Server Manager => click DNS and AD DS => scroll down to the BPA => Tasks => start scan => review the results:
    Question: which suggestions could you resolve:
    DNS server should have scavenging enabled
    The PDC emulator master must be configured
    1) To configure a domain controller in the parent domain as a reliable time source
    *W32tm /config /reliable:yes /update
    2) To configure the time source for the forest
    *w32tm /config /computer:s1.host.net /manualpeerlist:ntp.belnet.be /syncfromflags:manual /update
    The time on S1 and S2 must be the same!!
    Start Corefig in PowerShell:
    1) cd C:\corefig
    2) Adjust the execution policy: Set-ExecutionPolicy bypass
    3) .\corefig.ps1
    4) Change the name to corefig
    Command to add S2 to the domain, in the OU Servers:
    1) Set the DNS
    Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.1
    2) Add it to the OU Servers
    Add-Computer -domainname sdhost -cred administrator@host -OUPath "OU=Servers,OU=OU,DC=Host,DC=net"
    Restart
    BE CAREFUL WITH THIS IF S2 ITSELF IS TO BECOME A DC!
    Give your server the DNS role via Windows PowerShell:
    1) Import-Module Servermanager
    2) Get-WindowsFeature
    3) Add-WindowsFeature "DNS" -restart
    Remote access:
    Give S1 remote access for administrators in Active Directory
    View => enable Advanced Features
    => Remote Management Users => add HOST\Administrator with full rights
    => Remote Desktop Users => add HOST\Administrator with full rights
    Check which firewall rule is currently still blocking Remote Management and allow
    that communication:
    1) On S2 in PowerShell: Configure-SMRemoting.exe -enable
    2) On S1 => Server Manager => Manage => Add Servers => enter S2 => OK
    3) Install Active Directory on S2 via Add Roles (from S1)
    4) Promote S2 to domain controller
    5) Use the credentials of S1 => subdomain name 'premium'
    6) DSRM password: P0wnerken
    7) PREMIUM
    Set the DNS of S2 itself
    Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.2
    C2) Set the DNS server to S2: 192.168.1.2
    Add it to the domain premium.host.net => log in with the admin account of the S2 domain
    Restart C2
    Now make this second server a domain controller for the new domain
    "premium". There are two ways to do this. Look up these methods and note them
    briefly below:
    - Using DCPROMO.exe
    - Using the GUI from S1
    You may choose which method you apply. Do note here the commands that you
    apply:
    Using the GUI: new domain in the existing forest => name PREMIUM
    Add network adapters:
    VCLOUD => Do not customize!!!
    Disable the firewall on S2:
    netsh firewall set opmode disable
    On S1 => Chrome => IP in the URL: https://192.168.1.150:446 => proceed => login details:
    name: openfiler
    pass: password
    Services => CIFS / NFS => Enable => Start
    Manage volumes => 1 GB volume => start cyl = 1, end cyl = 128 => roughly 1 GB
    Add volume group => name it NFS and add the 1 GB volume => Add volume => scroll down:
    Name: NFS
    File system: choose EXT4
    * Add a new 10 GB physical volume: LEAVE AT LEAST 35 CYLINDERS IN BETWEEN!!!!
    Start cyl = 164, end cyl = 1469, which is roughly 10 GB
    Volume groups => create a new one named SMB => Add volume => select the volume and add it => go to your SMB volume group
    => choose the SMB volume => name: SMB => maximum size => EXT4 file system
    1) Set the clock via the NTP server ntp.belnet.be (must match the domain controller you are joining it to)
    2) Set the DNS to S2
    Hostname: of
    Primary DNS: 192.168.1.2
    Secondary DNS: 192.168.1.1
    Gateway: 192.168.1.254
    3) Accounts:
    Expert view!
    * Tick "Use Windows domain controller and authentication"
    Security Mode: Active Directory
    Domain / workgroup: PREMIUM
    Domain controllers: s2.premium.VAhost.net
    ADS realm: PREMIUM.HOST.NET
    Join domain: tick
    Administrator username: Administrator
    Administrator password: Azerty123
    * Scroll down to Kerberos 5: tick it
    Realm: premium.host.net
    KDC: s2.premium.host.net
    Admin server: s2.premium.VAhost.net
    Create a share:
    1) Shares => click SMB / NFS => create a new subfolder: SMBshare / NFSshare
    2) Click the subfolder => make the share => scroll down to the permissions => Domain Admins: PG & RW, Domain Users: RO
    3) Update
    System security:
    1) System => Network access configuration => add a new network
    Name: Sharenetwork
    Network/host: 192.168.1.0
    Netmask: 255.255.255.0
    Type: Share
    2) Update
    Enable the protocol:
    Shares => subfolder smbshared => scroll all the way down => set the SMB/CIFS protocol to RW
    Connect to the share with:
    root
    Azerty123
    Connect drive Z: to the SMB share:
    1) Right-click the SMB share
    2) Map network drive
    3) Type in the path of the SMB share
    4) Connect with the share account or Finish 1) Private storage and enter the IP address manually
    Security / backup:
    1) Active Directory of S1
    2) On S1 itself create a completely new OU "TEMP Accounts" => turn off "protect from accidental deletion"!!
    3) Create 2 users that are members ("member of") of the group Guest
    4) On S1 => C: drive => create a new folder and share it
    5) In Advanced Sharing of the shared folder => Guest1 full control => Everyone only read rights
    6) Test on the client whether you can create a text file on that shared folder as Guest1 but not as Guest2.
    7) If it works, delete Guest1 and look at the sharing permissions on the Guest1 folder
    * What do you notice when deleting Guest1 via Active Directory:
    The guest account is replaced by another account with a long name
    that has full control over the folder
    8) Recreate Guest1; what do you notice?
    Guest1 no longer has any rights on the folder, and the created account remains
    Recycle Bin:
    1) Open Active Directory Administrative Center
    2) Click your domain on the left
    3) On the right => enable Recycle Bin
    4) Delete Guest1 in AD
    5) Guest1 ends up under deleted users/objects in the Recycle Bin
    6) There is the option to restore it
    7) Delete the OU TEMP Accounts => it does not work immediately => because there are still objects in it
    * Look up which techniques you can apply to take a backup of your Active Directory. Of course also look at the 2 ways
    to restore a backup of your AD (authoritative and non-authoritative):
    - 13.1.1 Authoritative Restore
    This process restores AD after, for example, a change that has to be undone.
    AD is restored from the backup; the backup then overwrites all other DCs, including any newer information.
    - 13.1.2 Non-Authoritative Restore
    Restores the data from the backup. Afterwards the DC receives from the other DCs the updates that were made since the backup.
    Backup of S1:
    First solve the Openfiler problem:
    1) Start Openfiler from vmcloud
    2) cd /etc/samba
    3) vim smb.conf (add: strict allocate = yes) => first i for insert => Escape at the end => :wq to save
    4) /etc/init.d/smb restart
    The backup itself
    1) Install Windows Backup in Server Manager => Add Roles => Features
    2) Open Windows Backup
    3) Action => Backup Once
    4) Different options => choose Custom => back up the System State
    5) Choose a remote disk
    6) Share path: \\of\smb.smb.SMBshare
    7) If the backup fails, manually delete the files created by the backup and try the backup again
    !!! If Openfiler suddenly disappears from the domain, check the time on both systems (they must match within a maximum difference of 5 minutes)
    Restore the backup (set up as authoritative)
    http://technet.microsoft.com/ru-ru/library/cc816878(v=ws.10).aspx
    1) Restart the domain controller in Directory Services Restore Mode remotely
    => Run => Msconfig.msc => the steps are in this URL: http://technet.microsoft.com/ru-ru/library/cc794729(v=ws.10).aspx
    2) Restore your AD DS from your backup by means of a non-authoritative restore.
    This makes sure the domain controller returns to a state in which the objects that were deleted
    are present again.
    http://technet.microsoft.com/ru-ru/library/cc794755(v=ws.10).aspx
    In cmd:
    => wbadmin get versions -backuptarget:\\of\smb.smb.SMBshare
    => wbadmin start systemstaterecovery -version:12/03/2013-12:37 -backuptarget:\\of\smb.smb.SMBshare -quiet
    3) Mark the objects as authoritative so that they are not overwritten during the restore by synchronization errors
    between the different domains.
    http://technet.microsoft.com/ru-ru/library/cc816813(v=ws.10).aspx <== start here
    => open Run => ntdsutil
    => activate instance ntds => Enter
    => authoritative restore => Enter
    => restore subtree "OU=Stagiairs,DC=Host,DC=net" => Enter
    => quit => Enter
    => Start the domain controller again in normal mode, i.e. disable the DSRM boot mode: untick Safe boot
    Check whether both OUs, Stagiairs and Guests, are still there
    (in this case the OU Guests is deleted because we have only 1 DC, so the information
    is not synchronized with a 2nd DC)
    - Add the Debian machine:
    Network details: NIC0 / Private management network / static - manual / IP = 192.168.1.3
    Once the machine has been created, add a new network adapter:
    NIC1 / Private storage network / static - manual / IP = 172.16.0.13
    On the Debian machine:
    1) su - => Enter => pass: Azerty123 => Enter
    2) Command: pico /etc/network/interfaces
    Add the following lines to the file:
    iface eth0 inet static
    address 192.168.1.3
    netmask 255.255.255.0
    gateway 192.168.1.254
    iface eth1 inet static
    address 172.16.0.13
    netmask 255.255.255.0
    CTRL + O (save) => CTRL + X (exit)
    3) pico /etc/resolv.conf
    Change the existing lines to these:
    domain host.net
    search host.net
    nameserver 192.168.1.1
    4) ifdown / ifup of eth0/eth1
    Configure IPv6:
    Self-chosen ULA subnet:
    fdac:1fff:b0b0 (up to this part it may be randomly generated, starting from 'fd')
    Subnet 1: fdac:1fff:b0b0:4bd0:: /64
    Subnet 2: fdac:1fff:b0b0:4bd1:: /64
    /sbin/ip
    Assign remote settings for domain users on the clients (and add them to the domain if this has not been done yet)
    Configure IPv6 via the network settings (leave the default gateway empty)
                 NIC0                              NIC1
    S1:          fdac:1fff:b0b0:4bd0::1 /64        fdac:1fff:b0b0:4bd1::11 /64
                 dns: ::1                          dns: fdac:1fff:b0b0:4bd1::11
    S2:          fdac:1fff:b0b0:4bd0::2 /64        fdac:1fff:b0b0:4bd1::12 /64
                 (dns: ::1)                        (dns: fdac:1fff:b0b0:4bd1::12)
    Openfiler:   fdac:1fff:b0b0:4bd0::150 /64      fdac:1fff:b0b0:4bd1::1 /64
    S3:          fdac:1fff:b0b0:4bd0::3 /64        fdac:1fff:b0b0:4bd1::13 /64
    C1:          fdac:1fff:b0b0:4bd0::101 /64
                 dns: S1
    C2:          fdac:1fff:b0b0:4bd0::102 /64
                 dns: S2
    For Windows Server Core:
    * powershell
        netsh interface ipv6 add address "Ethernet" fdac:1fff:b0b0:4bd0::2
        netsh interface ipv6 add address "Ethernet 2" fdac:1fff:b0b0:4bd1::12
    For Linux (both Openfiler and Debian):
    FOR DEBIAN 7 (only use the ifup command, not ifdown):
    /sbin/ip -6 addr add fdac:1fff:b0b0:4bd0::3/64 dev eth0 (for Debian)
    /sbin/ip -6 addr add fdac:1fff:b0b0:4bd1::13/64 dev eth1 (for Debian)
    or statically in /etc/network/interfaces:
    iface eth0 inet6 static
    address fdac:1fff:b0b0:4bd0::3
    netmask 64
    iface eth1 inet6 static
    address fdac:1fff:b0b0:4bd1::13
    netmask 64
    pico /etc/resolv.conf => add these lines
    => domain host.net
    => search host.net
    => nameserver 192.168.1.1
    => nameserver fdac:1fff:b0b0:4bd0::1
    FOR OPENFILER eth0: vim /etc/sysconfig/network-scripts/ifcfg-eth0
    => IPV6_AUTOCONF=no
    => IPV6INIT=yes
    => Add: fdac:1fff:b0b0:4bd0::150/64
    FOR OPENFILER eth1: vim /etc/sysconfig/network-scripts/ifcfg-eth1
    => IPV6_AUTOCONF=no
    => IPV6INIT=yes
    => Add: fdac:1fff:b0b0:4bd1::1/64
    ~~ /sbin/ip -6 addr add fdac:1fff:b0b0:4bd0::150/64 dev eth0 (for Openfiler)
    ~~ /sbin/ip -6 addr add fdac:1fff:b0b0:4bd1::1/64 dev eth1 (for Openfiler)
    Risks of a shared application pool:
        - 1 process per application pool (=> a heavy process that needs many resources)
            (if this process hangs, all websites are impacted)
        - users can, in principle, get at each other's files
    1) Install IIS on S2 via Server Manager on S1
    2) In the Role Services step of the setup, all the way at the bottom => tick Management Service (this allows remote management)
    3) On S1 look up Web Server and install only the IIS management console, so that the IIS on S2 can be managed
    4) PowerShell on S2:
    Invoke-Command -ScriptBlock {Set-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\WebManagement\Server -Name EnableRemoteManagement -Value 1}
    Invoke-Command -ScriptBlock {Set-Service -Name WMSVC -StartupType Automatic}
    Invoke-Command -ScriptBlock {Start-Service WMSVC}
    In IIS Manager on S1 => Add connection => S2.premium.sdhost.net => account: administrator of S2
    In IIS Manager => Sites => New Website, create 2 websites
        -'klant1.sdhost.net' Physical path => C:\inetpub\wwwroot\Klant1 => hostname = Klant1.host.net
        -'klant2.sdhost.net' Physical path => C:\inetpub\wwwroot\Klant2 => hostname = Klant2.host.net
    In DNS add an A record:
        -hostname: www
        -IP: 192.168.1.2
    For access via IPv6 also add an AAAA record:
        -hostname: www
        -IP: fdac:1fff:b0b0:4bd0::2
    For each site also create a CNAME record:
        -Alias name: klant1, FQDN: www.host.net
        -Alias name: klant2, FQDN: www.host.net
    This standard setup hides a few risks. Give two risks that the current
    configuration (shared application pool) can bring with it:
    - If you have a website that is CPU-heavy (such as rescaling photos), this also affects your other websites
    - Because the websites sit within the same app pool they have the same identity, and you cannot set up separate permissions.
    GROUP MANAGED SERVICE ACCOUNT:
    New-ADServiceAccount IISPool1 -DNSHostName s1.amhost.net -PrincipalsAllowedToRetrieveManagedPassword Administrator -KerberosEncryptionType RC4, AES128, AES256
    Install-ADServiceAccount IISPool1
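    If each customer site should get its own identity, here is a hedged sketch of binding that gMSA to a dedicated application pool (assuming the WebAdministration module on the IIS server and the domain name HOST used in these notes):

    # Run on the IIS server after Install-ADServiceAccount has succeeded
    Import-Module WebAdministration

    # Create a dedicated pool and run it as the gMSA (note the trailing $ and the empty password)
    New-WebAppPool -Name 'Klant1Pool'
    Set-ItemProperty 'IIS:\AppPools\Klant1Pool' -Name processModel `
        -Value @{ userName = 'HOST\IISPool1$'; password = ''; identityType = 3 }

    # Move the site into the new pool
    Set-ItemProperty 'IIS:\Sites\klant1.sdhost.net' -Name applicationPool -Value 'Klant1Pool'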
    Maybe you can also do this tutorial; it is a tutorial for learning DFS & DNSSEC.
    What does the option "dnssecok" mean?
        -> This option sets the dnssecOK bit for this query
        -> It tells the server that the client understands DNSSEC and that the server can make use of this with this client
    Do you get a confirmation that this is a secure answer? (RRSIG)
        -> No, because the zone has not been signed yet
    Check whether the client C1 is configured to enforce secure responses from its DNS
    caching server: get-dnsclientnrptpolicy. Result?
        -> The result is nothing, presumably because there are no settings for this
    Try a request on C1 for S1 again with Resolve-DnsName. Is signing
    the zone enough to get secure answers on the client?
        -> Again no RRSIG record comes back, so this is not enough
    To enforce secure DNS responses on the client for the domain securezone.lab,
    a GPO is configured in the domain Host.net (a new GPO for the whole domain).
    Look up and configure this GPO for responses from securezone.lab.
        -> Default Domain Policy -> Edit => Computer Configuration > Policies > Windows Settings > Name Resolution Policy.
        "In the details pane, under Create Rules and To which part of the namespace does this rule apply, choose Suffix from the drop-down list and type sec.contoso.com next to Suffix."
        "On the DNSSEC tab, select the Enable DNSSEC in this rule checkbox and then under Validation select the Require DNS clients to check that name and address data has been validated by the DNS server checkbox."
        "In the bottom right corner, click Create and then verify that a rule for sec.contoso.com was added under Name Resolution Policy Table."
        => Run GPupdate /force
        => Then the policy can be viewed
    You of course also make sure that this policy has been applied to the client (C1), and check this again with get-dnsclientnrptpolicy.
        => GPupdate /force
        => get-dnsclientnrptpolicy => gives the same result as on the server
    Again: Resolve-DnsName s1.securezone.lab -server S1 -dnssecok. What do you get as the answer? What is the cause?
    (Distribute) Copy the trust anchor data of the secure.lab zone on S2 to S1 and import it into the DNS of S1 as a trust anchor (keyset-securezone.lab).
        http://technet.microsoft.com/en-us/library/hh831411.aspx
    Again: Resolve-DnsName s1.securezone.lab -server S1 -dnssecok. Do you now get a (secure) answer?
        -> I now get a secure answer from the DNS server, signed by securezone.lab, with a validity period
    p23 Distributed File System (a PowerShell sketch of these namespace and replication steps follows after these notes)
    Install the "File Services" role on both servers.
        -> Add Roles and Features
        -> File Services
            -> DFS
    Create a namespace (DOCUMENTATION) in your domain host.net. Set the share permissions so that the group 'auteurs' has write rights; ordinary users
    may only have read rights.
        -> DFS Manager
        -> Namespaces => Add namespace
    Create a folder in the DOCUMENTATION namespace named PDF
        -> Add folder
    Create a second target for the PDF folder
        -> Add target to folder
    Set up replication between the two folder targets. From now on the contents are kept in sync.
        -> Happens automatically with the 2nd target; follow the wizard
    Which other steps are needed to set up a fully redundant DFS system?
        -> The folders must be shared via DFS
        -> The replication must be configured
    Create a diagnostic report on how replication happens, and correct any problems found.
        -> Right-click the replication object
        -> Create diagnostic report
        -> choose the reports
    Set quotas. In the PDF folder create a subfolder CATALOGS, but make sure it cannot grow larger than 10 MB. Set a hard limit for this.
        -> install FSRM under File Services
        -> click Quotas => Add quota => choose the folder
        -> new quota => tick 10 MB hard
        -> save
        http://technet.microsoft.com/en-us/library/cc875787(v=ws.10).aspx
    Because we want to avoid DFS taking up the full bandwidth, we limit the replication speed to 2 MBps.
        -> Click the replication -> in the right column choose Edit Replication Group
        -> Set the 2 MBps
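    For reference, roughly the same DFS steps can be scripted with the DFSN and DFSR PowerShell modules on Server 2012 R2 (a hedged sketch: the share paths \\S1\PDF and \\S2\PDF and the content path C:\Shares\PDF are assumptions, while the other names come from these notes):

    # Create the domain-based namespace DOCUMENTATION with S1 as namespace server
    New-DfsnRoot -Path '\\host.net\DOCUMENTATION' -TargetPath '\\S1\DOCUMENTATION' -Type DomainV2

    # Add the PDF folder with two folder targets
    New-DfsnFolder       -Path '\\host.net\DOCUMENTATION\PDF' -TargetPath '\\S1\PDF'
    New-DfsnFolderTarget -Path '\\host.net\DOCUMENTATION\PDF' -TargetPath '\\S2\PDF'

    # Replicate the two folder targets with DFS-R
    New-DfsReplicationGroup -GroupName 'PDF-Replication'
    New-DfsReplicatedFolder -GroupName 'PDF-Replication' -FolderName 'PDF'
    Add-DfsrMember     -GroupName 'PDF-Replication' -ComputerName S1, S2
    Add-DfsrConnection -GroupName 'PDF-Replication' -SourceComputerName S1 -DestinationComputerName S2
    Set-DfsrMembership -GroupName 'PDF-Replication' -FolderName 'PDF' -ComputerName S1 `
                       -ContentPath 'C:\Shares\PDF' -PrimaryMember $true -Force
    Set-DfsrMembership -GroupName 'PDF-Replication' -FolderName 'PDF' -ComputerName S2 `
                       -ContentPath 'C:\Shares\PDF' -Force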

  • Dbms_xslprocessor package problems in PL/SQL

    Hi all :)
    I was wondering if anyone has any ideas about this problem I'm having:
    When using the dbms_xslprocessor in PL/SQL, I consistently get dropped connections when trying to either transform a document, or search a document via XPath. So, for instance, if I call dbms_xslprocessor.selectSingleNode('XPATH'), the connection will drop out with the following error (I'm calling a stored procedure here):
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Similar problems happen on XMLType.transform(stylesheet).
    I've only managed to have this happen when transforming or searching documents that are not in the default (i.e., xmlns="") namespace, but I can't get rid of the namespace (too much other code is relying on it being there).
    What's even stranger is that the XSLPROCESSOR package works, but the DBMS_XSLPROCESSOR package does not. Unfortunately, the XSLPROCESSOR package doesn't seem to handle namespaces well, and inserts seemingly random namespace declarations in the transformation results (for instance, on the root element, it puts in "xmlns:xmlns='http://www.w3.org/2000/xmlns/'") or redeclares namespaces on nodes that already have that namespace defined for them by their parents. Namespace prefixes, too.
    Does anybody have any ideas as to what this might be?
    Thanks in advance,
    Constantine

    I should also mention that I'm ABSOLUTELY sure that the stylesheets I'm using for transformation are correct as of XSLT ver. 1.0 -- multiple external processors (Xalan and MSXML, specifically) give the correct result on the same stylesheet.

  • Dbms_xslprocessor problems in PL/SQL

    Hi all :)
    -- This is a cross-posting of a question I placed on the XMLDB forum, but there seems to be more activity here
    -- Sorry about the duplicate
    (The problem description is identical to the previous thread: dbms_xslprocessor.selectSingleNode and XMLType.transform drop the connection with ORA-03113 on documents that are not in the default namespace, while the XSLPROCESSOR package works but inserts seemingly random namespace declarations; the stylesheets have been verified against Xalan and MSXML.)
    Does anybody have any ideas as to what this might be?
    Thanks in advance,
    Constantine

    What version of the XDK are you using?
    What version of the javaparser?

  • Letters disappear when reducing file size of PDF.

    I have a multi-page PDF file that contains bitmap images and vector text. It was originally created in InDesign and exported to PDF. When I exported the PDF at 300 dpi, everything rendered beautifully. However, when I reduced the image quality through the PDF Optimizer (set all images to 80 dpi at high quality), random letters disappeared from my text. Weird, since these are vectors and should not be affected by lowered dpi settings anyway.
    The gaps are there where the letters should be, and I can even use my text cursor to copy and paste those missing letters into a text file, for example. So basically, all the data is still there; the letters just aren't showing up. This is not resolved by zooming in, and most unfortunately, the letters also do not show up when the document is printed. I have all my fonts embedded. Could this be due to the fact that I am using a lightweight font (Verlag Extra Light)? The letters that go missing seem to be completely random. I have Acrobat Professional 8.

    Thanks for your suggestions. Yes, they all say embedded subset. I will try to replace the fonts just to see what happens, though Verlag is pretty central to the design of this project. I can't see it being a quality issue since it's a high-end font from Hoefler & Frere Jones (not to mention specially purchased for the project!)
    I will note that I was able to make some progress by using the Advanced Editing -> Touch Up Text tool to manually retype the missing letters. I did get them to show up that way. Unfortunately this is very time consuming and tends to mess up the text formatting. Still curious what causes this.

  • Replace WS2003 domain controller for WS2012 domain controller

    Hi, I think this is a common problem, but I haven't found anything exactly like it, only something similar, and I still have a lot of doubts.
    The thing is that I have a network with two domain controllers:
    WS2003     - 192.168.0.1, which is the first domain controller I created and is also a file sharing server
    WS2008R2 - 192.168.0.8, which is a new domain controller I added one year ago.
    Now I want to replace the first one, keeping the second one.
    I am thinking of removing the first one and replacing it with a new machine (WS2012) with the same IP and hostname. I need the same hostname because clients point to it to get the shared files.
    My main fear is that clients will get some error related to the trust relationship and I will have to rejoin them to the domain one by one.
    As I have another domain controller, will the global catalog of the new machine be synchronized automatically with the WS2008R2 domain controller?
    Do I need to demote the old domain controller before adding the new one?
    Thanks a lot

    Hi Tomas,
    As pointed out by Burakm, you should have an additional file server and should avoid using a domain controller, which has privileged access, to share files; that puts you at a security risk.
    Regarding the requirement for the old host name:
    Here is something that would let you keep a different server name and IP, yet allow your users to connect to the old hostname and access the share: use a CNAME record for the old server name that points to the new hostname.
    How to Configure Windows Machine to Allow File Sharing with DNS Alias
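    A hedged sketch of that alias approach (the zone and server names are placeholders; Add-DnsServerResourceRecordCName assumes a 2012-level DNS server or RSAT, and DisableStrictNameChecking is the LanmanServer setting commonly needed for SMB access via an alias):

    # Create a CNAME so the old server name keeps resolving to the new file server
    Add-DnsServerResourceRecordCName -ZoneName 'domain.local' -Name 'OLDSERVER' `
        -HostNameAlias 'NEWSERVER.domain.local'

    # On the new file server, allow SMB connections that use the alias name
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' `
        -Name 'DisableStrictNameChecking' -PropertyType DWord -Value 1 -Force
    Restart-Service -Name LanmanServer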
    You might also look for Distributed File System Shares.
    http://blogs.technet.com/b/josebda/archive/2009/06/26/how-many-dfs-n-namespaces-servers-do-you-need.aspx
    NOTE: You can't run an in-place upgrade of a 2003 DC to 2012.
    Regards,
    Satyajit
