Some Clients not using Local Distribution Point

I have a new physical site where we want to install the SCCM client on the workstations. The local DP has been set up, and the SCCM 2012 configuration settings for the new site are in place. I did a manual push to 5 workstations and monitored ccmsetup.log.
Some clients used the local DP for the client install, while others went back across the WAN to the MP to get the files. The ones that fall back across the WAN to the MP show the log entries below. It does not appear to be boundary- or client-OS-related: working and non-working machines are in the same configured subnet, and both XP and Win7 appear on each side. I found a few others who have hit this issue while searching the web, but none of the solutions helped in my situation. The client does install on all of the workstations I pushed it to; I just can't figure out why some are not using the local DP, and YES, I have verified that the ones not using the local DP are in the same boundary group as the ones that do.
Any help is appreciated. FVH081-DP1 is the local DP and AHDC400 is my MP.
Only one MP AHDC400.phs-sfalls.amck.net is specified. Use it. ccmsetup 1/23/2015 2:56:41 PM 2728 (0x0AA8)
Searching for DP locations from MP(s)... ccmsetup 1/23/2015 2:56:41 PM 2728 (0x0AA8)
Current AD site of machine is FVH LocationServices 1/23/2015 2:56:41 PM 2728 (0x0AA8)
Local Machine is joined to an AD domain LocationServices 1/23/2015 2:56:41 PM 2728 (0x0AA8)
Current AD forest name is amck.net, domain name is fvh.amck.net LocationServices 1/23/2015 2:56:42 PM 2728 (0x0AA8)
DhcpGetOriginalSubnetMask entry point not supported. LocationServices 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Begin checking Alternate Network Configuration LocationServices 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Finished checking Alternate Network Configuration LocationServices 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Adapter {AB6BAB2C-7B65-441A-A83C-C91FF8B8498D} is DHCP enabled. Checking quarantine status. LocationServices 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Sending message body '<ContentLocationRequest SchemaVersion="1.00">
  <AssignedSite SiteCode="MCK"/>
  <ClientPackage/>
  <ClientLocationInfo LocationType="SMSPACKAGE" DistributeOnDemand="0" UseProtected="0" AllowCaching="0" BranchDPFlags="0" AllowHTTP="1" AllowSMB="0" AllowMulticast="0"
UseInternetDP="0">
    <ADSite Name="FVH"/>
    <Forest Name="amck.net"/>
    <Domain Name="fvh.amck.net"/>
    <IPAddresses>
<IPAddress SubnetAddress="10.7.64.0" Address="10.7.79.3"/>
    </IPAddresses>
  </ClientLocationInfo>
</ContentLocationRequest>
' ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Sending message header '<Msg SchemaVersion="1.1"><ID>{DA5C91BE-3A3E-4E14-AF6E-DB0E9AC36878}</ID><SourceHost>FVHH008</SourceHost><TargetAddress>mp:[http]MP_LocationManager</TargetAddress><ReplyTo>direct:FVHH008:LS_ReplyLocations</ReplyTo><Priority>3</Priority><Timeout>600</Timeout><ReqVersion>5931</ReqVersion><TargetHost>AHDC400.phs-sfalls.amck.net</TargetHost><TargetEndpoint>MP_LocationManager</TargetEndpoint><ReplyMode>Sync</ReplyMode><Protocol>http</Protocol><SentTime>2015-01-23T20:56:42Z</SentTime><Body
Type="ByteRange" Offset="0" Length="1068"/><Hooks><Hook3 Name="zlib-compress"/></Hooks><Payload Type="inline"/></Msg>' ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
CCM_POST 'HTTP://AHDC400.phs-sfalls.amck.net/ccm_system/request' ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Content boundary is '--aAbBcCdDv1234567890VxXyYzZ' ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Received header '<Msg SchemaVersion="1.1">
 <ID>{45121E9D-C6EE-4154-BF60-AADD6204B66E}</ID>
 <SourceID>GUID:1B6F821D-DF54-4E9D-89BD-D68DD917DBD2</SourceID>
 <SourceHost>AHDC400</SourceHost>
 <TargetAddress>direct:FVHH008:LS_ReplyLocations</TargetAddress>
 <ReplyTo>MP_LocationManager</ReplyTo>
 <CorrelationID>{00000000-0000-0000-0000-000000000000}</CorrelationID>
 <Priority>3</Priority>
 <Timeout>600</Timeout>
 <TargetHost>FVHH008</TargetHost><TargetEndpoint>LS_ReplyLocations</TargetEndpoint><ReplyMode>Sync</ReplyMode><Protocol>http</Protocol><SentTime>2015-01-23T20:56:42Z</SentTime><Body Type="ByteRange"
Offset="0" Length="2590"/><Hooks><Hook3 Name="zlib-compress"/><Hook Name="authenticate"><Property Name="Signature">3082018F06092A864886F70D010702A08201803082017C020101310B300906052B0E03021A0500300B06092A864886F70D0107013182015B30820157020101303430203110300E0603550403130741484443343030310C300A06035504031303534D53021058042BA685CEB3A74AC16009523D655A300906052B0E03021A0500300D06092A864886F70D0101010500048201002E2019E353A4244A8CA9D2451A6206393F00541279E76F3EFEED3C768C36F01EB88834E74E53D3063FC56D5A899C604036B8DCBACC765156270E5417D0A384440A2B29B08487F9BCEB84C3642D736587692675CBFB78DAF8017D94C5782E5166868F7B0B01E006319B1BDF6FA37DE9AFE5389C5CADF3A72572B08D01D68EE369C9830F4952B6C1B38F710B87888C65C27EB8176B8064BC392DB06C966112F119AD62E53C7B79EC26CEA9CFE027D401E535EAB166E18A5F37CB806EC21AF66510A41B5B4936953682DAF157EA50E02D51DF8A78DE4E12A368AE7693EEC37ACFAAC16ACF4C5DA0838F5821413C79A478DBAF1DCDAE23F6734C1D70882D3CBF4433</Property><Property
Name="AuthSenderMachine">AHDC400;AHDC400.phs-sfalls.amck.net;</Property><Property Name="MPSiteCode">MCK</Property></Hook></Hooks><Payload Type="inline"/></Msg>' ccmsetup 1/23/2015
2:56:42 PM 2728 (0x0AA8)
Received reply body '<ContentLocationReply SchemaVersion="1.00"><ContentInfo PackageFlags="16777216"><ContentHashValues/></ContentInfo><Sites><Site><MPSite SiteCode="MCK" MasterSiteCode="MCK"
SiteLocality="LOCAL" IISPreferedPort="80" IISSSLPreferedPort="443"/><LocationRecords><LocationRecord><URL Name="http://FVH081-DP1.phs-sfalls.amck.net/SMS_DP_SMSPKG$/AHS00002" Signature="http://FVH081-DP1.phs-sfalls.amck.net/SMS_DP_SMSSIG$/AHS00002"/><ADSite
Name="FVH"/><IPSubnets><IPSubnet Address="10.7.64.0"/><IPSubnet Address=""/></IPSubnets><Metric Value=""/><Version>7804</Version><Capabilities SchemaVersion="1.0"><Property
Name="SSLState" Value="0"/></Capabilities><ServerRemoteName>FVH081-DP1.phs-sfalls.amck.net</ServerRemoteName><DPType>SERVER</DPType><Windows Trust="1"/><Locality>LOCAL</Locality></LocationRecord></LocationRecords></Site><Site><MPSite
SiteCode="MCK" MasterSiteCode="MCK" SiteLocality="LOCAL"/><LocationRecords/></Site></Sites><ClientPackage FullPackageID="AHS00002" FullPackageVersion="1" FullPackageHash="5EF3A189C48F3469440A83026EC8ECD36EAD6EAF3B5D35663F8201BDE175413C"
MinimumClientVersion="5.00.7804.1000" RandomizeMaxDays="7" ProgramEnabled="false" LastModifiedTime="30282566;3841038464" SiteVersionMatch="true" SiteVersion="5.00.7804.1000" EnablePeerCache="true"/></ContentLocationReply>' ccmsetup 1/23/2015
2:56:42 PM 2728 (0x0AA8)
Found local location 'http://FVH081-DP1.phs-sfalls.amck.net/SMS_DP_SMSPKG$/AHS00002' ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Discovered 1 local DP locations. ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
PROPFIND 'http://FVH081-DP1.phs-sfalls.amck.net/SMS_DP_SMSPKG$/AHS00002' ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Got 401 challenge Retrying with Windows Auth... ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
PROPFIND 'http://FVH081-DP1.phs-sfalls.amck.net/SMS_DP_SMSPKG$/AHS00002' ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Failed to correctly receive a WEBDAV HTTP request.. (StatusCode at WinHttpQueryHeaders: 401) ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Failed to check url http://FVH081-DP1.phs-sfalls.amck.net/SMS_DP_SMSPKG$/AHS00002. Error 0x80004005 ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
Enumerated all 1 local DP locations but none of them is good. Fallback to MP. ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
GET 'HTTP://AHDC400.phs-sfalls.amck.net/CCM_Client/ccmsetup.cab' ccmsetup 1/23/2015 2:56:42 PM 2728 (0x0AA8)
C:\WINDOWS\ccmsetup\ccmsetup.cab is Microsoft trusted. ccmsetup 1/23/2015 2:56:43 PM 2728 (0x0AA8)

Single forest with multiple child domains. SCCM is in one child domain, and we are trying to bring this other child domain's workstations in as clients. We have the same setup for another child domain, and it worked without any issues.
The IIS log file in C:\inetpub\logs\logfiles\w3svc1 only shows the successful installs; unfortunately, there are no entries about the failures to correlate. All failures have been XP SP3 machines except for one Win7 PC, and all successes have been Win7 except for one XP machine.
You wouldn't think it is network-related, because some workstations can access the DP fine.
It's not boundaries, because the successes and failures all share the same configured subnet.
Wasn't MSI 4.5 required for SCCM 2012 clients?
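To see at a glance which pushed machines fell back to the MP, you could scan each machine's ccmsetup.log for the telltale lines from the excerpt above. A minimal sketch (the regex patterns are taken from the log output shown here; the glob path is just an example and would point at wherever you collect the logs):

```python
import re
from pathlib import Path

# Patterns taken from the ccmsetup.log excerpt above: a failed check on the
# DP content URL, followed by "Fallback to MP".
DP_FAIL = re.compile(r"Failed to check url (\S+)\. Error (0x[0-9A-Fa-f]+)")
FALLBACK = re.compile(r"Fallback to MP")

def scan_ccmsetup_log(text):
    """Return (failed_dp_urls, fell_back) for one ccmsetup.log's contents."""
    failed = [m.group(1) for m in DP_FAIL.finditer(text)]
    fell_back = FALLBACK.search(text) is not None
    return failed, fell_back

if __name__ == "__main__":
    # Example: point this at collected logs or admin shares
    # (e.g. \\host\c$\Windows\ccmsetup\ccmsetup.log).
    for path in Path(".").glob("**/ccmsetup.log"):
        failed, fell_back = scan_ccmsetup_log(path.read_text(errors="ignore"))
        if fell_back:
            print(f"{path}: fell back to MP, failed DP URLs: {failed}")
```

Run across all five workstations, this separates the machines that hit the 401 on the DP from the ones that downloaded locally, without eyeballing each log.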

Similar Messages

  • What is the process that SCCM clients access content from Distribution Points?

    I already know that the Network Access Account (NAA) is one method, and that it also has to have the
    "Access this computer from the network" right configured on the Distribution Point (the default server settings allow this). I am trying to understand what happens when the connection to the Distribution Point fails when using the NAA. Does the
    client use the computer account (computername$) of the SCCM client, or the SCCM site server account (SCCMSiteServer$) instead? After reading
    this and
    this, I am not sure now. Hopefully someone can help clear this up.
    --Tony

    The first link also states: "When Configuration Manager tries to use the computername$
    account to download the content and it fails, it automatically tries the Network Access Account again, even if it has previously tried and failed."
    The way this reads to me is that the Network Access Account is attempted first, since it is tried again after computername$ is
    attempted. Peter, am I just misunderstanding what it is trying to say? Also, is computername$ referring to the client machine and not
    the SCCM site server? Is the SCCM site server ever used, as referenced in Gerry's blog?
    Just to make sure I completely understand, the process goes:
    1) Local computer account (ComputerName$)
    2) Network Access Account (NAA)
    3) No use of the SCCM site server's account
    --Tony

  • Multicasting Using Virtual Distribution Point

    For a while now I have been troubleshooting various problems related to multicasting with SCCM 2012 R2. My most recent hurdle has been overcoming
    awful multicast stream speeds. Production Distribution Points are Windows Server 2012 R2, virtualized on ESX 5.5 using the VMXNET3 network adapter. Networking (so far) has not been able to find any errors or packet loss on the network, so I am thinking maybe
    the virtualization layer could be the cause of my headaches. I stood up a physical Distribution Point in the same network as some physical clients, and from testing with the physical Distribution Point the speeds have definitely increased. Speeds were maxing out at about
    20 Mbps once all of the client machines joined the multicast stream. I am more confident now that something in the virtualization layer is causing the problem, maybe the virtual switch? I just don't know. Does anyone out there have any thoughts or suggestions
    on where to go from here? What to check in the virtualization layer, etc.? Has anyone else experienced slow multicast speeds when using a virtual ESX host as a multicast Distribution Point, or even had acceptable speeds, for that matter?
    -Tony

    On the virtual side of things, Microsoft acknowledged there is a known bug, and to get semi-usable speeds the WDS registry setting TpMaxBandwidth needs to be changed from 100 (the default) to 1. However, even with that we still have not been able to push past
    11%-14% network utilization. Boo! :-)

  • Is the client/agent required on distribution points?

    For license reasons we are advised to uninstall the agent on a bunch of servers. All our distribution point servers are amongst these. I can't find out if the config mgr client is actually required on the DPs or not. Anyone here know this?

    No, it's not required.

  • Some Clients Not Updating. Reporting "Compliant." hr=8007000E Error in WindowsUpdate.log

    I have a significant number (but not all) of my SCCM 2012 R2 CU3 clients not updating through my SCCM software updates. On these problem clients, I get this error in WindowsUpdate.log:
    "COMAPI WARNING: ISusInternal::GetUpdateMetadata2 failed, hr=8007000E"
    These machines then report "Compliant" even though they don't install the updates. Almost all of our workstations are Windows 7 SP1 32-bit. We are running SCCM 2012 R2 CU3. My site servers are running Windows 2008 R2.
    I don't see much in WUAHandler.log or ScanAgent.log. These clients are, however, getting my SCEP definition updates just fine (I have an ADR for those), and when you go out to Microsoft for security updates, it works. I have tried all of the usual Windows
    Updates repair suggestions (re-register DLLs, rename the SoftwareDistribution folder, etc.), and I tried uninstalling and reinstalling the SCCM client on a problem PC, to no avail. I also tried using a Software Update Group with fewer updates (<100) and targeting
    a problem system with only that SUG, to no avail.
    Any assistance would be greatly appreciated. Thank you.

    Hi all,
    One of the bigger nuisances of this particular bug is that it's hard to identify from a central location that you've fallen victim to it. Without spot-checking client machines, you'd be none the wiser. This most likely leaves a lot of shops out there
    completely unaware they have a security issue, with a false sense of "fully patched" security.
    I've created the following guidelines to identify whether you are indeed one of the victims:
    Create a script configuration item.
    Select All Windows 7 32-bit as the supported platform.
    Use String as the data type.
    Choose PowerShell as your script language of choice.
    Paste the following text in the discovery script:
    Select-String -Pattern 'WARNING: ISusInternal::GetUpdateMetadata2 failed, hr=8007000E' -Path "$env:windir\windowsupdate.log"
    Add the configuration item to a configuration baseline.
    Deploy the configuration baseline to All Windows 7 32-bit machines.
    The report "List of assets by compliance state for a given baseline" is a good report to check the results.
    !!!! Any machines reporting compliant to this baseline have a serious issue, as they won't install any software updates, yet report compliant on all !!!!
    Good luck
    Hi, Does the configuration item need any kind of compliance rule setup to make it work?

  • Some indexes not used in JDBC call

    Hello everyone,
    I'm having a problem where a JDBC PreparedStatement without bind parameters can take more than a minute to execute a query that takes less than a second to execute in SQL*Plus. The query is identical, the database instance is the same, neither query is cached, and the query returns only 18 records with 11 columns all of which are either VARCHAR2 or NUMBER. I'm using Oracle's JDBC 2.0 drivers (classes12.jar) and Oracle 8i (Release 8.1.7.4.0) database. Oracle DB is set to use the cost-based optimizer.
    I did an explain plan in SQL*Plus and via JDBC. It turns out that some of the unique indexes that are used when executing the query in SQL*Plus are not used when executing via JDBC.
    Does anyone know why this would happen?
    Thanks,
    Jeff

    "since you use a bind variable, oracle's cost based optimizer can not decide correctly whether using this index is a good idea."
    The OP said he was NOT using bind variables in the SQL string of the PreparedStatement he was testing, so this comment doesn't address his current problem.
    To the OP:
    Sounds like you have an Oracle permissions issue, not something related to JDBC specifically. It shouldn't be too hard to determine what the permission differences are between the two user IDs.
    Regarding proper use of PreparedStatement:
    ALWAYS use PreparedStatement with host variables. There are hundreds, if not thousands, of posts documenting why this is a good idea here on the forums. Here are a few reasons why PreparedStatement with host variables is a good idea:
    1) PreparedStatement using host variables will give you the best overall system performance.
    2) PreparedStatement using host variables eliminates the very real security risk of SQL injection.
    3) PreparedStatement using host variables helps the programmer handle escape sequences and the frequent errors associated with special characters within SQL strings.
    4) PreparedStatement using host variables lets JDBC take care of the majority of data conversions between Java and your database, simplifying and standardizing data conversion coding.
    There are isolated cases where using host variables impedes performance compared against dynamic SQL (SQL with literals), but they are few and far between (1 in 1000?), and the standard should be to always use PreparedStatement with host variables.
    Good luck resolving your current problem, and remember to always use PreparedStatement WITH host variables when coding in Java!
    WFF
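    The thread is about Oracle JDBC, but the host-variable principle is the same in any database API. Here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a JDBC PreparedStatement, with `?` placeholders playing the role of host variables (the table and data are invented for illustration):

```python
import sqlite3

# In-memory database standing in for the real instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('O''Brien')")

def find_user(conn, name):
    # The '?' placeholder is the analogue of a JDBC host variable:
    # the driver handles quoting and escaping, so names containing
    # quotes (or injection attempts) are treated as plain data.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchone()

print(find_user(conn, "O'Brien"))       # the embedded quote is handled safely
print(find_user(conn, "x' OR '1'='1"))  # injection text is just a literal, so no match
```

    The same two lookups done by string concatenation would respectively crash on the quote and match every row, which is reasons 2) and 3) above in miniature.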

  • Some clients not logging in

    Hello,
    We have 20 Mac clients connected wirelessly via an AirPort Extreme. We have an Xserve, and it runs Open Directory, AFP, DHCP, and Software Update. All of our clients had been working fine until recently. Some clients shake the login window even when all the correct information is provided (list of users, correct password). This has only happened to a few clients, and I noticed that in Directory Utility they were running LDAPv3 version 3.1, while the working ones were running 3.0. Also, sometimes the server's DNS IP address must be entered twice, as two different DNS servers, on the client machine for it to work. Any suggestions?
    Thanks

    You may have tried this already, but sometimes it works:
    Log in as admin on the client, go to System Preferences>Accounts>Login Options and hit the 'Edit...' button for the Network Account Server. Delete the entry, add it again, save & restart the client.
    Good luck. Logs of both the client and the server may give you a good hint, but without them, we're shooting a bit in the dark.
    Cheers,
    M

  • SCEP 2012R2 downloading Endpoint Protection definitions from Microsoft, rather than using internal Distribution Point

    Hi all,
    Need your help figuring out why SCEP definitions are being updated from Microsoft and not from the local DP.
    * I have a new 5-site SCCM hierarchy with a primary site installed in the EMEA HQ and a secondary site in each of 4 USA offices.
    * A software update point and an Endpoint Protection point are deployed in the HQ primary site.
    * Software updates for SCEP have been synced down to the primary site server, which has the WSUS role installed; a software update group was created, and an automatic deployment rule was created to push these definition updates to the relevant device collection.
    * Distribution > Content Status shows the software update package has replicated successfully to all 5 DPs in the environment.
    * An antimalware policy that specifies only SCCM as the definition update source has been created and is deployed to the relevant device collection.
    * Custom client settings that disable alternate sources for the initial definition update have also been created and deployed to the relevant device collection.
    **** Yet a closer look at MpCmdRun.log on client machines shows that definition updates are coming from Microsoft.
    I'm baffled why they still download from Microsoft despite disallowing this and making the DP the only source.
    MpCmdRun: Command Line: "c:\Program Files\Microsoft Security Client\MpCmdRun.exe" SignaturesUpdateService -UnmanagedUpdate
     Start Time: ‎Mon ‎Apr ‎27 ‎2015 07:28:02
    Start: Signatures Update Service
    Update Started
    Search Started (MU/WU update) (Path: http://www.microsoft.com)...
    Time Info - ‎Mon ‎Apr ‎27 ‎2015 07:28:55 Search Completed 
    Update completed succesfully. no updates needed
    End: Signatures Update Service
    MpCmdRun: End Time: ‎Mon ‎Apr ‎27 ‎2015 07:28:55
    Note - One of the secondary sites has a very poor internet connection, so it's not feasible for definitions to be downloaded from the web. This is why a solution is required. 
    Thanks....

    Hi,
    Could these clients get other updates from SCCM?
    You could check the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\CCM\EPAgent\LastAppliedPolicy to see if the definition updates policy is applied to the client.
    Best Regards,
    Joyce
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Some clients not receiving SCEP definition updates

    I have a collection for some of our application servers that is used in conjunction with an ADR to deploy the SCEP definition updates. 12 of the servers in this collection recently had the SCCM 2012 R2 client installed on them. (The collection has a total
    of 23 servers in it.)
    I can see that these 12 servers have the antimalware policy applied, but they are not getting the SCEP updates. The summary for SCEP is: Service started without any malware protection engine; AV signatures out of date; AS signatures out of date.
    The policy application state is "Succeeded" with a recent date and time.
    When I view the status of the deployment, the enforcement state is "Failed to install update(s)" with an error code of 0x87D00667 - No current or future service window exists to install software updates.
    These servers are members of another collection that is used for deploying the monthly updates. This "update" collection does have a maintenance window on it, specific to software updates, with no recurrence schedule.
    Do maintenance windows apply to the machine, then, regardless of what collection they are in?
    These 12 servers, in the Endpoint Protection client settings, have "Allow EP client installation and restarts outside MW" set to No, and "Suppress any required computer restarts after the EP client is installed" set to Yes.
    For the software updates client settings, the update scan schedule and deployment re-evaluation are set to every 7 days.
    So, looking at this, it appears that these servers will never get any SCEP updates because they are members of another collection that has a MW, even though the SCEP collection does not have a MW?
    Is that correct?

    I added a MW on the collection that is used for SCEP updates.  I made the MW effective yesterday, but the MW hours were from 5:30am-7:30am daily (which should have started this morning, 1/30, at 5:30am).
    In the updatesdeployment.log, I see the MW starting:
    CUpdateAssignmentsManager received a SERVICEWINDOWEVENT START Event UpdatesDeploymentAgent 1/30/2015 5:30:00 AM 3004 (0x0BBC)
    No current service window available to run updates assignment with time required = 1 UpdatesDeploymentAgent 1/30/2015 5:30:00 AM 3004 (0x0BBC)
    CUpdateAssignmentsManager received a SERVICEWINDOWEVENT END Event UpdatesDeploymentAgent 1/30/2015 7:30:00 AM 3312 (0x0CF0)
    No current service window available to run updates assignment with time required = 1 UpdatesDeploymentAgent 1/30/2015 7:30:00 AM 3312 (0x0CF0)
    Attempting to cancel any job started at non-business hours. UpdatesDeploymentAgent 1/30/2015 7:30:00 AM 3312 (0x0CF0)
    However, the definitions are not installed. These 12 servers have the SCEP client, but no definitions installed.
    There are 11 servers in this collection that are getting the definition updates, but the 12 servers that recently had the SCCM client installed are not. So I know that the ADR is working.
    What am I missing to get these 12 servers to install/update the definitions?

  • JDev 10.1.3.4 on Vista 64 bit not using local timezone setting

    When I try to get the current date with new java.util.Date(), I get the date and time, but it's not in my local time zone. I am in the Eastern Time zone (US & Canada), which is GMT-5:00. JDev for some reason keeps giving the time without subtracting 5 hours from GMT, so if it is now 13.41pm it says it is 18.41pm. I was using JDK 1.5 and am now using 1.6, with the same problem. I tried it in Eclipse and everything works fine. I tried compiling and running a small program without JDev, using the JDKs, and all is well. This must be something with JDev.
    Anyone else have the same problem?

    Hi Thanassis,
    not sure what has changed between 10.1.3.3 and 10.1.3.4 in that area;
    but, without knowing Steve's sample in depth, I would say that the error you get in 10.1.3.4 is expected as the current row has changed in your Web Container.
    To avoid the JBO-35007, you can change the StateValidation on the iterator
        <iterator ...  StateValidation="false"/> That's the preferred option when only one (or a few) iterator causes the error
    This won't be possible however if the code is generic and involves all iterators (as in CustomViewObjectImpl).
    Then you have to change the property EnableTokenValidation on the page definition:
        <pageDefinition ...  EnableTokenValidation="false"> I'm currently working on other JBO-35007 errors reported by customers and I'm waiting for feedback from development about the Token Validation.
    I'll let you know as soon as I get more news.
    Regards,
    Didier.

  • Archivelink - how to set valid in only some clients, not in others?

    Hi,
    We’re building an ASP-solution, where storage of outgoing documents will be an optional service, where the customers that choose to use the functionality will have to pay for it. Each customer will run in a separate client in the same instance.
    Now, what we need to figure out is how to deactivate the functions for those customers that choose NOT to pay for the service. As the repositories are cross-client, and configuration/customizing will be as close to identical as possible in the different clients, we have trouble identifying how to do this. It is not a viable option to change configuration for some customers - we’re talking lots of clients here….
    Does anyone have a proposal for how we could handle this situation?
    Best regards,
    Sten Erik

    Solved by use of exits

  • [SOLVED] urxvt not using locale

    Help! I am stuck here.
    For the life of me I cannot find a reason why urxvt won't use my locale.
    Output of locale
    LANG=en_DK.utf8
    LC_CTYPE="en_DK.utf8"
    LC_NUMERIC="en_DK.utf8"
    LC_TIME="en_DK.utf8"
    LC_COLLATE=C
    LC_MONETARY="en_DK.utf8"
    LC_MESSAGES="en_DK.utf8"
    LC_PAPER="en_DK.utf8"
    LC_NAME="en_DK.utf8"
    LC_ADDRESS="en_DK.utf8"
    LC_TELEPHONE="en_DK.utf8"
    LC_MEASUREMENT="en_DK.utf8"
    LC_IDENTIFICATION="en_DK.utf8"
    LC_ALL=
    Output of locale -a
    C
    POSIX
    en_DK
    en_DK.iso88591
    en_DK.utf8
    en_US
    en_US.iso88591
    en_US.utf8
    Whenever I try to start urxvt from a(nother) terminal I get
    urxvt: the locale is not supported by Xlib, working without locale support.
    I have tried using both en_DK.utf8 and en_US.utf8. Neither works for me. (I would prefer en_DK.utf8)
    If I set the locale to C, then I don't get the error message. However I also don't get the filenames displayed correctly then.
    Anyone know what I can do?
    I am using LXDE with Openbox. I don't know if that makes a difference.
    I have spent the last 4 hours looking for a solution to this. Google turns up a lot of results, but they all (more or less) just tell me to make sure my locale is correct!? The word frustrating comes to mind.
    EDIT: Somehow urxvt is using my locale for showing the filenames correctly, but not for input from the keyboard. The keys that normally produce æ, ø and å (Danish chars) just don't do anything at all.
    EDIT2: Just in case someone stumbles upon this, here is what I decided to do. I didn't exactly solve the problem, but found another terminal to use instead.
    What I actually was trying to do was get yeahconsole to work. I really liked using Yakuake in KDE, but after deciding to use LXDE I didn't want to pull in the complete Qt library. yeahconsole, however, which is actually a wrapper for xterm/urxvt, didn't want to behave the way I wanted it to. Well, the truth is urxvt didn't want to work the way I wanted it to.
    Fortunately for my nerves I found another quake like terminal called stjerm. This one works great (until now...). It has tabs and recognizes my locale setting. It's not in the official repos, but is easily installed from AUR.
    EDIT3: OK. This is kind of annoying. Now I don't need the solution any more and of course now I have found it
    To make urxvt use my locale correctly I must set it as follows: (note the LC_CTYPE setting)
    LANG=en_DK.utf8
    LC_CTYPE="en_US.utf8"
    LC_NUMERIC="en_DK.utf8"
    LC_TIME="en_DK.utf8"
    LC_COLLATE=C
    LC_MONETARY="en_DK.utf8"
    LC_MESSAGES="en_DK.utf8"
    LC_PAPER="en_DK.utf8"
    LC_NAME="en_DK.utf8"
    LC_ADDRESS="en_DK.utf8"
    LC_TELEPHONE="en_DK.utf8"
    LC_MEASUREMENT="en_DK.utf8"
    LC_IDENTIFICATION="en_DK.utf8"
    LC_ALL=
    I mark this topic as solved.
    Last edited by madeye (2009-10-31 21:21:06)

    Move 'exec awesome' to the bottom of your ~/.xinitrc.

  • Why not using LOCALs in LOOPs ?

    Why avoiding LOCALs in LOOPs? (I saw some posts that suggest it).
    In my case, I have one button (Stop) and I want to be able to stop the application along 3 different independent loops. How can I do it without locals ?
    Thanks

    The use of a sequence you describe here is ok but I think the goal here is to initialize the local to a known state BEFORE you start reading it in the loops. After the VI has stopped there is no need to set the local false again. Cleanup is usually a waste of time since the VI is stopped anyway. Also for initializing a single local just follow the example attached...
    Michael Aivaliotis
    VI Shots LLC
    Attachments:
    Stopping_parallel_loops_with_locals.vi ‏24 KB

  • DHCP: Some clients not getting IP address

    We recently set up a new DHCP server on Mac OS X Server 10.5.8 running on an Xserve. We migrated from a Linux server.
    The Xserve was originally just a file server, so the only services currently running are AFP, DHCP, NFS, and SMB. No additional software is running.
    The DHCP server ran just fine for the first couple of weeks, but then we found some computers just stopped getting IP addresses from it. Some were new computers introduced to the network; some were laptops that had left and come back. However, the DHCP server is definitely still giving out IP addresses and renewing them for most new and existing computers. There have been five computers that have not gotten IP addresses so far, and that has been the case both on wireless and on a wired connection. Two were PCs, one running Windows 7 and one running Windows XP with Lenovo's ThinkVantage software. The other three were different models of MacBook Pro.
    For those five computers, we managed to get them working in two ways. One, we can select to use DHCP with a manual address; when we do that, it manages to pick up all the other information from the DHCP server, like DNS and gateway. Two, we can configure the DHCP server to supply a static IP address by providing it with the MAC address of these machines; when we do that, the computers receive the IP address from the DHCP server.
    So I guess you could say the problem I'm experiencing is that, for a few computers, the DHCP server seems only able to provide static addresses, not dynamic ones with a lease time.
    I have logging set to the highest level for the DHCP server. Below is the first thing I noticed that keeps showing up. Sometimes it shows a different MAC address than the one below; none of the afflicted computers have that MAC address, though. I have not seen any other errors in the logs for the DHCP server.
    Jan 24 12:09:47 fileserver bootpd[73839]: DHCP DISCOVER [en1]: 1,0:23:32:c1:31:c3
    Jan 24 12:09:47 fileserver bootpd[73839]: service time 0.000304 seconds
    Jan 24 12:09:50 fileserver bootpd[73839]: DHCP DISCOVER [en1]: 1,0:23:32:c1:31:c3
    Jan 24 12:09:50 fileserver bootpd[73839]: service time 0.000280 seconds
    Jan 24 12:09:54 fileserver bootpd[73839]: DHCP DISCOVER [en1]: 1,0:23:32:c1:31:c3
    Jan 24 12:09:54 fileserver bootpd[73839]: service time 0.000264 seconds
    Jan 24 12:10:03 fileserver bootpd[73839]: DHCP DISCOVER [en1]: 1,0:23:32:c1:31:c3
    Jan 24 12:10:03 fileserver bootpd[73839]: service time 0.000265 seconds
    Jan 24 12:10:11 fileserver bootpd[73839]: DHCP DISCOVER [en1]: 1,0:23:32:c1:31:c3
    Jan 24 12:10:11 fileserver bootpd[73839]: service time 0.000283 seconds
    Jan 24 12:10:19 fileserver bootpd[73839]: DHCP DISCOVER [en1]: 1,0:23:32:c1:31:c3
    Jan 24 12:10:19 fileserver bootpd[73839]: service time 0.000291 seconds
    Jan 24 12:10:28 fileserver bootpd[73839]: DHCP DISCOVER [en1]: 1,0:23:32:c1:31:c3
    Jan 24 12:10:28 fileserver bootpd[73839]: service time 0.000324 seconds
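A log pattern like the one above (repeated DHCP DISCOVERs with sub-millisecond "service times" and no reply) usually means the server is receiving the broadcast but declining to answer. One way to confirm which clients are affected is to scan the bootpd log for MAC addresses that keep sending DISCOVER but never appear in a reply line. The sketch below does this in Python; the sample log text and the assumption that replies are logged with an "OFFER" keyword are hypothetical, so adjust the regex to whatever your bootpd verbose log actually emits.

```python
import re
from collections import Counter

# Hypothetical sample in the bootpd log format shown above; in practice
# you would read /var/log/system.log (or wherever bootpd logs) instead.
LOG = """\
Jan 24 12:09:47 fileserver bootpd[73839]: DHCP DISCOVER [en1]: 1,0:23:32:c1:31:c3
Jan 24 12:09:50 fileserver bootpd[73839]: DHCP DISCOVER [en1]: 1,0:23:32:c1:31:c3
Jan 24 12:10:03 fileserver bootpd[73839]: DHCP OFFER [en1]: 1,0:16:cb:aa:bb:cc
"""

# Matches "DHCP <MSGTYPE> [<iface>]: 1,<mac>" — the "OFFER" keyword here is
# an assumption; verify against your own log before relying on it.
MSG = re.compile(r"DHCP (DISCOVER|OFFER|REQUEST|ACK) \[\w+\]: 1,([0-9a-f:]+)")

discovers = Counter()   # MAC -> number of DISCOVERs seen
answered = set()        # MACs that got any reply line

for line in LOG.splitlines():
    m = MSG.search(line)
    if not m:
        continue  # skip "service time" and other lines
    msg_type, mac = m.groups()
    if msg_type == "DISCOVER":
        discovers[mac] += 1
    else:
        answered.add(mac)

# MACs that repeatedly broadcast DISCOVER but never got a reply are the
# ones the server is silently ignoring.
stuck = {mac: n for mac, n in discovers.items() if n >= 2 and mac not in answered}
print(stuck)  # → {'0:23:32:c1:31:c3': 2}
```

If the ignored MACs turn out not to belong to the failing machines, the next thing worth checking is whether the requests are arriving via a relay or a different interface than the one the DHCP scope is bound to.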

  • Restrict clients from a distribution Point

    Is there any way to restrict a distribution point to clients from a particular limiting collection, or at least to change the priority of the distribution point used?

    Our organization has several entities that provide desktop support.  Central IT provides core services such as networking, directory services, and email to the entire organization, and also provides desktop support to a large portion of it.  We offer
    the entities that have their own desktop support staff the use of our tools, such as SCCM.  It is configured with collection limiting so that each entity's admins can administer only the machines in their own OU.  We also allow them to use the content
    that we (Central IT) create: applications, programs, task sequences, and so on.  In some cases the admins distribute our content to their own distribution points, and I am noticing that this causes some of our deployments to pull content from their
    distribution points instead of ours.  We don't have a clear-cut IP layout that would let us break each entity out into its own boundary group.  I just want our clients to use our distribution points, because when they use the other entities'
    distribution points we see slowness in deployments such as OSD.
