Help with script for "Import AD users from CSV with checks first"

Hi everyone.
Can anyone tell me if I am on the right track here?
I want to be able to provide a CSV with users and import them into a new AD domain/forest. But I know there are already some AD accounts, so I need to run some checks first. For now I am checking only on Name and Email.
The issue I have is with the line: write-host "$UserFullName was not found. Creating user.". It outputs this 19 times, which is the same as the number of users in AD in this OU.
Not sure if my method is right.
$allImportedUsers = @()
$allUsersArray = @()
$searchbase = "OU=PL,DC=EUECS,DC=CORP,DC=ARROW,DC=COM"
$path = "OU=Users,OU=PL,DC=EUECS,DC=CORP,DC=ARROW,DC=COM"
$allImportedUsers = Import-CSV polandusers_edited.csv
$allUsersArray = Get-ADObject -LDAPFilter "objectClass=User" -SearchBase $searchbase -Properties * | select *

$allImportedUsers | ForEach-Object {
    $Firstname = $_.givenname
    $Lastname = $_.surname
    $UserMail = $_.EmailAddress
    $UserFullName = $Firstname + " " + $Lastname
    $importUserSAM = $_.Samaccountname

    $allUsersArray | ForEach-Object {
        $currUserFirstname = $_.givenName
        $currUserLastname = $_.sn
        $currUserMail = $_.mail
        $currUserFullName = $_.name
        $currUserSAM = $_.samaccountname
        $currUserDN = $_.DistinguishedName

        if ($currUserFullName -eq $UserFullName) {
            write-host "CSV $UserFullName matched on name"
            $userMatched = 1
        }
        if ($currUserMail -eq $UserMail -AND $userMatched -ne 1) {
            write-host "CSV $UserFullName matched on email"
            $userMatched = 1
        }
        if ($userMatched -ne 1) {
            write-host "$UserFullName was not found. Creating user."
            $userMatched = 0
        }
    }
}
If only i had a cool signature!! ;)

Move the check below to outside the inner ForEach-Object loop:
if ($userMatched -ne 1) {
    write-host "$UserFullName was not found. Creating user."
    $userMatched = 0
}
so the bottom of the outer loop should look like:
    }
    if ($userMatched -ne 1) {
        write-host "$UserFullName was not found. Creating user."
        $userMatched = 0
    }
}
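
For reference, a minimal sketch of how the corrected loop could look, using the same variable names as above. Note that $userMatched also has to be reset at the top of the outer loop, otherwise a match found for one CSV user carries over to the next one; the New-ADUser line is only an illustrative placeholder.

$allImportedUsers | ForEach-Object {
    $UserFullName  = $_.givenname + " " + $_.surname
    $UserMail      = $_.EmailAddress
    $importUserSAM = $_.Samaccountname
    $userMatched   = 0                      # reset for every CSV user

    $allUsersArray | ForEach-Object {
        if ($_.name -eq $UserFullName) {
            write-host "CSV $UserFullName matched on name"
            $userMatched = 1
        }
        elseif ($_.mail -eq $UserMail) {
            write-host "CSV $UserFullName matched on email"
            $userMatched = 1
        }
    }

    if ($userMatched -ne 1) {
        write-host "$UserFullName was not found. Creating user."
        # e.g. New-ADUser -Name $UserFullName -SamAccountName $importUserSAM -EmailAddress $UserMail -Path $path
    }
}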

Similar Messages

  • Powershell script for removing some users from a particular Site Collection

    Hi,
    I am looking for a PowerShell script to delete a few users from a particular Site Collection. I am unable to delete them from the /_catalogs/Users/simple.aspx page, so I need some other way to
    delete users from the site collection.
    My ultimate aim is to have no user profile with "tp_deleted" field's value as 0 in the USERINFO table. Currently there are about 40 odd users with this field's value as 0 and this is affecting my crawling of this content database.

    Thanks for the reply Alex & eHaze,
    I have a content source of root site which crawls all the site collections under it. Out of the 9 site collections, only 8 are getting crawled and 1 doesn't get crawled at all. The error in the crawl logs is 
    The SharePoint item being crawled returned an error when requesting data from the web service. ( Error from SharePoint site: Value does not fall within the expected range. )
    I tried a lot of things, searched over the net and finally found this, which helped me solve the same issue in my development environment. I deleted these users from the userInfo table and ran a full crawl, and the issue was fixed.
    Now since I cannot delete the users from userInfo table directly from PROD environment, I used .../_catalogs/Users/simple.aspx list
    to delete users from this site collection. While some of the users I could delete, quite a few I could not. Clicking on the profile redirected me to the home page rather than the info page of the profile. 
    This is why I have to delete these users from the site collection.
    Alex - the link you shared, I guess it is for a web application level.
    eHaze - the script you shared throws this error:
    Get-SPSite : Cannot find an SPSite object with Id or Url: http://dev-apps/divisions/BT.
    At C:\PowerShell Scripts\DeleteUserFromSiteCollection1.ps1:4 char:19
    + $site = get-spsite <<<< $siteURL
    + CategoryInfo : InvalidData: (Microsoft.Share...SPCmdletGetSite:SPCmdletGetSite) [Get-SPSite], SPCmdletPipeBindException
    + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletGetSite
    You cannot call a method on a null-valued expression.
    At C:\PowerShell Scripts\DeleteUserFromSiteCollection1.ps1:9 char:27
    + $site.SiteUsers.Remove <<<< ($LoginName)
    + CategoryInfo : InvalidOperation: (Remove:String) [], RuntimeException
    + FullyQualifiedErrorId : InvokeMethodOnNull
    hope this info helps.
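
    For anyone hitting the same wall, here is a minimal sketch of the approach eHaze's script takes, assuming the SharePoint 2010 Management Shell and with the site URL and login name as placeholders to adjust for your environment. The Get-SPSite error above usually means the URL passed in did not resolve to an existing site collection (or the account running the shell has no access to it).

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $siteUrl   = "http://server/sites/yoursite"   # placeholder site collection URL
    $loginName = "DOMAIN\olduser"                 # placeholder login of the user to remove

    $site = Get-SPSite $siteUrl -ErrorAction Stop
    $web  = $site.RootWeb

    # Remove the account from the site collection's user information list
    $web.SiteUsers.Remove($loginName)
    $web.Update()

    $site.Dispose()

    The same removal can also be done with the Remove-SPUser cmdlet (Remove-SPUser -Identity $loginName -Web $siteUrl); either way the account is removed from the site collection's user list.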

  • Need a script to import the data from flat file

    Hi Friends,
    Does anyone have a script to import data from flat files into an Oracle database (Linux OS)? I have to automate the script to run every 30 minutes, check for any flat files in the Incoming directory, and process them without user interaction.
    Thanks.
    Srini

    Here is my init.ora file
    # $Header: init.ora 06-aug-98.10:24:40 atsukerm Exp $
    # Copyright (c) 1991, 1997, 1998 by Oracle Corporation
    # NAME
    # init.ora
    # FUNCTION
    # NOTES
    # MODIFIED
    # atsukerm 08/06/98 - fix for 8.1.
    # hpiao 06/05/97 - fix for 803
    # glavash 05/12/97 - add oracle_trace_enable comment
    # hpiao 04/22/97 - remove ifile=, events=, etc.
    # alingelb 09/19/94 - remove vms-specific stuff
    # dpawson 07/07/93 - add more comments regarded archive start
    # maporter 10/29/92 - Add vms_sga_use_gblpagfile=TRUE
    # jloaiza 03/07/92 - change ALPHA to BETA
    # danderso 02/26/92 - change db_block_cache_protect to dbblock_cache_p
    # ghallmar 02/03/92 - db_directory -> db_domain
    # maporter 01/12/92 - merge changes from branch 1.8.308.1
    # maporter 12/21/91 - bug 76493: Add control_files parameter
    # wbridge 12/03/91 - use of %c in archive format is discouraged
    # ghallmar 12/02/91 - add global_names=true, db_directory=us.acme.com
    # thayes 11/27/91 - Change default for cache_clone
    # jloaiza 08/13/91 - merge changes from branch 1.7.100.1
    # jloaiza 07/31/91 - add debug stuff
    # rlim 04/29/91 - removal of char_is_varchar2
    # Bridge 03/12/91 - log_allocation no longer exists
    # Wijaya 02/05/91 - remove obsolete parameters
    # Example INIT.ORA file
    # This file is provided by Oracle Corporation to help you customize
    # your RDBMS installation for your site. Important system parameters
    # are discussed, and example settings given.
    # Some parameter settings are generic to any size installation.
    # For parameters that require different values in different size
    # installations, three scenarios have been provided: SMALL, MEDIUM
    # and LARGE. Any parameter that needs to be tuned according to
    # installation size will have three settings, each one commented
    # according to installation size.
    # Use the following table to approximate the SGA size needed for the
    # three scenarious provided in this file:
    # -------Installation/Database Size------
    # SMALL MEDIUM LARGE
    # Block 2K 4500K 6800K 17000K
    # Size 4K 5500K 8800K 21000K
    # To set up a database that multiple instances will be using, place
    # all instance-specific parameters in one file, and then have all
    # of these files point to a master file using the IFILE command.
    # This way, when you change a public
    # parameter, it will automatically change on all instances. This is
    # necessary, since all instances must run with the same value for many
    # parameters. For example, if you choose to use private rollback segments,
    # these must be specified in different files, but since all gc_*
    # parameters must be the same on all instances, they should be in one file.
    # INSTRUCTIONS: Edit this file and the other INIT files it calls for
    # your site, either by using the values provided here or by providing
    # your own. Then place an IFILE= line into each instance-specific
    # INIT file that points at this file.
    # NOTE: Parameter values suggested in this file are based on conservative
    # estimates for computer memory availability. You should adjust values upward
    # for modern machines.
    # You may also consider using Database Configuration Assistant tool (DBCA)
    # to create INIT file and to size your initial set of tablespaces based
    # on the user input.
    # replace DEFAULT with your database name
    db_name=DEFAULT
    db_files = 80 # SMALL
    # db_files = 400 # MEDIUM
    # db_files = 1500 # LARGE
    db_file_multiblock_read_count = 8 # SMALL
    # db_file_multiblock_read_count = 16 # MEDIUM
    # db_file_multiblock_read_count = 32 # LARGE
    db_block_buffers = 100 # SMALL
    # db_block_buffers = 550 # MEDIUM
    # db_block_buffers = 3200 # LARGE
    shared_pool_size = 3500000 # SMALL
    # shared_pool_size = 5000000 # MEDIUM
    # shared_pool_size = 9000000 # LARGE
    log_checkpoint_interval = 10000
    processes = 50 # SMALL
    # processes = 100 # MEDIUM
    # processes = 200 # LARGE
    parallel_max_servers = 5 # SMALL
    # parallel_max_servers = 4 x (number of CPUs) # MEDIUM
    # parallel_max_servers = 4 x (number of CPUs) # LARGE
    log_buffer = 32768 # SMALL
    # log_buffer = 32768 # MEDIUM
    # log_buffer = 163840 # LARGE
    # audit_trail = true # if you want auditing
    # timed_statistics = true # if you want timed statistics
    max_dump_file_size = 10240 # limit trace file size to 5 Meg each
    # Uncommenting the line below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest = disk$rdbms:[oracle.archive]
    # log_archive_format = "T%TS%S.ARC"
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    # rollback_segments = (name1, name2)
    # If using public rollback segments, define how many
    # rollback segments each instance will pick up, using the formula
    # # of rollback segments = transactions / transactions_per_rollback_segment
    # In this example each instance will grab 40/5 = 8:
    # transactions = 40
    # transactions_per_rollback_segment = 5
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    global_names = TRUE
    # Edit and uncomment the following line to provide the suffix that will be
    # appended to the db_name parameter (separated with a dot) and stored as the
    # global database name when a database is created. If your site uses
    # Internet Domain names for e-mail, then the part of your e-mail address after
    # the '@' is a good candidate for this parameter value.
    # db_domain = us.acme.com      # global database name is db_name.db_domain
    # FOR DEVELOPMENT ONLY, ALWAYS TRY TO USE SYSTEM BACKING STORE
    # vms_sga_use_gblpagfil = TRUE
    # FOR BETA RELEASE ONLY. Enable debugging modes. Note that these can
    # adversely affect performance. On some non-VMS ports the db_block_cache_*
    # debugging modes have a severe effect on performance.
    #_db_block_cache_protect = true # memory protect buffers
    #event = "10210 trace name context forever, level 2" # data block checking
    #event = "10211 trace name context forever, level 2" # index block checking
    #event = "10235 trace name context forever, level 1" # memory heap checking
    #event = "10049 trace name context forever, level 2" # memory protect cursors
    # define parallel server (multi-instance) parameters
    #ifile = ora_system:initps.ora
    # define two control files by default
    control_files = (ora_control1, ora_control2)
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = TRUE
    # Uncomment the following line, if you want to use some of the new 8.1
    # features. Please remember that using them may require some downgrade
    # actions if you later decide to move back to 8.0.
    #compatible = 8.1.0
    Thanks.
    Srini

  • Automate script for import data

    automate script for importing data to the cube from txt files.
    Ex: - I have below 2 files for import
    aaaa.txt
    bbbb.txt
    For the above simply i can write esscmd or Maxl script for importing files .
    Esscmd :
    IMPORT 3 "\aaaa.txt" 4 "N";
    IMPORT 3 "\bbbb.txt" 4 "N"
    my question , suppose if some more files added to the existing files, for example cccc.txt got added ,how can i write a code for this?

    Well I guess this will do
    dataDir=" " # Give the data directory path here
    maxlFile="maxl1.msh"
    cd $dataDir
    for file in `ls *` ;
    do
    maxlStmt="import database sample.basic data from data_file '$file' using rules_file 'xxx.rul'
    on error write to 'xxx.log';";
    printf $maxlStmt >> $maxlFile
    done
    The * glob picks up all the files in the directory. If you want to be more specific, and you think your data folder may contain other files, you can use *.txt if the files you want to load have a .txt extension.
    Well, I haven't tested this as I am on vacation, so just follow a similar approach.
    This only covers the third point you mentioned.

  • Hi am having issues with getting the list of users from the Active Directory

    Hi, I am having issues with getting the list of users from the Active Directory. Can anyone help me with this?

    Hi Jason,
    Try this:
    1.  In Ultiboard select Tools>>Netlist Editor>>Pins, press the Delete button
    2.  Select all nets in the Select the Net to Delete dialog and then press the Delete button.  This will clear all nets in the layout, don't worry all traces, parts are still on the design.
    3.  Go to Multisim and select Transfer>>Forward annotate to Ultiboard.    This will add all nets that you removed back and it should fix the pin problem
    Tien P.
    National Instruments
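
    Coming back to the original Active Directory question, a minimal PowerShell sketch for pulling the user list, assuming the ActiveDirectory module (RSAT) is installed:

    Import-Module ActiveDirectory

    # List every user in the domain with a few common attributes and save to CSV
    Get-ADUser -Filter * -Properties mail |
        Select-Object Name, SamAccountName, mail |
        Sort-Object Name |
        Export-Csv .\all_ad_users.csv -NoTypeInformation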

  • Bulk Create Users from CSV: Error: "Put": "There is no such object on the server."?

    Hi,
    I'm using the below PowerShell script, by @hicannl which I found on the MS site, for bulk creating users from a CSV file.
    I've had to edit it a bit, adding some additional user fields, removing others, and changing the sAMAccountName from first initial + lastname to firstname.lastname. However, now when I run it, I get an error saying:
    "[ERROR]     Oops, something went wrong: The following exception occurred while retrieving member "Put": "There is no such object on the server."
    The account is created in the default OU, with the correct firstname.lastname format, but then it seems to error at setting the "Set an ExtensionAttribute" section. However I can't see why!
    Any help would be appreciated!
    # ERROR REPORTING ALL
    Set-StrictMode -Version latest
    # LOAD ASSEMBLIES AND MODULES
    Try {
        Import-Module ActiveDirectory -ErrorAction Stop
    } Catch {
        Write-Host "[ERROR]`t ActiveDirectory Module couldn't be loaded. Script will stop!"
        Exit 1
    }

    #STATIC VARIABLES
    $path = Split-Path -parent $MyInvocation.MyCommand.Definition
    $newpath = $path + "\import_create_ad_users_test.csv"
    $log = $path + "\create_ad_users.log"
    $date = Get-Date
    $addn = (Get-ADDomain).DistinguishedName
    $dnsroot = (Get-ADDomain).DNSRoot
    $i = 1
    $server = "localserver.ourdomain.net"

    #START FUNCTIONS
    Function Start-Commands {
        Create-Users
    }

    Function Create-Users {
        "Processing started (on " + $date + "): " | Out-File $log -append
        "--------------------------------------------" | Out-File $log -append
        Import-CSV $newpath | ForEach-Object {
            If (($_.Implement.ToLower()) -eq "yes") {
                If (($_.GivenName -eq "") -Or ($_.LastName -eq "")) {
                    Write-Host "[ERROR]`t Please provide valid GivenName, LastName. Processing skipped for line $($i)`r`n"
                    "[ERROR]`t Please provide valid GivenName, LastName. Processing skipped for line $($i)`r`n" | Out-File $log -append
                } Else {
                    # Set the target OU
                    $location = $_.TargetOU + ",$($addn)"
                    # Set the Enabled and PasswordNeverExpires properties
                    If (($_.Enabled.ToLower()) -eq "true") { $enabled = $True } Else { $enabled = $False }
                    If (($_.PasswordNeverExpires.ToLower()) -eq "true") { $expires = $True } Else { $expires = $False }
                    If (($_.ChangePasswordAtLogon.ToLower()) -eq "true") { $changepassword = $True } Else { $changepassword = $False }
                    # A check for the country, because those were full names and need
                    # to be land codes in order for AD to accept them. I used Netherlands
                    # as example
                    If ($_.Country -eq "Netherlands") { $_.Country = "NL" }
                    ElseIf ($_.Country -eq "Austria") { $_.Country = "AT" }
                    ElseIf ($_.Country -eq "Australia") { $_.Country = "AU" }
                    ElseIf ($_.Country -eq "United States") { $_.Country = "US" }
                    ElseIf ($_.Country -eq "Germany") { $_.Country = "DE" }
                    ElseIf ($_.Country -eq "Italy") { $_.Country = "IT" }
                    Else { $_.Country = "" }
                    # Replace dots / points (.) in names, because AD will error when a
                    # name ends with a dot (and it looks cleaner as well)
                    $replace = $_.Lastname.Replace(".","")
                    $lastname = $replace
                    # Create sAMAccountName according to this 'naming convention':
                    # <FirstName>"."<LastName> for example
                    # joe.bloggs
                    $sam = $_.GivenName.ToLower() + "." + $lastname.ToLower()
                    Try { $exists = Get-ADUser -LDAPFilter "(sAMAccountName=$sam)" -Server $server }
                    Catch { }
                    If (!$exists) {
                        # Set all variables according to the column names in the Excel
                        # sheet / import CSV. The names can differ in every project, but
                        # if the names change, make sure to change them below as well.
                        $setpass = ConvertTo-SecureString -AsPlainText $_.Password -force
                        Try {
                            Write-Host "[INFO]`t Creating user : $($sam)"
                            "[INFO]`t Creating user : $($sam)" | Out-File $log -append
                            New-ADUser $sam -GivenName $_.GivenName `
                            -Surname $_.LastName -DisplayName ($_.LastName + ", " + $_.GivenName) `
                            -StreetAddress $_.StreetAddress -City $_.City `
                            -Country $_.Country -UserPrincipalName ($sam + "@" + $dnsroot) `
                            -Company $_.Company -Department $_.Department `
                            -Title $_.Title -AccountPassword $setpass `
                            -PasswordNeverExpires $expires -Enabled $enabled `
                            -ChangePasswordAtLogon $changepassword -server $server
                            Write-Host "[INFO]`t Created new user : $($sam)"
                            "[INFO]`t Created new user : $($sam)" | Out-File $log -append
                            $dn = (Get-ADUser $sam).DistinguishedName
                            # Set an ExtensionAttribute
                            If ($_.ExtensionAttribute1 -ne "" -And $_.ExtensionAttribute1 -ne $Null) {
                                $ext = [ADSI]"LDAP://$dn"
                                $ext.Put("extensionAttribute1", $_.ExtensionAttribute1)
                                Try { $ext.SetInfo() }
                                Catch { Write-Host "[ERROR]`t Couldn't set the Extension Attribute : $($_.Exception.Message)" }
                            }
                            # Move the user to the OU ($location) you set above. If you don't
                            # want to move the user(s) and just create them in the global Users
                            # OU, comment out the block below
                            If ([adsi]::Exists("LDAP://$($location)")) {
                                Move-ADObject -Identity $dn -TargetPath $location
                                Write-Host "[INFO]`t User $sam moved to target OU : $($location)"
                                "[INFO]`t User $sam moved to target OU : $($location)" | Out-File $log -append
                            } Else {
                                Write-Host "[ERROR]`t Targeted OU couldn't be found. Newly created user wasn't moved!"
                                "[ERROR]`t Targeted OU couldn't be found. Newly created user wasn't moved!" | Out-File $log -append
                            }
                            # Rename the object to a good looking name (otherwise you see
                            # the 'ugly' shortened sAMAccountNames as a name in AD). This
                            # can't be set right away (as sAMAccountName) due to the 20
                            # character restriction
                            $newdn = (Get-ADUser $sam).DistinguishedName
                            Rename-ADObject -Identity $newdn -NewName ($_.LastName + ", " + $_.GivenName)
                            Write-Host "[INFO]`t Renamed $($sam) to $($_.GivenName) $($_.LastName)`r`n"
                            "[INFO]`t Renamed $($sam) to $($_.GivenName) $($_.LastName)`r`n" | Out-File $log -append
                        } Catch {
                            Write-Host "[ERROR]`t Oops, something went wrong: $($_.Exception.Message)`r`n"
                        }
                    } Else {
                        Write-Host "[SKIP]`t User $($sam) ($($_.GivenName) $($_.LastName)) already exists or returned an error!`r`n"
                        "[SKIP]`t User $($sam) ($($_.GivenName) $($_.LastName)) already exists or returned an error!" | Out-File $log -append
                    }
                }
            } Else {
                Write-Host "[SKIP]`t User $($sam) ($($_.GivenName) $($_.LastName)) will be skipped for processing!`r`n"
                "[SKIP]`t User $($sam) ($($_.GivenName) $($_.LastName)) will be skipped for processing!" | Out-File $log -append
            }
            $i++
            "--------------------------------------------" + "`r`n" | Out-File $log -append
        }
    }

    Write-Host "STARTED SCRIPT`r`n"
    Start-Commands
    Write-Host "STOPPED SCRIPT"

    Here is one I have used.  It can be easily updated to accommodate many needs.
    function New-RandomPassword {
        $pwdlength = 10
        $bytes = [byte[]][byte]1
        $pwd = [string]""
        $rng = New-Object System.Security.Cryptography.RNGCryptoServiceProvider
        # keep generating until the password contains a lower-case letter, an upper-case letter and a digit
        while (!(($pwd -cmatch "[a-z]") -and ($pwd -cmatch "[A-Z]") -and ($pwd -match "[0-9]"))) {
            $pwd = ""
            for ($i = 1; $i -le $pwdlength; $i++) {
                $rng.GetBytes($bytes)
                $rnd = $bytes[0] -as [int]
                $int = ($rnd % 74) + 48
                $chr = $int -as [char]
                $pwd = $pwd + $chr
            }
        }
        $pwd
    }

    function AddUser {
        Param(
            [Parameter(Mandatory=$true)]
            [object]$user
        )
        $pwd = New-RandomPassword
        $random = Get-Random -Minimum 100 -Maximum 999
        $surname = "$($user.Lastname)$random"
        $samaccountname = "$($user.Firstname.Substring(0,1))$surname"
        $userprops = @{
            Name              = $samaccountname
            SamAccountName    = $samaccountname
            UserPrincipalName = "$samaccountname@nagara.ca"   # the address was obfuscated in the original post; domain inferred from the Path below
            GivenName         = $user.Firstname
            Surname           = $surname
            AccountPassword   = (ConvertTo-SecureString $pwd -AsPlainText -Force)
            Path              = 'OU=Test,DC=nagara,DC=ca'
        }
        New-AdUser @userprops -Enabled:$true -PassThru |
            Add-Member -MemberType NoteProperty -Name Password -Value $pwd -PassThru
    }

    Import-CSV -Path c:\users\administrator\desktop\users.csv |
        ForEach-Object {
            AddUser $_
        } |
        Select SamAccountName, Firstname, Lastname, Password |
        Export-Csv \accountinformation.csv -NoTypeInformation
    ¯\_(ツ)_/¯

  • Restrict users from changing password on first login?

    Hi,
    I am doing mass user upload into UME using script import. How should I use the below functionality to restrict the users from changing password on first login?
    IUserAccount uacc =UMFactory.getUserAccountFactory().newUserAccount(uid,newUser.getUniqueID());
    uacc.setPassword("saras");
    uacc.setPasswordChangeRequired(false);
    How to implement above functionality with mass upload from script import?
    Thanks
    Srinivas
    Edited by: srinivas M on Jan 20, 2009 9:05 PM

    Hi Srinivas,
    Try this API:
    http://help.sap.com/javadocs/NW04S/current/se/com/sap/security/api/IUserAccount.html#isPasswordChangeRequired()
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/40d562b7-1405-2a10-dfa3-b03148a9bd19
    This document shows how to retrieve the password; in the same place you can disable the field.
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/10649c90-24af-2b10-1086-ea0667ec3655
    Thanks

  • Looking for Help with Active Directory Script to Remove a User from msExchDelegateListLink

    I'm struggling to put together an Active Directory Powershell script that will remove a specific user from the msExchDelegateListLink.
    It looks like Set-AdUser would do the trick. I would want to remove a user in the format of
    {CN=Wood\, Sandy,OU=Networking,OU=IT,DC=my,DC=domain,DC=com}
    Has anyone succeeded in doing this before?
    Orange County District Attorney

    I use this:
    $user = '<user name>'
    $userDN = Get-ADUser $user | select -ExpandProperty DistinguishedName
    $delegates = Get-ADUser $user -Properties msExchDelegateListBL |
        select -ExpandProperty msExchDelegateListBL
    foreach ($delegate in $delegates) {
        Set-ADUser $delegate -Remove @{msExchDelegateListLink = "$UserDN"}
    }
    Never quite got around to putting it into a function.

  • Need help with a script for moving bulk users to another OU and removing/assigning groups

    I've never used PowerShell before and have been asked to track down a script that can move bulk users from one OU to another, and remove and assign new group membership. I've been googling it for about 30 minutes and haven't really gotten anywhere. If
    somebody can point me in the right direction or give some tips I'd greatly appreciate it. I'm sure this kind of task has been done by several people in similar environments I just haven't been able to find those people/examples. 

    Here's what I've got so far...
    Moving to new OU
    CSV constructed like below...
    DN,TargetOU
    "CN=John R,OU=BB,OU=ES,OU=Students,OU=OSD,DC=usd233,DC=local","OU=PRT,OU=MS,OU=Students,OU=OSD,DC=usd233,DC=local"

    Import-Module activedirectory
    $UserList = Import-Csv "c:\yourCSVhere.csv"
    foreach ($User in $UserList) {
        $User.DN
        $User.TargetOU
        Move-ADObject -Identity $User.DN -TargetPath $User.TargetOU
    }
    Would this work? I also need to remove the user from two groups and add them to two different groups as well. Would I need to use the addUsertoGroups and removeUserfromGroups commands?
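
    The loop itself should handle the move. There are no addUsertoGroups / removeUserfromGroups cmdlets in the ActiveDirectory module, though; the group changes are normally done with Add-ADGroupMember and Remove-ADGroupMember. A minimal sketch building on the loop above (the group names are placeholders, and the membership changes are done before the move because the DN in the CSV is no longer valid once the object has been moved):

    Import-Module ActiveDirectory
    $UserList = Import-Csv "c:\yourCSVhere.csv"
    foreach ($User in $UserList) {
        # Change group memberships first, while the CSV DN still points at the object
        Remove-ADGroupMember -Identity "OldGroup1" -Members $User.DN -Confirm:$false
        Remove-ADGroupMember -Identity "OldGroup2" -Members $User.DN -Confirm:$false
        Add-ADGroupMember    -Identity "NewGroup1" -Members $User.DN
        Add-ADGroupMember    -Identity "NewGroup2" -Members $User.DN

        # Then move the account to its new OU
        Move-ADObject -Identity $User.DN -TargetPath $User.TargetOU
    }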

  • With 3.6.12, Help menu item "For Internet Explorer Users" results in a "Page Not Found" error. Is this a known bug? Is it being addressed?

    Clicking the 3.6.12 Help menu item "For Internet Explorer Users" results in a "Page Not Found" error.
    The URL of the resultant page is https://support.mozilla.com/en-US/kb/Windows%20start%20page?as=u
    Is this a known bug, if so, is it being addressed? If so, someone needs to work faster.
    If it is not a known bug, someone needs to get right on it - you want people to switch to Firefox and then there is this bug -- not a good way to impress Internet Explorer users.

    Hi Claudio Roca,
    I have encountered the same problem. Have you found a solution for this issue? I would much appreciate it if you are willing to share your solution and email it to [email protected] Thank you so much.
    Regards,
    Hau Chee

  • Script for Importing data

    Hi All,
    I want to run an import of all the datafiles with the help of a script.
    I have many datafiles with different users and I want to run a DB import.
    Please help me with how to write the script.
    Thanking You

    GaurAVS,
    I don't know the reasons that made you select this option; did you try using any other options to move data from source to destination? A couple of questions for you:
    1. One single export dump for each schema user or is it more than one file?
    2. I assume from your earlier post, you got more than 50 schema users in your database?
    Here is a simple bash script, assuming you have one export dump file for each schema user. Make a directory called data and move all the dump files there; please read the script carefully for more instructions and modify it according to your needs.
    In addition, before running this script on the Linux box, run the following from the command prompt:
    $dos2unix import.sh
    #!/bin/bash
    function importfile
    {
        echo "importing $file for user $1"
        imp $1/$2 full=y file=$file log=./data/`basename $file dmp`log
        test=$?
        if [ $test -ne 0 ]
        then
            echo "Importing for user $1 failed ..."
            exit 1;
        fi
    }

    function listfile
    {
        # you have to supply username and password here
        for file in ./data/*.dmp
        do
            echo "importing $file for $1"
            # If you used the username as the filename for each export dump
            unset username
            username=`basename $file .dmp`
            echo "Username is set to $username"
            # Now to get the password for this username; you can create a file with
            # this format: username=password
            # $1 -- Username
            # $2 -- Password
            unset pw
            pw=`cat ./data/pwd.txt | grep $username | cut -d'=' -f2`
            echo "Password is set to $pw"
            # You have to write a small function to get username and password
            importfile $username $pw
        done
    }

    listfile $1 $2

    Edited by: OrionNet on Dec 20, 2008 11:47 PM

  • Possible to delete Offline Files content for a specific user from the Client Side Cache (CSC) ?.

    Hello Everyone,
    We would like to implement a script to delete the offline files in the Client Side Cache (CSC) for a nominated user (on Windows 7 x64 enterprise).
    I am aware that;
    1. We can use a registry value to flush the entire CSC cache (for all users) next time the machine reboots.
    2. If we delete the user's local profile it appears that Windows 7 also removes their content from the local CSC.
    However, we would like to just delete the CSC content for a particular nominated user without having to delete their local user profile.
    In our environment we have many users that share workstations but only use them occasionally. We don't use roaming profile so we would like to retain all the users' local profiles but still delete the CSC content for any users that haven't
    logged on in a week.
    Any ideas or info would be appreciated !
    Thanks, Makes

    Hi,
    I don't think this is possible.
    If you want to achieve it via script, I suggest you post in the official scripting forum for more professional help:
    http://social.technet.microsoft.com/Forums/scriptcenter/en-US/home?category=scripting
    The reason why we recommend posting appropriately is you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Karen Hu
    TechNet Community Support

  • Need help writing script for Digital DAQ

    Hello,
    I am trying to write an acquisition program for digital input coming from a Hall sensor.  I will be passing a magnet over a Hall sensor on a motor and want to record the number of rotations.  I have pasted together a script based on various websites, but it isn't functioning the way I want.  I would like to be able to collect data points indefinitely, or for a fixed period, and then print out the resulting count.  I have attempted to use the ctypes wrapper in Python.  The NI functions remain unchanged though, so if you don't know Python it shouldn't be a problem to read.  I am using an NI PCI-6503.
    The code:
    # C:\Program Files\National Instruments\NI-DAQ\Examples\DAQmx ANSI C\Analog In\Measure Voltage\Acq-Int Clk\Acq-IntClk.c
    import ctypes
    import numpy
    nidaq = ctypes.windll.nicaiu # load the DLL
    # Setup some typedefs and constants
    # to correspond with values in
    # C:\Program Files\National Instruments\NI-DAQ\DAQmx ANSI C Dev\include\NIDAQmx.h
    # the typedefs
    int32 = ctypes.c_long
    uInt32 = ctypes.c_ulong
    uInt64 = ctypes.c_ulonglong
    float64 = ctypes.c_double
    TaskHandle = uInt32
    # the constants
    DAQmx_Val_ChanForAllLines = int32(0)
    def CHK(err):
    """a simple error checking routine"""
         if err < 0:
              buf_size = 200
              buf = ctypes.create_string_buffer('\000' * buf_size)
              nidaq.DAQmxGetErrorString(err,ctypes.byref(buf),buf_size)
              raise RuntimeError('nidaq call failed with error %d: %s'%(err,repr(buf.value)))
    # initialize variables
    taskHandle = TaskHandle(0)
    max_num_samples = 1000
    # array to gather points
    data = numpy.zeros((max_num_samples,),dtype=numpy.float64)
    # now, on with the program
    CHK(nidaq.DAQmxCreateTask("",ctypes.byref(taskHandle)))
    CHK(nidaq.DAQmxCreateDIChan(taskHandle,'Dev1/port1/line1',"", DAQmx_Val_ChanForAllLines))
    CHK(nidaq.DAQmxStartTask(taskHandle))
    read = int32()
    CHK(nidaq.DAQmxReadDigitalU32(taskHandle,int32(-1),float64(1),DAQmx_Val_GroupByChannel,data.ctypes.data,max_num_samples,ctypes.byref(read),None))
    print "Acquired %d points"%(read.value)
    if taskHandle.value != 0:
         nidaq.DAQmxStopTask(taskHandle)
         nidaq.DAQmxClearTask(taskHandle)
    print "End of program, press Enter key to quit"
    raw_input()
    This script returns "Acquired 1 points" immediately, regardless of how many times the hall sensor has been switched.
    I apologize for the sloppy code, but I understand very little of the underlying principles of these DAQ functions, despite reading their documentation.  Seeing as the code does not return an error, I am hoping there is just a minor tweak or two that will be fairly obvious to someone with more experience.
    Thanks

    Hey Daniel,
    Most of this script is from http://www.scipy.org/Cookbook/Data_Acquisition_with_NIDAQmx.
    I needed to run the ReadDigitalU32 function in a Python loop because my DAQ device does not support timed acquisition and I don't have LabVIEW.
    import numpy
    import time
    import ctypes
    nidaq = ctypes.windll.nicaiu # load the DLL
    # Setup some typedefs and constants
    # to correspond with values in
    # C:\Program Files\National Instruments\NI-DAQ\DAQmx ANSI C Dev\include\NIDAQmx.h
    # the typedefs
    int32 = ctypes.c_long
    uInt32 = ctypes.c_ulong
    uInt64 = ctypes.c_ulonglong
    float64 = ctypes.c_double
    TaskHandle = uInt32
    # the constants
    DAQmx_Val_Cfg_Default = int32(-1)
    DAQmx_Val_Volts = 10348
    DAQmx_Val_Rising = 10280
    DAQmx_Val_FiniteSamps = 10178
    DAQmx_Val_GroupByChannel = 0
    DAQmx_Val_ChanForAllLines = int32(0)
    # initialize variables
    taskHandle = TaskHandle(0)
    max_num_samples = 10000
    data = numpy.zeros((max_num_samples,),dtype=numpy.float64)
    read = uInt32()
    bufferSize = uInt32(10000)
    def CHK(err):
    """a simple error checking routine"""
         if err < 0:
              buf_size = 200
              buf = ctypes.create_string_buffer('\000' * buf_size)
              nidaq.DAQmxGetErrorString(err,ctypes.byref(buf),buf_size)
              raise RuntimeError('nidaq call failed with error %d: %s'%(err,repr(buf.value)))
    # Start up DAQ
    CHK(nidaq.DAQmxCreateTask("",ctypes.byref(taskHandle)))
    CHK(nidaq.DAQmxCreateDIChan(taskHandle,"Dev?/port?/line?","", DAQmx_Val_ChanForAllLines))
    CHK(nidaq.DAQmxStartTask(taskHandle))
    #initialize lists and time 1 
    t1 = time.time()
    times = []
    points = []
    while True:
         try:
              CHK(nidaq.DAQmxReadDigitalU32(taskHandle,int32(-1),float64(1),
              DAQmx_Val_GroupByChannel,data.ctypes.data,uInt32(2*bufferSize.value)
             ,ctypes.byref(read), None))
              t2 = time.time()
             #data[0] is the position in the numpy array 'data' that holds the value aquired
             #from your device
             points.append(data[0])
             times.append(t2 - t1) 
             t1 = time.time()
             #Choose time interval between data aquisition
             time.sleep(.02)
         except KeyboardInterrupt:
              break
    #process the lists of points and time intervals as you so choose
    Thanks for the help!
    Paul

  • ALUI Gateway Not Returning Scripts for Subset of Users

    We have a problem where the ALUI gateway is not returning some .NET scripts for a subset of users. We have the ALUI 6.5 portal and are using the .NET Accelerator 3.1.
    The situation is that this subset of users request one of our portal pages via https, which then reaches through our firewall to our remote server which is running the .NET portlet. The .NET page is served and returned to the users correctly and quickly, but this particular subset of users do not see the result rendered in their browsers for about 3 minutes. A view html source in the browser, as well as tools like Fiddler, show the page is indeed in the browser, but it is stuck trying to request some .NET scripts, and only displays the page when those requests timeout.
    The .NET scripts that are problems are both WebResource.axd and ScriptResource.axd, which in some cases are in our .NET portlets because of the .NET framework itself, but in other cases they are there only because of the ALUI portal itself, when it munges the .NET portlet to handle multiple server forms and validators and such. These .axd scripts are gatewayed so that the client browser requests them through the ALUI gateway, which in turn requests them through our firewall to our remote server -- which always serves these scripts correctly and quickly according to the IIS logs. The problem seems to lie in the ALUI gateway, as it is receiving these scripts correctly and quickly, but it is not returning them to this subset of users. Instead the ALUI gateway seems to be processing for about 3 minutes, and eventually returns an html error page, which of course the client never sees since it is expecting javascript, but we can capture the error page via Fiddler and its just telling us there was a timeout -- the client browser just notes that there is a javascript error.
    The really bizarre part is that this only happens for a subset of users, which amounts to about 20% of our users. There are 2 things that delineate these users that we have found so far. First, these users have email addresses that are 27 - 30 characters long, and the email address is our login id. Note that both shorter and longer email addresses are OK, so there is not some limit to email addresses like this might sound like at first. Secondly, these users have to be in a particular branch of our ldap store, which means they are replicated across to the portal in a particular group. We can move these "bad" users into another branch of our ldap store and once they are replicated to the portal then they work fine, and then if we move them back they return to not working. We cannot find any other difference in our ldap branches or in the corresponding ALUI groups, plus its only the ones in that particular branch with the email lengths in that very specific range.
    The gatewayed requests for these scripts vary by user since the PTARGS in the gatewayed request include the integer userid, but that does not seem to matter because we can have a "good" user successfully request the script with a "bad" user's id, and we can have a "bad" user fail to successfully request the script with a "good" user's id. That seems to point to maybe the authentication cookie being the differentiating factor that determines whether or not a gatewayed request for one of these script files will succeed or fail. So far we have only seen the problem with these particular .net axd scripts, but that may simply be because we don't have many, if any, other scripts or resources that need to be gatewayed since we usually put resources on our imageserver -- these being different because .NET and/or the ALUI portal puts these references in there for us whether we like it or not. Long-term we can re-architect our .NET portlets to not get have these axd scripts, although as mentioned earlier, we also see the ALUI portal put these axd scripts in our portlets as part of their munging process -- so that is not in our control completely. We do need to test if this subset of users can successfully request other gatewayed resources or not -- this is actually the first time I thought of that test case, so all I can say right now is its axd scripts that we know are problems, but it may or may not be a bigger problem.
    One last comment, as we appear to have found a work-around, but it does not make sense at all, and its not our preferred solution, so we still very much believe there is a problem elsewhere -- most likely in the ALUI gateway, but possibly somehow related to authentication that we do not understand. Our work-around that so far seems to work is to make our remote server be accessed via https instead of http -- which matches the way the client browsers call our portal (https). Again that first doesn't make sense, since this is only a problem for a small subset of users -- obviously calling our remote server via http works successfully for all other users, so its not just is a simple case that we must match protocols or it won't work. We also use http successfully for our calls to the remote server for portlets that are Java, although its possible that they don't have any gatewayed resources. But we also would just prefer to not use https for our internal calls in our own network as there is no need for the extra overhead -- and by the way our dev and qa environments do use http even for these .NET portlets and do not have the same problem. What's different in our production environment? The only things that should be different are that we have multiple portal servers and multiple remote servers that are load balanced (not sure that's the right term for how the remote servers are combined) -- and of course we have a firewall between them that does not exist in dev or qa.
    So we would very much appreciate any thoughts on this, whether you've seen something like it before, or just have some additional insight into the gateway and/or authentication process that seems to be the issue.
    Thanks, Paul Wilson

    We ran into this problem when using the Microsoft ReportViewer control. In our case, we found that the portal gateway malformed the URLs containing webresource.axd, so the browser was unable to get the correct address for the files. Note that there are usually multiple links to the axd files; they return different resources depending on the query string they get.
    To solve the problem, we ended up with a bit of a hack solution, but it works well. We extracted the resources we needed from the ReportViewer control's assembly using Reflector, and then published them on the image server. The next piece was to override the Render method of the page that hosted the control. In our custom version of Render, we parsed the html of the page, and replaced the contents of the src= elements with pt:images// links. These processed just fine in the portal's transformer, and our resources started showing up.
    Our Render looks something like the following code sample. The "HACKReportViewerControlPortalImageGatewayFix" class has all of the code to do the parsing. In this case, it is specific to the ReportViewer, because it has some special considerations for parsing the URLs. My bet is that your code will be quite custom as well. Therefore, I've not included this piece of code. The important piece below is the invocation of MyBase.Render, which tells the page to render all of its contents. Once that method is done, all of the HTML for the page is in the writer. The ModifyImageTags method then parses the HTML, doing the necessary replacements. Finally, the modified HTML is written to the page's writer, so it can be output following the normal .NET processes. Also note that when parsing for URLs to replace, don't do all of them, just look for the ones containing axd.
    (VB.NET)
    Protected Overrides Sub Render(ByVal writer As System.Web.UI.HtmlTextWriter)
              Dim fixer As New HACKReportViewerControlPortalImageGatewayFix
              MyBase.Render(fixer.GetWriter)
              writer.Write(fixer.ModifyImageTags())
    End Sub
    This works great for images. However, if you are dealing with javascripts, I'm not sure if this will work for you - as some .NET controls send different scripts depending on the browser. For example, in IE, you get more buttons on the toolbar for the ReportViewer, so you get more javascript too. When using FF, you get less buttons, and less script. We didn't have a problem with the scripts, so we haven't needed to solve this one.
    As for timing, this type of solution doesn't take much to put together. You are really just doing some string parsing and replacements. If you are a regex ninja, it's probably even easier. We had our solution working in a day or two.
    An added benefit to this solution is that you are putting less bytes through the portal's gateway, and sending that traffic to the image server instead.

  • Printing Issues with HP D145 and multiple users from G5

    Hi - I have an HP D145 All-In-One connected via USB to a G5 (as described below). The machine has multiple accounts for different members of my family. The printer is directly connected to the G5 via USB but is shared with other Macs. Printing has become increasingly difficult, and we now have the following problems:
    1. Printing will only work from the administrators account.
    2. Printing periodically (daily, generally after a few hours of use) stops from the administrators account
    Solutions tried:
    a. Check set up for the non-administrators accounts....Looks OK for printing
    b. When printing stops - I've tried repairing the permissions with the Disk Utility. This no longer works.
    c. Run Printer Setup Repair (with option to repair all user accounts) - Works for the administrators account providing the computer and printer are restarted, otherwise no effect.
    d. Reinstalling the software - No effect
    It’s beginning to look like I need to replace this printer with one that works.
    You can help me in two ways - how to solve the printer problems, or alternatively suggest a colour All-in-One printer that can work with multiple accounts and does not have the problems mentioned above.
    Many thanks in advance,
    Regards,
    Melvyn

    Below I've copied the fix HP proposed and some additional information that they have sent me.
    1. The proposed fix that we could not make work:
    Hello Melvyn,
    Thank you for contacting HP Total Care.
    Simply reinstalling the software will not be enough to solve the
    problem. Try these steps to resolve the issue at hand:
    From the Administrator account:
    Double-click on the Macintosh Hard Disk.
    Open Applications.
    Click and drag Hewlett Packard Folder to the trash.
    Close all windows.
    Double-click on the Macintosh Hard Disk.
    Open the Applications folder.
    Open the Utilities folder.
    Double-click on Process Viewer/Activity Monitor
    If Activity Monitor does not appear:
    Choose Monitor in the toolbar across the top.
    Check Show Activity Monitor.
    Ensure the Activity Monitor shows "All Processes", instead of "My
    Processes".
    Single-click and quit each of the applications starting with HP, using
    the Stop Sign that says "Quit Process" beneath it.
    Close Activity Monitor.
    Double-click Macintosh Hard Disk.
    Open the Library folder.
    Open the Preferences folder.
    Click and drag either HP AIO Preferences or Hewlett Packard Preferences
    to the trash.
    Remove the HP Registry file.
    Go back to the Library folder.
    Open the Printers folder.
    Remove the HP folder
    Double click Macintosh Hard Disk.
    Open the Users folder.
    Choose the User logged into.
    Open the Library folder.
    Open the Preferences folder.
    Click and drag either HP AIO Preferences or Hewlett Packard Preferences
    to the trash.
    Click and drag files beginning with com.hp to the trash.
    Click and drag files beginning with HP to the trash.
    Double-click the Macintosh Hard Disk.
    Open the Applications folder.
    Open the Utilities folder.
    Double-click Printer Setup Utility.
    Delete any HP All-In-One printers listed (click the printer name to
    highlight it, then click the Delete button on the printer list)
    Repeat process for all HP All-In-One Printers.
    Remove HP Director/Image Zone from dock by dragging to the trash.
    Restart the Macintosh.
    Empty the Trash.
    From each user account:
    Double click Macintosh Hard Disk.
    Open the Users folder.
    Choose the User logged into.
    Open the Library folder.
    Open the Preferences folder.
    Click and drag either HP AIO Preferences or Hewlett Packard Preferences
    to the trash.
    Click and drag files beginning with com.hp to the trash.
    Click and drag files beginning with HP to the trash.
    Double-click the Macintosh Hard Disk.
    Open the Applications folder.
    Open the Utilities folder.
    Double-click Printer Setup Utility.
    Delete any HP All-In-One printers listed (click the printer name to
    highlight it, then click the Delete button on the printer list)
    Repeat process for all HP All-In-One Printers.
    Remove HP Director/Image Zone from dock by dragging to the trash.
    Restart the Macintosh.
    Empty the Trash.
    From the Administrator account:
    Download and install the latest software for this product. You can
    obtain this software from the following link:
    http://h10025.www1.hp.com/ewfrf/wc/softwareDownloadIndex?lc=en&cc=us&lang=en&os=219&product=64992&dlc=en&softwareitem=oj-35225-1
    From each user account:
    Double-click the Macintosh Hard Disk.
    Open the Applications folder.
    Open the Utilities folder.
    Double-click Printer Setup Utility.
    Click Add, ensure you are in the Default Browser.
    Select the HP All-In-One Printer.
    Click Add at the bottom-right corner of the screen.
    Set this printer as the default.
    You should now be able to print from all accounts.
    Sincerely,
    2. The reasons why it may not have worked:
    We need to be sure that you are fully logging out of one account and
    then back into the next. The fast user switching feature of the Mac OS
    can create problems with our printer drivers. For this reason, fast user
    switching is not supported by HP.
    The other issue we may be running into would be the Mac OS itself. Over
    the past few months the Mac OS has gone through several updates. When
    the 10.4.4 version was released it created some issues with our existing
    software. An updated version of our software has been developed which
    works well with 10.4.4 and higher, but it is not available for this
    printer.
    It may be that as each new OS update is installed on your computers that
    the issue becomes worse. If you are running the Mac OS 10.4.3 or lower
    then the version 7.3.1 of our software should work fine. If it is 10.4.4
    or higher there will be some issues as the new OS versions do not seem
    to like the older drivers.
    I would have to recommend retrying the full uninstall the Derek had
    outlined. This time however, just install the downloaded 7.3.1 software
    from our web site for the D145. Once it has been installed we need to
    test it fully without adding the scanjet software. I am hoping that
    without both version of our software installed it may work better.
    This should resolve the issue. If you need further assistance, please
    reply to this message and we will be happy to assist you further.
    3. The response to questions on switching and OS X 10.4.4
    You are correct, since you are running the 10.4.4 I cannot guarantee
    that the older software version will allow full functionality. It should
    allow for printing and as long as you avoid the fast user switching,
    when trying to print, it should keep working well.
    I understand the convenience of fast user switching but you will have to
    log completely out of one user, and then log properly into the next
    before you can expect the printer to function.
    I would still recommend trying the steps I had outlined as it really is
    the only thing you can do that may work.
    G5, 2 GHz dual processor, 2Gb RAM, 500GB Hard disk   Mac OS X (10.4.3)   Clarification of the issues from HP
