Spreading users across multiple APs

Hi
I have a requirement to provide wireless access to over 250 people in one big arena. I am thinking of using an LWAPP install, but I was wondering how I can force users to associate evenly across all access points. I want to avoid the situation where 30 users associate with AP1 and only 5 associate with AP2.
Any help would be appreciated.

Hi Martin,
Hope everything's fine at your end.
Typically, AP1200s or LWAPP APs manage load balancing internally on their own using the Auto-RF feature, at least with the 4000 series controllers (I'm not sure about the others). Auto-RF should balance clients evenly across all APs; if that doesn't happen for some reason, you can change the configuration to do it yourself.
You may create separate WLANs, broadcast through different LWAPP APs with separate SSIDs.
For example,
AP1200-1 broadcasts WLAN 1, connecting around 35 clients.
AP1200-2 broadcasts WLAN 2, connecting another set of 35 or more, and so on.
AP1200-3 broadcasts WLAN 1 again, but at a distance of more than 100 feet or so, connecting another set of 35 or more.
Please feel free to contact me if you need any more assistance.
Thanks & Regards,
Karthik Narasimhan

Similar Messages

  • How to identify a user across multiple pages

    Hi,
    I'm building a home-banking application and I would like to know how to identify a user across multiple pages.
    I have already taken a look at HttpSession, but I didn't understand it.
    Can someone help me?
    I'm sending the servlet Logon.
    import java.io.*;
    import java.sql.*;
    import java.util.*;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class Cons_logon extends HttpServlet {
         private Connection conexao = null;
         Login1 login1;

         public void init(ServletConfig cfg) throws ServletException {
              super.init(cfg);
              try {
                   Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                   conexao = DriverManager.getConnection("jdbc:odbc:bank");
              } catch (Exception e) {
                   System.out.println(e.getMessage());
              }
         }

         public void doPost(HttpServletRequest req, HttpServletResponse res)
                   throws ServletException, IOException {
              String Suser, Spassword;
              PrintWriter out;
              res.setContentType("text/html");
              out = res.getWriter();
              String opcao = req.getParameter("log");
              // ...
         }
    }
    Thanks

    I would recommend using the authentication mechanism that's guaranteed by the servlet spec. If you do that, you can just call
    request.getRemoteUser()
    to get the user name across multiple pages.
    If you want to use your own login scheme, you can create a new session object and map it to a user name somewhere in your app. Or you can just put the name of the user on the session. But the preferred way is to use the default authentication scheme defined by the spec.
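    If you do roll your own login scheme, the session-to-user mapping can be sketched independently of the servlet API. The following is a minimal sketch with illustrative names (`SessionRegistry` is not part of any spec); in a real servlet you would key the map by `request.getSession(true).getId()`:

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch (names are illustrative): map a session ID to a user
    // name once at login, then look the user up on every subsequent page.
    public class SessionRegistry {
        private static final Map<String, String> SESSIONS = new ConcurrentHashMap<>();

        // Called once, after the login credentials have been verified.
        public static void register(String sessionId, String userName) {
            SESSIONS.put(sessionId, userName);
        }

        // Called by any later page to identify the current user.
        public static String userFor(String sessionId) {
            return SESSIONS.get(sessionId);
        }

        public static void main(String[] args) {
            register("abc123", "martin");
            System.out.println(userFor("abc123")); // prints "martin"
        }
    }
    ```

    The container-managed alternative (`request.getRemoteUser()`) avoids all of this bookkeeping, which is why it is the preferred approach.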

  • Splitting up a job for 140k+ users across multiple servers

    Hello, 
    I am pretty new to PowerShell, want to learn more about scaling things, and have just started working with jobs.
    In this particular case I am just doing a mass enable or disable at a per-user level. The other script I need to do this with grabs and checks values on around 6000 distribution groups and, using the current values and type, creates new commands to add/remove certain users or permissions in bulk with Invoke-Expression. I *think* it would probably be best in my case to run these across servers as well.
    Basically what I am looking at is: using one large list/array, counting it, splitting it, and using the resources available with jobs.
    One of the problems I have had with this, but seem to have mostly figured out, is how to combine or 'foreach' several different values that may need to be applied to separate objects on certain servers with certain users and certain attributes.
    Last night I ran the first script that could do that; it took me a while and looks like a wreck, I am sure, but it worked!
    Now to tackle size.
    Thank You

    Hi Paul,
    looking good so far. Did a little rewrite of what you posted:
    Function Disable-Stuff {
        Param (
            [Parameter(Position = 0, Mandatory = $true)]
            [string]
            $file,

            [Parameter(Position = 1)]
            [ValidateSet('CAS', 'MBX', 'ALL')]
            [string]
            $servertype = "CAS"
        )

        # Collect server lists
        $servers = @()
        switch ($servertype)
        {
            "CAS" { $servers += Get-ClientAccessServer | Select -ExpandProperty name }
            "MBX" { $servers += Get-MailboxServer | Select -ExpandProperty name }
            "ALL"
            {
                $servers += Get-ClientAccessServer | Select -ExpandProperty name
                $servers += Get-MailboxServer | Select -ExpandProperty name

                # Remove duplicate names (just in case)
                $servers = $servers | Select -Unique
            }
            default { }
        }

        # Calculate the number of lines to hand to each server
        $boxes = ($servers).count
        $content = Get-Content $file
        $split = [Math]::Round(($content.count / $boxes)) + 1

        # Create index counter
        $int = 0

        # Split up the task
        Get-Content $file -ReadCount $split | ForEach {
            # Store this chunk of the file in a variable
            $List = $_

            # Select the server that does the doing
            $Server = $servers[$int]

            # Increment the index so the next set of objects uses the next server
            $int++

            # Do something amazing
            # ... <-- Content goes here
        }
    }

    Disable-Stuff "c:\job\disable.txt" "CAS"
    Notable changes:
    Removed the test variables out of the function and added them as parameters
    Modified the Parameters a bit:
    - $file now is mandatory (the function simply will not run without it)
    - The first parameter will be interpreted as the file path
    - The second parameter will be interpreted as Servertype
    - $Servertype can only be CAS, MBX or ALL. No other values accepted
    - $Servertype will be set to CAS unless another servertype is specified
    Your if/elseif/else construct has been replaced with a switch (I vastly prefer them, but they do the same thing functionally).
    I removed the unnecessary temporary storage variables.
    Appended a placeholder scriptblock at the end that shows you how to iterate over each set of items and select a new server each time.
    I hope this helps you in your quest to conquer PowerShell :)
    Cheers,
    Fred
    There's no place like 127.0.0.1
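    As an aside, the divide-by-server-count arithmetic the function uses can be sketched with standard POSIX tools; the file name and worker count below are illustrative:

    ```shell
    #!/bin/sh
    # Sketch: divide a user list evenly across N workers/servers.
    # "users.txt" and the worker count are illustrative assumptions.
    set -e

    seq 1 10 > users.txt           # stand-in for the real user list
    boxes=3                        # number of servers to spread across

    total=$(wc -l < users.txt)
    # Lines per chunk, rounded up so nothing is left over.
    per=$(( (total + boxes - 1) / boxes ))

    # split(1) writes chunk_aa, chunk_ab, ... each with at most $per lines.
    split -l "$per" users.txt chunk_

    ls chunk_*                     # one chunk file per server
    ```

    Each chunk file then maps to one server, just as each `-ReadCount` batch maps to `$servers[$int]` in the function above.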

  • Best practice for SSH access by a user across multiple Xserves?

    Hello.
    I have 3 Xserves and a Mac Mini server I'm working with and I need SSH access to all these machines. I have given myself access via SSH in Server Admin access settings and since all 4 servers are connected to an OD Master (one of the three Xserves), I'm able to SSH into all 4 machines using my username/password combination.
    What I'm unsure of though is, how do I deal with my home folder when accessing these machines? For example, currently, when I SSH into any of the machines, I get an error saying...
    CFPreferences: user home directory at /99 is unavailable. User domains will be volatile.
    It then asks for my password, which I enter, and then I get the following error...
    Could not chdir to home directory 99: No such file or directory
    And then it just dumps me into the root of the server I'm trying to connect to.
    How should I go about dealing with this? Since I don't have a local home directory on any of these servers, it has nowhere to put me. I tried enabling/using a network home folder, but I end up with the same issue. Since the volume/location designated as my home folder isn't mounted on the servers I'm trying to connect to (and since logging in via SSH doesn't auto-mount the share point the way AFP would if I were actually logging into OS X via the GUI), it again says it can't find my home directory and dumps me into the root of the server I've logged in to.
    If anyone could lend some advice on how to properly set this up, it would be much appreciated!
    Thanks,
    Kristin.

    Should logging in via SSH auto-mount the share point?
    Yes, of course, but only if you've set it up that way.
    What you need to do is designate one of the servers as being the repository of home directories. You do this by simply setting up an AFP sharepoint on that server (using Server Admin) and checking the 'enable user home directories' option.
    Then you go to Workgroup Manager and select your account. Under the Home tab you'll see the options for where this user's home directory is. It'll currently say 'None' (indicating a local home directory on each server). Just change this to select the recently-created sharepoint from above.
    Save the account and you're done. When you login each server will recognize that your home directory is stored on a network volume and will automatically mount that home directory for you.

  • How to query user across multiple forest with AD powershell

    Hi Guys
      Our situation is like this: we have two forests, say forestA.com and forestB.com, and there are many subdomains in forest A.
      I'd like to write a script to get AD object information via Get-ADObject -Identity xxxx.
      My account belongs to forestA.com, and the computer I logged on to belongs to forestB.com; A and B have a forest trust.
      Now the problem is that if the object I query belongs to forestB.com, Get-ADObject works fine; however, if the object belongs to forestA.com, I get the error "Get-ADObject: Cannot find an object with identity: 'xxxx' under: 'DC=forestB,DC=com'".
      So how can I have a script that can query users in both forests?

    Prepared this some time ago for a PowerShell Chalk & Talk. Just change the forest names and credentials. Each Active Directory cmdlet you are calling works on the current drive. So to switch between the forests you need just change the drive / location.
    This is also quite nice for migration scenarios.
    $forests = @{
        'forest1.net' = (New-Object pscredential('forest1\Administrator', ('Password1' | ConvertTo-SecureString -AsPlainText -Force)))
        'forest2.net' = (New-Object pscredential('forest2\Administrator', ('Password2' | ConvertTo-SecureString -AsPlainText -Force)))
        'forest3.net' = (New-Object pscredential('forest3\Administrator', ('Password3' | ConvertTo-SecureString -AsPlainText -Force)))
        'a.forest1.net' = (New-Object pscredential('a\Administrator', ('Password1' | ConvertTo-SecureString -AsPlainText -Force)))
        'b.forest1.net' = (New-Object pscredential('b\Administrator', ('Password1' | ConvertTo-SecureString -AsPlainText -Force)))
    }

    Import-Module -Name ActiveDirectory

    # Create one AD PSDrive per forest, each bound to that forest's credentials
    $drives = $forests.Keys | ForEach-Object {
        $forestShortName = ($_ -split '\.')[0]
        $forestDN = (Get-ADRootDSE -Server $forestShortName).defaultNamingContext
        New-PSDrive -Name $forestShortName -Root $forestDN -PSProvider ActiveDirectory -Credential $forests.$_ -Server $forestShortName
    }

    # Switching drives switches the forest the AD cmdlets operate on
    $result = $drives | ForEach-Object {
        Set-Location -Path "$($_):"
        Get-ADUser -Identity administrator
    }

    $drives | Remove-PSDrive -Force
    $result
    -Raimund

  • Trade offs for spreading oraganizatons across suffixes - best practices?

    Hey everyone, I am trying to figure out some best practices here. I've looked through the docs but have not found anything that quite touches on this.
    In the past, here is how I created my directory (basically using dsconf create-suffix for each branch I needed)
    dsconf list-suffixes
    dc=example,dc=com
    ou=People,dc=example,dc=com
    ou=Groups,dc=example,dc=com
    o=Services,dc=example,dc=com
    ou=Groups,o=Services,dc=example,dc=com
    ou=People,o=Services,dc=example,dc=com
    o=listserv,dc=example,dc=com
    ou=lists,o=listserv,dc=example,dc=com
    A few years later, having learned more and set up replication, it seems I may have made my life a bit more complicated than it should be. It appears I would need many more replication agreements to get every branch of the tree replicated. It also appears that different parts of the directory are stored in different backend database files.
    It seems like I should have something like this:
    dsconf list-suffixes
    dc=example,dc=com
    Instead of creating all the branches as suffixes or sub-suffixes, maybe I should have just created organization and organizational-unit entries within a single suffix, "dc=example,dc=com". That way I can replicate all the data by replicating just one suffix. Is there a downside to having one backend db file containing all the data instead of spreading it across multiple files (we're talking possibly 90K entries across the entire directory)?
    Can anyone confirm the logic here or provide any insight?
    Thanks much in Advance,
    Deejam

    Well, there are a couple of dimensions to this question. The first is simply whether your DIT ought to have more or less depth. This is an old design debate that goes back to problems with changing DNs in X500 style DITs with lots of organizational information embedded in the DN. Nowadays DITs tend to be flatter even though there are more tools for renaming entries. You still can't rename entries across backends, though. The second dimension is, given a DIT, how should you distribute the containers in your DIT across the backend databases.
    As you have already determined, the principal design consideration for your backend configuration will be replication, though scalability and backup configuration might also come into it. From what you have posted, though, it does not look like you have that much data. So yes, you should configure database backends and associated suffixes with sufficient granularity to support your replication requirements. So, if a particular suffix needs to be replicated differently than another suffix, they need to be defined as distinct suffixes/backends. Usually we define the minimal number of suffixes and backends needed to satisfy the topological requirements, though I can imagine there might be cases where suffixes might be more fine grained.
    For large, extensible Directory topologies, I usually look for data that's sensibly divisible into "building blocks". So for instance you might have a top-level suffix "dc=example,dc=com" with a bunch of global ACIs, system users and groups that are going to need to be everywhere. Then you might have a large chunk of external customer data, and a small amount of internal employee data. I would consider putting the external users in a distinct suffix from the employees, because the two types of entries are likely to be quite different. If I have a need to build a public Directory somewhere, all I have to do is configure the external suffix and replicate it. The basic question I would be asking there is if I might ever need to expose a subset of the Directory, will the data already be partitioned for me or will I have to do data reorganization.
    In your case, it does not look likely you will need to chop up your data much, so it's probably simpler to stay monolithic and use only one backend.

  • Authentication Across Multiple Web Applications (Revisited)

              It's been an ongoing battle, but I've gained some insight into this situation. The problem stands: it seems impossible to authenticate against one web application deployed as a WAR archive and have that authentication carry across to another web application with the same security constraints. I've been told by BEA that, quote:
              "It seems to me that we are violating section 11.6 of the servlet 2.2 spec which talks about webapps"
              I've also been told that this is fixed in WLS 6.0, reference issue #38732.
              For those of us building production environments using 5.1 instead of 6.0 XML based configuration, this does NOT solve our problem.
              I've dug further into the bowels of 5.1 and found that if you manually set the realm name in the login-config of the security constraint in the web.xml file in each WAR deployment as such:
                   <login-config>
                        <auth-method> [whichever method] </auth-method>
                        <realm-name>WebLogic Server</realm-name>
                   </login-config>
              Authentication will carry across web applications. However, I've noted that the session management then becomes unpredictable. For example:
              I log into the application TESTAPP1 which contains a protected servlet that outputs the session ID and attempts to get the authenticated principal name from the "_wl_authuser_" session variable. Upon first load of the page (after the login dialog box), the session is null [can be fixed with .getSession(true) call instead] and the "_wl_authuser_" object does not exist. Reload the page and the session appears as well as the "_wl_authuser_" object. Strange.
              I then move to TESTAPP2, which does not prompt me for authentication but also is missing the session in the same manner. Upon browser reload, the session is created with a different ID and the "_wl_authuser_" object is now available with the appropriate principal name.
              Upon moving back to TESTAPP1, I am not prompted for authentication however, I am assigned yet another session ID after browser reload, different from the first.
              So it seems that although authentication is carried across web applications, the session IDs as you move from TESTAPP1 to TESTAPP2 change, and then change again but not back to the original when going back to TESTAPP1.
              This is a particular problem since we are using Vignette's V5 as our main client and tracking sessions through V5 - this would quickly become unmanageable if a single page view access three or four different application components with three or four different session ids.
              I'm wondering if we can expect the same behavior from WLS 6.0?
              Ideally, I'd like to see WebLogic use a single session ID to track users across multiple web applications but still have session independence between applications. So if I store something in session in TESTAPP1, its not available in TESTAPP2. Does this outline the behaviour in WLS 6.0? Can anyone verify this?
              Some food for thought. Thanks!
              ./Chris
              Senior Systems Analyst
              MassMutual Financial Group
              

    Hello! I am searching for an answer to this question too!
    Did you get any news regarding this item?
    Regards,
    C.M.

  • Spreading user data across multiple HD's?

    I hope this is the right forum for this. I'm on a Mac Pro 1,1 and recently installed a few extra hard drives to optimize performance for video editing (according to recommendations in another big software company's support docs; this is not about FCP).
    My goal is to have this general set-up:
    SSD: Boot Drive (OS & Apps) -- already set up
    Disk 2: Editing Project Files, media and exports
    Disk 3: Cache, renders and previews
    Disk 4: Ideally this would be for docs, itunes, photos and anythings non video related.
    The thing is, this advice comes mainly from PC users, for whom the OS X user-folder structure isn't an issue.
    Question: can the same user folder's contents be spread across multiple HDs? I realize I'll probably need to place a different "house" icon folder on each HD, but can those have different contents? How can I make them all mount on start-up so the disk allocation is pretty much not noticeable?
    Thanks!

    A simple version of what you are trying is to establish a Boot Drive, with only System, Library, Applications, and the hidden unix files. All user files are moved off to a different drive.
    Here are some simple recipes for moving the "Home" folder:
    Japamac's Blog: Make space for Performance -- Moving the Home Folder
    http://chris.pirillo.com/how-to-move-the-home-folder-in-os-x-and-why/
    You can embellish this basic setup any way you wish, especially putting Movie data on a different drive, and Movie cache data on yet another drive.

  • Spreading a single form across multiple pages

    I'd like to implement a single-record form that spreads its items across multiple pages for the purpose of grouping related information on each page, such as "Contact Details", "Education Profile", etc. There are way too many columns for a single page.
    In Oracle forms you can create a form across multiple canvases or on different tabs of a tabbed canvas. Each item has a property to position it on which canvas.
    Is this possible in ApEx? I can see that transaction processing could be complex.
    How do I avoid, for example, inserting the record a second time if I commit from a different page?
    Any comments appreciated.
    Paul P

    Another way to do this without JavaScript and AJAX that works pretty well is to set up a list that represents the logical "sections" and display it as a sidebar list.
    Create a hidden item on the page called PX_ACTIVE_SECTION and set the visibility of each region on the page based on the value of this item. For example: :PX_ACTIVE_SECTION = 'Contact Details'. You can also have multiple regions associated with a single tab. Set the default value to whatever section you wish to display first.
    Next, set each list item to be "current" when :PX_ACTIVE_SECTION is equal to that item ('Contact Details', 'Education Profile', etc.). Also set the URL destination of each item to: javascript:doSubmit('section name');
    Finally, add a branch back to the current page that sets PX_ACTIVE_SECTION to &REQUEST.. This traps the doSubmit call so you can set the hidden item. (Add a condition, if needed, to prevent this branch from executing due to other buttons and requests).
    The result is that the user can switch freely between sections and save after all data is entered. The page does refresh (since it doesn't use AJAX), but if your regions aren't too big, it should be reasonable.

  • Sharing an iTunes Library across multiple user account and a network.

    Sharing an iTunes Music Library across multiple user accounts.
    Hello Everybody!
    Firstly, this was designed to be run in Mac OS X 10.4 Tiger. It will not work with earlier versions of Mac OS X! Sorry.
    Here's a handy tip for keeping your hard drive neat and tidy, it also saves space, what in effect will be done is an iTunes music library will be shared amongst multiple users on the same machine. There are advantages and disadvantages to using this method.
    • Firstly I think it might be worthwhile to state the advantages and disadvantages to using this approach.
    The advantages include:
    - Space will be saved, as no duplicate files will occur.
    - The administrator will have complete control over the content of the iTunes library. This may be useful for restricting the content of the library, particularly if the computer is being used at an educational institution, a business, or any other sort of institution where explicit content would be less favorable.
    - The machine will not be slowed by the fact that every user has lots of files.
    The disadvantages of this system include:
    - The fact that the account storing the music will have to be logged in, and iTunes will have to be active in that account.
    - If the account housing the music is not active then nobody can use the iTunes library.
    - There is a certain degree of risk present when an administrator account must be continually active.
    - Fast User Switching must be enabled.
    Overview:
    A central account controls all music on the machine/network, this is achieved by storing iTunes files in a public location as opposed to in the user's directory. In effect the system will give all users across the machine/network access to the same music/files without the possibility of files 'doubling up' because two different users like the same types of music. This approach saves valuable disk space in this regard and may therefore prove to be useful in some situations.
    This is a hearty process to undertake, so only follow this tutorial if you're willing to go all the way to the end of it.
    Process:
    Step 1:
    Firstly, we need to organize the host library. I tidied mine up, removing excess playlists, random files, things like that; this will make things a bit easier in the later stages of this process.
    Once the library is tidied up, move the entire "iTunes" folder from your Home directory to the "//localhost" directory (the Macintosh HD) and ensure that the files are on the same level as the "Applications", "Users", "Library" and "System" directories; this will ensure that the files in the library are available to all users on the machine (this also works for networks).
    Optionally, you can set the ownership of the folder to the 'administrator' account (the user who will be hosting the library). You may also like to set the permissions of 'you can' to "Read & Write" (assuming you are doing this as the user who will host the library). Secondly, you should set the "Owner" to the administrator who will be hosting the library and set their "access" to "Read & Write"; this will ensure the administrator has full access to the folder. The final part of this step involves setting access for the "Others" tab to "Read Only", which will ensure that the other users can view but not modify the contents of the folder.
    Overview:
    So far we have done the following steps:
    1. Organized the host library.
    2. Placed the iTunes directory into a 'public' directory so that other users may use it. (This step is only necessary if you want to share your library across multiple accounts on the same machine; if you simply want to share the music across a network, use the iTunes sharing facility.)
    3. Set ownership and permissions for the iTunes music folder.
    Step 2:
    Currently the administrator is the only user who can use this library, however we will address this soon. In this step we will enable iTunes music sharing in the administrator's account, this will enable other users to access the files in the library.
    If you are not logged in as the administrator, do so. Next, open iTunes and select "Preferences" from the "iTunes" menu, then click the "Sharing" tab. If "Share my library on my local network" is not checked, check it; the radio buttons below it will then become active, and you may choose to share the entire library's contents or only selected content.
    Sharing only selected content may be useful if there is explicit content in the library and minors use the network or machine that the library is connected to.
    If you have selected "share entire library", go to Step 3; if you have selected "share selected playlists", read on.
    After clicking "share selected playlists" you must then select the playlists that you intend to share across your accounts and network. Once you have finished selecting the playlists, click "OK" to save the settings.
    Overview:
    In this step we:
    1. Enabled iTunes sharing in the administrator's account, now, users on the local network may access the iTunes library, however, users on the same machine may not.
    Step 3:
    Now we will enable users on the same machine to access the library. This is achieved by logging in as each user, opening iTunes, opening iTunes preferences, and checking "look for shared music". Now all users on the machine may also access the library that the administrator controls.
    This in effect will mean that the user will not need to use their user library, it will be provided to them via a pseudo network connection.
    As a secondary measure, I have chosen to write a generic login script that will move any content from the user's "Music/iTunes/iTunes Music" directory to the trash and then empties the user's trash.
    This is done through the use of an Automator Application: this application does the following actions.
    1. Uses the "Finder" action "Get Specified Finder Items"
    1a. The user's "~/Music/iTunes/iTunes Music" folder
    2. Uses the "Finder" action "Get Folder Contents"
    3. Uses the "Finder" action "Move to Trash"
    4. Uses the "Automator" action "Run AppleScript"
    4a. with the following:
    on run {input, parameters}
    tell application "Finder"
    empty trash
    end tell
    return input
    end run
    IMPORTANT: Once the script is adapted to the user account, it must be set as a login item. In order to keep the script out of the way, I have placed it in the user's "Library" directory, in "Application Support" under "iTunes".
    Overview:
    Here we:
    1. Enabled iTunes sharing in the user accounts on the host machine, in effect allowing all users of the machine to view a single iTunes library.
    2. (Optional) I have created a login application that will remove any content that has been added to user iTunes libraries, this in effect stops other users of the machine from adding music and files to iTunes.
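    For what it's worth, the same cleanup can be sketched as a plain shell script instead of an Automator application; the path assumes the standard iTunes layout, and unlike the Automator version this deletes immediately rather than going via the Trash:

    ```shell
    #!/bin/sh
    # Sketch: empty the user's local iTunes Music folder at login so all
    # music comes from the shared library. Assumes the standard
    # ~/Music/iTunes/iTunes Music layout.
    MUSIC_DIR="$HOME/Music/iTunes/iTunes Music"

    # Delete anything the user has added locally (the Automator version
    # moves items to the Trash and empties it; this removes them directly).
    if [ -d "$MUSIC_DIR" ]; then
        rm -rf "$MUSIC_DIR"/*
    fi
    ```

    Like the Automator application, this would need to be registered as a login item for each user account.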
    Step 4:
    If it is not already enabled, open system preferences and enable Fast User Switching in Accounts Options.
    Summary:
    We have shared a single iTunes library across multiple user accounts, while still allowing for network sharing. This method is designed to save space on machines, particularly those with smaller hard drives.
    I hope that this hint proves to be helpful and I hope everybody will give me feedback on my process.
    regards,
    Pete.
    iBook G4; 60GB Hard Drive, 512MB RAM, Airport Extreme   Mac OS X (10.4.6)   iWork & iLife '06, Adobe CS2, Final Cut Pro. Anything and Everything!!!

    how to share music between different accounts on a single computer

  • How do you share Aperture file across multiple users on same Mac?

    How do you share Aperture file across multiple users on same Mac? Seems this should be a preferences choice.

    When you share your library between users, you may run into permission and ownership problems if both users are editing the Aperture library and not only reading it. To avoid that, it helps to put the Aperture library onto a separate disk or a separate partition of your hard drive. For a separate partition or disk you can enable the "Ignore ownership on this volume" flag; then all users can access the library as owners.
    You might try putting the Aperture library into a shared folder on your Mac, but that has caused problems recently, e.g. when the library also contains video files.
    Regards
    Léonie

  • Can Windows Server Backup spread a single backup job across multiple disks if they are not set up as a virtual disk?

    This may be a dumb question, but I can't seem to find any definitive information after having done many, many searches.  Short question is - can Windows Server Backup spread a single backup job across multiple disks if they are not in a storage
    pool or some other RAID/JBOD structure?
    Background:
    I'm running Server 2012 Essentials with all Windows Updates installed.  I have been backing up approx 2.8TB of data (Bare Metal Recovery, C:, S: (shared folders), and system reserved) for the past year+ onto a storage pool made up of two-2TB external
    USB drives.  Backup is slow (takes approx 1.5 days to complete), but generally works.  Not surprisingly I was constantly getting capacity low messages so I decided to increase my backup storage pool by adding a 3TB drive and another spare 750GB drive
    for a total of 7.75TB. Instead of having four separate external USB enclosures, I bought a 4-bay enclosure (StarTech.com model #S3540BU33E) to simplify this (or so I thought!).
    The first problem I had was adding the two new drives to the existing storage pool. I think that is because the StarTech uses a JMicron USB controller that reports identical UniqueIds for all drives, so only one shows up in the GUI interface for creating storage
    pools. After doing research on this, I set up a new storage pool and virtual disk using all four drives via PowerShell and thought I was good. However, when the backup ran, it failed after filling the first drive, saying there was no remaining capacity. In
    reality there were three remaining empty drives, and the storage pool reported almost 5TB of available capacity. I assumed this was due to the identical-UniqueId issue, so I decided to try a different tactic.
    Instead of using a storage pool that combines all four disks into one virtual disk, I just added each of them to Windows Server Backup as individual drives thinking it would manage them collectively. I.e., when a drive filled up during a particular backup,
    it would just start using the next drive and so on. Apparently this was a foolish assumption because the backup failed again as soon as the first disk filled up.
    So now I don't know if this is still an issue with the identical uniqueid's or if Server Backup actually can't spread a single backup across multiple individual drives that aren't in a pool or other virtual disk implementation. Hence, my original question.
    My guess is that it does *not* spread them across individual disks, but I just wanted to get confirmation.
    Thanks

    Mandy,
    Thank you for following up on my question.
    Unfortunately the article you referenced doesn't address what I am trying to accomplish.
    The article focuses on saving the same backup job to multiple disks and rotating the disks between on and offsite for enhanced protection.  However, it still requires that an individual backup job fits on a single disk.
    What I am trying to determine is if a single backup job can span across more than one physical disk (during the backup process) without those physical disks being in some type of virtual disk implementation (e.g., storage pool, RAID, etc.).
    Thanks,
    Gerry
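    For what it's worth, the command-line side points the same way: wbadmin's one-shot backup accepts exactly one -backupTarget per job, so a single job cannot spill over onto a second loose disk. A sketch only; the drive letters and volume list below are assumptions:

    ```shell
    REM Back up C:, S:, and the critical volumes to a single target disk.
    REM wbadmin does not accept more than one -backupTarget per job,
    REM so the whole job must fit on the one target volume (E: here).
    wbadmin start backup -backupTarget:E: -include:C:,S: -allCritical -quiet
    ```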

  • User Account Authentication across multiple Solaris servers - Best Practice

    Hi,
    I am new to Solaris admin and would like to know the best practice/setup for authenticating user accounts across multiple Solaris servers.
    Currently we have 20 - 30 Solaris 8 & 10 servers which each have their own user accounts setup. I am planning to replace these with a similar number of Solaris 10 servers and would like to centralise the user accounts and their authentication.
    I would be grateful for any suggestions on the best setup and any links to tutorials.
    Thanks
    Jools

    I would suggest LDAP + Kerberos: LDAP for name lookups and krb5 for auth. That gives you secure authentication plus an extensible directory for users and other apps if needed. It also provides a decent springboard for adding other Unix platforms into the mix, since this setup will support any Unix/Linux/BSD platform. You could integrate the design with a Windows AD environment if you want as well.
    [http://www.sun.com/bigadmin/features/articles/kerberos_s10.jsp] sol + ldap+ AD
    [http://docs.lucidinteractive.ca/index.php/Solaris_LDAP_client_with_OpenLDAP_server] sol + ldap (openldap)
    [http://aput.net/~jheiss/krbldap/howto.html] sol + ldap + krb5
    Now, these links each use somewhat different means, but they should give you some ideas as to what's out there. Solaris 10 ships with Sun's LDAP server, and you can use the krb5 server that comes with it as well. There are many different ways to do this, and many more links out there; these are just a few.
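    To make that concrete, here is a rough sketch of the client-side setup on one Solaris 10 box; every hostname, profile name, and realm below is a made-up placeholder, and the exact steps depend on which of the linked approaches you follow:

    ```shell
    # Point the box at the directory server for name lookups.
    # "solaris-profile", "example.com", and 192.168.1.10 are placeholders
    # for your client profile, domain, and LDAP server address.
    ldapclient init -a profileName=solaris-profile \
        -a domainName=example.com \
        192.168.1.10

    # Configure the Kerberos client for the realm with Solaris 10's
    # kclient utility (realm, KDC, and admin principal are placeholders).
    kclient -n -R EXAMPLE.COM -k kdc01.example.com -a kws
    ```

    You would then repeat this on each of the 20-30 servers (or script it), and point PAM at pam_krb5 in /etc/pam.conf so logins actually authenticate against the KDC.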

  • How do you Spread an Image across multiple pages and print?

    I'm trying to create a large poster to be put on a bulletin board. I've managed to spread the image across multiple pages, but I can't figure out how to print the pages so that the edges line up successfully after cropping. I've been playing with the bleed settings but can't figure it out. Here's a shot of what I'm working with:
    I'm working in CS3 on Mac OS X.

    Create your InDesign document at your final size, and when you print it, put a check mark in the box for Tile.

  • Enforce Default Printer Presets Across Multiple Users

    Hi
    Please could anyone tell me if there is a way to enforce the default printer preset across multiple users on the same computer for a network attached printer?
    I want to make sure that all of my family who each have separate accounts all print using greyscale by default.
    I would like to do this without copying plists about. I was hoping that there is some global way to do it as an administrator.
    TIA

    Hello Ian,
    I did a couple of simple tests on this topic, and from what I can see there is no easy way for this to be done. The preferences (plists) are saved per user, and depending on which printer you are using, the location and format of this preference file are not easily copied.
    Unless there is a third-party app (or something I missed), I don't see a simple solution to this.
    Regards,
    Paul
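    One system-level workaround worth testing (not covered in the reply above, so treat it as a suggestion): CUPS stores queue-wide defaults in /etc/cups/lpoptions when lpoptions is run as root, and those defaults apply to every account that doesn't explicitly override them. The queue name and option name below are assumptions; check what your driver actually exposes first:

    ```shell
    # List the options and choices this print queue's driver supports:
    lpoptions -p Home_Printer -l

    # Set greyscale as the queue-wide default; running via sudo writes
    # the system-wide /etc/cups/lpoptions instead of a per-user file.
    # The option may be called ColorModel, ColorMode, or print-color-mode
    # depending on the driver -- "ColorModel=Gray" is just a common case.
    sudo lpoptions -p Home_Printer -o ColorModel=Gray
    ```

    Note that a user can still pick colour explicitly in a print dialog, so this sets the default rather than enforcing it outright.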
