Best practice for home setup/Roaming with built-in Dell wireless card?

I have a client with a huge house; I have done a site survey and it will require 3 APs to blanket the entire thing. I am wondering what the best way to set this up for him would be. He would like to be able to walk around the house without losing connectivity. He is attempting to use his built-in Dell wireless card/client, and as far as I know these do not support Cisco roaming. What I have done is set up 3 different SSIDs, one for each part of the house, and depending on his location he manually connects to the correct AP. Is this the best setup?
I had attempted to set all APs to the same SSID but found that the client would not roam, and it was also difficult to connect to any of the APs at all. I believe the card was getting confused by the overlapping coverage of the same SSID.

If he has a huge house, I doubt his laptop is that old, and Dells use Intel Wi-Fi cards. Setting up all 3 APs with the same SSID is what you should do, and I don't know why you are having so many issues. I assume the APs are on channels 1, 6, and 11. How are the power settings? Do all of these APs run back to the same switch?

Similar Messages

  • What is the guideline and/or best practice for EMC setup on ASM?

    We are going to use EMC CX4-480 for ASM storage on RAC. What is the guideline and best practice for EMC setup on ASM?
    Thanks for the advice!

    Probably a poor choice of words. Sorry.
    So basically, I have gotten further, but I just noticed a related problem.
    I'm using the WebServices(WS) 1.0. I insert an account, then, on a separate WS call, I insert my contacts for the account. I include the AccountID, and a user defined key from the Account when creating the Contact.
    When I look at my Contact on the CRMOD web page, it shows the appropriate links back to the Account. But when I look at my Account on the CRMOD web page, it shows no Contacts.
    So when I say workflow or Best Practice, I was hoping for guidance on how to properly write my code to accomplish all of the necessary steps. As in this is how you insert an account with a contact(s) and it updates the appropriate IDs so that it shows up properly on the CRMOD web pages.
    Based on the above, it looks like the next step is to take the ContactID and update the Account with it so that there is a bi-directional link.
    I'm thinking there is a better way of doing this.
    Here is my pseudocode:
    AccountInsert()                  // create the Account; returns the new AccountID
    AccountID = NewAcctRec
    ContactInsert(NewAcctRec)        // create the Contact, passing the new AccountID and user key
    ContactID = NewContRec
    AccountUpdate(NewContRec)        // update the Account with the new ContactID to create the back-link
    Thanks,

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating oracle atg with external web service? Is it using integration repository or calling the web service directly from the java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead depending on the operation you are doing; I have never used the Integration Repository for 3rd-party integration, so I am not able to comment on it.
    Calling the web service directly as a Java client is an easy approach, and you can use the ATG component framework to support that by making the endpoint, security credentials, etc. configurable properties.
    Cheers
    R
    Edited by: Rajeev_R on Apr 29, 2013 3:49 AM

  • Best practices for speeding up Mail with large numbers of mail?

    I have over 100,000 mails going back about 7 years in multiple accounts in dozens of folders using up nearly 3GB of disk space.
    Things are starting to drag - particularly when it comes to opening folders.
    I suspect the main problem is having large numbers of mails in those folders that are the slowest - like maybe a few thousand at a time or more.
    What are some best practices for dealing with very large amounts of mail?
    Are smart mailboxes faster to deal with? I would think they would be slower, because the original emails would tend not to get filed as often, leading to even larger mailboxes. And searching takes a long time, doesn't it?
    Are there utilities for auto-filing messages in large mailboxes to, say, divide them up by month to make the mailboxes smaller? Would that speed things up?
    Or what about moving older messages out of mail to a database where they are still searchable but not weighing down on Mail itself?
    Suggestions are welcome!
    Thanks!
    doug

    Smart mailboxes obviously cannot be any faster than real mailboxes, and storing large amounts of mail in a single mailbox is asking for trouble. Rather than organizing mail in mailboxes by month, however, what I like to do is organize it by year, with subfolders by topic for each year. You may also want to take a look at the following article:
    http://www.hawkwings.net/2006/08/21/can-mailapp-cope-with-heavy-loads/
    That said, it could be that you need to re-create the index, which you can do as follows:
    1. Quit Mail if it’s running.
    2. In the Finder, go to ~/Library/Mail/. Make a backup copy of this folder, just in case something goes wrong, e.g. by dragging it to the Desktop while holding the Option (Alt) key down. This is where all your mail is stored.
    3. Locate Envelope Index and move it to the Trash. If you see an Envelope Index-journal file there, delete it as well.
    4. Move any “IMAP-”, “Mac-”, or “Exchange-” account folders to the Trash. Note that you can do this with IMAP-type accounts because they store mail on the server and Mail can easily re-create them. DON’T trash any “POP-” account folders, as that would cause all mail stored there to be lost.
    5. Open Mail. It will tell you that your mail needs to be “imported”. Click Continue and Mail will proceed to re-create Envelope Index -- Mail says it’s “importing”, but it just re-creates the index if the mailboxes are already in Mail 2.x format.
    6. As a side effect of having removed the IMAP account folders, those accounts may be in an “offline” state now. Do Mailbox > Go Online to bring them back online.
    Note: For those not familiar with the ~/ notation, it refers to the user's home folder, i.e. ~/Library is the Library folder within the user's home folder.
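    For reference, steps 2-4 translate roughly to the following in Terminal (a sketch only; the folder names assume the Mail 2.x layout described above, so check what is actually inside ~/Library/Mail before deleting anything):
    cp -pR ~/Library/Mail ~/Desktop/Mail-backup        # step 2: make a backup copy first
    cd ~/Library/Mail
    rm -f "Envelope Index" "Envelope Index-journal"    # step 3: Mail rebuilds the index on next launch
    rm -rf IMAP-* Mac-* Exchange-*                     # step 4: server-based account folders only, never POP-*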

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
              I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at least once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
              The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
              I'm using Weblogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS server's messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
              For more discussion, see my post today on topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa 8.1 migration white-paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
              Tom

  • Best practice for TM on AEBS with multiple macs

    Like many others, I just plugged a WD 1TB drive (mac ready) into the AEBS and started TM.
    But in reading here and elsewhere I'm realizing that there might be a better way.
    I'd like suggestions for best practices on how to setup the external drive.
    The environment is...
    ...G4 Mac mini, 10.4 PPC - this is the system I'm moving from; it has all the iPhoto and iTunes content and is being left untouched until I get all the TM/backup setup and tested. But it will go to 10.5 eventually.
    ...Intel iMac, 10.5 soon to be 10.6
    ...Intel Mac mini, 10.5, soon to be 10.6
    ...AEBS with (mac ready) WD-1TB usb attached drive.
    What I'd like to do...
    ...use the one WD-1TB drive for all three backups, AND keep a copy of system and iLife DVD's to recover from.
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    How do I get an image of the install DVD onto the 1TB drive?
    How do I do that? (install?, ISO image?, straight copy?)
    And what about the 2nd disk (for iLife?) - same partition, a different one, ISO image, straight copy?
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    And if I have to boot the O/S from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs between local and network.)
    I know it's a lot of questions, but here are the two objectives...
    1. Use TM in typical fashion, to recover the occasional deleted file.
    2. The ability to perform a bare-metal point-in-time recovery (not always to the very last backup, but sometimes to a day or two before.)

    dmcnish wrote:
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    Hi, and welcome to the forums.
    You can, but you really only need a separate partition for the Mac that's backing-up directly. It won't have a Sparse Bundle, but a Backups.backupdb folder, and if you ever have or want to delete all of them (new Mac, certain hardware repairs, etc.) you can just erase the partition.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    Right.
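    For example, once the drive is attached directly over USB, something like this in Terminal would do it (a sketch only; the disk identifier, partition names, and sizes are hypothetical, so confirm the right device with diskutil list first):
    diskutil list                                   # identify the WD drive, e.g. /dev/disk2
    diskutil partitionDisk /dev/disk2 GPT \
        JHFS+ TimeMachine 800g \
        JHFS+ Archive R
    # GPT works for the Intel Macs (a PPC Mac would need APM to boot from it);
    # R gives the last partition whatever space remains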
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    I don't think so. I've never tried it, but even if it works, it will be very slow. So connect via F/W or USB (the PPC Mac probably can't boot from USB, but the Intels can).
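    If you do end up keeping copies of the install discs on the drive, a disk image made with hdiutil is one straightforward option (a sketch; the disc's volume name and the Archive partition name from the sketch above are hypothetical):
    hdiutil create -srcfolder "/Volumes/Mac OS X Install DVD" ~/Desktop/LeopardInstall.dmg
    cp ~/Desktop/LeopardInstall.dmg /Volumes/Archive/    # then copy the image onto the external drive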
    And if I have to boot the O/S from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs between local and network.)
    That's actually two different questions. To do a full system restore, you don't load OSX at all, but you do need the Leopard Install disc, because it has the installer. See item #14 of the Frequently Asked Questions *User Tip* at the top of this forum.
    If for some reason you do install OSX, then you can either "transfer" (as part of the installation) or "Migrate" (after restarting, via the Migration Assistant app in your Applications/Utilities folder) from your TM backups. See the *Erase, Install, & Migrate* section of the Glenn Carter - Restoring Your Entire System / Time Machine *User Tip* at the top of this forum.
    In either case, if the backups were done wirelessly, you must transfer/migrate wirelessly (although you can speed it up by connecting via Ethernet).

  • Best Practice for VPC Domain failover with One M2 per N7K switch and 2 sups

    I Have been testing some failover scenarios with 4 nexus 7000 switches with an M2 and an F2 card in each. Each Nexus has two supervisor modules.
    I have 3 VDC's Admin, F2 and M2
    all ports in the M2 are in the M2 VDC and all ports on the F2 are in the F2 VDC.
    All vPC's are connected on the M2 cards, configured in the M2 VDC
    We have 2 Nexus representing each "site"
    In one site we have a vPC domain "100"
    The vPC Peer link is connected on ports E1/3 and E1/4 in Port channel 100
    The peer-keepalive is configured to use the management ports. This is patched from both Sups into our 3750s. (This will eventually be on an out-of-band management switch.)
    Please see the diagram.
    There are 2 vPC's 1&2 connected at each site which represent the virtual port channels that connect back to a pair of 3750X's (the layer 2 switch icons in the diagram.)
    There is also a third vPC that connects the 4 Nexuses together (Po172).
    We are stretching vlan 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and minimise outages based on link failures, module failures, switch failures, sup failures, etc.
    ONLY the management vlans (100, 101) are allowed on the port-channel between the 3750s, so vlan 900 spanning tree shouldn't have to make this decision.
    We are only concerned about layer two for this part of the testing.
    As we are connecting the vPC peer link to only one module in each switch (a single M2), we have configured object tracking as follows:
    n7k-1(config)#track 1 interface ethernet 1/1 line-protocol
    n7k-1(config)#track 2 interface ethernet 1/2 line-protocol
    n7k-1(config)#track 5 interface ethernet 1/5 line-protocol
    n7k-1(config)# track 101 list boolean OR
    n7k-1(config-track)# object 1
    n7k-1(config-track)# object 2
    n7k-1(config-track)# object 5
    n7k-1(config-track)# end
    n7k-1(config)# vpc domain 101
    n7k-1(config-vpc-domain)# track 101
    The other site is the same, just 100 instead of 101.
    We are not tracking Port-channel 101, nor the member interfaces of that port channel, as this is the peer link, and apparently tracking upstream interfaces and the peer link is only necessary when you have ONE link and one module per switch.
    As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when checking whether these 3 interfaces are up? Or is line-protocol purely layer 1, so that the vPC isn't downing these member ports at layer 2 when it sees a local vPC domain failure, which would make the track fail?
    I see most people are monitoring upstream layer 3 ports that connect back to a core. What about what we are doing, monitoring upstream (the 3750s) and downstream layer 2 (the other site), interfaces that are part of the very vPC we are trying to protect?
    We wanted all 3 of these to have to be down; for example, if the local M2 card failed, the keepalive would send the message to the remote peer to take over.
    What are the best practices here? Which objects should we be tracking? Should we also track the peer-link Port-channel 101?
    We saw minimal outages using this design: when reloading the M2 modules, usually 1-3 pings were lost between the laptops in the different sites across the stretched vlan. Obviously no outages when breaking any link in a vPC.
    Any wisdom would be greatly appreciated.
    Nick

    Nick,
    I was not talking about the mgmt0 interface. For the vlan that you are testing, a link will be blocked on the port-channel between the two 3750s if the root is on the Nexus vPC pair.
    Logically your topology is like this:
      +------ Nexus Pair ------+
      |                        |
    3750-1------------------3750-2
    Since you have this triangle setup, one of the links will be in a blocking state for any vlan configured on these devices.
    When you are talking about vPC and L3, are you talking about L3 routing protocols or just intervlan routing?
    Intervlan routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with an upstream router using L2 links is not recommended. The following link should give you an idea of what I am talking about here:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the purpose of the tracking feature is to avoid black-holing traffic. It completely depends on your network setup. I don't think you would need to track all the interfaces.
    JayaKrishna

  • Best Practices for Initial Setup

    Hi, I'm going to be helping a friend setup his new Time Capsule and Airport Express for his home network. I've been following some of the threads here about the nightmare problems some people are having with speed, connectivity, reliability, etc.
    So I was thinking it might be helpful to have a list of steps to take when setting up that might avoid some pitfalls. Feel free to add stuff not mentioned.
    1) Should firmware be updated first? What is the current version?
    2) Would doing a '7-Pass Erase' on the TC first potentially avoid some issues (this seems to have helped some folks)?
    3) If you plan on storing other data on the TC yourself (not a TM backup), do you need to partition the drive first? What is the best method for this? Would this have any potential for slowing TM backups?
    4) What is the best method for determining the best channel to set the TC to? (i.e., are there ways to test?)
    5) Should you perform initial TM backups from each computer over Ethernet before attempting wireless?
    6) What tools are there to accurately check your wifi and/or file transfer performance? Are there published standards for what a good working TC 'should' achieve, to compare against?
    THANKS in advance. I'm trying to help them avoid some of the pitfalls people have encountered. Hopefully this can help others as well.

    1) I would update first. AirPort Utility is at 5.3.2 and the firmware at 7.3.2.
    2) Not needed; it's overkill.
    3) Partitioning is a waste of valuable space, and AirPort Utility is not going to do it; you'd need to pull the drive to partition it, and then you'll run into unusual problems over the long term.
    4) When trying different channels, jump by at least 6.

  • Best Practice for Droid Gmail Contacts with Exchange ActiveSync?

    Hi, folks. After going through an Address Book nightmare this past summer, I am attempting to once again get my Contacts straight and clean. I have just started a new job and want to bring my now-clean Gmail contacts over to Exchange. The challenge is avoiding duplicate contacts, and then defining a go-forward strategy for creating NEW contacts so that they reside in both Gmail and Exchange without duplication. Right now, my Droid is the master and everything is fine. However, once I port those contacts from Gmail onto my laptop, all hell breaks loose... Does Verizon have a Best Practice finally documented for this? This past summer I spoke with no less than 5 different Customer Support reps and got 3 different answers... This is not an uncommon problem...

    In parallel to this post, I called Verizon for Technical Support assistance. Seems no progress has been made. My issues this past summer were likely a result of extremely poor quality products from Microsoft, which included Microsoft CRM, Microsoft Lync (a new phone system they are touting which is horrible), and Exchange. As a go-forward strategy, I have exported all Gmail contacts to CSV for Outlook and have imported them to Exchange. All looks good. I am turning off phone visibility of Gmail contacts and will create all new contacts in Exchange.

  • Best Practices for Creating eLearning Content With Adobe

    As agencies are faced with limited resources and travel restrictions, initiatives for eLearning are becoming more popular. Come join us as we discuss best practices and tips for groups new to eLearning content creation, and the best ways to avoid complications as you grow your eLearning library.
    In this webinar, we will take on common challenges that we have seen in eLearning deployments, and provide simple methods to avoid and overcome them. With a little training and some practice, even beginners can create engaging and effective eLearning content using Adobe Captivate and Adobe Presenter. You can even deploy content to your learners with a few clicks using the Adobe Connect Training Platform!
    Sign up today to learn how to:
    -Deliver self-paced training content that won't conflict with operational demands and unpredictable schedules
    -Create engaging and effective training material optimized for knowledge retention
    -Build curriculum featuring rich content such as quizzes, videos, and interactivity
    -Track program certifications required by Federal and State mandates
    Come join us Wednesday May 23rd at 2P ET (11A PT): http://events.carahsoft.com/event-detail/1506/realeyes/
    Jorma_at_RealEyes
    RealEyes Connect

    You can make it happen by creating a private connection for the 40 users via a capi script, and when creating the portlet select the 2nd option in the Users Logged In section. With this, the portlet uses their own private connection every time the user logs in, so it won't ask for a password.
    Another option is the setting for entering a password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran

  • Best practice for replacing a package with equivalent, lots of deps

    I was having CPU issues, which I posted about a while back in another thread; that brought up the fact that it might have been related to the nvidia version at the time (325.15). As a result, I switched to nvidia-beta and nvidia-utils-beta from the AUR.
    Nvidia from extra is now up to 331.20, and I was thinking of switching back so that I wouldn't always be surprised after a kernel update that no screens were found (AUR packages don't tend to flag updates just because linux updated). Not a big deal, as I just have to re-build the AUR package and I'm set. Anyway, I was going to switch back to the standard nvidia packages, but am not sure what to do about the dependencies on libgl, provided by nvidia-libgl-beta (a split-package provided by nvidia-utils-beta):
    $ sudo pacman -S nvidia
    resolving dependencies...
    looking for inter-conflicts...
    :: nvidia and nvidia-beta are in conflict. Remove nvidia-beta? [y/N] y
    :: nvidia-utils and nvidia-utils-beta are in conflict. Remove nvidia-utils-beta? [y/N] y
    error: failed to prepare transaction (could not satisfy dependencies)
    :: nvidia-libgl-beta: requires nvidia-utils-beta
    $ sudo pacman -R nvidia-libgl-beta
    checking dependencies...
    error: failed to prepare transaction (could not satisfy dependencies)
    :: cairo: requires libgl
    :: freeglut: requires libgl
    :: glu: requires libgl
    :: libva: requires libgl
    :: qt4: requires libgl
    :: webkitgtk2: requires libgl
    :: xorg-xdriinfo: requires libgl
    $ sudo pacman -Rc nvidia-libgl-beta
    checking dependencies...
    :: avahi optionally requires gtk3: avahi-discover-standalone, bshell, bssh, bvnc
    :: avahi optionally requires gtk2: gtk2 bindings
    :: avahi optionally requires qt4: qt4 bindings
    :: avahi optionally requires pygtk: avahi-bookmarks, avahi-discover
    :: boost-libs optionally requires openmpi: for mpi support
    :: chromium-libpdf optionally requires chromium: default browser to use plugin in (one of the optional dependencies needs to be installed to use the library)
    :: dconf optionally requires gtk3: for dconf-editor
    :: ghostscript optionally requires gtk2: needed for gsx
    :: gvfs optionally requires gtk3: Recent files support
    :: harfbuzz optionally requires cairo: hb-view program
    :: imagemagick optionally requires librsvg: for SVG support
    :: jasper optionally requires freeglut: for jiv support
    :: jasper optionally requires glu: for jiv support
    :: jre7-openjdk optionally requires gtk2: for the Gtk+ look and feel - desktop usage
    :: libtiff optionally requires freeglut: for using tiffgt
    :: libwebp optionally requires freeglut: vwebp viewer
    :: mjpegtools optionally requires gtk2: glav GUI
    :: nvidia-utils-beta optionally requires gtk2: nvidia-settings
    :: pinentry optionally requires gtk2: for gtk2 backend
    :: pinentry optionally requires qt4: for qt4 backend
    :: smpeg optionally requires glu: to use glmovie
    :: v4l-utils optionally requires qt4
    :: wicd optionally requires wicd-gtk: needed if you want the GTK interface
    :: xdg-utils optionally requires exo: for Xfce support in xdg-open
    Packages (102): anycoloryoulike-icon-theme-0.9.4-2 arpack-3.1.2-2 bleachbit-1.0-1 cairo-1.12.16-1 chromium-31.0.1650.63-1 chromium-pepper-flash-stable-2:11.9.900.170-1
    cups-1.7.0-2 cups-filters-1.0.43-1 cups-pdf-2.6.1-2 darktable-1.4-2 dia-0.97.2-5 dropbox-2.6.2-1 emacs-24.3-4 enblend-enfuse-4.1.1-5 evince-gtk-3.10.3-1
    exo-0.10.2-2 farstream-0.1-0.1.2-3 ffmpeg-1:2.1.1-3 finch-2.10.7-4 firefox-26.0-2 flashplugin-11.2.202.332-1 foomatic-db-engine-2:4.0.9_20131201-1
    freeglut-2.8.1-1 geeqie-1.1-2 gegl-0.2.0-10 gimp-2.8.10-1 girara-gtk3-0.1.9-1 glew-1.10.0-2 glu-9.0.0-2 gmtp-1.3.4-1 gnome-icon-theme-3.10.0-1
    gnome-icon-theme-symbolic-3.10.1-1 gnome-themes-standard-3.10.0-1 gstreamer0.10-bad-plugins-0.10.23-7 gtk-engine-murrine-0.98.2-1 gtk-engines-2.21.0-1
    gtk2-2.24.22-1 gtk3-3.10.6-1 gtkspell-2.0.16-3 guvcview-1.7.2-1 hplip-3.13.11-2 hugin-2013.0.0-5 hwloc-1.8-1 impressive-0.10.3-8 jumanji-20110811-1
    libglade-2.6.4-5 libgxps-0.2.2-3 libpurple-2.10.7-4 libreoffice-base-4.1.4-1 libreoffice-calc-4.1.4-1 libreoffice-common-4.1.4-1 libreoffice-draw-4.1.4-1
    libreoffice-gnome-4.1.4-1 libreoffice-impress-4.1.4-1 libreoffice-writer-4.1.4-1 librsvg-1:2.40.1-3 libtiger-0.3.4-3 libunique-1.1.6-5 libva-1.2.1-1
    libva-vdpau-driver-0.7.4-1 libxfce4ui-4.10.0-1 libxfcegui4-4.10.0-1 lxappearance-0.5.5-1 meshlab-1.3.2-4 mpd-0.18.6-1 obconf-2.0.4-1 octave-3.6.4-6
    openbox-3.5.2-6 openmpi-1.6.5-1 pango-1.36.1-1 pangox-compat-0.0.2-1 pdf2svg-0.2.1-7 pidgin-2.10.7-4 poppler-0.24.5-1 poppler-glib-0.24.5-1
    pygtk-2.24.0-3 python2-cairo-1.10.0-1 python2-gconf-2.28.1-8 python2-opengl-3.0.2-5 qt4-4.8.5-7 qtwebkit-2.3.3-1 r-3.0.2-1 rstudio-desktop-bin-0.98.490-1
    screenkey-0.2-5 scribus-1.4.3-2 thunar-1.6.3-1 tint2-svn-652-3 truecrypt-1:7.1a-2 vlc-2.1.2-1 webkitgtk2-1.10.2-8 wicd-gtk-1.7.2.4-9 wxgtk-3.0.0-2
    wxgtk2.8-2.8.12.1-1 xfburn-0.4.3-6 xorg-utils-7.6-8 xorg-xdriinfo-1.0.4-3 xscreensaver-arch-logo-5.26-3 zathura-0.2.6-1 zathura-pdf-mupdf-0.2.5-3
    zukitwo-theme-openbox-20111021-3 zukitwo-themes-20131210-1 nvidia-libgl-beta-331.38-1
    Total Removed Size: 1756.12 MiB
    :: Do you want to remove these packages? [Y/n]
    As you might imagine, I'd prefer not to remove all of those packages just to switch my libgl providing package and then re-install.
    In digging around, I found this entry on downgrading packages without respecting dependencies.
    Is that the best method for doing what I describe above as well? Would I do something like `pacman -Rd nvidia-utils-beta` (without X running) and then install the packages from extra?

    It should be similar to switching to the nouveau driver: https://wiki.archlinux.org/index.php/Nouveau
    Just:
    # pacman -Rdds nvidia-beta nvidia-utils-beta
    # pacman -S nvidia nvidia-utils

  • Can I connect with built-in laptop wireless card?

    Hi, I have a WRT54G set up and working fine on my main PC, connected to the cable modem, with a PC and laptop using Linksys adapters. BUT I am trying to connect my third laptop using its built-in internal wireless and was wondering how to connect this way. It was easy with the adapters because I used the Linksys software that came with them. But with the built-in card, which is an Intel PRO, it states the "wireless connection is not connected", and when I access the properties it tells me that I should configure it with the software that came with the device. I'm running XP as the OS. I'm stumped. Any advice? Thanks.

    In the non-working computer, temporarily turn off the software firewall, and see if this helps.
    Also, what wireless card are you using?  The Intel Pro Wireless 2200bg card is known to have connection problems that were fixed with the latest driver.  If you have this card, go to the Intel web site and download and install the latest driver.

  • SX20 and SX10 best practices for small conference rooms using built in Display Speakers

    Hi, 
    I'm planning to deploy some small meeting rooms using SX10 and SX20 codecs, and I was wondering if someone could give me some key points to consider:
    1. Display recommendation: nice built-in speakers, low delay (good performance with the codec's echo canceller). I wish to use a 60-70 inch LED TV (brand and model not yet defined). I would like some feedback on displays with good audio and echo cancellation performance, based on your experience.
    2. What would be the drawbacks of using SX10 and SX20 in larger meeting rooms maybe 10 or 12 people?
    3. When should an external microphone be used, and when is it recommended to use only the internal microphone?
    Thanks in advance for your help. 
    Best Regards,  have a nice day!

    Pretty well any modern display should work well - different people are going to have different ideas about whether the speakers are any good or not, so you'll have to listen to a few yourself and make up your mind what you think is "better". The echo cancellation can be tweaked manually on the codec to suit the display if required (rather than leaving it on the "auto" setting which, in our installations, has caused more trouble than it's worth).
    The main drawback of using the SX10, or the cheaper of the range of the SX20s, is the camera and the amount of zoom.  In a larger room, you're not going to get a good close up view with a 2.5x camera.  I'd suggest for a bigger room, you look at the 12x cameras.
    As a general rule, use additional microphone(s) when you can't get someone within a 3m radius of one of the other microphones. So in a larger room, you may need many.
    Wayne
    Please remember to rate responses and to mark your question as answered if appropriate.

  • What is best practice for using a SAN with ASM in an 11gR2 RAC installation

    I'm setting up a RAC environment. Planning on using Oracle 11g release 2 for RAC & ASM, although the db will initially be 10g r2. OS: RedHat. I have a SAN available to me and want to know the best way to utilise that via ASM.
    I've chosen ASM as it allows me to store everything, including the voting and cluster registry files.
    So I think I'll need three disk groups: Data (+spfile, control#1, redo#1, cluster files#1), Flashback (+control#2, redo#2, archived redo, backups, cluster files#2) and Cluster (cluster files#3). So that last one is tiny.
    The SAN and ASM are both capable of doing lots of the same work, and it's a waste to get them both to stripe & mirror.
    If I let the SAN do the redundancy work, then I can minimize the data transfer to the SAN. The administrative load of managing the discs is up to the Sys Admin, rather than the DBA, so that's attractive as well.
    If I let ASM do the work, it can be intelligent about the data redundancy it uses.
    It looks like I should have LUNs (Logical Unit Numbers) with RAID 0+1, and then mark the disk groups as external redundancy.
    Does this seem the best option ?
    Can I avoid this third disk group just for the voting and cluster registry files ?
    Am I OK to have this lower version of Oracle 10gr2 DB on a RAC 11gr2 and ASM 11gr2 ?
    TIA, Duncan

    Hi Duncan,
    if your storage uses SAN RAID 0+1 and you use "External" redundancy, then ASM will not mirror (only stripe).
    Hence theoretically 1 LUN per diskgroup would be enough. (External redundancy will also only create 1 voting disk, hence only one LUN is needed).
    However there are 2 things to note:
    -> Tests have shown that for the OS it is better to have multiple LUNs, since the I/O can be better handled. Therefore it is recommended to have 4 disks in a diskgroup.
    -> LUNs in a diskgroup should be the same size and should have the same I/O characteristics. If you bear in mind that maybe your database will one day need more space (more disks), then you should use a disk size which can easily be added, without wasting too much space.
    E.g.:
    If you have a 900GB database, then does it make sense to only use 1 LUN of 1TB?
    What happens if the database grows, but only grows slightly above 1TB? Then you would have to add another 1TB disk... You lose a lot of space.
    Hence it makes more sense to use 4 disks of 250GB each, since the "disks" needed to grow the disk groups can be added more easily (just add another 250G disk).
    O.k., there is also the possibility to resize a disk in ASM, but it is a lot easier to simply add an additional LUN.
    PS: If you use a "NORMAL" redundancy diskgroup, then you need at least 3 disks in your diskgroup (in 3 failgroups) to be able to handle the 3 voting disks.
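    For illustration, creating the external-redundancy data diskgroup from four equal LUNs could look something like this from the ASM (grid) environment (a sketch only; the disk paths and the diskgroup name are hypothetical and depend on how the LUNs are presented to the OS):
    sqlplus / as sysasm
    SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      2    DISK '/dev/mapper/data_lun1', '/dev/mapper/data_lun2',
      3         '/dev/mapper/data_lun3', '/dev/mapper/data_lun4';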
    Hope that helps a little.
    Sebastian

  • Best practice for redesigning a website with CSS

    I have a site I created in the early 2000s in Dreamweaver, back in the Macromedia days: www.kid-ebooks.com. It is basic HTML using a lot of tables, some Flash, etc. I have since created many other sites using CSS and Dreamweaver (currently on CS5.5), and I now want to do a complete redesign of Kid-ebooks using CSS. I also, however, want to keep a lot of the existing content, links, etc. I don't know what the best approach is: try to apply CSS to the existing site, design in parallel and then switch when ready, or copy and paste content? What is the recommendation for starting over, but keeping some content?
    Thanks,
    Dave

    I've used this formula with success.  Make a back-up site beforehand.
    Use Find & Replace > Source code > Tags >  to strip out all of the <table> <td> <tr> <font> <background> <bold> and any other deprecated code from the site.  Ultimately, you will be left with unstyled content, hyperlinks, & images.  Validate code and fix any errors.
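    If you would rather do the stripping outside Dreamweaver, a rough command-line equivalent is possible (a sketch only; regex-based tag stripping is crude and the tag list here is just an example, so run it on the backup copy and review the results):
    # strip opening and closing table/tr/td/font tags from every .html file,
    # keeping a .bak copy of each original
    sed -i.bak -E 's~</?(table|tr|td|font)[^>]*>~~g' *.html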
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists 
    http://alt-web.com/
    http://twitter.com/altweb
