Best practice for vPC domain failover with one M2 per N7K switch and two sups

I have been testing some failover scenarios with four Nexus 7000 switches, each with an M2 and an F2 card. Each Nexus has two supervisor modules.
I have three VDCs: Admin, F2, and M2.
All ports on the M2 cards are in the M2 VDC and all ports on the F2 cards are in the F2 VDC.
All vPCs are connected on the M2 cards and configured in the M2 VDC.
Two Nexus switches represent each "site".
In one site we have vPC domain 100.
The vPC peer link is connected on ports E1/3 and E1/4 in port-channel 100.
The peer-keepalive is configured to use the management ports. This is patched from both sups into our 3750s (it will eventually move to an out-of-band management switch).
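For reference, the domain setup described above looks roughly like the sketch below. This is a sketch only; the keepalive addresses (192.168.1.1/192.168.1.2) are placeholders, not our real mgmt0 addressing:

n7k-1(config)# vpc domain 100
n7k-1(config-vpc-domain)# peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
n7k-1(config-vpc-domain)# exit
n7k-1(config)# interface ethernet 1/3-4
n7k-1(config-if-range)# switchport
n7k-1(config-if-range)# channel-group 100 mode active
n7k-1(config-if-range)# exit
n7k-1(config)# interface port-channel 100
n7k-1(config-if)# switchport mode trunk
n7k-1(config-if)# vpc peer-link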
Please see the diagram.
There are two vPCs (1 and 2) connected at each site; these represent the virtual port channels that connect back to a pair of 3750Xs (the Layer 2 switch icons in the diagram).
There is also a third vPC that connects the four Nexus switches together (Po172).
We are stretching VLAN 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and to minimise outages from link failures, module failures, switch failures, sup failures, etc.
Only the management VLANs (100, 101) are allowed on the port-channel between the 3750s, so VLAN 900 spanning tree shouldn't have to make this decision (see the sketch below).
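For illustration, the pruning on the 3750-to-3750 port-channel looks something like the following; Po10 is a placeholder name, not necessarily our real channel number:

3750-1(config)# interface Port-channel10
3750-1(config-if)# switchport trunk encapsulation dot1q
3750-1(config-if)# switchport mode trunk
3750-1(config-if)# switchport trunk allowed vlan 100,101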
We are only concerned with Layer 2 for this part of the testing.
As we are connecting the vPC peer link to only one module in each switch (a single M2), we have configured object tracking as follows:
n7k-1(config)# track 1 interface ethernet 1/1 line-protocol
n7k-1(config)# track 2 interface ethernet 1/2 line-protocol
n7k-1(config)# track 5 interface ethernet 1/5 line-protocol
n7k-1(config)# track 101 list boolean or
n7k-1(config-track)# object 1
n7k-1(config-track)# object 2
n7k-1(config-track)# object 5
n7k-1(config-track)# exit
n7k-1(config)# vpc domain 101
n7k-1(config-vpc-domain)# track 101
The other site is the same, just with domain 100 instead of 101.
We are not tracking port-channel 101, nor the member interfaces of that port channel, because it is the peer link, and apparently tracking the upstream interfaces and the peer link is only necessary when you have one link and one module per switch.
As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when checking whether these three interfaces are up? Or is line-protocol purely Layer 1, so that vPC isn't downing these member ports at Layer 2 when it sees a local vPC domain failure, causing the track to fail?
I see most people monitoring upstream Layer 3 ports that connect back to a core. What about what we are doing: monitoring upstream (the 3750s) and downstream (the other site) Layer 2 interfaces that are part of the very vPCs we are trying to protect?
We wanted the track to fail only when all three of these are down; for example, if the local M2 card failed, the keepalive would send the message to the remote peer to take over.
What are the best practices here? Which objects should we be tracking? Should we also track the peer-link port-channel 101?
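If we did also track the peer-link, my understanding is it would just be another object in the same list, something like the sketch below (track IDs 10 and 102 are arbitrary numbers I picked; please correct me if this is the wrong approach):

n7k-1(config)# track 10 interface port-channel 101 line-protocol
n7k-1(config)# track 102 list boolean or
n7k-1(config-track)# object 10
n7k-1(config-track)# object 1
n7k-1(config-track)# object 2
n7k-1(config-track)# object 5
n7k-1(config-track)# exit
n7k-1(config)# vpc domain 101
n7k-1(config-vpc-domain)# track 102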
We saw minimal outages using this design: when reloading the M2 modules, usually one to three pings were lost between the laptops in the different sites across the stretched VLAN. Obviously there were no outages when breaking any single link in a vPC.
Any wisdom would be greatly appreciated.
Nick

Nick,
I was not talking about the mgmt0 interface. The VLAN that you are testing will have a blocked link on the port-channel between the two 3750s if the root is on the Nexus vPC pair.
Logically your topology is like this:
        Nexus Pair
       /          \
3750-1 ------------ 3750-2
Since you have this triangle setup, spanning tree will put one of the links into blocking state for any VLAN configured across these devices.
When you are talking about vPC and L3, are you talking about L3 routing protocols or just inter-VLAN routing?
Inter-VLAN routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with an upstream router over L2 links is not recommended. The following link should give you an idea of what I am talking about here:
http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
HSRP is fine.
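For instance, a minimal HSRP setup on the stretched VLAN would look something like the sketch below; the SVI number matches your VLAN 900, but the 10.90.0.x addressing is just a placeholder:

n7k-1(config)# feature hsrp
n7k-1(config)# feature interface-vlan
n7k-1(config)# interface vlan 900
n7k-1(config-if)# no shutdown
n7k-1(config-if)# ip address 10.90.0.2/24
n7k-1(config-if)# hsrp 1
n7k-1(config-if-hsrp)# ip 10.90.0.1
n7k-1(config-if-hsrp)# priority 110
n7k-1(config-if-hsrp)# preempt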
As mentioned, the purpose of the tracking feature is to avoid black-holing traffic. It completely depends on your network setup. I don't think you would need to track all the interfaces.
JayaKrishna

Similar Messages

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating Oracle ATG with an external web service? Is it using the Integration Repository, or calling the web service directly from the Java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead depending on the operation you are doing; I have never used the Integration Repository for 3rd-party integration, therefore I am not able to comment on it.
    Calling the service directly as a Java client is an easy approach, and you can use the ATG component framework to support that by making the endpoint, security credentials, etc. configurable properties.
    Cheers
    R

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
    I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at-least-once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
    The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
    I'm using WebLogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS servers' messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
    For more discussion, see my post today on the topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa-8.1 migration white paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
              Tom

  • Best practices for speeding up Mail with large numbers of mail?

    I have over 100,000 mails going back about 7 years in multiple accounts in dozens of folders using up nearly 3GB of disk space.
    Things are starting to drag - particularly when it comes to opening folders.
    I suspect the main problem is having large numbers of mails in those folders that are the slowest - like maybe a few thousand at a time or more.
    What are some best practices for dealing with very large amounts of mails?
    Are smart mailboxes faster to deal with? I would think they would be slower, because the original emails would tend not to get filed as often, leading to even larger mailboxes. And searching takes a lot of time, doesn't it?
    Are there utilities for auto-filing messages in large mailboxes to, say, divide them up by month to make the mailboxes smaller? Would that speed things up?
    Or what about moving older messages out of mail to a database where they are still searchable but not weighing down on Mail itself?
    Suggestions are welcome!
    Thanks!
    doug

    Smart mailboxes obviously cannot be any faster than real mailboxes, and storing large amounts of mail in a single mailbox is asking for trouble. Rather than organizing mail in mailboxes by month, however, what I like to do is organize it by year, with subfolders by topic for each year. You may also want to take a look at the following article:
    http://www.hawkwings.net/2006/08/21/can-mailapp-cope-with-heavy-loads/
    That said, it could be that you need to re-create the index, which you can do as follows:
    1. Quit Mail if it’s running.
    2. In the Finder, go to ~/Library/Mail/. Make a backup copy of this folder, just in case something goes wrong, e.g. by dragging it to the Desktop while holding the Option (Alt) key down. This is where all your mail is stored.
    3. Locate Envelope Index and move it to the Trash. If you see an Envelope Index-journal file there, delete it as well.
    4. Move any “IMAP-”, “Mac-”, or “Exchange-” account folders to the Trash. Note that you can do this with IMAP-type accounts because they store mail on the server and Mail can easily re-create them. DON’T trash any “POP-” account folders, as that would cause all mail stored there to be lost.
    5. Open Mail. It will tell you that your mail needs to be “imported”. Click Continue and Mail will proceed to re-create Envelope Index -- Mail says it’s “importing”, but it just re-creates the index if the mailboxes are already in Mail 2.x format.
    6. As a side effect of having removed the IMAP account folders, those accounts may be in an “offline” state now. Do Mailbox > Go Online to bring them back online.
    Note: For those not familiarized with the ~/ notation, it refers to the user’s home folder, i.e. ~/Library is the Library folder within the user’s home folder.

  • Best KVM for using 2 computers with one monitor?

    Hi. I am using a Dell U2711 monitor with a main, big, wonderful machine that has an NVIDIA display card. I am considering getting an additional, small machine to install GNU/Linux on, and wondering about using the one monitor for both computers.
    So, can anyone recommend the best KVM switch for these specifications, or what am I to look for in a KVM switch, assuming I'm using CS5.5 Master Collection on the big machine and don't want to lose quality by adding the switch?
    thanks

    I've been using the Rosewill RKV-2DVI for about a year with an XP 32-bit machine and a W7 64-bit machine.
    http://www.rosewill.com/products/1687/ProductDetail_Overview.htm
    http://www.rosewill.com/Mgnt/Uploads2/AttachmentForProduct/usermanual-rkv-2dvi.pdf

  • Best Practice for Migration of BO  from one server to another

    Hi All,
    I would like to know the best practice for migration of BO from one server to another.
    I have installed BO XI R2 on my server.
    Thanks,
    Anendu Bothra

    You need to copy your input and output file stores from the old server to the new server. By default these are located in the <Business Objects install path>\FileStore directory.
    Then you need to stop the CMS. Right-click the CMS, click the Configuration tab, and then click Specify.
    Choose Copy, then click OK.
    Choose the version information for the source CMS database.
    Select the database type for the source CMS database, and then specify its database information (including host name, user name, and password).
    Select the database type for the destination CMS database, and then specify its database information (including host name, user name, and password).
    When the CMS database has finished copying, click OK.
    Once this process has completed, start the CMS and click Update Objects (located at the top of the CCM).
    I'd advise taking full backups beforehand.

  • Best practice for TM on AEBS with multiple macs

    Like many others, I just plugged a WD 1TB drive (mac ready) into the AEBS and started TM.
    But in reading here and elsewhere I'm realizing that there might be a better way.
    I'd like suggestions for best practices on how to setup the external drive.
    The environment is...
    ...G4 Mac mini, 10.4 PPC - this is the system I'm moving from; it has all the iPhoto and iTunes content and is being left untouched until I get all the TM/backup setup and tested. But it will go to 10.5 eventually.
    ...Intel iMac, 10.5 soon to be 10.6
    ...Intel Mac mini, 10.5, soon to be 10.6
    ...AEBS with (mac ready) WD-1TB usb attached drive.
    What I'd like to do...
    ...use the one WD 1TB drive for all three backups, AND keep a copy of the system and iLife DVDs to recover from.
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD 1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    I've also read about the benefits of keeping a copy of the install DVDs on the external drive... but this raises more questions.
    How do I get an image of the install DVD onto the 1TB drive?
    How do I do that? (Install? ISO image? Straight copy?)
    And what about the 2nd disk (for iLife?) - same partition, a different one, ISO image, straight copy?
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    And if I have to boot the OS from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs local vs. network.)
    I know its a lot of question but here are the two objectives...
    1. Use TM in typical fashion, to recover the occasional deleted file.
    2. The ability to perform a bare-metal point-in-time recovery (not always to the very last backup, but sometimes to a day or two before.)

    dmcnish wrote:
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    Hi, and welcome to the forums.
    You can, but you really only need a separate partition for the Mac that's backing-up directly. It won't have a Sparse Bundle, but a Backups.backupdb folder, and if you ever have or want to delete all of them (new Mac, certain hardware repairs, etc.) you can just erase the partition.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD 1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    Right.
    I've also read about the benefits of keeping a copy of the install DVDs on the external drive... but this raises more questions.
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    I don't think so. I've never tried it, but even if it works, it will be very slow. So connect via F/W or USB (the PPC Mac probably can't boot from USB, but the Intels can).
    And if I have to boot the OS from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs local vs. network.)
    That's actually two different questions. To do a full system restore, you don't load OSX at all, but you do need the Leopard Install disc, because it has the installer. See item #14 of the Frequently Asked Questions *User Tip* at the top of this forum.
    If for some reason you do install OSX, then you can either "transfer" (as part of the installation) or "Migrate" (after restarting, via the Migration Assistant app in your Applications/Utilities folder) from your TM backups. See the *Erase, Install, & Migrate* section of the Glenn Carter - Restoring Your Entire System / Time Machine *User Tip* at the top of this forum.
    In either case, If the backups were done wirelessly, you must transfer/migrate wirelessly (although you can speed it up by connecting via Ethernet).

  • Is it a best practice to have a template with one master page?

    I am a newbie FM 11 writer and am cleaning up some unorganized books. Should I copy one set of master pages to all files in the book? Currently my TOC and certain other files have unique master pages. I would like to set up our books using best practices and would like input from the community. Thanks.

    There are two schools of thought on this. The specific sub-template approach or the "kitchen sink" approach.
    In the "kitchen sink" (i.e. everything, including the...) approach, the FM template is loaded with everything required for the project in a single file. It's simple to deploy, import it to all files and you're good to go. However, the author may have to deal with all sorts of superfluous tags and page layouts in some specific file types, like the cover pages, TOC, Index and other generated files. The onus is on the author to select the correct items to use from the multitude of choices.
    The sub-template approach is a modular approach where one creates the various components in separate template files, e.g. paragraph and character tags, tables, page layouts, etc., and combines them to create specific templates for the various book components. These component-combined templates have only the minimum that is required for each type of document component. This is a lego-like approach and it provides more flexibility (IMHO) with modifying, updating and creating new templates. This is easier (perhaps less intimidating would be a better term) for the author to use, as their choices are much more limited in any given context. However, they do have to apply the correct templates to the specific book components.
    In all cases, you need to document the usage of all components in the template(s), so authors will know the intent of each and every tag, table, style, page layout, etc.

  • Best Practice for Droid Gmail Contacts with Exchange ActiveSync?

    Hi, folks. After going through an Address Book nightmare this past summer, I am attempting to once again get my contacts straight and clean. I have just started a new job and want to bring my now-clean Gmail contacts over to Exchange. The challenge is avoiding duplicate contacts, and then defining a go-forward strategy for creating NEW contacts so that they reside in both Gmail and Exchange without duplication. Right now, my Droid is the master and everything is fine. However, once I port those contacts from Gmail onto my laptop, all hell breaks loose... Does Verizon finally have a best practice documented for this? This past summer I spoke with no fewer than 5 different customer support reps and got 3 different answers... This is not an uncommon problem...

    In parallel to this post, I called Verizon for technical support assistance. It seems no progress has been made. My issues this past summer were likely a result of extremely poor-quality products from Microsoft, which included Microsoft CRM, Microsoft Lync (a new phone system they are touting, which is horrible), and Exchange. As a go-forward strategy, I have exported all Gmail contacts to CSV for Outlook and have imported them into Exchange. All looks good. I am turning off phone visibility of Gmail contacts and will create all new contacts in Exchange.

  • Best Practices for Creating eLearning Content With Adobe

    As agencies are faced with limited resources and travel restrictions, initiatives for eLearning are becoming more popular. Come join us as we discuss best practices and tips for groups new to eLearning content creation, and the best ways to avoid complications as you grow your eLearning library.
    In this webinar, we will take on common challenges that we have seen in eLearning deployments, and provide simple methods to avoid and overcome them. With a little training and some practice, even beginners can create engaging and effective eLearning content using Adobe Captivate and Adobe Presenter. You can even deploy content to your learners with a few clicks using the Adobe Connect Training Platform!
    Sign up today to learn how to:
    -Deliver self-paced training content that won't conflict with operational demands and unpredictable schedules
    -Create engaging and effective training material optimized for knowledge retention
    -Build curriculum featuring rich content such as quizzes, videos, and interactivity
    -Track program certifications required by Federal and State mandates
    Come join us Wednesday May 23rd at 2P ET (11A PT): http://events.carahsoft.com/event-detail/1506/realeyes/
    Jorma_at_RealEyes
    RealEyes Connect

    You can make it happen by creating a private connection for the 40 users via a CAPI script and, when creating the portlet, selecting the 2nd option in the "Users Logged In" section. That way the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Another thing: there is an option to enter the password or not in ASC in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran

  • Best practice for Near Real time with Oracle

    Hi,
    We plan to run a scenario to demonstrate DS's ability to handle real-time or near-real-time processing.
    We have identified different solutions and are trying to choose the most appropriate:
    - Oracle 11g + Sybase Replication server + Data Services
    - Oracle 11g + Data services : real time jobs ( xml messages )
    - Oracle 11g + Data Services : CDC scenario
    I need to load data into a SAP Sybase IQ database: is there any known issue with any of the previous solutions?
    The challenge is to load a set of 10 tables totaling more than 5 million rows.
    Thanks for your advice,
    Guillaume

    Hi!
    Didn't anyone have this requirement for migrations? I have tested with the MySQL SELECT ... INTO OUTFILE clause. It seems to work for simple data types - we're now testing with blobs...
    Markus

  • Best practice for replacing a package with equivalent, lots of deps

    I was having CPU issues, which I posted about a while back in another thread; that discussion suggested it might have been related to the nvidia version at that time (325.15). As a result, I switched to nvidia-beta and nvidia-utils-beta from the AUR.
    nvidia from extra is now up to 331.20, and I was thinking of switching back so that I wouldn't always be surprised after a kernel update that no screens were found (AUR packages don't tend to flag updates just because linux updated). Not a big deal, as I just have to rebuild the AUR package and I'm set. Anyway, I was going to switch back to the standard nvidia packages, but am not sure what to do about the dependencies on libgl, provided by nvidia-libgl-beta (a split package provided by nvidia-utils-beta):
    $ sudo pacman -S nvidia
    resolving dependencies...
    looking for inter-conflicts...
    :: nvidia and nvidia-beta are in conflict. Remove nvidia-beta? [y/N] y
    :: nvidia-utils and nvidia-utils-beta are in conflict. Remove nvidia-utils-beta? [y/N] y
    error: failed to prepare transaction (could not satisfy dependencies)
    :: nvidia-libgl-beta: requires nvidia-utils-beta
    $ sudo pacman -R nvidia-libgl-beta
    checking dependencies...
    error: failed to prepare transaction (could not satisfy dependencies)
    :: cairo: requires libgl
    :: freeglut: requires libgl
    :: glu: requires libgl
    :: libva: requires libgl
    :: qt4: requires libgl
    :: webkitgtk2: requires libgl
    :: xorg-xdriinfo: requires libgl
    $ sudo pacman -Rc nvidia-libgl-beta
    checking dependencies...
    :: avahi optionally requires gtk3: avahi-discover-standalone, bshell, bssh, bvnc
    :: avahi optionally requires gtk2: gtk2 bindings
    :: avahi optionally requires qt4: qt4 bindings
    :: avahi optionally requires pygtk: avahi-bookmarks, avahi-discover
    :: boost-libs optionally requires openmpi: for mpi support
    :: chromium-libpdf optionally requires chromium: default browser to use plugin in (one of the optional dependencies needs to be installed to use the library)
    :: dconf optionally requires gtk3: for dconf-editor
    :: ghostscript optionally requires gtk2: needed for gsx
    :: gvfs optionally requires gtk3: Recent files support
    :: harfbuzz optionally requires cairo: hb-view program
    :: imagemagick optionally requires librsvg: for SVG support
    :: jasper optionally requires freeglut: for jiv support
    :: jasper optionally requires glu: for jiv support
    :: jre7-openjdk optionally requires gtk2: for the Gtk+ look and feel - desktop usage
    :: libtiff optionally requires freeglut: for using tiffgt
    :: libwebp optionally requires freeglut: vwebp viewer
    :: mjpegtools optionally requires gtk2: glav GUI
    :: nvidia-utils-beta optionally requires gtk2: nvidia-settings
    :: pinentry optionally requires gtk2: for gtk2 backend
    :: pinentry optionally requires qt4: for qt4 backend
    :: smpeg optionally requires glu: to use glmovie
    :: v4l-utils optionally requires qt4
    :: wicd optionally requires wicd-gtk: needed if you want the GTK interface
    :: xdg-utils optionally requires exo: for Xfce support in xdg-open
    Packages (102): anycoloryoulike-icon-theme-0.9.4-2 arpack-3.1.2-2 bleachbit-1.0-1 cairo-1.12.16-1 chromium-31.0.1650.63-1 chromium-pepper-flash-stable-2:11.9.900.170-1
    cups-1.7.0-2 cups-filters-1.0.43-1 cups-pdf-2.6.1-2 darktable-1.4-2 dia-0.97.2-5 dropbox-2.6.2-1 emacs-24.3-4 enblend-enfuse-4.1.1-5 evince-gtk-3.10.3-1
    exo-0.10.2-2 farstream-0.1-0.1.2-3 ffmpeg-1:2.1.1-3 finch-2.10.7-4 firefox-26.0-2 flashplugin-11.2.202.332-1 foomatic-db-engine-2:4.0.9_20131201-1
    freeglut-2.8.1-1 geeqie-1.1-2 gegl-0.2.0-10 gimp-2.8.10-1 girara-gtk3-0.1.9-1 glew-1.10.0-2 glu-9.0.0-2 gmtp-1.3.4-1 gnome-icon-theme-3.10.0-1
    gnome-icon-theme-symbolic-3.10.1-1 gnome-themes-standard-3.10.0-1 gstreamer0.10-bad-plugins-0.10.23-7 gtk-engine-murrine-0.98.2-1 gtk-engines-2.21.0-1
    gtk2-2.24.22-1 gtk3-3.10.6-1 gtkspell-2.0.16-3 guvcview-1.7.2-1 hplip-3.13.11-2 hugin-2013.0.0-5 hwloc-1.8-1 impressive-0.10.3-8 jumanji-20110811-1
    libglade-2.6.4-5 libgxps-0.2.2-3 libpurple-2.10.7-4 libreoffice-base-4.1.4-1 libreoffice-calc-4.1.4-1 libreoffice-common-4.1.4-1 libreoffice-draw-4.1.4-1
    libreoffice-gnome-4.1.4-1 libreoffice-impress-4.1.4-1 libreoffice-writer-4.1.4-1 librsvg-1:2.40.1-3 libtiger-0.3.4-3 libunique-1.1.6-5 libva-1.2.1-1
    libva-vdpau-driver-0.7.4-1 libxfce4ui-4.10.0-1 libxfcegui4-4.10.0-1 lxappearance-0.5.5-1 meshlab-1.3.2-4 mpd-0.18.6-1 obconf-2.0.4-1 octave-3.6.4-6
    openbox-3.5.2-6 openmpi-1.6.5-1 pango-1.36.1-1 pangox-compat-0.0.2-1 pdf2svg-0.2.1-7 pidgin-2.10.7-4 poppler-0.24.5-1 poppler-glib-0.24.5-1
    pygtk-2.24.0-3 python2-cairo-1.10.0-1 python2-gconf-2.28.1-8 python2-opengl-3.0.2-5 qt4-4.8.5-7 qtwebkit-2.3.3-1 r-3.0.2-1 rstudio-desktop-bin-0.98.490-1
    screenkey-0.2-5 scribus-1.4.3-2 thunar-1.6.3-1 tint2-svn-652-3 truecrypt-1:7.1a-2 vlc-2.1.2-1 webkitgtk2-1.10.2-8 wicd-gtk-1.7.2.4-9 wxgtk-3.0.0-2
    wxgtk2.8-2.8.12.1-1 xfburn-0.4.3-6 xorg-utils-7.6-8 xorg-xdriinfo-1.0.4-3 xscreensaver-arch-logo-5.26-3 zathura-0.2.6-1 zathura-pdf-mupdf-0.2.5-3
    zukitwo-theme-openbox-20111021-3 zukitwo-themes-20131210-1 nvidia-libgl-beta-331.38-1
    Total Removed Size: 1756.12 MiB
    :: Do you want to remove these packages? [Y/n]
    As you might imagine, I'd prefer not to remove all of those packages just to switch my libgl-providing package and then re-install them.
    In digging around, I found this entry on downgrading packages without respecting dependencies.
    Is that the best method for doing what I describe above as well? Would I do something like `pacman -Rd nvidia-utils-beta` (without X running) and then install the packages from extra?

    It should be similar to switching to the nouveau driver: https://wiki.archlinux.org/index.php/Nouveau
    Just:
    # pacman -Rdds nvidia-beta nvidia-utils-beta
    # pacman -S nvidia nvidia-utils

  • Best Practice for Many Complex Elements In One PNG File

    I use Fireworks all the time, but I'm still embarrassingly ignorant about how to do some things properly.
    For instance, when I do mockups of a new customer's homepage, there may be many small or medium-sized complex objects on the page. By "objects" I mean things like lots of modules advertising their different services, each module with its own graphics, text, backgrounds, etc., all with lots of different effects applied.
    Should all of these things reside in one large PNG file, or does Fireworks offer some way for me to create some of these as separate PNG files and just drop an "instance" of them into the main file? I read something about adding files to the Common Library, but I will only need these files for one single customer, so that may be overkill.
    Or do I even need to worry about separating them at all, and should I just continue having everything for a single customer in one big single PNG file?
    Please let me know if my question is not clear.

    In reading a bit further, it looks like a good option for me might be to just select everything that makes up a specific module and "group" it. That way I can move it around and turn it off and on more easily.
    So I'm thinking that unless there's something I'm going to reuse more than once in a document (which in this case there isn't), it's fine to have everything all in one PNG? And the only time I use the Common/Document Library is when I'm reusing things?
    I'm just trying to wrap my head around what all my options are in Fireworks, as so far my experience is limited to just dragging around rectangles and shapes and stuff on one single canvas to create website mockups.

  • What is the best practice for formatting ext drives to be used by both Mac and PC users?

    I have scanned what is available in the community about this issue and am wondering if anyone has anything new to add. I need to create backup drives of department files (large photo and video) that both we Mac users and the PC users can access. The only consistent recommendation I see is FAT32, but that seems to have size limitations. Any other suggestions? Thanks.

    The problem with FAT32 is that files of 4 GB or bigger can't be stored. In this case, and as you have a Mac OS X version that supports it (10.6.5), you can use exFAT, which is compatible with Windows XP, Vista, 7, 8 and Mac OS X (10.6.5 and later).
    To format a drive with exFAT, do this on a PC. If you do it on a Mac, a PC won't be able to read it. Apart from this, you will be able to store the files you need on external drives, as exFAT hasn't got the limitations that FAT32 has.
    Another option is to use NTFS, but you will need a third-party application on your Mac to be able to write to your external drives.

  • What is best practice for using a SAN with ASM in an 11gR2 RAC installation

    I'm setting up a RAC environment. Planning on using Oracle 11g release 2 for RAC & ASM, although the db will initially be 10g r2. OS: RedHat. I have a SAN available to me and want to know the best way to utilise that via ASM.
    I've chosen ASM as it allows me to store everything, including the voting and cluster registry files.
    So I think I'll need three disk groups: Data (+spfile, control#1, redo#1, cluster files#1), Flashback (+control#2, redo#2, archived redo, backups, cluster files#2) and Cluster (cluster files#3). So that last one is tiny.
    The SAN and ASM are both capable of doing lots of the same work, and it's a waste to get them both to stripe & mirror.
    If I let the SAN do the redundancy work, then I can minimize the data transfer to the SAN. The administrative load of managing the disks falls to the sysadmin rather than the DBA, so that's attractive as well.
    If I let ASM do the work, it can be intelligent about the data redundancy it uses.
    It looks like I should have LUNs (Logical Unit Numbers) with RAID 0+1, and then mark the disk groups as external redundancy.
    Does this seem like the best option?
    Can I avoid the third disk group that exists just for the voting and cluster registry files?
    Am I OK to have the lower-version Oracle 10gR2 database on RAC 11gR2 and ASM 11gR2?
    TIA, Duncan

    Hi Duncan,
    if your storage uses SAN RAID 0+1 and you use "External" redundancy, then ASM will not mirror (it will only stripe).
    Hence, theoretically, 1 LUN per disk group would be enough. (External redundancy will also only create 1 voting disk, hence only one LUN is needed.)
    However, there are 2 things to note:
    -> Tests have shown that it is better for the OS to have multiple LUNs, since the I/O can be handled better. Therefore it is recommended to have 4 disks in a disk group.
    -> LUNs in a disk group should be the same size and should have the same I/O characteristics. If you bear in mind that your database may one day need more space (more disks), then you should use a disk size that can easily be added without wasting too much space.
    E.g.:
    If you have a 900 GB database, does it make sense to only use 1 LUN of 1 TB?
    What happens if the database grows, but only slightly above 1 TB? Then you would have to add another 1 TB disk... You lose a lot of space.
    Hence it makes more sense to use 4 disks of 250 GB each, since the disk group can then be grown more easily (just add another 250 GB disk).
    OK, there is also the possibility to resize a disk in ASM, but it is a lot easier to simply add an additional LUN.
    PS: If you use a "NORMAL" redundancy disk group, then you need at least 3 disks in your disk group (in 3 failgroups) to be able to handle the 3 voting disks.
    Hope that helps a little.
    Sebastian
