Best practice for Near Real time with Oracle

Hi,
We plan to run a scenario to demonstrate Data Services' ability to handle real-time or near real-time requirements.
We have identified several solutions and are trying to choose the most appropriate one:
- Oracle 11g + Sybase Replication Server + Data Services
- Oracle 11g + Data Services: real-time jobs (XML messages)
- Oracle 11g + Data Services: CDC scenario
I need to load data into a SAP Sybase IQ database: are there any known issues with any of these solutions?
The challenge is to load a set of 10 tables totalling more than 5 million rows.
Thanks for your advice,
Guillaume
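If you go down the CDC route and it ends up being log-based rather than trigger-based, note that the Oracle source generally needs supplemental logging enabled. A minimal sketch of that source-side prerequisite, run as SYSDBA (whether Data Services needs any further source-side setup is worth verifying separately):
-- Enable minimal supplemental logging on the 11g source and confirm it took effect.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SELECT supplemental_log_data_min FROM v$database;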


Similar Messages

  • Best practices for speeding up Mail with large numbers of mail?

    I have over 100,000 mails going back about 7 years in multiple accounts in dozens of folders using up nearly 3GB of disk space.
    Things are starting to drag - particularly when it comes to opening folders.
    I suspect the main problem is the folders holding large numbers of messages - the ones with a few thousand or more are the slowest.
    What are some best practices for dealing with very large amounts of mail?
    Are smart mailboxes faster to deal with? I would think they would be slower, because the original emails would tend not to get filed as often, leading to even larger mailboxes. And searching takes a long time, doesn't it?
    Are there utilities for auto-filing messages in large mailboxes to, say, divide them up by month to make the mailboxes smaller? Would that speed things up?
    Or what about moving older messages out of mail to a database where they are still searchable but not weighing down on Mail itself?
    Suggestions are welcome!
    Thanks!
    doug

    Smart mailboxes obviously cannot be any faster than real mailboxes, and storing large amounts of mail in a single mailbox is asking for trouble. Rather than organizing mail in mailboxes by month, however, what I like to do is organize it by year, with subfolders by topic for each year. You may also want to take a look at the following article:
    http://www.hawkwings.net/2006/08/21/can-mailapp-cope-with-heavy-loads/
    That said, it could be that you need to re-create the index, which you can do as follows:
    1. Quit Mail if it’s running.
    2. In the Finder, go to ~/Library/Mail/. Make a backup copy of this folder, just in case something goes wrong, e.g. by dragging it to the Desktop while holding the Option (Alt) key down. This is where all your mail is stored.
    3. Locate Envelope Index and move it to the Trash. If you see an Envelope Index-journal file there, delete it as well.
    4. Move any “IMAP-”, “Mac-”, or “Exchange-” account folders to the Trash. Note that you can do this with IMAP-type accounts because they store mail on the server and Mail can easily re-create them. DON’T trash any “POP-” account folders, as that would cause all mail stored there to be lost.
    5. Open Mail. It will tell you that your mail needs to be “imported”. Click Continue and Mail will proceed to re-create Envelope Index -- Mail says it’s “importing”, but it just re-creates the index if the mailboxes are already in Mail 2.x format.
    6. As a side effect of having removed the IMAP account folders, those accounts may be in an “offline” state now. Do Mailbox > Go Online to bring them back online.
    Note: For those not familiarized with the ~/ notation, it refers to the user’s home folder, i.e. ~/Library is the Library folder within the user’s home folder.
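    Steps 2 and 3 above can also be done from Terminal if you prefer. This is only a rough sketch assuming the default ~/Library/Mail layout described above - adjust the paths if your mail lives elsewhere:
    # Quit Mail first. Back up the whole Mail folder, then remove the index files.
    cp -Rp ~/Library/Mail ~/Desktop/Mail-backup
    rm ~/Library/Mail/"Envelope Index"
    rm -f ~/Library/Mail/"Envelope Index-journal"   # only present on some systems
    Reopening Mail then triggers the same "importing" step described in step 5.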

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
    I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at-least-once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
    The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
    I'm using WebLogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS server's messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
    For more discussion, see my post today on the topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa-8.1 migration white paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
    Tom

  • What is the best path for a J2EE developer with Oracle?

    Hi,
    I am a J2EE developer; at the moment I work at a commercial bank as an enterprise application developer. I learnt Java while following a local IT diploma and with the help of books, work at my workplace and the internet, and today I am developing J2EE applications with JSP, Servlets, JSF 2.0, EJB 3.0, third-party JSF libraries, etc. (I also develop software in other languages such as ASP.NET, C#, WPF, etc., but I prefer to stay on the Java path.) Other than that, I'm also the UI designer for most of our applications.
    I have gained those skills and that practice over 4 years of working as a web/enterprise application developer and UI designer, but now I have to focus on some paper qualifications, and hence I am doing a BCS.
    Now I want to become a Java professional on Oracle's path, and I need to know the best path I can take with Oracle. I finished my SCJP classes, but didn't sit the exams because there were rumors that Oracle would drop those exams in the future. I am interested in Oracle University, but I can't even consider it because I live in Sri Lanka and don't have the financial means to go to the USA and join.
    So I would really appreciate it if any Oracle professional could suggest the best educational path given the technical and career background I've described, because I have a dream of joining Oracle one day as an employee and becoming a good contributor to this very forum from which I am getting help today!
    Thanks!!!

    As you can see on our website, Oracle did not retire the Java certifications. You can browse through the available certifications, which should hopefully help you determine your path.
    http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=140
    SCJP has now become Oracle Certified Professional Java Programmer. You can find more info on those exams on our website here: http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=320.
    Regarding training, perhaps live virtual training would be an option for you. You can find more information at http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=233.
    Regards,
    Brandye Barrington
    Certification Forum Moderator

  • Best practice for TM on AEBS with multiple macs

    Like many others, I just plugged a WD 1TB drive (mac ready) into the AEBS and started TM.
    But in reading here and elsewhere I'm realizing that there might be a better way.
    I'd like suggestions for best practices on how to setup the external drive.
    The environment is...
    ...G4 Mac mini, 10.4 PPC - this is the system I'm moving from; it has all the iPhoto and iTunes content and is being left untouched until I get the whole TM/backup setup tested. But it will go to 10.5 eventually.
    ...Intel iMac, 10.5 soon to be 10.6
    ...Intel Mac mini, 10.5, soon to be 10.6
    ...AEBS with (mac ready) WD-1TB usb attached drive.
    What I'd like to do...
    ...use the one WD-1TB drive for all three backups, AND keep a copy of system and iLife DVD's to recover from.
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    How do I get an image of the install DVD onto the 1TB drive?
    How do I do that? (install?, ISO image?, straight copy?)
    And what about the 2nd disk (for iLife?) - same partition, a different one, ISO image, straight copy?
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    And if I have to boot the O/S from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs local vs. network.)
    I know its a lot of question but here are the two objectives...
    1. Use TM in typical fashion, to recover the occasional deleted file.
    2. The ability to perform a bare-metal point-in-time recovery (not always to the very last backup, but sometimes to a day or two before.)

    dmcnish wrote:
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    Hi, and welcome to the forums.
    You can, but you really only need a separate partition for the Mac that's backing up directly. It won't have a sparse bundle but a Backups.backupdb folder, and if you ever have to, or want to, delete all of them (new Mac, certain hardware repairs, etc.) you can just erase the partition.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    Right.
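    For what it's worth, once the drive is attached via USB the split can also be done from Terminal. This is only a sketch - the disk identifier, partition names and sizes below are placeholders, so check the real identifier with "diskutil list" first and make sure you are pointing at the WD drive, not an internal disk:
    # Hypothetical example: split a 1 TB drive into three Journaled HFS+ volumes.
    diskutil list
    diskutil partitionDisk /dev/disk2 3 GPT JHFS+ TM-iMac 340G JHFS+ TM-mini 340G JHFS+ Installers 320G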
    I've also read the benefits of keeping a copy of the install DVD's on the external drive... but this raises more questions.
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    I don't think so. I've never tried it, but even if it works, it will be very slow. So connect via F/W or USB (the PPC Mac probably can't boot from USB, but the Intels can).
    And if I have to boot the O/S from USB, once I load it and it wants to restore from the TM, do I leave it USB or move it to the AEBS? (I've heard the way the backups are created differ local vs network)
    That's actually two different questions. To do a full system restore, you don't load OSX at all, but you do need the Leopard Install disc, because it has the installer. See item #14 of the Frequently Asked Questions *User Tip* at the top of this forum.
    If for some reason you do install OSX, then you can either "transfer" (as part of the installation) or "Migrate" (after restarting, via the Migration Assistant app in your Applications/Utilities folder) from your TM backups. See the *Erase, Install, & Migrate* section of the Glenn Carter - Restoring Your Entire System / Time Machine *User Tip* at the top of this forum.
    In either case, if the backups were done wirelessly, you must transfer/migrate wirelessly (although you can speed it up by connecting via Ethernet).

  • Best Practices for Creating eLearning Content With Adobe

    As agencies are faced with limited resources and travel restrictions, initiatives for eLearning are becoming more popular. Come join us as we discuss best practices and tips for groups new to eLearning content creation, and the best ways to avoid complications as you grow your eLearning library.
    In this webinar, we will take on common challenges that we have seen in eLearning deployments, and provide simple methods to avoid and overcome them. With a little training and some practice, even beginners can create engaging and effective eLearning content using Adobe Captivate and Adobe Presenter. You can even deploy content to your learners with a few clicks using the Adobe Connect Training Platform!
    Sign up today to learn how to:
    -Deliver self-paced training content that won't conflict with operational demands and unpredictable schedules
    -Create engaging and effective training material optimized for knowledge retention
    -Build curriculum featuring rich content such as quizzes, videos, and interactivity
    -Track program certifications required by Federal and State mandates
    Come join us Wednesday May 23rd at 2P ET (11A PT): http://events.carahsoft.com/event-detail/1506/realeyes/
    Jorma_at_RealEyes
    RealEyes Connect

    You can make it happen by creating a private connection for 40 users via a capi script, and when creating the portlet select the 2nd option in the "Users Logged In" section. That way the portlet uses their own private connection every time a user logs in.
    So it won't ask for a password.
    Another thing: there is an option in ASC, in the Discoverer section, to require a password or not, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks
    kiran

  • HT1553 What is the best system for a real time cloud back up of documents?  My MacBook crashed, and I lost 2 hours of writing and could not find a way to restore it.

    My MacBook Pro crashed while I was rewriting a book; I lost more than an hour of work and could not find a way to restore it.  I did not have Time Machine set up, but it appears that Time Machine does not do real-time backup and documents must be saved manually.
    I need an automatic, real-time backup to keep this from happening - I'm not happy that my MacBook has crashed twice now.   What is the best cloud system for real-time backup?   Thanks to anyone who can help me; I'm not the most astute computer guy... James

    One way would be to use Dropbox, or a similar sync service, and just keep your critical documents in the appropriate folder. Dropbox, at least, keeps a local copy of everything and syncs automatically to the cloud whenever a change is made. Dropbox is free for up to 2GB of data.
    There are also true backup services such as CrashPlan+:
    http://www.crashplan.com/consumer/crashplan-plus.html
    which provide automatic backups whenever a change is detected. It's not free, but usually such services aren't too expensive unless you need to back up a lot of data.
    Regards.

  • Best Practice for VPC Domain failover with One M2 per N7K switch and 2 sups

    I have been testing some failover scenarios with 4 Nexus 7000 switches with an M2 and an F2 card in each. Each Nexus has two supervisor modules.
    I have 3 VDC's Admin, F2 and M2
    all ports in the M2 are in the M2 VDC and all ports on the F2 are in the F2 VDC.
    All vPC's are connected on the M2 cards, configured in the M2 VDC
    We have 2 Nexus representing each "site"
    In one site we have a vPC domain "100"
    The vPC Peer link is connected on ports E1/3 and E1/4 in Port channel 100
    The peer-keepalive is configured to use the management ports. This is patched from both Sups into our 3750s. (This will eventually be on an out-of-band management switch.)
    Please see the diagram.
    There are 2 vPC's 1&2 connected at each site which represent the virtual port channels that connect back to a pair of 3750X's (the layer 2 switch icons in the diagram.)
    There is also the third vPC that connects the 4 Nexus's together. (po172)
    We are stretching vlan 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and minimise outages based on link failures, module failures, switch failures, sup failures etc..
    ONLY the management vlans (100, 101) are allowed on the port-channel between the 3750's, so vlan 900 spanning tree shouldn't have to make this decision.
    We are only concerned about layer two for this part of the testing.
    As we are connecting the vPC peer link to only one module in each switch (a single M2), we have configured object tracking as follows:
    n7k-1(config)#track 1 interface ethernet 1/1 line-protocol
    n7k-1(config)#track 2 interface ethernet 1/2 line-protocol
    n7k-1(config)#track 5 interface ethernet 1/5 line-protocol
    n7k-1(config)# track 101 list boolean OR
    n7k-1(config-track)# object 1
    n7k-1(config-track)# object 2
    n7k-1(config-track)# object 5
    n7k-1(config-track)# end
    n7k-1(config)# vpc domain 101
    n7k-1(config-vpc-domain)# track 101
    The other site is the same, just 100 instead of 101.
    We are not tracking port-channel 101, nor the member interfaces of that port channel, as this is the peer link, and apparently tracking upstream interfaces and the peer link is only necessary when you have ONE link and one module per switch.
    As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when checking whether these 3 interfaces are up? Or is line-protocol purely layer 1, so that the vPC isn't bringing these member ports down at layer 2 when it sees a local vPC domain failure and thereby causing the track to fail?
    I see most people monitoring upstream layer-3 ports that connect back to a core. What about what we are doing - monitoring upstream (the 3750's) and downstream layer-2 (the other site) interfaces that are part of the very vPC we are trying to protect?
    We wanted the track to fail only when all 3 of these are down, so that, for example, if the local M2 card failed, the keepalive would send the message to the remote peer to take over.
    What are the best practices here? Which objects should we be tracking? Should we also track the peer-link Port-channel 101?
    We saw minimal outages using this design: when reloading the M2 modules, usually 1-3 pings were lost between the laptops at the different sites across the stretched vlan. Obviously there were no outages when breaking any link in a vPC.
    Any wisdom would be greatly appreciated.
    Nick
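    (For reference, if the peer-link port-channel were also tracked, I assume it would follow the same syntax as the config above - object number 10 is arbitrary here, and whether doing this is advisable at all is part of the question:)
    n7k-1(config)# track 10 interface port-channel101 line-protocol
    n7k-1(config)# track 101 list boolean OR
    n7k-1(config-track)# object 10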

    Nick,
    I was not talking about the mgmt0 interface. The vlan that you are testing will have a link blocked between the two 3750s' port-channel if the root is on the Nexus vPC pair.
    Logically your topology is like this:
       |                          |
       |        Nexus Pair        |
    3750-1----------------------3750-2
    Since you have this triangle setup, one of the links will be in blocking state for any vlan configured on these devices.
    When you are talking about vPC and L3, are you talking about L3 routing protocols or just inter-VLAN routing?
    Inter-VLAN routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with an upstream router over L2 links is not recommended. The following link should give you an idea of what I am talking about here:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the purpose of the tracking feature is to avoid black-holing traffic. It completely depends on your network setup. I don't think you would need to track all the interfaces.
    JayaKrishna

  • Best Practice for Droid Gmail Contacts with Exchange ActiveSync?

    Hi, folks.  After going through an Address Book nightmare this past summer, I am attempting to once again get my Contacts straight and clean.  I have just started a new job and want to bring my now-clean Gmail contacts over to Exchange.  The challenge is that this creates duplicate contacts, and then there's defining a go-forward strategy for creating NEW contacts so that they reside in both Gmail and Exchange without duplication.  Right now, my Droid is the master and everything is fine.  However, once I port those contacts from Gmail onto my laptop, all hell breaks loose... Does Verizon finally have a Best Practice documented for this?  This past summer I spoke with no fewer than 5 different Customer Support reps and got 3 different answers... This is not an uncommon problem...

    In parallel to this post, I called Verizon for Technical Support assistance.  It seems no progress has been made.  My issues this past summer were likely a result of extremely poor-quality products from Microsoft, which included Microsoft CRM, Microsoft Lync (a new phone system they are touting, which is horrible), and Exchange.  As a go-forward strategy, I have exported all Gmail contacts to CSV for Outlook and have imported them to Exchange.  All looks good.  I am turning off phone visibility of Gmail contacts and will create all new contacts in Exchange.

  • Best practice for replacing a package with equivalent, lots of deps

    I was having CPU issues, which I posted about a while back in another thread, and that brought up the fact that it might have been related to the nvidia version at that time (325.15). As a result, I switched to nvidia-beta and nvidia-utils-beta from the AUR.
    Nvidia from extra is now up to 331.20, and I was thinking of switching back so that I wouldn't always be surprised after a kernel update that no screens were found (AUR packages don't tend to flag updates just because linux updated). Not a big deal, as I just have to re-build the AUR package and I'm set. Anyway, I was going to switch back to the standard nvidia packages, but am not sure what to do about the dependencies on libgl, provided by nvidia-libgl-beta (a split-package provided by nvidia-utils-beta):
    $ sudo pacman -S nvidia
    resolving dependencies...
    looking for inter-conflicts...
    :: nvidia and nvidia-beta are in conflict. Remove nvidia-beta? [y/N] y
    :: nvidia-utils and nvidia-utils-beta are in conflict. Remove nvidia-utils-beta? [y/N] y
    error: failed to prepare transaction (could not satisfy dependencies)
    :: nvidia-libgl-beta: requires nvidia-utils-beta
    $ sudo pacman -R nvidia-libgl-beta
    checking dependencies...
    error: failed to prepare transaction (could not satisfy dependencies)
    :: cairo: requires libgl
    :: freeglut: requires libgl
    :: glu: requires libgl
    :: libva: requires libgl
    :: qt4: requires libgl
    :: webkitgtk2: requires libgl
    :: xorg-xdriinfo: requires libgl
    $ sudo pacman -Rc nvidia-libgl-beta
    checking dependencies...
    :: avahi optionally requires gtk3: avahi-discover-standalone, bshell, bssh, bvnc
    :: avahi optionally requires gtk2: gtk2 bindings
    :: avahi optionally requires qt4: qt4 bindings
    :: avahi optionally requires pygtk: avahi-bookmarks, avahi-discover
    :: boost-libs optionally requires openmpi: for mpi support
    :: chromium-libpdf optionally requires chromium: default browser to use plugin in (one of the optional dependencies needs to be installed to use the library)
    :: dconf optionally requires gtk3: for dconf-editor
    :: ghostscript optionally requires gtk2: needed for gsx
    :: gvfs optionally requires gtk3: Recent files support
    :: harfbuzz optionally requires cairo: hb-view program
    :: imagemagick optionally requires librsvg: for SVG support
    :: jasper optionally requires freeglut: for jiv support
    :: jasper optionally requires glu: for jiv support
    :: jre7-openjdk optionally requires gtk2: for the Gtk+ look and feel - desktop usage
    :: libtiff optionally requires freeglut: for using tiffgt
    :: libwebp optionally requires freeglut: vwebp viewer
    :: mjpegtools optionally requires gtk2: glav GUI
    :: nvidia-utils-beta optionally requires gtk2: nvidia-settings
    :: pinentry optionally requires gtk2: for gtk2 backend
    :: pinentry optionally requires qt4: for qt4 backend
    :: smpeg optionally requires glu: to use glmovie
    :: v4l-utils optionally requires qt4
    :: wicd optionally requires wicd-gtk: needed if you want the GTK interface
    :: xdg-utils optionally requires exo: for Xfce support in xdg-open
    Packages (102): anycoloryoulike-icon-theme-0.9.4-2 arpack-3.1.2-2 bleachbit-1.0-1 cairo-1.12.16-1 chromium-31.0.1650.63-1 chromium-pepper-flash-stable-2:11.9.900.170-1
    cups-1.7.0-2 cups-filters-1.0.43-1 cups-pdf-2.6.1-2 darktable-1.4-2 dia-0.97.2-5 dropbox-2.6.2-1 emacs-24.3-4 enblend-enfuse-4.1.1-5 evince-gtk-3.10.3-1
    exo-0.10.2-2 farstream-0.1-0.1.2-3 ffmpeg-1:2.1.1-3 finch-2.10.7-4 firefox-26.0-2 flashplugin-11.2.202.332-1 foomatic-db-engine-2:4.0.9_20131201-1
    freeglut-2.8.1-1 geeqie-1.1-2 gegl-0.2.0-10 gimp-2.8.10-1 girara-gtk3-0.1.9-1 glew-1.10.0-2 glu-9.0.0-2 gmtp-1.3.4-1 gnome-icon-theme-3.10.0-1
    gnome-icon-theme-symbolic-3.10.1-1 gnome-themes-standard-3.10.0-1 gstreamer0.10-bad-plugins-0.10.23-7 gtk-engine-murrine-0.98.2-1 gtk-engines-2.21.0-1
    gtk2-2.24.22-1 gtk3-3.10.6-1 gtkspell-2.0.16-3 guvcview-1.7.2-1 hplip-3.13.11-2 hugin-2013.0.0-5 hwloc-1.8-1 impressive-0.10.3-8 jumanji-20110811-1
    libglade-2.6.4-5 libgxps-0.2.2-3 libpurple-2.10.7-4 libreoffice-base-4.1.4-1 libreoffice-calc-4.1.4-1 libreoffice-common-4.1.4-1 libreoffice-draw-4.1.4-1
    libreoffice-gnome-4.1.4-1 libreoffice-impress-4.1.4-1 libreoffice-writer-4.1.4-1 librsvg-1:2.40.1-3 libtiger-0.3.4-3 libunique-1.1.6-5 libva-1.2.1-1
    libva-vdpau-driver-0.7.4-1 libxfce4ui-4.10.0-1 libxfcegui4-4.10.0-1 lxappearance-0.5.5-1 meshlab-1.3.2-4 mpd-0.18.6-1 obconf-2.0.4-1 octave-3.6.4-6
    openbox-3.5.2-6 openmpi-1.6.5-1 pango-1.36.1-1 pangox-compat-0.0.2-1 pdf2svg-0.2.1-7 pidgin-2.10.7-4 poppler-0.24.5-1 poppler-glib-0.24.5-1
    pygtk-2.24.0-3 python2-cairo-1.10.0-1 python2-gconf-2.28.1-8 python2-opengl-3.0.2-5 qt4-4.8.5-7 qtwebkit-2.3.3-1 r-3.0.2-1 rstudio-desktop-bin-0.98.490-1
    screenkey-0.2-5 scribus-1.4.3-2 thunar-1.6.3-1 tint2-svn-652-3 truecrypt-1:7.1a-2 vlc-2.1.2-1 webkitgtk2-1.10.2-8 wicd-gtk-1.7.2.4-9 wxgtk-3.0.0-2
    wxgtk2.8-2.8.12.1-1 xfburn-0.4.3-6 xorg-utils-7.6-8 xorg-xdriinfo-1.0.4-3 xscreensaver-arch-logo-5.26-3 zathura-0.2.6-1 zathura-pdf-mupdf-0.2.5-3
    zukitwo-theme-openbox-20111021-3 zukitwo-themes-20131210-1 nvidia-libgl-beta-331.38-1
    Total Removed Size: 1756.12 MiB
    :: Do you want to remove these packages? [Y/n]
    As you might imagine, I'd prefer not to remove all of those packages just to switch my libgl providing package and then re-install.
    In digging around, I found this entry on downgrading packages without respecting dependencies.
    Is that the best method for doing what I describe above as well? Would I do something like `pacman -Rd nvidia-utils-beta` (without X running) and then install the packages from extra?

    It should be similar to switching to the nouveau driver: https://wiki.archlinux.org/index.php/Nouveau
    Just:
    # pacman -Rdds nvidia-beta nvidia-utils-beta
    # pacman -S nvidia nvidia-utils
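    For anyone unsure about those flags, a rough gloss (worth confirming against man pacman for your version): -R removes the named packages, the doubled -dd skips the dependency checks that were blocking removal of the libgl provider, and -s also removes dependencies that nothing else needs. Running the -S line straight afterwards puts the repo nvidia packages (which again provide libgl) back in place, so the whole thing is best done from a console with X stopped, as the question suggests.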

  • What is best practice for using a SAN with ASM in an 11gR2 RAC installation

    I'm setting up a RAC environment. Planning on using Oracle 11g release 2 for RAC & ASM, although the db will initially be 10g r2. OS: RedHat. I have a SAN available to me and want to know the best way to utilise that via ASM.
    I've chosen ASM as it allows me to store everything, including the voting and cluster registry files.
    So I think I'll need three disk groups: Data (+spfile, control#1, redo#1, cluster files#1), Flashback (+control#2, redo#2, archived redo, backups, cluster files#2) and Cluster (cluster files#3). So that last one is tiny.
    The SAN and ASM are both capable of doing lots of the same work, and it's a waste to get them both to stripe & mirror.
    If I let the SAN do the redundancy work, then I can minimize the data transfer to the SAN. The administrative load of managing the discs is up to the Sys Admin, rather than the DBA, so that's attractive as well.
    If I let ASM do the work, it can be intelligent about the data redundancy it uses.
    It looks like I should have LUNs (Logical Unit Numbers) with RAID 0+1, and then mark the disk groups as external redundancy.
    Does this seem the best option ?
    Can I avoid this third disk group just for the voting and cluster registry files ?
    Am I OK to have the lower-version Oracle 10gR2 database on 11gR2 RAC and 11gR2 ASM?
    TIA, Duncan

    Hi Duncan,
    if your storage uses SAN RAID 0+1 and you use "External" redundancy, then ASM will not mirror (only stripe).
    Hence theoretically 1 LUN per diskgroup would be enough. (External redundancy will also only create 1 voting disk, hence only one LUN is needed).
    However there are 2 things to note:
    -> Tests have shown that for the OS it is better to have multiple LUNs, since the I/O can be better handled. Therefore it is recommended to have 4 disks in a diskgroup.
    -> LUNs in a diskgroup should be the same size and should have the same I/O characteristics. If you bear in mind that your database may one day need more space (more disks), then you should use a disk size that can easily be added without wasting too much space.
    E.g:
    If you have a 900GB database, does it make sense to use only 1 LUN of 1TB?
    What happens if the database grows, but only grows slightly above 1TB? Then you would have to add another 1TB disk... and you lose a lot of space.
    Hence it makes more sense to use 4 disks of 250GB each, since the disks needed to grow the diskgroup can be added more easily (just add another 250GB disk).
    OK, there is also the possibility to resize a disk in ASM, but it is a lot easier to simply add an additional LUN.
    PS: If you use a "NORMAL" redundancy diskgroup, then you need at least 3 disks in your diskgroup (in 3 failgroups) to be able to handle the 3 voting disks.
    Hope that helps a little.
    Sebastian
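    For illustration, creating one such diskgroup over four equal LUNs with external redundancy might look roughly like this from SQL*Plus connected as SYSASM - a minimal sketch only, and the device paths are placeholders for whatever your SAN / udev / ASMLib setup actually presents:
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/oracleasm/disks/DATA01',
           '/dev/oracleasm/disks/DATA02',
           '/dev/oracleasm/disks/DATA03',
           '/dev/oracleasm/disks/DATA04';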

  • Best practice for loading from mysql into oracle?

    Hi!
    We're planning to migrate our software from MySQL to Oracle. Therefore we need a migration path for moving the customers' data from MySQL to Oracle. The installation and the data migration/transfer have to run in various customers' environments, so approaches like installing the Oracle gateway and connecting to MySQL via ODBC, for example, are not an option because they make the installation process more complicated... Also, the installation with the preconfigured Oracle database has to fit on a 4.6 GB DVD...
    I would prefer the following:
    - spool mysql table data into flat files
    - create oracle external tables on the flat files
    - load data with insert into from external tables
    Are there other "easy" ways of doing such migrations, or what do you think about the preferred way above?
    Thanks
    Markus
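    For concreteness, here is a rough sketch of that spool-to-flat-file / external-table path. All table names, columns, directory paths, delimiters and the date mask below are invented for illustration and would need to match the real export:
    -- 1) On the MySQL side, spool each table to a flat file:
    SELECT id, name, created_at
      INTO OUTFILE '/data/stage/customers.dat'
      FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n'
      FROM customers;
    -- 2) On the Oracle side, map the file with an external table and load it:
    CREATE DIRECTORY stage_dir AS '/data/stage';
    CREATE TABLE customers_ext (
      id         NUMBER,
      name       VARCHAR2(200),
      created_at DATE
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY stage_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
        (id, name, created_at CHAR(19) DATE_FORMAT DATE MASK "YYYY-MM-DD HH24:MI:SS")
      )
      LOCATION ('customers.dat')
    );
    INSERT /*+ APPEND */ INTO customers SELECT * FROM customers_ext;
    COMMIT;
    Note that INTO OUTFILE writes on the MySQL server's filesystem, so the staging directory has to be visible to (or copied to) the Oracle host before the external table is queried.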

    Hi!
    Didn't anyone have this requirement for migrations? I have tested with the mysql select into file clause. Seems to work for simple data types - we're now testing with blobs...
    Markus

  • Best practice for redesigning a website with CSS

    I have a site I created in the early 2000s in Dreamweaver, back in the Macromedia days: www.kid-ebooks.com. It is basic HTML using a lot of tables, some Flash, etc. I have since created many other sites using CSS and Dreamweaver (currently on CS5.5), and I now want to do a complete redesign of Kid-ebooks using CSS. However, I also want to keep a lot of existing content, links, etc. I don't know what the best approach is: try to apply CSS to the existing site, design in parallel and then switch when ready, copy and paste content? What is the recommendation for starting over, but keeping some content?
    Thanks,
    Dave

    I've used this formula with success.  Make a back-up site beforehand.
    Use Find & Replace > Source code > Tags to strip out all of the <table> <td> <tr> <font> <background> <bold> and any other deprecated code from the site. Ultimately, you will be left with unstyled content, hyperlinks, & images. Validate the code and fix any errors.
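    Outside Dreamweaver, the same kind of tag-stripping pass can be sketched with sed on copies of the pages - purely illustrative, and worth testing on one file first since a regex pass over HTML is blunt:
    # Remove opening/closing <font> and <center> tags from every .html file (backups kept as .bak).
    sed -i.bak -E 's/<\/?(font|center)[^>]*>//g' *.html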
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists 
    http://alt-web.com/
    http://twitter.com/altweb

  • What is best practice for calling XI Services with Web Dynpro-Java?

    We are trying to expose XI services to Web Dynpro via "Web Services".  Our XI developers have successfully generated the WSDL file(s) for their XI services and handed them off to the Web Dynpro developers.
    The Java developers put the WSDL file on their local PCs and import it as "Adaptive Web Services" data models.  When the application is constructed and deployed to our development box, the application abends because the J2EE server on that box cannot locate the WSDL file at runtime (it was on the developer's box at, say, "C:\temp\" and that directory does not exist on the dev server).
    Since XI does not have a way of directly associating the generated WSDL file with the XI service, where is the best place to put the WSDL so it is readable at design time and at run time?  Also, how do we reconcile the fact that we'll have 3 WSDL files for each service (one for Dev, one for QA and one for Prod), and how is the model in Web Dynpro configured so it gets the right one?
    Does anyone have a good guide on how to do this?  I have found plenty of "how to consume a Web Service in Web Dynpro" docs on SDN, but these XI services are not really traditional Web Services, so the instructions break down when it comes time to deploy.

    Hi Bob,
    Sometimes when you create a model using a local WSDL file, instead of referring to the URL mentioned in the WSDL file, the model refers to, say, the "C:\temp" folder from which you picked up that file (you can check the target address of the logical port). Because of this, when you deploy the application on the server it tries to find the WSDL in the "C:\temp" path instead of the path specified in the soap:address location of the WSDL file.
    The best way is to re-import your Adaptive Web Services model using the URL specified as the soap:address location in the WSDL file.
    like http://<IP>:<PORT>/XISOAPAdapter/MessageServlet?channel<xirequest>
    Or you can ask your XI developer to give you the URL for the web service and the username/password for the server.

  • What is best practice for multi camera edit with differing lengths?

    I have three cameras that took video of an engagement party. Camera A and B took it all (with some early extra material exclusive to each camera). Camera C took some, then stopped, then took more.
    So I have four sets of clips - A and B which are complete, then C then D.
    Should I create sequence 1 with A, B and C synchronised, then create sequence 2 with A, B and D synchronised, then sequence 3 with the sundry early non-multicamera clips, plus sequence 1, plus sequence 2, then the late non-multicamera clips?
    Or can I synchronise A, B and C, then on the same timeline synchronise A, B and D? I'm concerned that the second synchronisation will put C out of sync.
    What is the best way to approach this?
    Thanks in advance.

    A and B which are complete, then C then D.
    I think you're looking at this in the wrong way.  You have only three cameras, A, B and C, but you don't really have a D camera, as those are just other clips from camera C.  You might call them C1 and C2 if you like, but calling them D seems to be confusing the issue, as it's still only three cameras, and three shots to choose from when cutting the sequence.  (Except for the gap between C clips, where you will have only the A and B shots to choose from.)
    You can absolutely sync all the clips from camera C on the same sequence as A and B.  And it will probably be easier to do so.
