Best way of making redundant AFP/SMB server

We have bought two Xserve G5 systems in the hope of setting up a load-balanced pair that would provide redundant access to AFP/SMB file shares (plus web and email). We have an Xserve RAID box which we were going to attach to both servers.
What have people done to get the redundancy we want? These are the options I have looked at:
1) Put Xsan on the RAID, connect both servers to it, and then reshare the connection. Reading the Xsan documentation, I see that Xsan is not good for resharing via AFP/SMB. Also, it seems that I would have to set everything up on each machine separately, as the settings are stored locally.
2) Split the jobs between the two servers, say email/web on one and AFP/SMB on the other. Then set up IP failover from each server to the other, so that if one server fails the other takes over its jobs. I presume I cannot simply use a single volume for this, as mounting the same volume on each of the servers will cause problems. Is that correct?
3) Put all the services on one machine and set up IP failover to fall back to the second machine as required. This removes the issue of two machines accessing the RAID array at the same time.
Any advice would be greatly appreciated.
Thanks
Ian
iMac CoreDuo   Mac OS X (10.4.6)  

This may help:
About IP Failover
First off, the IP Failover feature won't magically configure the backup server to do what the primary one was doing. In other words, with one server configured for file services and the other for web and email services, if the file server failed, the other one wouldn't just start up file services by itself!
IP Failover is best used where one server is configured exactly like another. One server is used, and the backup server is not used until the first one fails. The backup server has a separate IP address and network connection that it uses to periodically poll the main server. If the main server leaves the network, the backup server enables a second Ethernet interface which has been configured with the main server's IP address. (You may be able to multi-home one Ethernet connection, but I haven't tried that. Plus, each of your Xserves has two Ethernet ports, so that shouldn't be a problem.)
Yes, but you are not getting good value for money out of that solution.
Looking at IP Failover, it seems to work by having the secondary server monitor the primary. Once the primary fails, the secondary activates that address on itself and then enables the required services there. This doesn't seem to preclude each server being the master for the other's secondary, as long as the services on each master are different.
Let's say mail runs on server 1 and AFP runs on server 2. Server 2 acts as the secondary for server 1 and has a script that activates mail on server 2 when it detects a failure. Similarly, if server 2 fails, server 1 will migrate the IP address to itself and then start up the AFP service. Obviously, each service will have to be configured identically on both servers.
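The cross-failover logic described here (each server watching the other and, on takeover, starting the peer's services) can be sketched as follows. The service names, peer address, and the `serveradmin` call mentioned in the comments are illustrative assumptions, not Apple's actual IP Failover implementation:

```python
import subprocess

# Illustrative assumptions: service names follow OS X Server's
# `serveradmin` convention ("mail", "afp"); the peer address is made up.
MY_SERVICES = ["mail"]          # this node's normal duties (server 1)
PEER_SERVICES = ["afp"]         # what the peer (server 2) normally runs
PEER_ADDRESS = "192.168.1.2"    # hypothetical peer IP

def peer_is_down(failed_pings, threshold=3):
    """Declare the peer dead only after `threshold` consecutive failed
    polls, so a single transient glitch doesn't trigger failover."""
    return failed_pings >= threshold

def services_to_start(peer_down):
    """On takeover we must start the peer's services in addition to our
    own (which are already running); otherwise, start nothing extra."""
    return PEER_SERVICES if peer_down else []

def poll_peer():
    """One heartbeat: a single ping to the peer's address."""
    result = subprocess.run(
        ["ping", "-c", "1", PEER_ADDRESS], capture_output=True
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Simulated poll history instead of a live network: three
    # consecutive failures trip the failover.
    failed = 0
    for ping_ok in [True, True, False, False, False]:
        failed = 0 if ping_ok else failed + 1
    if peer_is_down(failed):
        for svc in services_to_start(True):
            # On a real OS X Server box this step might be:
            #   subprocess.run(["serveradmin", "start", svc])
            print("would start service:", svc)
```

The same script, with `MY_SERVICES` and `PEER_SERVICES` swapped, would run on the other node, which is what makes the mutual master/secondary arrangement work.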
Using the Xserve RAID
If you plan on storing all of your data on the Xserve RAID, you need to know how "full" the RAID unit is going to be and how many RAID volumes you wish to create, so that you can decide on how to connect it to each server.
As we're migrating from an existing server to these, we have a very good idea of what the sizes and volumes should be. We have an Xserve RAID with four disks, all on one side, set up as a RAID 5 array that has sufficient space for all our services. We can partition that down into multiple volumes, one for each server/service, if needed.
What I don't have my head around yet is how you control which server sees which volumes. Each server has to be able to access each volume (so it can run that service in the event of a failure), but I think it's going to be an issue to have two servers access the same volume at once. Is that correct? Will two servers accessing the same volume at the same time cause trouble?
Think of the Xserve RAID as two devices: each one has a fibre-channel port and holds seven hard drives. Each "group of seven" can be connected to any other device on the fibre-channel network. Without a fibre-channel switch, you can connect one or two other devices (one or two other Xserves) to the Xserve RAID.
So you have these scenarios:
- Using 1-7 drives and creating one large RAID volume: you'll use one of the two RAID controllers on your Xserve RAID. You can connect the RAID to only one Xserve, or to a fibre-channel switch in order to connect it to more devices.
- Using 2-7 drives and creating two RAID volumes: you'll use both RAID controllers (say, 3 drives on one side and 3 on the other). Each RAID controller connects to another fibre-channel device; thus, without a switch, you can connect one to each server or both to the same one.
- Using 8-14 drives and creating one large RAID volume: you'll be filling up one RAID controller and all or part of the second, connecting both fibre-channel connectors to the same Xserve unless you use a switch.
- Using 8-14 drives and creating two RAID volumes: you have the same options as with 2-7 drives.
See http://www.apple.com/xserve/raid/deployment.html for some pictures, which may help.
We have the fibre switch, so each server can access all drives. I just don't know how to limit what sees what, or whether I need to do that.
Xsan
If you're just connecting the Xserve RAID to one or two servers and planning to reshare the volume(s) created on the Xserve RAID, then Xsan is not necessary, nor should it be used.
But is it possible for two servers to use the same disk? I know it is with Xsan, but what about without it?
I think what you want is the ability to use the Xserve RAID as a locally-connected volume on each server, where share points can be defined. The server is connected to the Xserve RAID using a fibre-channel cable, just as a FireWire, USB, or eSATA hard drive would be connected to the server. As far as the server is concerned, the Xserve RAID controller or controllers represent external hard disks. The difference is that the Xserve RAID also has Ethernet connectivity so that you can manage it without having to log into your server.
Once connected, addressing using fibre-channel is automatic; the Xserve RAID gets a WWPN address and the RAID volume appears on the desktop of the server - no Xsan required at all. From there, it's perfectly safe to create share points and share them via AFP or SMB; clients would connect using AFP or SMB protocols with IP addressing over an Ethernet network to which both the Xserve and the client computers are connected. Even though the Xserve RAID may also have an Ethernet connection to that network, no read and write commands are sent via Ethernet to the RAID; the server sends those via its fibre-channel connection to the Xserve RAID, as it would with any other drive housing share points.
As I say, it's fine for one machine at a time, but what about two?
For load balancing concerns, see Apple's File Services guide for recommendations. On an Xserve G5 with 2GB or more of memory, you should easily accommodate 50 simultaneously connected users (a mixture of AFP and SMB) without a performance hit. Depending on what your users are doing (and the speed of the Ethernet network to which your clients and Xserve are connected), you may be able to handle more or fewer users. More RAM in the Xserve and a Gigabit Ethernet network for the server and clients would let you balance more simultaneous clients with less of a reduction in performance.
Xsan works differently, and its requirements are somewhat different. With Xsan, your Xserve RAID is connected to a fibre-channel network. (In the previous direct-to-server example, the fibre-channel network consisted of just the Xserve and the Xserve RAID.) In this network, all clients, at least one Xserve, and the Xserve RAID are all connected via a fibre-channel network. The Xserve has Xsan software installed on it and becomes a dedicated metadata controller. Clients must have special Xsan client software installed for the SAN volumes to appear. In such cases, the protocol used to mount the SAN volumes is the Xsan client protocol, not AFP or SMB. Although you technically can reshare an Xsan volume via AFP or SMB, performance would take a hit. Since the Xsan volume is writeable by all other clients on the fibre-channel network, adding an Xserve to reshare the SAN volume via AFP or SMB would allow clients to connect via Ethernet (Wi-Fi, etc.), but the server wouldn't have exclusive access to the SAN volume.
To be honest, two Xserves and an Xserve RAID simply aren't enough to warrant a SAN in my book. Typically, SANs are used where a large number of servers doing computation tasks all need access to the same "local volume." They're also used in cases where clients need access to large amounts of storage set up to work like a "local disk" on a few video production computers.
Just for comparison, Gigabit Ethernet offers throughput bursts of up to 125MB/s (megabytes per second), and Fibre Channel offers bursts of up to 200MB/s. (Apple's 400MB/s claim only somewhat makes sense if you're using both Xserve RAID controllers connected to the same server.) Even so, both media sustain around 50MB/s to 75MB/s, which is quite good. In fact, that matches local disk performance. The local Serial ATA hard disks used in Power Mac G5 models are Serial ATA 1.5Gb/s (gigaBITS per second), or 187.5MB/s at maximum burst; I typically see performance in the 40MB/s to 50MB/s range.
Just for fun, here are throughput speeds of several common connectors, in megabytes per second:
- Serial ATA "3.0": bursts up to 375MB/s, sustains about 75MB/s
- LVD SCSI 320: bursts up to 320MB/s, sustains about 100MB/s
- Fibre Channel: bursts up to 200MB/s, sustains about 60MB/s
- Serial ATA "1.5": bursts up to 187.5MB/s, sustains about 50MB/s
- SCSI-3 LVD SCSI 160: bursts up to 160MB/s, sustains about 60MB/s
- Ultra ATA/133 (PATA): bursts up to 133MB/s, sustains about 50MB/s
- Gigabit Ethernet: bursts up to 125MB/s, sustains about 50MB/s
- FireWire 800: bursts up to 100MB/s, sustains about 50MB/s
- ATA/66 (PATA): bursts up to 66MB/s, sustains about 40MB/s
- USB 2.0: bursts up to 60MB/s, sustains about 30MB/s
- FireWire 400: bursts up to 50MB/s, sustains about 20MB/s
- 10/100 "Fast" Ethernet: bursts up to 12.5MB/s
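Many of these burst figures are simply the interface's raw bit rate divided by eight (eight bits per byte), which is an easy sanity check to run yourself:

```python
def burst_mb_per_s(bit_rate_mb_per_s):
    """Convert a link's raw bit rate (in megabits per second) to its
    theoretical burst throughput in megabytes per second."""
    return bit_rate_mb_per_s / 8

# Gigabit Ethernet: 1000 Mb/s -> 125 MB/s
print(burst_mb_per_s(1000))   # 125.0
# Serial ATA 1.5 Gb/s: 1500 Mb/s -> 187.5 MB/s
print(burst_mb_per_s(1500))   # 187.5
# FireWire 800: 800 Mb/s -> 100 MB/s
print(burst_mb_per_s(800))    # 100.0
```

Fibre Channel is the exception in the list: its 2Gb/s signalling rate includes encoding overhead, which is why it delivers 200MB/s rather than 250MB/s.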
Judging by your request, I think the "no Xsan" scenario is the one you want.
--Gerrit
I would love to take Xsan out of the picture, and if that means I can only use one of the servers at a time, fine, I can live with that. But I would rather have both active and working for me, even if I have to split the services between the systems.

Similar Messages

  • Which is Best way to pool DB connections on Server no-J2EE???

    I have an application which runs on several different app servers (WebSphere, JRun and iPlanet)... the DB connection problems exist with servers that are not J2EE-compliant, such as iPlanet.
    On J2EE servers I can use a server DataSource for pooled connections, and the server handles this transparently and well.
    But if I have to manage the connection pool manually, which is the best way?
    I used the PoolMan package by CodeStudio before, but it was not effective for me...
    Now I am studying Jakarta Commons DBCP...
    Which is the best? Is there another solution to this problem?
    thx !

    The app I'm working on makes exhaustive use of DB connections; on startup alone it averages 130 connection requests in less than 10 seconds.
    All connections are closed when we finish using them; supposedly the closed connections are returned to the pool, but this doesn't happen quickly.
    How do you know it doesn't happen quickly? Normally a returned connection would just be inserted into a list, with a possible state change. That would be very quick in Java. It might check the state of the connection, and that is going to take longer, but another pool manager might do the same.
    Then, if the pool is able to create emergency connections (up to the configured maximum) it always returns a new connection; otherwise, if the pool can't create emergency connection objects, the app waits for a free connection from the pool... a long delay!
    Of course, but any pool is going to do that.
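    To illustrate why returning a connection to a pool should be near-instant (it is essentially a list insert) while checkout can block when the pool is exhausted, here is a minimal generic pool sketch. It is not DBCP's API, just the underlying idea; `factory` stands in for whatever creates a real DB connection:

```python
import queue

class SimplePool:
    """Minimal connection-pool sketch: checkout blocks when the pool
    is exhausted; checkin is just a queue insert, so it is fast."""

    def __init__(self, factory, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())

    def checkout(self, timeout=None):
        # Blocks (up to `timeout` seconds) until a connection is free,
        # mirroring the "app waits for a free connection" behaviour.
        return self._idle.get(timeout=timeout)

    def checkin(self, conn):
        # Returning a connection is a constant-time insert.
        self._idle.put(conn)

if __name__ == "__main__":
    pool = SimplePool(factory=object, size=2)
    c1 = pool.checkout()
    c2 = pool.checkout()
    pool.checkin(c1)          # immediately reusable
    c3 = pool.checkout()      # succeeds without waiting
```

    A real pool would additionally validate connections on checkin (the state check mentioned above), which is exactly where extra latency can creep in.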

  • What is the best way to detect loss of OPC Server connection when using DSC Tags?

    I'm using the DSC Module on a new project and I'm pretty impressed so far. The HMI Wizard has saved me quite a bit of time.
    My application is configured where the DSC Tags are connected to remote OPC Server Tags.
    The issue I'm having is that I cannot detect a loss of the OPC Server while the application is running. Reads of the front panel controls/indicators still return values, and the little "connection" icon next to them is still green. Even if the connection icon turned red it wouldn't help, since the front panel is not visible when the main application is running. It is a sub-VI that's in charge of OPC data interfacing; the rest of the application uses the data from the OPC sub-VI.
    I cannot effect a change on the OPC Servers, so I need a method of detecting on my end when the Server is lost.
    Any ideas on the best way to do this?
    Thanks,
    Jim

    Hi Jim,
    Ideally, error-reporting and -handling should be the way to handle this. However, if errors are not reported/handled as is sometimes the case with OPC, a quick-n-dirty way to do this would be to check for a "heartbeat" signal from your OPC Server. This could be a boolean tag which toggles On and Off (or a counter ticking). You then read this Tag in DSC in a slow loop using the Read Tag VI (not the front-panel control). And keep track of the Changed? output from this Read Tag VI.
    As long as the 'Changed?' output is true, you are receiving data from the OPC Server, and hence it's alive. You may add some deadband logic to wait for a specific period of time before declaring the Server really dead!
    Hope this helps,
    Khalid
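    The heartbeat-plus-deadband idea above is language-independent; here is a small Python sketch of the same watchdog logic, with made-up names and an injectable clock purely for illustration:

```python
import time

class HeartbeatWatchdog:
    """Track a toggling heartbeat tag and declare the server dead only
    after `deadband_s` seconds without a value change, mirroring the
    'wait before declaring it really dead' advice."""

    def __init__(self, deadband_s, clock=time.monotonic):
        self._deadband = deadband_s
        self._clock = clock
        self._last_value = None
        self._last_change = clock()

    def observe(self, value):
        """Feed the latest heartbeat-tag reading (the slow-loop poll)."""
        if value != self._last_value:
            self._last_value = value
            self._last_change = self._clock()

    def server_alive(self):
        """Alive as long as the tag changed within the deadband window."""
        return (self._clock() - self._last_change) <= self._deadband
```

    The polling loop would call `observe()` with each Read Tag result and check `server_alive()`; the deadband keeps one missed toggle from triggering a false alarm.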

  • Best way to spec a new Linux server from existing Windows 2003 server?

    We are looking to upgrade our servers to 11.1.2 and I am looking for some documentation/suggestions on the best way to spec the new box on a Linux platform.
    I do have the current documentation from ORACLE with regard to "Estimating Disk and Memory Requirements" but am looking for some processor suggestions as well..
    Any thoughts or ideas would be welcomed.
    Especially helpful would be anyone who is currently running in a Linux environment any pitfalls or issues you've been having with hardware or software.
    Thanks in advance.
    Adam,

    I don't think it is possible to recommend you any hardware setup without knowing your environment, budget and expertise.
    If you plan to install an enterprise Linux, like Oracle Linux, you should look into getting server hardware and not a desktop system. You may also want to think about using Oracle VM virtualization products.
    Are you looking for rackmount or blade systems? It might be possible to limit your hardware decision to HP, Dell and IBM. I would personally prefer HP, but that's a matter of experience.
    You should probably find out your OS requirements first. E-Business products and installation instructions are usually designed for the OS version that existed at the time. If you install a newer version of the OS, you may run into configuration or backward-compatibility issues that require installing additional software. Check your OS requirements before purchasing any hardware; the newest hardware may not support older versions of the OS.
    The HP support site shows a Linux certification matrix at http://h18000.www1.hp.com/products/servers/linux/hplinuxcert.html which might be helpful should you decide to get HP equipment.
    You might also want to check your current Oracle licensing if you upgrade to faster or more CPUs.

  • Best way to making DVD

    So what is the best way of making a DVD now that they have taken Encore away?

    With Encore. They didn't take it away; they just made it end of life at CS6. This is the link for info re using Encore CS6 with CC, updated for CC 2014 and the cloud desktop install.
    http://helpx.adobe.com/encore/kb/encore-cs6-installed-cc.html
    It has links for getting the library (which is not installed with Encore) and a workflow with CC versions.
    In the cloud desktop, assuming you have PR CC 2014 installed, you are looking for this (a screenshot was attached to the original post).

  • Best way of making filter in report!

    I have a report, and I need to make some filter tool.
    What is the best way to do this? Are there any APEX tools to do this?
    Regards,
    Kostya


  • Best way to configure iTunes on network server for multiple macs

    Hi all
    Currently have a Mac Mini (late 2006 Tiger) running iTunes library on my QNAPTS239pro Network attached storage. It works great.
    On Monday I am getting a new iMac and MacBook Pro (i.e. with Snow Leopard).
    What is the best way for iTunes to be configured from each Mac to use the same library on the NAS?
    Ideally, I want to add music from any of the Macs (usually from CD, not the store) and have a single library updated on the NAS. I don't want multiple synced copies - that's a backup nightmare.
    Any thought appreciated.
    Kind regards
    Andrew

    It's a rather old article, but click here for one possible setup scenario.
    JGG

  • Best way to move WFE to new server

    We have one web front-end server and one SQL server. I want to simply move the existing web front-end server to new hardware. What is the easiest way to do this? Also, the last admin did not patch the existing site to SP2. If I install SharePoint 2007 SP2 and attach it to databases that are on the original SharePoint 2007 RTM version, will it upgrade the database schema, or will this cause problems? We only have one site collection with about 13 GB of data; it's not a complicated environment, but there is a lot of data there. I want to do this as simply as possible, thanks.

    The basic three steps to follow when moving to new hardware are:
    1. Set up the new server farm (make sure the database versions are the same in this step).
    2. Move the databases to the new server farm. Since you don't have many web applications - in fact just one - you could do an stsadm backup of the web application and restore it on the new web app, and then apply SP2 along with all the other cumulative updates. You will have two farms running at the same time at this point, providing redundancy.
    3. Shut down the old server farm.
    That is probably the approach I would prefer. You can always read the following TechNet article:
    http://technet.microsoft.com/en-us/library/cc512725(office.12).aspx
    Hope this helps.
    Thanks
    V

  • What is the best way to protect .class files on server?

    I've been working on my website project for quite some time, years in fact. Now I'm at the point where I'm looking into signing up with a web host provider and testing the site live.
    I have some questions and concerns though. My site is made up of Java Servlets and JSP pages.
    My foremost concern is protecting my code. Many suggest obfuscating. If I understand the process correctly, it involves making the code harder to decompile. I also understand that obfuscating isn't a guarantee in protecting code. If someone really wants to use the code, they'll figure out a way to.
    So are there any other options available to protect my code?? I would like to consider as many options as possible.
    Once it is on the web host server, anyone can have access to it. Hope this doesn't sound like a stupid question - Microsoft Word comes with the option to password protect files from being opened, is there not by now some way to implement the same sort of feature with files uploaded to a server?
    Your input is appreciated.

    ejp wrote:
    "Once it is on the web host server, anyone can have access to it."
    That's not true. You're off on the wrong track before you even start. You need to acquire a proper understanding of a web server and its configuration. Your problem can be solved via correct configuration of the server. Nothing to do with obfuscation.
    So you're telling me that once I've uploaded all my .class files and everything else to my host provider, the people running the server do not have access to it? I find this hard to believe...
    What exactly would I be looking to configure on the server? Please explain.

  • The best way of Setting up a proxy server

    Hi all, I'm a student and I was asked to help set up a proxy server. The campus gave me a v20z server, and now I am wondering what's the best software to use as an OS and what other apps I should load onto it.
    Please advise

    You are right, the company should provide access to iNotes webmail; then you can use the instructions at www.nokialotusnotes.com. However, setting up Lotus Notes on Nokia Messaging is not as easy as other email accounts! In most cases you will get iNotes on the mobile but outside of Nokia Messaging, which means no synchronization.

  • Is there a way of making one portal instance serve several domain names ?

    We have one Portal instance (3.0.8) that contains one intranet, one extranet and 5 Internet sites (page hierarchies). After redirecting calls to the different domains (sites) to their corresponding Portal page in the web server, the URL reveals that we are actually running all these sites on the same Portal domain. I know that we can change the Portal domain by running the ssodatan script, but is it possible to have one Portal instance work for several Portal domains? I realise that this is probably far-fetched, but I just had to ask... This will not create a big problem for us; it's mostly an aesthetic issue.
    Morten

    Yes you can do this.
    Your machine must have multiple IPs with a different name associated with each IP.
    Look at the ssodatax scripts. This script allows you to make additional entries into the login server repository to register new names with the login server.
    Add each name using the ssodatax script and you will be fine.
    Rich

  • Whats the best way to move from mac mini server to Mac pro server?

    Hi everyone. I am currently running a Mac mini server and it's maxed out. I have just purchased a Mac Pro server, and I was wondering if anyone has moved their data in a similar scenario. Is it as simple as restoring a Time Machine backup to the new server? Any insight would be most appreciated, as I need this to go smoothly. Thank you.

    Apple's official Time Machine documentation says NOT to use TM to back up an entire server.
    The reason is that it will NOT properly back up your databases. The three most often used with OS X Server are the Mail DB, the MySQL DB and Open Directory.
    Database experience tells me the reason they don't support databases is the problem of reading data that is being written to while TM is doing its thing. It's likely that if the server is under load when Time Machine backs up, you're going to get an inconsistent copy.
    The ONLY Apple-documented GUI method of restoring a server is using the Setup Assistant during the initial setup of a server (using a FireWire cable to connect, with your old server in target disk mode).
    That is, in fact, what I would call the most reliable method of migration.
    (my 2 cents)
    Write back and let us know how the new server works on a continual basis! I've not heard of anyone having a successful restore taken from a live server.
    -Graham

  • What's the best way to setup a media server/central storage for all of my?

    I was wondering what the best way is to achieve a central media server for all my iTunes content plus iPhoto libraries, calendar syncing and contact sharing. This is what I currently have:
    iMac 20" Aluminum + External HD Backup (kids)
    Macbook Black (wife)
    Macbook Pro 15" (me)
    Airport Extreme 802.11n (obvious)
    TimeCapsule 1TB (wifi backup for wife/me)
    I would like to replace my PC in my office with a brand new Mac Pro Nehalem 8-Core, 8GB Ram, and 4TB, and replace my PC laptop in my living room attached to my tv, with an Apple TV.
    I want to centralize all our Photos from vacations, etc. Music, videos, movies, that are currently split up over wife's macbook, kids imac and my macbook pro onto my soon to be purchased Mac Pro.
    I want to be able to stream everything from my living room via Apple TV for when guests come over, dinner parties, etc. (plus I love apple and it keeps things clean)
    I'm currently using MobileMe to sync all of our calendars and contacts with my main account, which is great, but MobileMe doesn't sync to family members' accounts.
    What would I need to do to centralize all this onto my future Mac Pro so that everyone has access all the time when they are home and the key here is, modify/update/change from their machines and sync it back/update it on the Mac Pro.
    Also, I'm hoping Snow Leopard has some changes to iTunes to make this a little more possible, since we're right around the corner from this release. I don't really want to spend an additional $900+ on Snow Leopard Server to have to achieve these results, but if it makes it easier, and does the job, then I guess I might. This is all speculation though, since it's not out yet. I'd like to get this all sorted and setup within the next month.
    I was considering a Drobo, they say they can throw up iTunes Server but, I appreciate everyone for reading this, and taking the time to respond!
    Thanks!

    I'm in the process of setting up a smaller (and cheaper) but somewhat similar setup to what you want to do, so maybe one example might help point you in the right direction. My needs consist of a centralized location for data storage, which will include iPhoto libraries (I keep two separate ones), iTunes (which I also want served to the home theater system), something other than my laptop to play internet videos and downloaded content on my TV, all with ideally the lowest cost and energy use possible.
    My solution was the new Mini with a FW800 external drive as the server/media hub and Airport gigabit as the network hub (it also handles the backup drive).
    FW800 is fast enough to saturate a gigabit ethernet link, so I don't consider that much of a bottleneck. The Mini then has iTunes running at all times with its centralized library on it; it is hooked to the home theater via HDMI-DVI video and optical audio, so it can play music and also handle videos when desired; Front Row with the Apple remote is close enough to an AppleTV that I think it handles that well, and it's more full-featured than an AppleTV. It can further be used to display photos/slideshows/whatever on the TV for guests or such, or to surf from the couch with a wireless mouse/keyboard. You can also toss in an EyeTV for $150 and use that as a DVR if you feel like it.
    When I want to edit photos or such on my desktop, the gigabit link is fast enough that I can run iPhoto without noticing any significant slowdown. It's also usable over wireless, though I have a dangling extra network cable to plug into a laptop for full gigabit speed if need be. iTunes, of course, shares its library, which can be played from any of the computers in the house if so desired (iPhoto can do that too if you just want to display).
    If I REALLY wanted top speed (though I've even done video editing in iMovie via ethernet without issue), I could use a third party synch app (I like Sync) to mirror any of the content from the mini server to a local drive; this works fine with anything but multi-way synching, such as address books being modified in different locations. I'd probably try to set up one of those Mobile Me clone systems or use a 3rd party app if I needed to do that.
    Again, maybe this isn't powerful enough or "synced" enough in terms of local storage for your taste, but the advantage is that a Mini uses a minute fraction of the power of a Mac Pro, so you're saving a lot on electricity if the computer will be powered up at all times as a server, and it's also a lot more full-featured as a home theater media hub than an AppleTV. And, heck, the thing is about as well equipped as my old top-of-the-line G5 tower for a fifth of the cost and a fifteenth of the power and noise.

  • Best way for Email Notifications in EWS Managed API 2.0 [ Exchange Server 2013]

    Hi,
    I want to know the best way to get email notifications in Exchange Server. My organisation has 10 users, and I want to get a notification if any user's mailbox receives new mail. I need to create an API that will be consumed by another team in their application. I am using Exchange Server 2013.
    Please share your ideas.
    Thank you

    Take a look at EWS Notifications
    http://msdn.microsoft.com/en-us/library/office/dn458791(v=exchg.150).aspx .
    Cheers
    Glen

  • Best way to push change data from sql server to windows/web application

    I apologise if this is the wrong forum for this question.
    I have a Windows app which initially loads all data from the DB and displays it in a grid. After that, whenever any data in the DB changes or new data is inserted, only the changed or newly inserted data should be pushed from the DB to my app. The only thing that comes to mind is the SqlDependency class, but the problem with SqlDependency is that it notifies the client without saying which data was updated or inserted.
    So I am looking for guidance on the best and easiest way to push data from SQL Server to a Windows or web client. There are two issues:
    1) How to detect a data change or insert - I guess that can be handled by a trigger.
    2) The tough part: how to easily push that data from the SQL Server end to the app.
    Looking forward to expert guidance. Thanks.

    Hello,
    Yes, you can create DML triggers on INSERT and UPDATE to capture the changed data into a staging table, and then query that table from the application.
    If you are using SQL Server 2008 or a later version, you can also try Change Data Capture, which can track insert, update, and delete activity applied to a SQL Server table and store the changed values in a change table.
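    The trigger-plus-change-table pattern (the first suggestion above) can be sketched with SQLite standing in for SQL Server; the table and column names are invented for the example, and Change Data Capture itself remains a SQL Server feature:

```python
import sqlite3

# SQLite stands in for SQL Server here: the pattern is the same -
# DML triggers copy changed rows into a change table, which the
# client polls instead of diffing whole tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE items_changes (
    item_id INTEGER, name TEXT, op TEXT,
    seq INTEGER PRIMARY KEY AUTOINCREMENT
);
CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN
    INSERT INTO items_changes (item_id, name, op)
    VALUES (NEW.id, NEW.name, 'INSERT');
END;
CREATE TRIGGER items_upd AFTER UPDATE ON items
BEGIN
    INSERT INTO items_changes (item_id, name, op)
    VALUES (NEW.id, NEW.name, 'UPDATE');
END;
""")

conn.execute("INSERT INTO items (id, name) VALUES (1, 'widget')")
conn.execute("UPDATE items SET name = 'gadget' WHERE id = 1")

# The client polls the change table, in change order, to learn
# exactly which rows were inserted or updated:
changes = conn.execute(
    "SELECT item_id, name, op FROM items_changes ORDER BY seq"
).fetchall()
print(changes)   # [(1, 'widget', 'INSERT'), (1, 'gadget', 'UPDATE')]
```

    This is what SqlDependency alone does not give you: the change table records *which* rows changed, not just that something changed.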
    Regards,
    Fanny Liu
    TechNet Community Support
