Poor man's server redundancy?

I am helping a couple of friends who have small businesses, each with a single server.  They have become concerned about the business cost of a server outage, but have a limited budget for redundancy.  I used to work on this stuff, but have been
out of it for several years, so I'd like to solicit suggestions from you all.  Our goals are to provide quick & simple server failover at a modest cost, but not necessarily instantaneous or automatic.  I've thought of a few categories of solutions:
1. Off-line server spare - Keep another server of same HW configuration; if live server, which has pair of HDD's in RAID-1 mirror, fails, remove one of the drives from the failed server, pop it into the spare server and fire it up, with same server name
& IP address.  I wonder if this would work - the disk controller in the spare server might not accept a drive from the other server. Also this would not protect from a major electrical problem, say both drives got zapped.
1a. A variation of server spare - maintain a spare server with distinct name & IP address, synchronize data nightly. If live server fails, shut it down, then rename the spare server and give it the IP address of the failed live server.
2. Failover SW such as Double Take or StorageCraft.
3. Server cluster - probably too expensive, especially if a SAN were used.
4. Virtualization.
Appreciate any suggestions.
Bob

For simplicity, have them use a virtual server. This can be moved easily to a new server because the virtual hardware is identical. Moving a physical disk to a spare server is going to be more problematic, because there may be hardware differences. Also,
the network hardware will have different identifiers, so network configuration will have to be re-done etc. This is not insurmountable, but still could be more difficult than it has to be.
Moving a virtual machine is as simple as copying it and starting it again. Of course, you still need to back up the data regularly to protect against accidental deletion.
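If the spare-server route (option 1a) is chosen instead, the nightly synchronization itself can be trivial: a scheduled one-way mirror of the data directory. A minimal sketch in Python; the directory names below are throwaway stand-ins for the real data share, and in practice you would run this from cron/Task Scheduler and copy over the network (rsync, robocopy) rather than locally:

```python
import shutil
import tempfile
from pathlib import Path

def sync_to_spare(live_data: str, spare_data: str) -> None:
    """Mirror live_data onto spare_data, replacing the spare's previous copy."""
    src, dst = Path(live_data), Path(spare_data)
    if dst.exists():
        shutil.rmtree(dst)      # discard yesterday's mirror
    shutil.copytree(src, dst)   # fresh one-way copy

# Tiny demonstration with throwaway directories:
work = Path(tempfile.mkdtemp())
(live := work / "live").mkdir()
(live / "orders.db").write_text("day 1 data")
sync_to_spare(str(live), str(work / "spare"))
print((work / "spare" / "orders.db").read_text())  # -> day 1 data
```

The point of the replace-then-copy approach is that the spare never holds a half-updated mix of old and new files; a smarter incremental tool does the same job faster, but this is the whole idea.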

Similar Messages

  • I want to connect my Mac Mini as a slave to a Mac Pro server while at the same time using the Mac Mini's Thunderbolt port peripherals (i.e. monitor, sound card, hard drives), creating a poor man's new Mac Pro. Can this be done?

    Well, I really would love the new unreleased Mac Pro; however, I'm not sure of the expected cost. Everyone speculates from $3,000 to $8,000, so I may have to wait a while to purchase.
    To the point... I want fully functional Thunderbolt ports on the current Mac Pros... wonder if anyone has workarounds yet?... or could I chain the current Mac Pro to a Mac Mini to make that happen?

  • How to implement poor-man's version control with TSQL queries

    I have a table called Project. Each row completely describes a project and has a username, project name, project description and other numeric parameters that contain all the data about the project.
    When multiple rows have the same username, this means a user owns multiple projects.
    Now I want to implement a poor-man's version control for my users by adding a new integer column called version. When a user wants to save a new version of his project, the version is incremented and a new row is inserted into the table.
    Some projects will have 1 version, others will have a dozen or more.
    By default, the user should see a data grid of projects where only the latest version of the project is displayed (including the version count) and all the older versions of each project are ignored.
    How do I write a TSQL query to populate this data grid (and ignore every version except the latest versions of each project)?
    Thanks
    Siegfried

    Should this work? It prints all the rows.
    DECLARE @Projects TABLE
    ([id] int IDENTITY(1,1), [Project] varchar(1), [Version] int)
    INSERT INTO @Projects
    ([Project], [Version])
    VALUES
    ('A', 1),
    ('A', 2),
    ('A', 3),
    ('A', 4),
    ('B', 1),
    ('B', 2),
    ('B', 3),
    ('C', 1),
    ('C', 2),
    ('D', 1)
    -- DECLARE @User varchar(100)
    SELECT *
    FROM @Projects p
    WHERE
    -- UserName = @User AND
    NOT EXISTS (SELECT 1
    FROM @Projects q
    WHERE q.id = p.id
    AND q.Version < p.Version)
    siegfried heintze
    Nope, you have the condition wrong.
    In my suggestion I used > and you replaced it with <; the correlation should also be on Project rather than id.
    It should be this:
    SELECT *
    FROM @Projects p
    WHERE
    NOT EXISTS (SELECT 1
    FROM @Projects q
    WHERE q.Project = p.Project
    AND q.Version > p.Version)
    Visakh
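For readers who want to verify the corrected pattern end-to-end, here is a self-contained sketch using an in-memory SQLite database as a stand-in for SQL Server (same table shape and data as the thread; on SQL Server 2005+ a `ROW_NUMBER() OVER (PARTITION BY Project ORDER BY Version DESC)` query is a common alternative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Projects (id INTEGER PRIMARY KEY, Project TEXT, Version INT);
    INSERT INTO Projects (Project, Version) VALUES
        ('A',1),('A',2),('A',3),('A',4),
        ('B',1),('B',2),('B',3),
        ('C',1),('C',2),
        ('D',1);
""")

# Latest version per project: keep a row only if no other row for the
# SAME project has a higher version. Correlating on id (as in the first
# attempt) never matches, so every row survives -- hence the bug.
rows = conn.execute("""
    SELECT Project, Version
    FROM Projects p
    WHERE NOT EXISTS (SELECT 1 FROM Projects q
                      WHERE q.Project = p.Project
                        AND q.Version > p.Version)
    ORDER BY Project
""").fetchall()
print(rows)  # -> [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```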

  • Hi, I'm doing a poor man's morph project in my Motion 4. It works perfectly in Motion, but when I try to import it to FCP it will transfer the photos but not the optical flow FX. What am I doing wrong?

    Hi, I don't know if I was able to send this question before, but here it goes again. I'm doing a poor man's morph project in my Motion 4 program, and it works great while in Motion, but once I try to import it to FCP, it will import the photos but not the optical flow FX. What am I doing wrong? I appreciate your help.

    Decided to use cmd A to select all the icons on the desktop and then looked for a right click option.   There it was - the option to move to a folder.   I now have them all in this folder which tells me it holds 157 items.   Now you know why I needed to know LOL
    Thought it might be of help to someone.
    Cheers Mally

  • Bpel server redundancy

    Hi!
    In our production environment we currently have one BPEL PM server (10.1.2) and we'd like to add a redundant server for this instance, which should take over in case the first server crashes...
    Could you point me to the right documentation about this problem? Thanks a lot,
    Tomas

    Looks like the server is unable to connect to the dehydration database.
    A standard installation would have this data source setup as part of the installation.
    "jdbc/BPELServerDataSourceWorkflow"
    This again uses the connection pool "BPELPM_CONNECTION_POOL"
    And the name of the data source is "BPELServerDataSourceWorkflow"
    There are a few checks that you can perform.
    1. Check whether the dehydration database is up.
    If not, bring it up first.
    2. Go to the EM and use {test connection} for "BPELPM_CONNECTION_POOL".
    If it comes back with an error, check whether the dehydration database credentials have changed post-installation.
    There are a few other datasources that share the same connection pool. Check if these are working too.
    Every Little Helps
    Kalidass Mookkaiah
    http://oraclebpelindepth.blogspot.com/

  • Directory Sync server redundancy.

    How is redundancy for the Directory Sync server accomplished?
    We have a load balanced pair of ADFS servers, a load balanced pair of ADFS proxies at our primary datacenter. We have additionally deployed an individual ADFS server and proxy at an alternate datacenter. The ADFS servers are configured as a farm. So if we
    need to fail over to the secondary datacenter, we just need to change DNS entries for our federation services. All this is in place and tested.
    How do we accomplish something similar for Directory Sync?

    Implementing FIM is another investment in terms of licensing, I would rather go with DirSync ;)
    I have heard that Microsoft is planning to get rid of DirSync in future. I think I will be good with Standalone servers for a while.
    Cheers,
    Gulab Prasad
    Technology Consultant
    Blog: http://www.exchangeranger.com

  • Is it possible to make the ISE guest server redundant ?

    Hi,
    We've an ISE cluster of two ISE nodes.
    The ISE guest server works fine on the primary ISE node.
    The MAC address of the guest client is put into the 'GuestDevices' group after the user accepts the AUP policy.
    Then the ISE sends the CoA, and the client authenticates again and is put in the guest VLAN.
    But when the primary ISE is offline, I see the guest portal AUP page on the secondary ISE node.
    I can accept the AUP policy, but then I get an error message.
    On the secondary ISE I see that the CoA sent to the switch asks to clear the session to the primary ISE...
    But the CoA request should ask to clear the session to the secondary ISE (the primary ISE is offline).
    Should it be possible to configure the ISE guest functionality redundantly in an ISE cluster?
    /SB

    The Guest portal can run on a node that assumes the Policy Services persona when the primary node with Administration persona is offline. However, it has the following restrictions:
    •Self registration is not allowed
    •Device Registration is not allowed
    •The AUP is shown at every login even if first login is selected
    •Change Password is not allowed and accounts are given access with the old password.
    •Maximum Failed Login is not enforced
    http://www.cisco.com/en/US/docs/security/ise/1.0/user_guide/ise10_guest_pol.html#wp1126706

  • DAGs and Edge server redundancy

    We are in the process of creating an Exchange DAG and setting up redundancy between our edge servers and need to know if there will be any type of service outage as a result of any of these changes?  If so, how should we expect the process to go? 
    E.G., will there be a service outage after the DAG is created but not after the edge redundancy is set? etc.

    Hi sbusarow,
    For edge server:
    You can deploy multiple Edge Transport servers and use multiple DNS MX resource records to load balance activity across those servers. You can also use Network Load Balancing (NLB) to provide load balancing and high availability for Edge Transport servers.
    For DAG:
    You could refer to below:
    http://technet.microsoft.com/en-us/library/dd638121.aspx 
    Once you understand how these pieces fit together, you will be able to plan the change.
    Regards!
    Gavin
    TechNet Community Support
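As an illustration of the MX-based load balancing Gavin mentions, a zone fragment might look like this (domain and host names are hypothetical):

```
; Two equal-preference MX records: sending servers pick between the
; Edge Transport servers roughly evenly, giving cheap load balancing
; and failover for inbound mail.
example.com.   IN  MX  10  edge1.example.com.
example.com.   IN  MX  10  edge2.example.com.
```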

  • Quorum Server Redundancy Question

    Hi All,
    I'm just investigating my options for a new cluster configuration and was trying to find out about multiple quorum server hosts. All the examples I have come across in the documentation have one physical host acting as a quorum server for an n+1 node cluster. I'm assuming that there will be quorum issues if the physical host hosting the quorum server is down while the cluster nodes perform a reconfiguration/switch.
    Is it possible to have two physical hosts, with quorum servers defined on each, that can then be configured into the cluster, effectively pointing at two different quorum servers for votes?

    Correct, the QS is only used if the cluster changes state, i.e. nodes leave or join. However, having more than one QS for a single cluster does not help. You simply lower your overall availability, because there are more failure scenarios where one of them is down, leaving the remaining cluster node unable to obtain sufficient votes.
    Active monitoring and prompt repair of the QS (or QD) is the right approach.
    Tim
    ---

  • When dedicated RAID controllers are too expensive, the poor man should...

    So the Disk Setup Guide on the PPBM7 Tweakers page states that for 5+ drives, RAID 3/5 (with dedicated controller) is the way to go... But that's a BIG price jump from a 4 drive system.
    As of now my disks are:
    C: Sandisk Extreme 120GB
    D: WD RE4 1TB
    E: WD RE4 1TB
    F: WD Green 1TB
    It's not fast enough, and I need more room on the media drive (it's 50% full, but will keep filling and degrading quickly). I also want a bigger F: as a dedicated backup.
    I was thinking:
    C: Same. OS, Programs, Page.
    D + E: 2xRE4 in RAID 0 (onboard controller) Media, Projects
    F: 160GB SSD for Cache/Preview
    G: 2GB Backup of media/projects RAID 0
    Now, I know the risks of RAID 0 for projects (and really for my setup losing media would be just as catastrophic), but 1) they're enterprise drives which should be a bit more reliable and 2) I run a backup everyday.
    Am I making sane decisions?
    P.S.
    My entire cache/preview amounts to 15GB, so it seems a royal waste of space to run the 2 RE4's in RAID 0 for the cache. 120+GB SSD can be had for ~$65 ebay.

    Eric - Thank you! I've never heard of putting the cache on the OS drive, but that would indeed save me some money and effort if it's not likely to degrade performance.
    Bill and RJ - I apologize for wording it unclearly (plus a typo), but what I was trying to say was G: 2TB Backup of media/projects RAID 0 meaning G is a single 2TB drive acting as a dedicated backup of the media drive (which is a 2-disk RAID0). I am clear on the fact that RAID 0 has no redundancy, and doubles the risk of data loss.
    RJ - I understand that even enterprise drives have a chance of failure, and perhaps in my circumstances the same chance as a regular drive, but I've been using these drives individually for a good while and have error- and health-checked them, so they're not duds. I would back up my media from the single drive where it now resides, set up the RAID 0, then restore the media to the new RAID 0 media drive, so there'd be no chance (excluding lightning, bad luck or the wrath of the almighty) of losing everything mid-backup.
    So, it appears the cheapest solution for me would be:
    C: (120GB SSD) OS, Programs, Page. Cache, Previews
    D+E: (2TB Spanned) Media, Projects
    F: Single 2TB disk, backup of media/projects
    Exports on D/E or F - it doesn't really matter to me. I don't mind waiting, and they generally go straight on the internet post-export.

  • Poor man's port scan blocker

    working with cisco IOS on 3750's at the access level, Nexus 7K's at the core. 
    I need to find a cheap but relatively harmless way to block port scans.  We have not typically had to do this, most people on the internal network behave themselves.  But we have a programmer bent on proving she's an "ethical hacker" and frankly I haven't got time for this nonsense. 
    I would just shut down her physical port but she runs these 'tools' from a vm-server and there are other hosts running on the same physical NIC so I can't just shut off that port.
    I was looking into CBAC, but I need to be very careful how I craft the ACL so as not to cause legitimate traffic to cease. That would be an RPE. Has anyone seen a detailed write-up on how to proceed? The training I took just sort of 'touched' on it... kind of like "here's this other feature"... but didn't really delve deeply into specifics. I did a search on Google and was overwhelmed. The first couple of articles I located, probably just by coincidence, were written for folks already steeped in the spy-vs-spy world and so were way over my KB threshold. As I said, I can't afford to make a mistake here.
    Anyone have some tips on where I can get started on this?  Thanks so much in advance.

    Hi
    Well there are several ways you can handle this.
    But let's first handle it the way it is supposed to be handled.
    1) Is she breaking any IT policy? If yes, then let HR deal with the offending programmer; just make sure they have enough proof to swat the offender hard.
    If the answer is no, and no company policy is broken, then frankly I doubt it is your responsibility to fix the problem, which in this case is that the IT policy is lagging behind what is desirable.
    Lets ignore the above part and check on what you asked for.
    First of all, since the machine is a VM in an ESX host, you will have problems halting the traffic, simply because not all traffic leaves the ESX host.
    So what can you do ?  Is the ip address static or dynamic ?
    If it is a static ip address then you can easily write an ACL that allows what she is supposed to be able to do from that machine and then block the rest from that particular machine and then allow everything else.
    Since you did not have an ACL from the beginning this should only impact her ability to scan.
    If the 3750 software is quite new you can setup an ACL with a connection to an EEM and TCL script that IF she starts to scan you can block her address via adding a new acl or the switch sends you an email or anything you can imagine inbetween.
    If the IP address comes from DHCP, then you can either lock it down to a specific address in the DHCP scope, or set up something that tells you what the address is and applies a scripted ACL.
    So what other things can you do ?
    You could set up a MAC address access-list and block her MAC address passing through the switch.
    You can do a lot of other things, like poisoning the ARP table of the machine and making things not work the way she wants: duplicate IP addresses and MAC addresses, or maybe a duplicate Windows name.
    But that sort of thing can backfire and, to be honest, does not sound like the doings of a person who is in charge of the network.
    I would go for the first alternative, i.e. make sure that what she is doing is not OK according to the policies, let her know that it is not OK, and if she persists, turn her over to the HR department.
    Good luck
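For the static-address case described above, the ACL could look something like this (a hedged IOS sketch; every address, port and interface name below is hypothetical and must be replaced with what her VM legitimately needs):

```
! Permit only the VM's legitimate traffic, drop and log the rest from
! that host, and leave all other hosts on the port untouched.
ip access-list extended BLOCK-SCANNER
 remark -- allow only what the VM legitimately needs
 permit tcp host 10.1.1.50 host 10.1.2.10 eq 443
 permit udp host 10.1.1.50 any eq 53
 remark -- drop (and log) everything else sourced from the VM
 deny   ip  host 10.1.1.50 any log
 remark -- all other hosts unaffected
 permit ip  any any
!
interface GigabitEthernet1/0/12
 ip access-group BLOCK-SCANNER in
```

The final `permit ip any any` is what keeps this safe for the other hosts sharing the physical NIC; without it the ACL's implicit deny would cut them off too.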

  • Poor performance after server port to SUN

    Hi all,
    porting our C++ CORBA server to SUN was not very successful: the performance of the SUN version was 15 times worse than the performance on Windows with the same source code and some reference tests.
    I have attached our build options and the configuration of our development equipment.
    We have done this port also in earlier versions, and we saw then that the server was slower, but only by a factor of 1.5 to 2, so I am wondering why I see such a big difference in this release.
    What we changed between the releases is that we now use the boost libraries (V 1.32) and therefore compile and link with the stlport4 library. We also changed from Forte 6.2U2 to Sun Studio 10.
    First I thought that it was the old development equipment that made the difference, but our newest available machine (SunOS sudecgn0032 5.10 Generic_118833-23 sun4v sparc SUNW,Sun-Fire-T1000) shows the same runtime performance. Any idea? Are there tools that can help analyse whether there is perhaps a structural difficulty?
    regards
    Arno
    Here are the information about our development environment:
    Machine:
    SunOS sun4 5.8 Generic_108528-24 sun4u sparc SUNW,Ultra-80 (4*450MHz)
    Compilerversion:
    CC: Sun C++ 5.7 Patch 117830-08 2006/07/12
    Example for a compiler call:
    CC -I.. -I../poet_code -I/km/iona/asp/6.2/include -I/km/poet/inc -I/km/libs/flexlm/machind -I/km/sqstest_plato/PlatoServer/Interface -I/km/sqstest_plato/PlatoServer/Basics -I/km/sqstest_plato/PlatoServer/XMLBase -I/km/sqstest_plato/PlatoServer/XMLBase/poet_code -I../../bison++ -I/km/sqstest_plato/libs/xerces/src -I/km/sqstest_plato/libs/xalan/src -library=stlport4 -D_ASSERTE=assert -features=extensions -features=rtti -w -DNDEBUG -D_GARBAGE_COLLECTOR +d -mt  -D_APP_SERVER   -c ../ApplicationServer.cpp
    Link command:
    CC ApplicationServer.o AppServCopyInfo.o AppServPersistentLocator.o AppServTxnManager.o AppServTxnShell.o BaseHelper.o BulkManipulationsManager.o CheckTestCaseRefManip.o cmdlexer.o cmdparse.o Dispatch.o FilterParser.o FlagOutput.o ImportHelper.o ImportInfoCheck.o ImportParser.o ManagerLocator.o ParserNodes.o PEmbeddedVariables.o PSActionWord.o PSActionWordTemplate.o PSActivity.o PSActivityElement.o PSActivityTemplate.o PSActivityType.o PSAppServBase.o PSAttachment.o PSBasePlus.o PSCall.o PSCPO.o PSDataStructure.o PSDataStructureTemplate.o PSEnumElement.o PSEnumeration.o PSEnumType.o PSFilterDef.o PSLinkedAttachment.o PSOrder.o PSOrderTemplate.o PSParameter.o PSParameterTemplate.o PSPathBase.o PSPlannedActivity.o PSReportDef.o PSRequirement.o PSRequirementFolder.o PSRequirementItem.o PSScheme.o PSSE_IsTestedIn_TC.o PSSystemelement.o PSSystemelementFolder.o PSSystemelementItem.o PSTCDataAssignment.o PSTcsContainer.o PSTemplate.o PSTemplateFolder.o PSTest.o PSTestCase.o PSTestStep.o PSTextTemplate.o PSTsCondition.o PSTsConditionTemplate.o PSUDATemplate.o PSummary.o PSWorkspace.o PSXsltContainer.o PTR00HASH.o PUDAttribute.o RefManip.o SActivityFilter.o SAppServBaseFilter.o SBulkManipulations.o SCursor.o SFilter.o SParameterFilter.o SReport.o SRequirementFilter.o SSystemFuncFilter.o STestCaseFilter.o STestFilter.o STestSchemeFilter.o STestStepFilter.o STestViewFilter.o SWorkspaceManager.o TcsSAXHandler.o udalexer.o UDATemplateParser.o Validation.o VisibleRef.o VisibleRefManip.o Persistent.o XmlBasePersistent.o -mt -library=stlport4 -L/km/iona/asp/6.2/lib -lit_art -lit_poa -lit_ifc -lit_naming -lit_location -lit_iiop -lit_csi -L/km/poet/runtime/lib -lpt95Fbs -lpt95Fex -lpt95Fin -lpt95Fkn -lpt95Foq -lpt95Fsc -lpt95Ftm -L/km/sqstest_plato/PlatoServer/Basics/boost/lib -lboost_thread-sw-mt-1_33_1 -lboost_regex-sw-mt-1_33_1 -lboost_date_time-sw-mt-1_33_1 -L/km/sqstest_plato/PlatoServer/Interface/Release -lstubs -lskell -L/km/sqstest_plato/PlatoServer/Basics/Release 
-lbasic -L/km/sqstest_plato/libs/xerces/lib/solaris -lxerces-c -L/km/sqstest_plato/libs/xalan/lib/solaris -lxalan-c -lxalanMsg -L/km/sqstest_plato/PlatoServer/Interface/Release -lstubs -lskell -L/km/sqstest_plato/PlatoServer/Basics/Release -lbasic -L/km/sqstest_plato/PlatoServer/XMLBase/Release -lxmlbase -L/km/sqstest_plato/PlatoServer/libs/ReleaseInfo/Release -lReleaseInfo -L/km/sqstest_plato/PlatoServer/Basics/zlib/Release -lz -lsocket -lnsl -lpthread -o ./ApplicationServer

    The program was ported also in earlier versions; 1.5 years ago we used Forte 6.2 and the performance was OK.
    > Possibly the program design was based on Windows features that are inappropriate for Unix.
    The principal design didn't change; the only thing is that we switched to the boost libraries, where we use the thread, regex, filesystem and date-time libraries.
    > Have you tried any other Unix-like system? Linux, AIX, HPUX, etc? If so, how does the performance compare to Solaris?
    Not at the moment, because the order is customer-driven, but HP and Linux are also an option.
    > Also consider machine differences. For example, your old Ultra-80 system at 450 MHz will not keep up with a modern x86 or x64 system at 3+ GHz. The clock speed could account for a factor of 6.
    That was my first thought, but as I wrote in an earlier post, the performance test case needs the same time on a 6x1GHz machine (some Sun-Fire-T1000).
    > Also, how much real memory does the SPARC system have?
    4 GB! And during the test run the machine uses less than 30% of this memory.
    > If the program is not multithreaded, the additional processors on the Ultra-80 won't help.
    But it is!
    > If it is multithreaded, the default libthread or libpthread on Solaris 8 does not give the best performance. You can link with the alternative lwp-based thread library on Solaris 8 by adding the link-time option -R /usr/lib/lwp (for 32-bit applications) or -R /usr/lib/lwp/64 (for 64-bit applications).
    The running application uses both the thread and the pthread library; can that be a problem? Is it right that the lwp path includes only the normal thread library?
    > Is there a particular reason why you are using the obsolete Solaris 8 and the old Sun Studio 10?
    Because we have customers who do not upgrade. Can we develop on Solaris 10 with Sun Studio 11 and deploy on 5.8 without risk?
    regards
    Arno

  • Server redundancy

    Hi!
    I'm currently working on a router based on Arch.
    The thing that I want to accomplish is to have two machines working redundantly.
    They've got the same hardware etc, so the deal is to keep the configuration files up2date (updated through various scripts dynamically), and when one machine dies, the other should take over, having updated configuration files and, obviously, the same IP as the first one.
    Anyone got hints where to start out? Had a look at keepalived.org, seems promising (except for the 1+ year silence), but not sure how well it works.
    I'm looking for a solution that will also work for other types of servers, databases, www, dns, mail etc etc... if possible.
    Any ideas, tips, hints are welcome : )
    /Diddi
    Last edited by diddi (2009-01-20 20:18:46)

    lets see...
    relayd+carp (openbsd)
    proprietary vrrp (cisco stuff)
    heartbeatd
    keepalived (lvs, linux-ha)
    using a frontend proxy like ha-proxy (layer 3/4 or layer 7)
    whackamole (build on spread toolkit)
    or a proprietary frontend LB like coyotepoint or fat-pipe (h5 i think)
    a portable ip block with multiple routes broadcast through BGP (think same ip in different locations). often used with dns servers (opendns does this for instance)
    likely you want a combination of some components of the above....
    Last edited by cactus (2009-01-23 18:02:26)
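Of the options cactus lists, keepalived's VRRP mode maps most directly onto the two-identical-routers setup described. A minimal sketch of /etc/keepalived/keepalived.conf (interface name and the shared address are hypothetical; both boxes run keepalived, and the BACKUP takes over the virtual IP when the MASTER stops advertising):

```
vrrp_instance ROUTER_VIP {
    state MASTER              # BACKUP on the second machine
    interface eth0
    virtual_router_id 51      # must match on both machines
    priority 150              # lower (e.g. 100) on the BACKUP
    advert_int 1              # seconds between VRRP advertisements
    virtual_ipaddress {
        192.168.0.1/24        # the address clients actually use
    }
}
```

The configuration files themselves still have to be kept in sync by some other means (the scripts mentioned above, rsync, etc.); VRRP only moves the IP address.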

  • I have a locked phone but I have no money to unlock it. What do I do? I love Apple but I am a poor man

    What do I do? I'm so poor, but I love the Apple phone, and it's locked.

    I have access to one of the free sites, and it gave me some of the data for it.
    It is open to the UK networks Orange, T-Mobile or Three.
    I hope this is useful information for those who wanted a helping hand to unlock it. I too, being poor, have no Visa card or money to unlock it; however, I am one of the most devoted fans of Apple and their products, which respect their users and provide them with the best services. I honestly hope for help.

  • ICloud Server Redundancy

    There's very little information out there about how iCloud works behind the scenes. I hear about the giant datacenter in North Carolina, as well as others. I certainly expect data storage on RAID and networks and servers arranged so no single device or hardware failure should cause a problem. But what if something more catastrophic happens? A single natural disaster in North Carolina could render all kinds of local redundancy irrelevant. Does anyone know if we have any actual geographic diversity on our iCloud data?

    I never said anything was anyone's fault. Try reading. Of course the servers should not be down for 2 days. Of course it should work properly. You think Apple planned it like that? But clearly there was a major problem that took considerably longer to fix than anyone, including Apple, expected. None of us here know what exactly went wrong. Perhaps most of us wouldn't even understand it if Apple described the problem in great detail. Fixing networks and servers is not as simple as fixing the tyre on a car.
    These things happen. GoDaddy (one of the world's largest web hosts) had an outage some days ago that took down millions of commercial and personal websites and email accounts. No technology company is immune to such technical failures. Sometimes it takes considerable time to identify the problem and further time to fix it.
    If you are not using a free, consumer-level email service for business purposes, then my comment does not apply to you.
    However, if you need an email service that gives guaranteed service levels, and guaranteed uptime, use one. There are plenty out there. iCloud is not that service.
