Best Practices for Virtual Hosting

I am providing hosting for a few churches besides our own and have some questions regarding best practices.
Say my primary domain is churchhosting.com, with an A record "mail.churchhosting.com" pointing to my static IP. It also has an MX record pointing to "mail.churchhosting.com" and a PTR record, set by my ISP, resolving to "mail.churchhosting.com".
For the other sites I am hosting, say "church1.com" and "church2.com", should I set their MX records to "mail.churchhosting.com", or should I create an A record "mail.church1.com" pointing to my static IP with an MX record to "mail.church1.com.", and likewise for the "church2.com" domain?
What would best practice be, and what potential problems might either approach present for the server?
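To make the two options concrete, here is roughly what the zone entries would look like (the IP address is a documentation placeholder):

    ; Option A: church1.com's mail rides on the hosting domain's mail host
    church1.com.        IN  MX  10  mail.churchhosting.com.

    ; Option B: church1.com gets its own mail name on the same static IP
    mail.church1.com.   IN  A       203.0.113.10
    church1.com.        IN  MX  10  mail.church1.com.

Either way, the PTR record can only point to one name (mail.churchhosting.com), which is part of what I am unsure about.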

Am I doing this correctly? I want to know how many SSL certs I need under the following setup.
I am in a similar situation, I think. I have multiple virtual domains set up, for example company.com, company1.com, and company2.com.
Mail Services General Settings shows the host name as company.com. company.com, company1.com, and company2.com are all listed under Advanced > Hosting as virtual domains.
All these domains are on a single IP address. The DNS is hosted at GoDaddy, and the MX records for company.com, company1.com, and company2.com are all '0 @ company.com'.
All the CNAME records for mail.company.com, mail.company1.com, and mail.company2.com point to "@" and all the A records point to the same IP address. There is only one server.
I think what this means is that I've set it up so that all mail.xxx.com is routed to mail.company.com for all virtual domains. I want to use SSL certs for mail. How many will I need under this arrangement? Is it just one for mail.company.com or is it 3, one for each virtual domain?
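For context, if the answer turns out to be one certificate covering all three names, I assume a single SAN cert request would look something like this (an openssl config sketch; the layout is my guess):

    [req]
    prompt             = no
    distinguished_name = dn
    req_extensions     = v3_req
    [dn]
    CN = mail.company.com
    [v3_req]
    subjectAltName = @alt_names
    [alt_names]
    DNS.1 = mail.company.com
    DNS.2 = mail.company1.com
    DNS.3 = mail.company2.com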
Thanks.

Similar Messages

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC, that will be running on a new WinSrv 2012 r2 host server.   (This
    will be for a brand new network setup, new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non virtual and virtual volumes/partitions/drives,  
    the use of sysvol, logs, & data volumes/drives on hosts & guests,
    RAID levels for the host and the guest(s),  
    IDE vs SCSI and drivers both non virtual and virtual and the booting there of,  
    disk caching settings on both host and guests.  
    Thanks so much for any information you can share.

    A bit of non essential additional info:
    We are a small-to-midrange school district that, after close to 20 years on Novell networks, has decided to design and create a new Microsoft network and migrate all of our data and services
    over to the new infrastructure. We are planning on rolling out 2012 R2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undertake this project, and most of the information was pretty solid with little ambiguity, except for
    the information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, are still referring to performing this under Server 2008 R2, and we haven't really
    seen all that much on Server 2012 R2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Best Practice for trimming content in Sharepoint Hosted Apps?

    Hey there,
    I'm developing a SharePoint 2013 app that is set to be SharePoint Hosted. I have a section within the app that I'd like to be configuration-related, so I would like to only allow certain users or roles to access this content or even see
    that it exists (i.e. an Admin button, if you will). What is the best practice for accomplishing this in SharePoint 2013 apps? Thus far I've been doing everything using jQuery and the REST API, and I'm hoping there's a standard within this that I
    should be using.
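    For reference, the kind of REST check I have been sketching looks roughly like this (whether IsSiteAdmin is the right property to key on is part of my question):

        GET <app web url>/_api/web/currentuser?$select=IsSiteAdmin
        Accept: application/json; odata=verbose

    I realize anything enforced only in client-side jQuery can be bypassed, which is partly why I am asking.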
    Thanks in advance to anyone who can weigh in here.
    Mike

    Hi,
    According to
    this documentation, “You must configure a new name in Domain Name Services (DNS) to host the apps. To help improve security, the domain name should not be a subdomain
    of the domain that hosts the SharePoint sites. For example, if the SharePoint sites are at Contoso.com, consider ContosoApps.com instead of App.Contoso.com as the domain name”.
    More information:
    http://technet.microsoft.com/en-us/library/fp161237(v=office.15)
    For production hosting scenarios, you would still have to create a DNS routing strategy within your intranet and optionally configure your firewall.
    The link below will show how to create and configure a production environment for apps for SharePoint:
    http://technet.microsoft.com/en-us/library/fp161232(v=office.15)
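    As an illustration of the DNS piece, the dedicated app domain is typically served by a wildcard entry so that each app's unique host name resolves (the names follow the documentation's example; the target host is my assumption):

        ; hypothetical zone entry for the dedicated app domain
        *.contosoapps.com.   IN  CNAME  sharepoint.contoso.com.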
    Thanks
    Patrick Liang
    Forum Support

  • Best practice for licence server for RDS Farm & Certificate errors

    Hello,
    I am in the process of creating an RDS farm using Server 2008 R2.  I have three Session Hosts and a Connection Broker.
    I have a set of 10 user CALs available and also another 20 on our current RDS server which will need migrating once we go live with the farm.
    I understand the User CALs need to be installed on another Server 2008 R2 server, and I am wondering what best practice is. We are running an entirely virtual environment and it would be simple enough to create another server and install the CALs on there.
    The only issue with that is that I would need to create a replica of this new machine for DR purposes, but this would take up valuable space which may not be necessary.
    We are planning on creating replicas of one of the Session hosts and the broker for DR, so I am guessing I would need to install some CALs on the Session Host which is going to be replicated.
    There are a few options and I am just wondering what is the best way to go about things.
    Also, as an aside, I am getting an annoying certificate error each time I log a test user onto the RDS farm - I think this is because I am using the DNS alias of the RDS farm to log on. Is there an easy way to get around this, other than ticking 'Do not show
    this message again'? I have been doing some research and the world of certificates is very confusing!!
    Thanks,
    Caroline
    C.Rafferty

    Hi Caroline,
    Firstly, for your licensing issue: you can perform the steps on any VM, or create a new VM that also serves as the replica for the RDSH server. Just be sure that you install the RD License server role on it, activate it, and then install the RDS CALs on it. To be safe,
    if possible do not install the RD License server on the same machine as the RDCB; keep it separate. You can also install the RD License server alongside AD, or make a replica of that machine and install RD Licensing on it.
    Best practices for setting up Remote Desktop Licensing (Terminal Server Licensing) across Active Directory Domains/Forests or Workgroup
    http://support.microsoft.com/kb/2473823
    What is the specific certificate error you are receiving?
    If you're going to allow users to connect externally and they will not be part of your domain, you will need to deploy certificates from a public CA. In the meantime, you can refer to the blog below for insight into the certificate requirements.
    Certificate Requirements for Windows 2008 R2 and Windows 2012 Remote Desktop Services
    http://blogs.technet.com/b/askperf/archive/2014/01/24/certificate-requirements-for-windows-2008-r2-and-windows-2012-remote-desktop-services.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Best practices for implementing OIM

    We plan on putting the OIM servers behind a (hardware) LB. When I develop an OIM client I am required to specify the OIM endpoint(s) via the property java.naming.provider.url. In the case of an LB, I'd specify a virtual host there. The question is what the best practice is for configuring the LB - timeout, persistence, monitoring? I don't think the LB vendor is relevant, but just in case, I have a choice of F5 BigIP and Citrix NetScaler.
    My understanding is that the Java class tcUtilityFactory is supposed to be instantiated once (in a web client) and maintain the connection, but the LB will close the connection after the timeout is exceeded. So another question: if I want to use an LB, do I have to take care of rebuilding the connection when it has expired, or open/close a connection every time tcUtilityFactory is needed? Any advice will be appreciated.
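    For the rebuild-on-expiry route, what I have in mind is roughly this (a sketch against the OIM 9.x/10g client API; the discovery path, credentials, and error handling are placeholders):

        import java.util.Hashtable;
        import Thor.API.tcUtilityFactory;
        import com.thortech.xl.util.config.ConfigurationClient;

        // Hypothetical holder that lazily (re)builds the OIM connection,
        // so a connection silently closed by the LB can be replaced.
        public class OimConnectionHolder {
            private tcUtilityFactory factory;

            private tcUtilityFactory connect() throws Exception {
                // java.naming.provider.url (the LB virtual host) comes from the client's xlconfig settings
                ConfigurationClient.ComplexSetting cfg =
                        ConfigurationClient.getComplexSettingByPath("Discovery.CoreServer");
                Hashtable env = cfg.getAllSettings();
                return new tcUtilityFactory(env, "xelsysadm", "password"); // placeholder credentials
            }

            public synchronized tcUtilityFactory get() throws Exception {
                if (factory == null) {
                    factory = connect(); // first use, or first call after invalidate()
                }
                return factory;
            }

            // Call this on a communication failure; the next get() rebuilds the connection.
            public synchronized void invalidate() {
                if (factory != null) {
                    try { factory.close(); } catch (Exception ignored) { /* LB may already have dropped us */ }
                    factory = null;
                }
            }
        }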
    Thanks,
    Alex

    No, I was not going to sync timeouts - just let it close connections after, say, 5 minutes of inactivity. The reason is that the performance data is horrible: from my desktop environment, initialization takes almost 9 seconds, while reading data from OIM takes only 150 milliseconds. I can't afford more than 0.5 seconds for the whole OIM operation, as we are talking about customer experience.
    Thanks,
    Alex

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
    I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are still useful).
    Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type. This type includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Running 'ncs stop' does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
    Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes in the output of ps -ef when logged in to the shell as root.
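    For reference, the CLI configuration I applied was along these lines (quoting from memory - the exact ADE-OS syntax may vary by PI version, and the address and community string here are made up):

        configure terminal
          snmp-server community public ro
          snmp-server host 10.10.10.10 version 2c public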
    Thanks guys!

    Amihan_Zerrudo wrote:
    1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
    You should look to the functional requirements rather than the costs. If the bean needs to be session scoped (e.g. to maintain the logged-in user), then do so. If it only needs to be request scoped (e.g. single-page form data), then keep it request scoped.
    2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will it tax my server resources? Right now I am using the Sun GlassFish Server.
    It will certainly consume resources. Just supply enough CPU speed and memory to the server. You cannot expect a webserver running on a Pentium 500MHz with 256MB of memory to flawlessly serve 100 simultaneous users in the same second. But you may expect it to serve 100 users per 24 hours.
    3.) Can you suggest best practices in memory management given the architecture I described above?
    Just write code so that it doesn't unnecessarily eat memory. Only allocate memory if your application needs to do so. You should let the hardware depend on the application requirements, not let the application depend on the hardware specs.
    4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun GlassFish Server take care of that, or will I have to purchase a powerful server?
    GlassFish is just application server software; it is not server hardware. Your concerns are rather hardware-related.
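    To make the scope distinction concrete, a minimal sketch (the bean class names are hypothetical):

        <%-- request scope: lives for one request, e.g. a single page's form data --%>
        <jsp:useBean id="formData" class="com.example.FormBean" scope="request" />
        <%-- session scope: lives as long as the user's session, e.g. the logged-in user --%>
        <jsp:useBean id="currentUser" class="com.example.UserBean" scope="session" />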

  • Best practice for integrating a 3 point metro-e into our network.

    Hello,
    We have just started to integrate a new 3 point metro-e WAN connection to our main school office. We are moving from point-to-point T-1s to 10 Mb metro-e. At the main office we have a 50 Mb circuit going out to 3 other sites at 10 Mb each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006, and at the 3rd site we are trying to connect to a Catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working - we feel that it is a gateway-type problem - but we have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote catalyst from the main site, but I cannot reach the devices on the other side of the remote catalyst; the remote catalyst, however, can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using encapsulation dot1q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. But it works - is this not creating one large broadcast domain?
    If you have any questions or need further configuration data, please let me know.
    The previous connection was a T1 through a 2620, but this is not compatible with metro-e, so we are trying to connect directly through the catalysts.
    The other two connection points will be connecting through Cisco routers that are compatible with metro-e, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-e project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not adjacent in EIGRP.
    Try adding a network statement for the 171.0 link and form a neighborship between the main and remote site for the L3 routing to work.
    Once that adjacency is up, you should be able to reach the remote site hosts.
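    On both switches that would look roughly like this (the AS number and subnet are examples only - use your actual EIGRP AS and the real subnet of that link):

        router eigrp 100
         network 171.68.1.0 0.0.0.255
         no auto-summary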
    HTH-Cheers,
    Swaroop

  • Best practices for creating Discoverer database connections - public vs. private

    I have enabled SSO for Discoverer. So when you browse to http://host:port/discoverer/viewer you get prompted for your SSO
    username/password. I have enabled users to create their own private
    connections. I logged in as portal and created a private connection. I then, from
    Oracle Portal, created a portlet and added a Discoverer worksheet using the private
    connection that I created as the portal user. This works fine... users accessing
    the portal can see the worksheet. When they click the analyze link, the
    users are prompted to enter a password for the private connection. The
    following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or
    because the public connection password was invalid. Please enter the correct
    password now to continue.
    I originally created a public connection... and then followed the same steps from Oracle Portal to create the portlet and display the
    worksheet. Worksheet is displayed properly from Portal, when users click the
    analyze link they are taken to Discoverer Viewer without having to enter a
    password. The problem with this is that when a user browses to
    http://host:port/discoverer/viewer they enter their SSO information and then
    any user with an SSO account can see the public connection...very insecure!
    When private connections are used, no connection information is displayed to
    SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the Worksheet portlet from Portal, I enter the following for Database
    Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is what are the best practices for creating Discoverer Database
    Connections.
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal Page Group. I then want to
    display portlets with Discoverer worksheets. Certain worksheets I want to have
    the ability to display the analyze link. When the SSO user clicks on this they
    will be taken to Discoverer Viewer and prompted for no logon information. All
    SSO users will see the same data...there is no need to restrict access based on
    SSO username...1 database user will be set up in either the public or private
    connection.

    You can make this happen by creating a private connection for the 40 users via a CAPI script and, when creating the portlet, selecting the 2nd option in the Users Logged In section. With this, the portlet uses each user's own private connection every time they log in,
    so it won't ask for a password.
    Another thing: there is an option in ASC, in the Discoverer section, to require entering a password or not, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks
    Kiran

  • Best Practice for link to WebdynPro page in welcome page

    Hi Experts,
        I am new to SAP Portal and need some guidance from you. I have a requirement to create a welcome page, which is a JSP and has a link to a WebDynpro page. I have to put the URL in the JSP file, and I do not know what kind of URL I should put there.
    The problem is that if I put the URL I can see in the address bar, like 'http://DevServer/WebDynPro/ApplcationA', and then transport it to another server, for example Production, the real URL might change to 'http://ProdServer/WebDynPro/ApplcationA'. That would break the link in the JSP.
        I would like to ask you the best practice for this case. What URL? What configuration?
    Thank you in advance,
    Noppong Jinbunluphol
    P.S. I created the JSP in a portal application DC.

    Dear Noppong,
    You can do it in multiple ways, for example:
    1. Get the current host name and build the complete URL for the WebDynpro iView at runtime:
    // the portal component request exposes the underlying servlet request
    IPortalComponentRequest request = (IPortalComponentRequest) this.getRequest();
    HttpServletRequest req = request.getServletRequest();
    // getRequestURL() returns the full URL of the current request, e.g. http://DevServer:50000/irj/...
    // take the scheme and host from it and append the WebDynpro application path
    StringBuffer strURL = req.getRequestURL();
    2. Create a KM Document or Link for the WebDynpro iView, or create a WPC Web Page for the WebDynpro iView.
    Refer to http://help.sap.com/saphelp_nw70/helpdata/en/06/4776399abf4b73945acb8fb4f41473/frameset.htm
    and http://help.sap.com/saphelp_nw70ehp1/helpdata/en/ff/681a4138a147cbabc3c76bde4dcdbd/content.htm
    Hope it helps
    Best Regards
    Arun Jaiswal

  • Best practice for RDGW placement in RDS 2012 R2 deployment

    Hi,
    I have been setting up a RDS 2012 R2 farm deployment and the time has come for setting up the RDGW servers. I have a farm with 4 SH servers, 2 WA servers, 2 CB servers and 1 LS.
    Farm works great for LAN and VPN users.
    Now i want to add two domain joined RDGW servers.
    The question is: I've read a lot on TechNet and different sites about how to set this up, but no one mentions any best practices for where to place them.
    Should i:
    - set up WAP in my DMZ with ADFS in LAN, then place the RDGW in the LAN and reverse proxy in
    - place RDGW in the DMZ, opening all those required ports into the LAN
    - place the RDGW in the LAN, then port forward port 443 into it from internet
    Any help is greatly appreciated.

    Hi,
    The deployment depends entirely on your company's requirements, as there are many things to take care of, such as hardware, network, security, and other related concerns. Personally, to set up the RD Gateway server, I would not recommend the 1st option. As per my research,
    for the best result you can use option 2 (place the RDG server in the DMZ and then allow the required ports), because that way the outside network can't directly connect to your internal servers, and it is more difficult for an attacker to break into the network. A perimeter
    network (DMZ) is a small network that is set up separately from an organization's private network and the Internet. In a network, the hosts most vulnerable to attack are those that provide services to users outside of the LAN, such as e-mail, web, RD Gateway,
    RD Web Access, and DNS servers. Because of the increased potential of these hosts being compromised, they are placed into their own sub-network, called a perimeter network, in order to protect the rest of the network if an intruder were to succeed. You can refer to
    the article beneath for more information.
    RD Gateway deployment in a perimeter network & Firewall rules
    http://blogs.msdn.com/b/rds/archive/2009/07/31/rd-gateway-deployment-in-a-perimeter-network-firewall-rules.aspx
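    As a rough sketch of what option 2 implies at the firewall (a minimal rule set - verify the full list against the article above):

        Internet  -> RDG (DMZ)         : TCP 443             (HTTPS to RD Gateway)
        RDG (DMZ) -> session hosts     : TCP 3389            (RDP)
        RDG (DMZ) -> domain controllers: TCP/UDP 88, 389, 53 (Kerberos, LDAP, DNS)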
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Best Practice for the database owner of an SAP database.

    We recently had a user account removed from our SAP system when this person left the agency.  The account was associated with the SAP database (he created the database a couple of years ago). 
    I'd like to change the owner of the database to <domain>\<sid>adm (e.g. XYZ\dv1adm), as this is the system admin account used on the host server and is a login on the SQL Server. I don't want to associate the database with another admin user, as that will change over time.
    What is the best practice for the database owner of an SAP database?
    Thanks
    Laurie McGinley

    Hi Laurie
    I'm not sure if this is best practice or not, but I've always had the SA user as the owner of the database. It just makes it easier for restores to other systems etc.
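    If it helps, the actual change is a single statement (T-SQL - the database name here is made up):

        -- make sa the owner of the (hypothetical) DV1 database
        ALTER AUTHORIZATION ON DATABASE::[DV1] TO [sa];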
    Ken

  • Best practice for running multiple sites on 1 CF install?

    Hi-
    I'm setting up a new hosting environment (Windows Server 2008 Standard 64 bit VPS  configuration, MySQL, IIS 7, CF 9)
    Has anyone seen any docs or can anyone suggest best practices for configuring multiple sites in this environment? At this point I'm thinking simple is best, one new site in IIS for each client (domain) and point it to CF.
    Given this environment, is anyone aware of any gotchas within the setup of CF 9 on IIS 7?
    Thank you in advance,
    Rich

    There's nothing wrong with that approach. You can run as many IIS sites as you like against a single CF install.
    As for installing CF on IIS 7, I recommend the following: install CF 9 without connecting it to IIS, then install the 9.0.1 upgrade and any hotfixes, then connect CF to IIS using the web server configuration utility. This will keep you from having to install the IIS 6 compatibility layer that's needed with CF 9 but not with CF 9.0.1.
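    If you end up scripting the per-client site creation, it is roughly one appcmd call per domain (the site name, binding, and path here are made up):

        %windir%\system32\inetsrv\appcmd add site /name:"client1.com" ^
            /bindings:"http/*:80:www.client1.com" /physicalPath:"C:\inetpub\client1"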
    Dave Watts, CTO, Fig Leaf Software
    http://www.figleaf.com/
    http://training.figleaf.com/

  • Best Practice for Conversion Workflow

    Hello,
    I'm converting video files from our "home grown" virtual media reserve to iTunes U. Some of the files are in RM format, some are already compressed .mov's (not H.264) and some I have the original DV files for.
    Anyone out there have a best practice for converting these file types for posting to iTunes U? I have Final Cut Studio (Compressor), QT Pro and Squeeze available to me.
    Any experience you have with this would be helpful.
    Thanks,
    Jeana

    For converting old files to a podcast-compatible video, and based on the machine you have, consider the Elgato Turbo.264. It is a fairly priced "co-processor" for video conversion, comprising an application and a small USB device with an encoder chip in it. In my experience, it is the fastest way to create podcast video files, and the amount of time that you will save will pay for the device quickly (about $100). Plus, it does batch conversion of any video that your system currently plays through QuickTime. It has all the necessary presets and you can create your own. It has a few minor limitations, such as not supporting (at this moment) enhanced podcasting features like chapter markers and closed captions, but since you have old files for conversion, that won't matter.
    For creating new content, the workflow varies a lot. Since you mention MP3s, I guess you are also interested in audio files. I would stick with GarageBand, especially if you are a beginner, plus it supports enhanced podcasts.
    In any case, the most important goal is to have the simplest and fastest way to go from recording to publication. The less editing the better. To attain that, the best methods will require the largest investments. For example, for video production the best way is to produce the content live, so that when you finish recording it is only a matter of encoding and publishing. That requires the use of a video switcher that can ingest at least one video camera and a computer output to properly capture presentation material. That's the minimum; there are several devices that can do this for you. Some are disguised PCs and some connect to a PC for tapeless recording. You can check out the TriCaster, which I like but wish it was a Mac and not a Windows XP PC. Other routes may include video mixers from manufacturers like Edirol, Panasonic or Sony, connected to a VTR or directly to a Mac for direct-to-disc capture. If you look at some of the content available in iTunes U, you will see what I explain here. This workflow requires preparation and sufficient live support, but you will have your material ready for delivery almost immediately after the recorded event. No editing required. Finally, the most intensive workflow is to record everything separately and edit it later, which is extremely time consuming.

  • Best practice for server configuration for iTunes U

    Hello all, I'm completely new to iTunes U, never heard of this until now and we have zero documentation on how to set it up. I was given the task to look at best practice for setting up the server for iTunes U, and I need your help.
    *My first question*: Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with sub-categories like programs/courses, and within those you have things like lecture audio/video files that students can download/view in iTunes. So where are these files hosted? Is it on your own server or is it on Apple's server? Where and how do you manage the content?
    *2nd question:* We have two Xserves sitting in our server room ready to roll. My question is, what is the best method to configure them so they meet our need for "high availability in active/active mode, load balancing, and server scaling"? Originally I was thinking about using a 3rd-party load-balancing device to meet these needs, but I was told there is no budget for it, so this is not going to happen. I know there is IP failover, but one server has to sit in standby mode, which is a waste. So the most likely scenario is to set up DNS round robin and put both Xserves in active/active. My question now is (and this may be related to question 1): say all the content data like audio/video files are stored by us (we are going to link a portion of our SAN space to the Xserves for storage); if we go with DNS round robin and put the 2 servers in active/active mode, can both servers access a common shared network space? Or is this not possible, meaning each server must have its own storage space, and I must therefore use something like rsync to make sure the contents on both servers are identical? Should I use Xsan, or is rsync good enough?
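    (By rsync I mean something along these lines - the paths and host name are made up:)

        # one-way mirror of the content share from the first Xserve to the second
        rsync -av --delete /Volumes/Media/ xserve2:/Volumes/Media/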
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!

    Raja Kondar wrote:
    What is the best practice for having server pools, i.e.
    1) having a single large server pool consisting of "n" guest VMs, or
    2) having multiple small server pools, each consisting of a smaller number of guest VMs?
    I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is Official Best Practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have up to 20 servers in it: OCFS2 starts to strain after that.

  • Best practice for real time requirement

    All,
    I am trying to find out the best practice for reporting against real-time data. Is it:
    1 - WebI against a universe/BEx query on top of hybrid cubes?
    2 - Crystal Reports directly against the ECC data?
    3 - another solution, such as Data Federator, or something different?
    I am looking to know if anyone has had such a requirement, and also to hear their experience:
    did they face big challenges with hybrid cubes?
    Thanks in advance for your help
    Philippe

    Well, their first requirement was to get real-time data: if I am in Xcelsius and click refresh, I want it to load my latest data.
    With Live Office, I can either schedule a Crystal report and get the data delayed, or use the Live Office option to refresh right now. Is that a correct assumption?
    I was talking about BW, just in case they are willing to change the requirement from real time to every 5 minutes.
    Just so you know, we are also thinking of the following option:
    1 - modify the virtual provider on the CRM machine to get all the custom fields needed for the Xcelsius dashboard
    2 - build some interactive reports on top of these virtual providers within CRM
    3 - get the link to this report; it is one of the report features within CRM
    4 - design and build your dashboard on top of it
    5 - export your SWF file to the CRM Web UI
    We are trying to see which one is the best.
    Philippe
