VDI best practices

Hi,
Is there any documentation that outlines best practices when implementing VDI? The install guides are great, but they are just that: install guides. I'm looking for real-life examples, worked-out issues and such; case studies with instructions, if you will. I've done the Google searches, but as with most things like this there are bits and pieces of the puzzle, and none that I've found have start-to-finish examples.
For instance, where would I go to look for documentation on the best way to convert an environment of about 100 or so Windows desktops (altogether around 4-6 different templates), so that I have a near-zero IT workload maintaining 4-6 different installs instead of 100 hardware desktops? Do I still need a WSUS server, etc.?
My goal is near-zero IT, at least for the desktops. Has anyone gotten close?
As an example: creating templates. The install guides work fine, but what is the best way to configure a template so that each clone gets registered with AD (assuming a flexible pool is the best way to go)? Or does one have to go to each cloned machine and add it to the MS domain by hand?
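From what I've pieced together so far, the usual answer to the AD piece is to Sysprep the template so each clone joins the domain itself during mini-setup. A minimal XP-era sysprep.inf sketch; the domain, OU, and account names here are made up:

    ; C:\sysprep\sysprep.inf on the template, before cloning
    [UserData]
    ; "*" makes mini-setup generate a unique computer name for each clone
    ComputerName=*

    [Identification]
    ; each clone joins the domain at first boot; the join account only
    ; needs rights to create/join computer objects, not Domain Admin
    JoinDomain=CORP.EXAMPLE.COM
    MachineObjectOU="OU=VDI,DC=corp,DC=example,DC=com"
    DomainAdmin=svc-vdijoin
    DomainAdminPassword=xxxxxxxx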
What issues are there integrating all of the normal MS stuff with the VDI environment?
I'll contribute what I find as I go along, but if there is anyone out there with links to docs, I'm sure I'm not the only one who would appreciate it. Thanks
ron

Hey Ron
Regarding the VDI servers, we could probably get by with less. Only a few months ago we had just 3, and we were getting by. Then one day I took one down for maintenance and things got a little dicey for the users, but we were eking by. Then we had some crazy network issue, which I am blaming on our switches, and we lost one of the remaining two, leaving us with only one for 300+ thin clients. It didn't work out well; basically the system was unusable. So on that day I vowed to have enough servers to have one or two down for maintenance and still have failover capacity. Luckily we were blessed with extra servers, so I doubled the server count. Now I can take one down for maintenance or testing, we still have plenty of headroom for any random failures, and the users can still work fine. Hard lessons learned.
We are an Oracle hardware shop as well, so all 6 of those servers are some of their older blades: Sun/Oracle X6250s with 2x quad-core 2.5 GHz procs, maxed out on RAM at 64 GB. In the VDI admin/install guide, they do a pretty good job of helping you size your VDI servers for the number of users. Pay attention to the number of users and the number of available cores in your servers. RAM is plentiful and cheap these days, so that's generally not an issue on the core VDI (Sun Ray) servers. And Oracle's ALP protocol that the thin clients use is pretty well optimized, so network traffic is relatively low compared to other VDI vendors out there.
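To make the users-versus-cores point concrete, here's the kind of back-of-the-envelope check I mean (the sessions-per-core ratio below is a made-up placeholder; pull the real figure from the sizing tables in the admin guide):

    // Rough Sun Ray server sizing check. sessionsPerCore is an assumed
    // placeholder, not a number from Oracle's guide.
    public class SunRaySizing {
        public static void main(String[] args) {
            int servers = 6;
            int coresPerServer = 8;    // 2x quad-core
            int sessionsPerCore = 15;  // hypothetical; check the admin guide
            int serversDown = 2;       // one in maintenance plus one unplanned failure
            int users = 300;           // thin clients to support

            int capacity = (servers - serversDown) * coresPerServer * sessionsPerCore;
            System.out.printf("Capacity with %d servers down: %d sessions for %d users%n",
                    serversDown, capacity, users); // 480 sessions >= 300 users
        }
    }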
We are a VMware shop, so instead of VirtualBox virtualization, we are doing VMware. Those are older blades from HP: BL460 G1s, I believe. They are 2x dual-core 2.2 GHz with only 32 GB of RAM each. Our virtual desktops are stored on our SAN. Honestly, we probably have plenty of extra headroom in this arena too; according to what VMware tells me, we could turn off close to half the blades and still be fine. This cluster is dedicated to VDI; all our other virtual servers for the rest of the business live on other VMware clusters on separate hardware.
We maintain basically two types of desktop images. One is the dynamic desktop I referred to in an earlier post: users log in, use the machine, and it gets reset back to a clean state when they log out. Nothing is persistent, and we have preloaded all the apps we know they will use. We have about 400 of this type sitting around available for use. The apps our clinical people use are pretty lightweight, so we get by fine with 512 MB of RAM and 1 vCPU on XP SP3. Our second desktop image is a "static" desktop: basically it's assigned to a user and remains theirs permanently until we blow it away. These are reserved for special people who use special software that is not preloaded on the dynamic desktop image. The more we try to expand use of VDI, the more we end up handing these out; we just have too many types of software and don't want to clutter up our clean little clinical desktop image. That image is XP SP3 again, with 1 GB of RAM and 1 vCPU. They also get a bigger 25 GB hard drive to give them plenty of space for their special crapplications.
Our biggest bottleneck on the virtual desktop side is SAN I/O. Unfortunately we're forced to use full clones of these desktops rather than linked clones, or what you get with VirtualBox and ZFS, which make much better use of disk space and I/O. I think we currently have most of this squeezed onto about 10 600 GB 15k RPM fibre channel drives, and this is a bare minimum. We recently had an assessment that said we probably need to triple the number of spindles to get the proper I/O. This seems to be a trend in virtualization lately: space is not a problem with modern drives. The problem is that you can squeeze 60 virtual desktops into the space of one hard drive, which is a bad idea when you consider the performance you're going to get. Oh, and the ONLY way we have made this work thus far is by fine-tuning our antivirus settings on the virtual desktops to not scan anything coming off the disk (which is clean, because it was clean when I built the template). Before we did that, things were crawling and the SAN was doing 3x the I/O.
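If you want to put numbers on the spindle problem, the rough math looks like this (the per-desktop and per-spindle IOPS figures are rules of thumb, not measurements from our SAN):

    // Rough spindle estimate from IOPS; both rates are rule-of-thumb assumptions.
    public class SpindleMath {
        public static void main(String[] args) {
            int desktops = 450;        // dynamic pool plus static desktops
            int iopsPerDesktop = 12;   // assumed steady-state average per VM
            int iopsPerSpindle = 180;  // typical 15k RPM FC drive

            int spindles = (int) Math.ceil(desktops * iopsPerDesktop / (double) iopsPerSpindle);
            System.out.println("Spindles needed: " + spindles); // ~30, vs. the 10 we have
        }
    }

Which lines up with the assessment telling us to triple our spindle count.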
Again, read the install/admin guide if you haven't yet; I'm pretty sure they give some basic guidelines for storage sizing and performance, which we should have read more closely early on.
If you have other questions you think you'd like to talk about offline, you can send me an email at my personal address and we'll set something up - dwhitener at gmail dot com. Otherwise, keep the questions coming here and I'll give out whatever info I can.

Similar Messages

  • Best practice to run Microsoft Endpoint Protection client in VDI environment

    We are using a Citrix XenDesktop VDI environment. The Symantec Endpoint Protection client (VDI performance optimised) has been installed on the virtual machine image that is streamed to the clients. Basically, all the files in the golden image have been "tattooed" with a Symantec signature. Now, when a new VM starts, the Symantec scan engine simply ignores the "tattooed" files and also randomises scan times. This is a rough explanation, but I hope you've got the idea.
    We are switching from Symantec to Microsoft Endpoint Protection, and I'm looking for any information and documentation regarding best practice for running Microsoft Endpoint Protection clients in a VDI environment.
     Thanks in advance.

    I see this post is a bit old, but the organization I'm with has a very large VDI deployment using VMware. We also use SCEP 2012 for the AV.
    Did you find what you were looking for, or did you elect to take a different direction?
    We install SCEP 2012 into the base image and manage the settings using GPO; the definition updates come through the normal route.
    Our biggest challenge is getting alert messages from the clients.
    Thanks

  • Best practice to run BOBJ server

    Is it a best practice to install the BOBJ (BOE) server in the NetWeaver stack?
    We may or may not use BI.
    We may or may not use the NetWeaver Portal.
    Having said the above, I would appreciate the best solution for running the BOBJ server.
    Thanks, Gopal


  • Windows 2012 R2 File Server Cluster Storage best practice

    Hi Team,
    I am designing a solution for 1700 VDI users. I will use a Microsoft Windows 2012 R2 file server cluster to host their profile data, using Group Policy for folder redirection.
    I am looking for best practice in defining storage disk size for the user profile data. I am looking at a single disk of 30 TB to host the user profile data, spread across two disk enclosures.
    Please let me know if a single 30 TB disk can become a bottleneck holding active user profile data.
    I have SSD writable disks in storage with FC connectivity.
    Thanks
    Ravi

    Check this TechEd session, the Windows Server 2012 VDI deployment guide (pages 8-9), and this article.
    General considerations during volume size planning:
    Consider how long it will take if you ever have to run chkdsk. Chkdsk has undergone significant improvements in 2012 R2, but it will still take a long time to run against a 30 TB volume. That's downtime.
    Consider how the volume size will affect your RPO, RTO, DR, and SLA. It will take a long time to back up or restore a 30 TB volume; even at a sustained 1 GB/s, a full pass over 30 TB is roughly 8.5 hours.
    Any operation on a 30 TB volume, like a snapshot, will pose performance and additional disk-space challenges.
    For these reasons many IT pros choose to keep volume size under 2 TB. In your case, you can use 15x 2 TB volumes instead of a single 30 TB volume.
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com

  • AD Sites and Services and Best Practices

    Hey All,
    I am new to OES, but not new to AD. I am in an environment in which DSfW was recently setup to support VDI testing.
    I notice that there is no configuration under AD Sites and Services. We have multiple sites, with DCs set up at each site. The consequence of not having Sites and Services configured is that machines/users in site "A" are logging in through site "B" domain controllers. Obviously, this is neither ideal nor best practice. Secondly, this leads me to wonder how the domain controllers are replicating, since I do not see NTDS entries in the Sites and Services MMC for the domain controllers, yet I can see that AD data is replicating by comparing databases (simply adding a new user on one DC, I see it added on the secondary DCs). So I know it's replicating, but apparently not using the AD schema?
    One other question I have about DSfW concerns migrating from a mixed environment to a full AD environment. We are deploying AD primarily due to VDI initiatives, and are currently only testing this. Looking further down the road for planning purposes, I have to wonder if it's possible to stand up a 2008 R2 server, join it to the domain, dcpromo it, transfer the FSMO roles, and then decommission the DSfW systems. This would leave us with a purely Windows DC environment for authentication. Is this something people have done before? Is it a recommended path for migrating? I also see others creating a second AD environment, then building trusts between DSfW's domain and the "new" domain (assuming these are not in the same forest). That would be less than ideal.
    Thanks in advance for any responses...

    Originally Posted by jmarton
    DSfW does not currently support "sites and services", but it's on the
    roadmap and currently targeted for OES 11 SP2.
    Excellent! I feel sane now :) I can live with this, as long as it's expected/normal.
    It sounds like you need sites and services, but once that's in DSfW,
    why migrate from DSfW to MAD if DSfW works for your VDI initiative?
    You are correct. I am simply planning and making sure all the options are in play here.
    I would rather not get too deeply reliant on DSfW if it will make any future migration more difficult. Otherwise, DSfW is extremely convenient... I am impressed, actually.
    I also believe there may be a way we can control the DC used for specific "contexts" (or OUs, as Microsoft calls them). So if I have a group of users in a particular OU who reside at a particular branch, I think I should be able to set their preferred domain controller; if so, sites & services becomes nearly irrelevant. I would be interested to talk to people who are using DSfW with multiple sites in play.

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I go straight to the production tables, so the fact (and dimension) tables are pretty complex, since I use multiple sources in the logical tables to increase performance. What I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension is set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the business model (and physical model) gets more complex I sometimes struggle with the aggregates, getting them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution, far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance, I have about 10-12 different dimensions; should all of them always be connected to each fact table, either at Detail or Total level? I can see the use of logical levels when using aggregate fact tables (at quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes that seems like the easiest approach.
    Does anyone have a best practice concerning this issue? I have googled for this but haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    "For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level."
    It is not necessary to connect all dimensions; it depends on the report you are creating. But as a best practice, we should maintain them all at Detail level when you specify join conditions in the physical layer.
    For example, for the sales table: if you want to report at the ProductDimension.ProductName level, you should use Detail level; otherwise use Total level (at the Product or Employee level).
    Get Levels (available only for fact tables) changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: admin guide (Get Levels definition)
    thanks,
    Saichand.v

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering about the best practices/steps to set up and manage the admin, user logins, and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines need to be able to connect to the network, peripherals, and the external hard drive. Also, I would like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Disk-space consuming, and performance degrades when writing to the first level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
    Does anybody know what the best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael.
    I like the 3rd option and have used it many, many times. In answer to your question:
    Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized. Yes, I have read and tested this, and it works faster. An OSS consulting note is out there indicating the speed of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). Yes, but by using the result package, the manipulation can be done easily.
    Hope it helps.
    Thanks,
    Pom

  • Temp Tables - Best Practice

    Hello,
    I have a customer who uses temp tables all over their application.
    This customer is a novice and the app has its roots in VB6. We are converting it to .NET.
    I would really like to know the best practice for using temp tables.
    I have seen code like this in the app:
    CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
    That seems to work, though I do not know why the full tempdb.dbo.[##...] prefix is required.
    However, when I use this in the new report I am doing, I get runtime errors.
    I also tried this:
    CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
    I did not get errors, but I was returned data I did not expect.
    Before I delve into different ways to do this, I could use some help with a good pattern to use.
    thanks

    Hi Scott,
    Are you using the RDC still? It's not clear, but it looks like it.
    We had an API that could piggyback on the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
    The workaround is to copy the temp data into a dataset and then set the location to the dataset. There is no way that I know of to get to tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user, and MS SQL Server locks tempdb so that the user has exclusive rights on it.
    Thank you
    Don

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
    In Excel this matrix works fine and seems to be very fast. The problem is with Xcelsius. I have imported the spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing). I changed Max Rows to 7000 to accommodate the data. I placed a simple combo box and a grid on the canvas, BUT NO CHART yet, and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
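    For reference, the concatenated-key SUMIF pattern looks something like this; the cell addresses and column layout here are hypothetical, my real sheet differs:

        Key column on the data sheet, filled down all ~6000 rows (Country|Base|Category|Month):
            A2: =B2 & "|" & C2 & "|" & D2 & "|" & E2
        One cell of the display matrix, for the Base in $H2 and the Month in I$1, with the
        user-selected Country in $B$1 and Category in $B$2:
            I2: =SUMIF($A$2:$A$6001, $B$1 & "|" & $H2 & "|" & $B$2 & "|" & I$1, $F$2:$F$6001)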
    So, I guess this brings up a few questions:
    1)     Am I doing something wrong and did I miss something that would prevent this problem?
    2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
    a.     Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
    b.     Would it even work if it had that many data ranges in it?
    c.     Do you aggregate it as a crosstab (Months as Column headings) and insert that crosstabbed data into Excel.
    d.     Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query "Am I doing something wrong and did I miss something that would prevent this problem?" to the experts/gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best practice for Catalog Views?

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000), and 300 end users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how I can implement this.
    My first idea is:
    1. Create 110 procurement catalogs (one for every product category). Each catalog should contain only its product category.
    2. Assign, in my Org Model at the user level, all the "catalogs" the user should access.
    Do you have any ideas for improving this?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll run into maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit the number of views, for example strategic and non-strategic.
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have experienced it; it works, but is quite difficult), but with a limited number of views.
    Good luck.
    Vadim

  • Best practice on SQLite for games?

    Hi Everyone, I'm new to building games/apps, so I apologize if this question is redundant...
    I am developing a couple of games for Android/iOS, and was initially using a regular (unencrypted) SQLite database. I need to populate the database with a lot of info for the games, such as levels, store items, etc. Originally, I was creating the database with SQLite Manager (Firefox), and then when I installed a game on a device, it would copy that pre-populated database to the device. However, if someone were able to access that app's database, they could feasibly add unlimited coins to their account, unlock every level, etc.
    So I have a few questions:
    First, can someone access that data in an APK/IPA app once downloaded from the app store, or is the method I've been using above secure and good practice?
    Second, is the best solution to go with an encrypted database? I know Adobe Air has the built-in support for that, and I have the perfect article on how to create it (Ten tips for building better Adobe AIR applications | Adobe Developer Connection) but I would like the expert community opinion on this.
    Now, if the answer is to go with encrypted, that's great - but, in doing so, is it possible to still use the copy function at the beginning or do I need to include all of the script to create the database tables and then populate them with everything? That will be quite a bit of script to handle the initial setup, and if the user was to abandon the app halfway through that population, it might mess things up.
    Any thoughts / best practice / recommendations are very appreciated. Thank you!

    I'll just post my own reply to this.
    What I ended up doing was creating a script that creates the database and then populates the tables (unencrypted; the encryption portion is commented out until store publishing). It's a tremendous amount of code, completely repetitive except for the values I'm entering, but you can't do an insert loop or a multi-line insert statement in AIR's SQLite, so the best move is to create everything line by line.
    This creates the database, and since it's not encrypted, it can be tested using Firefox's SQLite Manager or some other database program. Once you're ready for deployment to the app stores, you simply modify the above setup to use encryption instead of the unencrypted method used for testing.
    So far this has worked best for me. If anyone needs some example code, let me know and I can post it.
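    To give a flavor of it, the generated script is just one statement per row, something like this (the table and values are hypothetical):

        CREATE TABLE store_item (id INTEGER PRIMARY KEY, name TEXT, price INTEGER);
        INSERT INTO store_item (id, name, price) VALUES (1, 'Bronze Sword', 100);
        INSERT INTO store_item (id, name, price) VALUES (2, 'Iron Sword', 250);
        -- ...and so on, one INSERT per row, each run as its own execute() call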

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database, originally designed in 2000, that now has a SQL back-end. The database has not yet been converted to a newer format such as Access 2007, since at least 2 users are still on Access 2003. It is fine if suggestions only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution. We have thousands of products, each with a single identifier. There are customers who provide us regular sales reporting for what was sold in a given time period; weekly, monthly, quarterly, and yearly time periods are the most important. This reporting may or may not include all of our product identifiers. The reporting is typically based on calendar-defined timing, although we have some customers who have their own calendars, which may not align to a calendar month or calendar year, so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products per report. Each customer report is different, and they typically have between 4-30 columns of data for each product; headers are consistently named. The product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier rows changes each week. Headers include a wide variety of data, such as overall on hand, overall on order, unsellable on hand, returns, on-hand information for each location or customer grouping, sell-through units and dollars for each location or customer grouping for that given time period, sell-through units and dollars for a cumulative time period, warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division, and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain the sales reporting information indefinitely.
    I welcome any suggestions, best practices, or resources (books, web, etc.).
    Many thanks!

    "Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables....."
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect

  • Best Practice to fetch SQL Server data and Insert into Oracle Tables

    Hello,
    I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
    We do not have any database links from Oracle to SQL Server or vice versa.
    Any help is highly appreciated.
    Thanks

    Well, that's easy:
    use a TimerTask to do the following every half hour (see the sketch below):
    - open a connection to SQL Server
    - open two connections to the Oracle databases
    - for each row you read from SQL Server, do the inserts into the Oracle databases
    - commit
    - close all connections
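    A minimal Java sketch of that loop; the JDBC URLs, credentials, and table/column names are all hypothetical:

        import java.sql.*;
        import java.util.Timer;
        import java.util.TimerTask;

        public class SqlServerToOracleSync extends TimerTask {
            @Override
            public void run() {
                // one pass: read new rows from SQL Server, insert into both Oracle DBs
                try (Connection src = DriverManager.getConnection(
                         "jdbc:sqlserver://mssqlhost;databaseName=sales", "user", "pass");
                     Connection ora1 = DriverManager.getConnection(
                         "jdbc:oracle:thin:@orahost1:1521:ORCL1", "user", "pass");
                     Connection ora2 = DriverManager.getConnection(
                         "jdbc:oracle:thin:@orahost2:1521:ORCL2", "user", "pass")) {
                    ora1.setAutoCommit(false);
                    ora2.setAutoCommit(false);
                    try (Statement st = src.createStatement();
                         ResultSet rs = st.executeQuery("SELECT id, amount FROM new_rows");
                         PreparedStatement ins1 = ora1.prepareStatement(
                             "INSERT INTO target_rows (id, amount) VALUES (?, ?)");
                         PreparedStatement ins2 = ora2.prepareStatement(
                             "INSERT INTO target_rows (id, amount) VALUES (?, ?)")) {
                        while (rs.next()) {
                            for (PreparedStatement ins : new PreparedStatement[] {ins1, ins2}) {
                                ins.setLong(1, rs.getLong("id"));
                                ins.setBigDecimal(2, rs.getBigDecimal("amount"));
                                ins.addBatch();
                            }
                        }
                        ins1.executeBatch();
                        ins2.executeBatch();
                        ora1.commit();
                        ora2.commit();
                    }
                } catch (SQLException e) {
                    e.printStackTrace(); // in real code: log, alert, and retry next cycle
                }
            }

            public static void main(String[] args) {
                // run every half hour; connections are opened and closed on each pass
                new Timer().scheduleAtFixedRate(new SqlServerToOracleSync(), 0, 30L * 60 * 1000);
            }
        }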

  • Best Practice for Image placement and Anchored Frames for use in Robohelp 9

    Hi,
    I'm looking for best practices on how to lay out my images in FrameMaker 10 so that they translate correctly to RoboHelp 9. I currently have images inside anchored frames that "run into" the right side of my text. I've adjusted the size of the anchored frame so that my text flows correctly around the image. Everything looks good in FrameMaker! Yeah! The problem is that when I link my FrameMaker document to RoboHelp, the text does not flow around my image in the same manner. On a couple of RoboHelp screens the image runs into the footer. I'm wondering if I should be using tables in FrameMaker in order to get the page layout that I'm looking for. Also, I went back and forth on whether this is a FrameMaker question or a RoboHelp question. Any assistance would be greatly appreciated.

    I think Jeff is meaning this section of the RoboHelp forums:
    http://forums.adobe.com/community/robohelp/robohelp_framemaker
