BFILE: need advice for best practice

Hi,
I'm planning to implement a document management system. These are my requirements:
(0) Oracle 11gR2 on Windows 2008 server box
(1) Document can be of type Word, Excel, PDF or plain text file
(2) Document will get stored in DB as BFILE in a table
(3) Documents will get stored in a directory structure: action/year/month, i.e. there will be many DB directory objects
(4) User has read only access to files on DB server that result from BFILE
(5) User must check out/check in document for updating content
So my first problem is how to "upload" a user's file into the DB. My idea is:
- there is a "transfer" directory where the user has read/write access
- the client program copies the user's file into the transfer directory
- the client program calls a PL/SQL-procedure to create a new entry in the BFILE table
- this procedure will run with augmented rights
- procedure may need to create a new DB directory (depending on action, year and/or month)
- procedure must copy the file from transfer directory into correct directory (UTL_FILE?)
- procedure must create new row in BFILE table
Is this a practicable way? Is there anything that I could do better?
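A rough, untested sketch of what I have in mind (table, sequence and directory names are just placeholders, and I've left out creating a new directory object, which would need dynamic SQL and extra privileges):

-- rough sketch only; names are placeholders
CREATE OR REPLACE PROCEDURE register_document (
    p_target_dir IN VARCHAR2,   -- name of an existing DB directory object
    p_filename   IN VARCHAR2,
    p_action     IN VARCHAR2
) AUTHID DEFINER                -- runs with the owner's ("augmented") rights
AS
    l_src UTL_FILE.FILE_TYPE;
    l_dst UTL_FILE.FILE_TYPE;
    l_buf RAW(32767);
BEGIN
    -- binary-safe copy from the transfer directory into the target directory
    -- (UTL_FILE.FCOPY is line-oriented and only safe for plain text files)
    l_src := UTL_FILE.FOPEN('TRANSFER_DIR', p_filename, 'rb', 32767);
    l_dst := UTL_FILE.FOPEN(p_target_dir, p_filename, 'wb', 32767);
    LOOP
        BEGIN
            UTL_FILE.GET_RAW(l_src, l_buf, 32767);
            UTL_FILE.PUT_RAW(l_dst, l_buf, TRUE);
        EXCEPTION
            WHEN NO_DATA_FOUND THEN EXIT;
        END;
    END LOOP;
    UTL_FILE.FCLOSE(l_src);
    UTL_FILE.FCLOSE(l_dst);
    -- register the document as a BFILE pointing at its new location
    INSERT INTO documents (doc_id, action, filename, doc_file)
    VALUES (doc_seq.NEXTVAL, p_action, p_filename,
            BFILENAME(p_target_dir, p_filename));
END register_document;
/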
Thanks in advance for any hints,
Stefan
Edited by: Stefan Misch on 06.05.2012 18:42

Stefan Misch wrote:
yes, from a DBA point of view...
Not really just from a DBA point of view. If you're a developer and you choose BFILE, and you don't have those BFILEs on the file system being backed up and they subsequently go "missing", I would say you (the developer) are at fault for not understanding the infrastructure you are working within.
Stefan Misch wrote:
But what about the possibility for the users to browse their files? This would mean I would have to duplicate the files: one copy that goes into the DB, is stored as a BLOB and can be used to search; another copy stored on the file system just to enable the user to browse their files (i.e. what files were created for action "offers" in February 2012; the filenames contain the customer id and name as well as the user id). In most cases there will be fewer than 100 files in any of those directories.
This is why I thought a BFILE might be the best alternative, as I get both: fast index search and browsing capability for users that are used to using Windows Explorer...
Sounds like it would be simple enough to add some metadata about the files in a table. So a bunch of columns providing things like "action", "date", "customer id", etc., along with the document stored in a BLOB column.
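Just as an untested sketch with made-up names, something along these lines:

-- untested sketch, made-up names
CREATE TABLE documents (
    doc_id      NUMBER        PRIMARY KEY,
    action      VARCHAR2(30)  NOT NULL,
    doc_date    DATE          NOT NULL,
    customer_id NUMBER        NOT NULL,
    user_id     VARCHAR2(30)  NOT NULL,
    filename    VARCHAR2(255) NOT NULL,
    content     BLOB
);

-- "browsing" then becomes a simple query instead of Windows Explorer, e.g.
SELECT filename, doc_date, customer_id
  FROM documents
 WHERE action = 'offers'
   AND doc_date BETWEEN DATE '2012-02-01' AND DATE '2012-02-29';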
As for the users browsing the files, you'd need to build an application to interface with the database ... but I don't see how you're going to get away from building an application to interface with the database for this in any event.
I personally wouldn't be a fan of providing users any sort of access to a production server's file system, but that could just be me.

Similar Messages

  • Advice for Soon-to-be MacPro Owner. Need Recs for Best Practices...

    I'll be getting a Quad Core 3 Ghz with 1GB of RAM, a 250Gig HD, the ATI X1900 card. It will be my first mac after five years (replacing a well-used G4 Tibook 1Ghz).
    First the pressing questions: Thanks to the advice of many on this board, I'll be buying 4GB of RAM from Crucial (and upgrading the HD down the road when needs warrant).
    1) Am I able to add the new RAM with the 1G that the system comes with? Or will they be incompatible, requiring me to uninstall the shipped RAM?
    Another HUGE issue I've been struggling with is whether or not to batch migrate everything on my TiBook over to the MacPro. I have so many legacy apps and fonts that I probably don't use any more, and they have probably contributed to intermittent crashes and performance issues. I'm leaning towards fresh installs of my most crucial apps - Photoshop with plugins, Lightroom, Firefox with extensions - and just slowly and systematically re-installing software as the need arises.
    Apart from that...I'd like to get a consensus as to new system best practices. What should I be doing/buying to ensure and establish a clean, maintenance-lite, high-performance running machine?

    I believe you will end up with 2x512MB of RAM from the Apple store. If you want to add 4GB more, you'll want to get 4x1GB RAM sticks. 5GB is never an "optimal" amount, and people talk as if that's bad, but it simply means the last gigabyte isn't accessed quite as fast. You'll want to change the placement so the 4x1GB sticks are "first" and are all paired up nicely, so your other two 512MB sticks only get accessed when needed. A little searching here will turn up explanations of how best to populate the RAM for your situation. It's still better to have 5GB where the fifth gigabyte isn't quite as fast than to have 4GB. The sticks will not be incompatible, but you WILL want to remove the original RAM, put the 4GB into the optimal slots, and then add the other two 512MB chips.
    Do fresh installs. Absolutely. Then only add those fonts that you really need. If you use a ton of fonts I'd get some font checking app that will verify them.
    I don't use RAID for my home machine. I use 4 internal 500gig drives. One is my boot, the other is my data (although it is now full and I'll be adding a pair of external FW). Each HD has a mirror backup drive. I use SuperDuper to create a clone of my Boot drive only after a period of a week or two of rock solid performance following any system update. Then I don't touch it till another update or installation of an app followed by a few weeks of solid performance with all of my critical apps. That allows me to update quicktime or a security update without concern...because some of those updates really cause havoc with people. If I have a problem (and it has happened) I just boot from my other drive and clone that known-good drive back to the other. I also backup my data drive "manually" with Superduper.
    You will get higher performance with RAID, of course, but doing that requires three drives (two for performance and one for backup) just for data/scratch, as well as two more for the boot drive and its backup. Some folks can fit all their boot and data on one drive, but Photoshop and many other apps (FCP) really prefer data to be on a separate disk. My setup isn't the absolute fastest, but for me it's a very solid, low-maintenance, good-performing setup.

  • Need advice for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to using external transactions so we can perform database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to help comment on whether these recommendations are indeed valid or in line with best practice. And for folks that have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes (the added external-transaction checks), the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Looking for best practice on application scope beans

    Hey – a portal newbie here. I’ve got some application scope beans that need to be initialized on startup. My first thought was to create a servlet that would set the bean. Then I saw the GlobalApp setting, but that looks like it is more session scope than application… Looking to be corrected here if I am wrong.
    Is there a place where these types of things traditionally happen? Read only, so no cluster worries (I think). Using WLP 8.1 SP4 and looking for best practices. Thanks for the help!

    To support "code sharing" you need an integrated source code control system. Several options are out there but CVS (https://www.cvshome.org/) is a nice choice, and it's completely free and it runs on Windows, Linux, and most UNIX variants.
    Your next decision is on IDE and application server. These are usually from a single "source". For instance, you can choose Oracle's JDeveloper and Deploy to Oracle Application Server; or go with free NetBeans IDE and Jakarta Tomcat; or IBM's WebSphere and their application server. Selection of IDE and AppServer will likely result in heated debates.

  • Looking for best practice on J2EE development environment

    Hi,
    We are starting to develop with J2EE. We are looking for best practices for a J2EE development environment. Our concern is mainly code sharing and deployment.
    Thanks, Charles

    To support "code sharing" you need an integrated source code control system. Several options are out there but CVS (https://www.cvshome.org/) is a nice choice, and it's completely free and it runs on Windows, Linux, and most UNIX variants.
    Your next decision is on an IDE and application server. These are usually from a single "source". For instance, you can choose Oracle's JDeveloper and deploy to Oracle Application Server; or go with the free NetBeans IDE and Jakarta Tomcat; or IBM's WebSphere and their application server. Selection of IDE and AppServer will likely result in heated debates.

  • Looking for best practice / installation guide for grid agent for RAC

    I am looking for best practice / installation guide for grid agent for RAC, running on windows server.
    Thanks.

    Please refer to:
    MOS note Id : [ID 378037.1] -- How To Install Oracle 10g Grid Agent On RAC
    http://repettas.wordpress.com/2007/10/21/how-to-install-oracle-10g-grid-agent-on-rac/
    Regards
    Rajesh

  • Looking for best practice white paper on Internet Based Client Management

    Looking for a best practice white paper on Internet Based Client Management for SCCM 2012 R2.
    Has anyone implemented this in a medium-sized corporate environment? 10k+ workstations. We have a single primary site, SQL server and 85 DPs.

    How about the TechNet docs: http://technet.microsoft.com/en-us/library/gg712701.aspx#Support_Internet_Clients ?
    Or one of the many blog posts on the subject shown from a web search: http://www.bing.com/search?q=configuration+manager+2012+internet+based+client+management&go=Submit+Query&qs=bs&form=QBRE ?
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Need advice for deploy adf web fusion application created in Jdev11gTp4

    hello,
    I need advice for deploying an ADF web Fusion application created in JDeveloper 11g TP4,
    and it would be nice if you could point me to helpful sites.
    thanks
    greenApple

    Is there something specific in TP4 that makes you want to use TP4? As John suggests, it might be an idea to use the full production release (11g). As for resources for information, you can check out
    [Jdev Home|http://otn.oracle.com/products/jdev] - this page contains links to the developer's guides and various how-tos etc. The following page is also useful and is focused more on those who are less familiar with Java:
    [JDev for Forms|http://otn.oracle.com/formsdesignerj2ee]
    Hope this helps and maybe if you can be more specific we can better guide you.
    Regards
    Grant

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
    During OOW12 I tried to attend every XML session I could. There was one where a Mr. Drake was explaining something about not using CLOB
    as the storage option for the XML, and that "it will break your application."
    We're moving forward with storing the industry-standard invoice in an XMLType column, but I'm now concerned that our table definition is not what was advised:
    -- I've dummied this down to protect company assets
      CREATE TABLE "INVOICE_DOC"
       (     "INVOICE_ID" NUMBER NOT NULL ENABLE,
         "DOC" "SYS"."XMLTYPE"  NOT NULL ENABLE,
         "VERSION" VARCHAR2(256) NOT NULL ENABLE,
         "STATUS" VARCHAR2(256),
         "STATE" VARCHAR2(256),
         "USER_ID" VARCHAR2(256),
         "APP_ID" VARCHAR2(256),
         "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
         "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
          CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
                 REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
       ) SEGMENT CREATION IMMEDIATE
    INITRANS 20  
    TABLESPACE "####_####_DATA"
           XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB  (
      TABLESPACE "####_####_DATA"  XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB  (
      TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
      NOCACHE LOGGING
      STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####"
    {code}
    What is a best practice for this type of table? Yes, we intend to register the schema against an XSD.
    Any help/advice would be appreciated.
    -abe

    Hi,
    I suggest you read this paper: Oracle XML DB : Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirement, i.e. how XML data is accessed.
    There was one where a Mr. Drake was explaining something about not using CLOB as the storage option for the XML, and that "it will break your application."
    I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though it is still supported for backward compatibility).
    The default XMLType storage starting with version 11.2.0.2 is now Binary XML, a post-parse binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of the BASICFILE CLOB.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
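    As a sketch only (adapting your dummied-down example; the directory object and schema location are placeholders, so check the exact syntax against the 11.2 documentation):
    BEGIN
      DBMS_XMLSCHEMA.registerSchema(
        schemaurl => 'http://mycompanynamehere.com/xdb/Invoice###.xsd',
        schemadoc => bfilename('XSD_DIR', 'Invoice.xsd'),  -- hypothetical directory object
        gentypes  => FALSE,
        gentables => FALSE,
        options   => DBMS_XMLSCHEMA.REGISTER_BINARYXML
      );
    END;
    /
    CREATE TABLE invoice_doc (
      invoice_id NUMBER NOT NULL,
      doc        XMLTYPE NOT NULL
    )
    XMLTYPE COLUMN doc STORE AS SECUREFILE BINARY XML
    XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice";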
    The other common approach for schema-based XML is Object-Relational storage.
    BTW... you may want to post here next time, in the dedicated forum : {forum:id=34}
    Mark Drake is one of the regular users there, along with Marco Gralike, whom you've probably also seen at OOW.
    Edited by: odie_63 on 18 oct. 2012 21:55

  • Switch with 2 wireless routers (configuration for best practice/advice?)

    HI folks,
    I have a gigabit switch and 2 wireless G routers. I'll leave the model numbers out as they're fairly irrelevant - all Linksys.
    Router 1 is used as a router only (due to location in basement)
    Router 2 is used for wireless only
    My current network setup:
    DSL MODEM (accessed on 192.168.2.1 - can not be changed) > Router 1(192.168.1.1)
    Router 1 > Switch (I believe it can't be changed, 192.168.2.12 - no web GUI)
    Switch > everything else including Router 2
    Everything works except Router 2 - I can't connect to it, wired or wirelessly, unless it is connected directly to a PC.
    Is my setup wrong
    and/or is there a best practice?
    Many thanks!!!

    What is the model number of the switch?
    Normally a switch that cannot be changed does not have an IP address. So if your switch has an address (you said it was 192.168.2.12), I would assume that it can be changed and that it must support either a GUI or have some way to set or reset the switch.
    Since Router 1 is using the 192.168.1.x subnet, the switch would need to have a 192.168.1.x address (assuming that it even has an IP address); otherwise Router 1 will not be able to access the switch.
    I would suggest that initially, you setup your two routers without the switch, and make sure they are working properly, then add the switch.  Normally you should not need to change any settings in your routers when you add the switch.
    To setup your two routers, see my post at this URL:
    http://forums.linksys.com/linksys/board/message?board.id=Wireless_Routers&message.id=108928
    Message Edited by toomanydonuts on 04-07-2009 02:39 AM

  • Need advice about best characterset for XMLDB

    Hi,
    Oracle 9.2.0.5 Windows 2000
    Please give me advice about the best character set configuration for XML DB.
    During installation, the Oracle installer suggests charset = AL32UTF8 for multilingual data and ncharset = AL16UTF16.
    Are these good settings for a database that will be used for general multilingual data and XML DB?
    Thanks,
    Viacheslav

    Yes, we strongly recommend the use of AL32UTF8 for XML DB.
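    If you want to double-check an existing database, something like this shows the current settings:
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');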

  • Advice on Best practice for inter-countries Active Directory

    We want to merge three Active Directory environments, with one as the parent in Dubai and children in Dubai, Bahrain and Kuwait. The time zones are different and the sites are connected using VPN/leased lines. From my studies I have explored two options. One is to have the parent domain/forest in Dubai and a child domain in each respective country/office; the second is to have the parent and all child domains in the Dubai data center, as it is bigger, while the respective countries have DCs connected to their respective child domains in Dubai. (Personally, I find the second option safer.)
    Kindly advise which approach comes under best practice.
    Thanks in advance.

    Hi Richard Mueller,
    You got my point perfectly. We have three different forests/domains in three different countries. I asked this question because I am worried about replication problems.
    And yes, there are political reasons why we want multiple domains under one single forest. I have the following points:
    1. With multiple domains you introduce complications with trusts.
    (Yes, we will face complications; that is why I will have a VM at HQ hosting the three child domains for the 3 countries, sitting right next to my main AD server that holds the forest root domain - which I hope will help in fixing replication problems.)
    2. And accessing resources in remote domains.
    (To address this issue I will implement two additional DCs in the respective countries to make the resources available; these RODCs will be pointed toward their respective main domains in HQ.)
    As an example:- 
    HQ data center=============
    Company.com (forest/domain)
    3 child domain to company.com
    example uae.company.com
    =======================
    UAE regional office=====================
    2 RODCs pointed towards uae.company.com in HQ
    ==================================
    Please tell me if I make sense here.

  • I need advice (for the best way to export uncompressed files)

    Using Final Cut Pro for a 2D animation project (30 fps), I've experienced a problem exporting uncompressed TGA image sequence files.
    Here's the workflow I use:
    File > Export > QT Conversion > Image Sequence > Options > TGA, uncompressed 8-bit, millions of colors, 30 fps > OK
    After exporting the files, I expected each file to be 1MB, but each was less than 1MB. That tells us those are compressed files. The file sizes are random. When I use Avid Nitris there is no problem, but it is different with Final Cut Pro.
    Do you know the best way to export uncompressed TGA image sequence files out of FCP (for best quality output)?
    Thank you in advance.
    Jasmine

    Patrick,
    I'm sorry it has taken me so long to reply to you. I can't post what you requested, because Avid is not here. So I will post another example, and I have another question. This is the nearest approach to the problem. See the examples below.
    I use QuickTime 7.1 and Final Cut Pro 5.1.1 in the PowerMac G5 2.0 Ghz.
    1. File > Export > QT Conversion > Image Sequence > Options > TGA, 30, Best Depth
    I expected to get 1MB each, but I got less than 1MB each. That tells us those are compressed files, and they show up as low-quality images. (Besides, the file sizes are random.)
    2. File > Export > QT Conversion > Image Sequence > Options > TIFF, 30, Best Depth (Compression: None)
    Finally, it was a great success. See the file sizes: all 1MB each. I can get the result that I want.
    OK, now for my question.
    Why is there a different result (TGA vs. TIFF)?
    The TIFF options have a compression option, but the TGA options don't, so the TGA image sequence can't produce uncompressed files. For what reason? Do you know any particular reason?
    : TGA options have no compression option.
    : TIFF options have a compression option.
    What is 'Little Endian'? I don't know this option.
    Could you explain that for me, please? Lay it all out for me, please.
    Thank you for your kindness,
    Jasmine

  • Need advice for future design and hardware I should purchase now.

    I was wondering if someone could assist me in making a decision on where I should take the future design of my network. Currently the design has no redundancy, and I've been told recently that my company has a bit of money to spend and that it needs to spend it within the next 2 weeks. I am fairly new to the company, so I haven't been able to really think about future redundant designs, nor have I studied much about designing networks. There are about 200-300 people that may be using my network at once; not all users are at the location in question, but they may be requesting resources from our servers.
    I've included a basic design of the "core" of my network and would like any suggestions for creating redundancy in the future and also optimizing the way data travels to our servers. Do people generally have redundant Layer 3 switches for the core in small networks such as mine? I will be replacing the 2811 since it only has 100Mbps connections, and I was thinking of perhaps replacing it with a Layer 3 switch, with the plan to install another identical Layer 3 switch to offer redundancy in the future.
    Also, would it be a good idea to move the servers into a VLAN on the core? Thanks for any advice on what I should be purchasing now, with plans for redundancy being implemented over a year.  -Mark

    40k can go pretty quickly depending on the scope. Your server farm especially should be dual-homed, capable of surviving link, hardware, routing, and software failure.
    It's going to be best practice for your server farm to be in its own logical subnet, so the failover mechanism can be controlled by routing protocols, as opposed to FHRPs such as HSRP/VRRP/GLBP, especially since you can tune routing timers for sub-second convergence.
    Budget will be the primary limitation (as it always is), but dual 6500s running VSS and FWSM would be the ideal way. Data centers should be designed with high availability in mind, hence the need for 2x devices.
    Depending on the size of the SAN/Virtual infrastructure Nexus might want to be considered but you will chew up 40k before you know it.
    Also make sure the server farm is scaled properly. Your server farms should be oversubscribed in a much higher ratio compared to your access layer.
    CCNP, CCIP, CCDP, CCNA: Security/Wireless
    Blog: http://ccie-or-null.net/

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 R2 Hyper-V virtualized AD DC that will be running on a new WinSrv 2012 R2 host server? (This will be for a brand new network setup: new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non-virtual and virtual volumes/partitions/drives,
    the use of sysvol, logs, and data volumes/drives on hosts and guests,
    RAID levels for the host and the guest(s),
    IDE vs. SCSI and drivers, both non-virtual and virtual, and the booting thereof,
    disk caching settings on both host and guests.
    Thanks so much for any information you can share.

    A bit of non-essential additional info:
    We are a small-to-midrange school district that, after close to 20 years on Novell networks, has decided to design and create a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 R2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undertake this project, and most of the information was pretty solid with little ambiguity, except for information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to performing this under Server 2008 R2, and we haven't really seen all that much on Server 2012 R2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

Maybe you are looking for

  • Problem with Arabic characters

    Hi: I don't know if this is the correct place to post the question, but here it goes... I have an SQL 2005 database, connected via a Linked Server to an Oracle Database. I have a table in SQL that contains arabic characters, and I need to insert it i

  • WS_EXCEL is Obsolete in ECC 6.0

    Hi Guys, could you please suggest which is the similar function module in ECC 6.0? Thanks, Gourisankar.

  • Free goods - system picking wrong free good

    Dear All, I have defined free goods record in tcode vbn1 for the combination of 1) plant and material 2) division and material We have maintained different free goods for different record. While creating sales order system is picking free goods from

  • Manual Bank Statement Clearing.

    Dear All, While processing BRS through FF67, a batch input session is being created. When I execute this session, the system is prompting me to enter Business area and Profit center for each line item, as we have given Business area and Profit center as man

  • Calling HTTP from ABAP

    Hi , I want to call HTTP link from ABAP. On initial research , I found that I can do the task in 2 ways:--- 1. by using class cl_http_client, 2. by using program RSHTTP20 . Is it the correct information I got. Out of the 2 methods, which method can I