Create Test Environment - Best Practice

Hi all,
Does anyone know the best practice for creating a test environment from our company DB for testing purposes?
OUR SCENARIO:
SAP B1 9.1 PL 6
MSSQL 2008 R2
Integration service activated
Windows 2008 R2 Enterprise Edition for both the client and server machines
Thanks in advance
--LUCA

Hi Manish,
I would like to have a copy of our company DB for testing purposes.
The Integration Service is not important.
We would like to test the new BoM features in 9.1 PL 06 without modifying the production environment.
Thanks
--LUCA
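One straightforward way to get such a copy on MSSQL 2008 R2 is to back up the production company database and restore it under a new name. The sketch below is only an illustration, not SAP's documented procedure: it uses Python with pyodbc, and the server name, database names, logical file names and paths are placeholders to adjust for your environment. Check with SAP whether a plain SQL restore is supported for your setup, and disable any integration/add-on settings in the restored copy so it cannot talk to production systems.

import pyodbc

# Placeholders: adjust server, database names and file paths to your environment.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=MYB1SERVER;Trusted_Connection=yes",
    autocommit=True,  # BACKUP/RESTORE cannot run inside a transaction
)
cur = conn.cursor()

# 1. Back up the production company database.
cur.execute(
    "BACKUP DATABASE [SBO_PRODUCTION] "
    "TO DISK = N'D:\\Backup\\SBO_PRODUCTION.bak' WITH INIT"
)
while cur.nextset():  # drain informational messages so the backup completes
    pass

# 2. Restore it under a new name for testing. Check the logical file names
#    first with RESTORE FILELISTONLY; the ones below are assumptions.
cur.execute(
    "RESTORE DATABASE [SBO_TEST] "
    "FROM DISK = N'D:\\Backup\\SBO_PRODUCTION.bak' "
    "WITH MOVE 'SBO_PRODUCTION' TO N'D:\\Data\\SBO_TEST.mdf', "
    "MOVE 'SBO_PRODUCTION_log' TO N'D:\\Data\\SBO_TEST_log.ldf'"
)
while cur.nextset():
    pass

The restored company should then appear in the B1 client's company selection list on that server, so the BoM tests can be done there instead of in production.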

Similar Messages

  • Environment best practices

    In my experience with other integration tools, it has been a best practice to deploy an instance of the product per environment. For example, I would expect to have Oracle SOA Suite deployed for a Development environment, an Integration Test environment, a System Test environment, and Production. In each of those environments there is also a version of the corporate applications in various states of readiness.
    The question has been raised at this client: why do we need to do that, and could we not just stand up a single ESB instance for all non-prod and an ESB instance for prod?
    What is the experience of others on this forum around this topic: best practices, pros/cons, things to watch out for, etc.?

    That's a bad idea. You must have at least three environments.
    Developers need a space to deploy code and test. You need a Test environment, ideally a COPY of production, to smoke out any weirdness in the code. The Test environment should resemble Prod in all aspects, i.e. hardware, memory, software versions, etc.

  • Creating Image Captions: Best Practice?

    Hello-
    I'm currently using RoboHelp to create an HTML-based internal user guide for a software product. This guide has many images and figures in it and each will need an accompanying caption. Each caption must appear centered directly beneath the accompanying image.
    I need some guidance on two issues, please:
    1. What is the best practice for inserting a caption along with an image? I currently cannot find any tool which enables a RoboHelp user to insert an image into a document and designate a caption to be displayed with that image.
    2. Each image will be uniquely numbered (Figure 1, Figure 2, etc.). It may be necessary, as the document is drafted and redrafted, to remove images from and insert other images into the document. How can the numbering of images be set up so that the removal or addition of an image will automatically reset the numbering of all affected images accordingly?  I would very much like to avoid resetting dozens of image number-designations manually.
    Thank you-
    Michael

    Hi there
    As far as the captions go, I would normally include the caption as part of the image itself. If you feel you must keep the text separate, I'm guessing you will need to resort to tables. One column and two rows.
    As for the numbering, that one will be pretty tough. Keep in mind that in the world of HTML, people don't typically consume content in a linear fashion as they once did when all we had was print.
    Someone may pop in here with some solution I've been unaware of. But presently, I'm unaware of a way to force that type of numbering.
    Cheers... Rick

  • Reason for creating New Version Best practice

    Hello SAP Gurus,
    We create a new version of the DIR when a change to the original is required, keeping the older file intact for reference.
    We want to know, when a new version of the document is created, how we can specify why it was created and what made us create it. We would like to capture this reason in SAP so the user can easily see why the version was created.
    Can you please let us know where exactly these details are captured while creating the new version?
    I know there are a lot of free-entry fields, such as the generic object text field, the long text field, changing the description, or placing the text in the WR log field of the new version.
    Which is the best practice, so that the user can easily see the reason this new version was created?
    Ideally they should see these reason details in the CV04N search as well, or immediately when opening the new version.
    Thanks and regards
    Kumar

    Hello Kumar
    I hope the following explanation clarifies your questions.
    We have to classify our documents broadly into two categories. 1) Controlled documents: those which undergo various levels of inspection/review/approval within the organisation, e.g. engineering drawings or Standard Operating Procedures for business purposes. The best approach for these is to use the Engineering Change Management (ECM) function; when you create a new version, you can track/write the key document history in ECM.
    Also, when you version up, say in Microsoft Word, you can maintain a simple table to track the purpose of the change, the location where you are making it, and its justification.
    For 2D/3D engineering drawings we can use the redlining function to create layers that capture the reviewers' observations, so they are taken care of in the next version.
    2) Uncontrolled documents: generally less significant from a document change control point of view.
    Hope this is useful.
    Regards
    GIRISH

  • Use MS Account or Organization Account to create Azure Account - Best Practice?

    Hi, I see that it is now possible to create Azure accounts as an organization: http://azure.microsoft.com/en-us/documentation/articles/sign-up-organization/. Previously, you needed a Microsoft Account. I also note that http://msdn.microsoft.com/en-us/library/azure/hh531793.aspx says “Use organizational accounts for all administrative roles”. Note the word "all", which I guess includes the main admin account itself. Is this now considered MS's best practice for organizations? I have to admit that at the moment, I can't see what difference it really makes in practice. Any thoughts?
    TIA.
    Mark

    Hi,
    "Mark as answer" means that the post could help you, of course, we hope our posts could give you some help, if not, please feel free unmark, if you still have issues with this topic, we welcome to post again, Thanks for your understanding.
    For this topic, as mentioned by Bharath Kumar P, a Microsoft account is usually used for a single user. You could sign up for a one-month free trial at
    http://azure.microsoft.com/en-us/pricing/free-trial/, and the Azure Free Trial FAQ is at
    http://azure.microsoft.com/en-us/pricing/free-trial-faq/. If you have any questions, please feel free to let me know.
    Best Regards,
    Jambor

  • Creating Master Image Best Practices

    Greetings!
    Soon, I will be sending a master disk to a fulfillment company for mass replication.
    I have already sold about 350 copies of my DVD (produced with Encore), which I burn on my computer (Alienware, Windows 7), and then package and ship to my customers.
    I've had one guy who said it "skipped really bad", I've had a couple of customers who got an "invalid or unknown file" error when they tried to play it on a DVD player, and one guy who couldn't play it on a computer because of a "missing DVD decoder".
    4 or 5 errors out of 350 isn't bad, but I'm wondering if there's anything I need to know in order to make my NTSC DVD as universally viewable as possible.
    I know not to exceed an 8 Mb/s bitrate, but other than that, is there anything I should know or do?
    Thank you!
    Pat

    I did indeed use Taiyo-Yudens. Since I bought 500 blanks, I got a good price on solid media.
    I'm mostly wondering if there's a software 'best practice' I need to follow, since my fulfillment center will be handling most of the heavy lifting with the hardware and media.
    Thanks, video pros..!
    Pat

  • Test granularity - best practice

    We're struggling with a couple of issues. We want to write our LabVIEW tests in a generic fashion such that they simply perform a task and, unless it's a very simple task, don't make the determination of pass/fail; we want to leave that up to TestStand. We don't want any of our LabVIEW tests to have hard-coded ranges in them. This brings up a number of questions. Since we (mostly) want TestStand to orchestrate the parameter ranges, test sequence, pass/fail, etc., is there a best/suggested method of capturing test scenarios? Obviously, at some point, you have to identify the parameters/ranges for the various tests. Is it best to hard-code them into the TestStand sequence file associated with a particular LRU, or is there a nice XML, .ini, or .txt format that TestStand can read to populate a series of local variables? Hopefully someone understands my babble and can provide a starting point.

    TestStand provides the property loader step type that you can use to modify limits for a particular board type. Whether you use that or hard code limits into a test sequence is really application specific. If you have an LRU (or what I would call a UUT or Unit Under Test) that has a large number of different varieties, it would often be simpler to use the property loader. Doing this, you would only have to write a single sequence file. For a new board, you would then just create a new .txt, .xls, etc. and distribute that. I don't happen to be in that sort of situation. The vast majority of boards that I have to test are unique and the limits are fixed in the sequence file. I do have several board types where stuffing options make certain whole tests optional and I find it more convenient to use pre-conditions in the sequence file for that situation. For example, perform this step if serial number prefix = 'abc'.
    You are correct in removing any pass/fail criteria from the LabVIEW code and letting TestStand do that work but the actual mechanics of how TestStand does it should be approached on a case-by-case basis. It helps to be in touch with the product development teams to see what other flavors of a particular product are planned. The extra overhead of loading limits from an external file would not be justified for something that is a one-off and you are trying to optimize the test time.
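    As a loose illustration of that last point (this is not TestStand's property-loader file format, and the file name, step names and limit values are invented), here is the general pattern of keeping limits in an external file and applying pass/fail outside the measurement code:

    import json

    def load_limits(path):
        # Expected shape: {"step_name": {"low": x, "high": y}, ...}
        with open(path) as f:
            return json.load(f)

    def evaluate(measurements, limits):
        # Pass/fail is decided here (or in TestStand), not in the measurement code.
        return {
            step: (value, limits[step]["low"] <= value <= limits[step]["high"])
            for step, value in measurements.items()
        }

    if __name__ == "__main__":
        # In practice these would come from load_limits("limits_lru_abc.json");
        # they are inlined here so the sketch runs on its own.
        limits = {"5V_rail": {"low": 4.75, "high": 5.25},
                  "ripple_mV": {"low": 0.0, "high": 50.0}}
        print(evaluate({"5V_rail": 5.02, "ripple_mV": 38.0}, limits))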

  • Network users in mixed desktop and portable environment: Best Practices

    Hello,
    When using Server 4.0 in a network that includes both Mac desktops and Macbook devices, what is the recommended setup for network users and their Home directories?
    The environment I'm building would best be suited for a "Local Only" Home folder setup, due to the frequent use of large files, such as iPhoto stores, by many users. However, how does that function with the portables? When on the network, the user has all files accessible locally, but once off site, the user account is no longer available. I know that a user may be converted into a Mobile Account so that it is available both on and off the network. However, my understanding is that this is more suited to syncing the Home folder, i.e. when the Home folder is on a network share. Will a mobile account also function with a "Local Only" Home folder setup?
    Essentially, I'm using the server to manage network users, not to serve files for the Home directories.
    Thank you.

    Bump?

  • Bundle testing and best practices, bonus introduction

    Hi all, I'm new here and to ZenWorks, please be gentle.
    While I'm acquainted with Puppet and reasonably proficient with Ansible, I've just taken on a number of Windows desktops. We're using ZCM to deploy software, and I'm testing to get acquainted. Things don't quite work as I expect; for example, I fought with launching QuickTasks to my test workstation manually for hours yesterday before I realized I needed to manually refresh the workstation after updating my bundle; I couldn't just push.
    So while I'm reading through the documentation, I thought I'd say hello and ask for some general advice. In particular, I had trouble identifying why my bundle had failed, and where to look for information about the failures. Including a script and then expecting STDERR from that script to tell me about the failure seems like an impossible luxury at this point. For now, I'd like to just experiment.
    What is your workflow for testing new bundles?

    Immanetize,
    > I fought with manually launching
    > QuickTasks to my test workstation manually for hours yesterday before I
    > realized I needed to manually refresh the workstation after updating my
    > bundle, I couldn't just push.
    Yes, refresh is a timed process. When testing, I always do a zac ref bypasscache on the testing workstation.
    > So while I'm reading throught the documentation, I thought I'd say hello
    > and ask for some general advice. In particular, I had trouble
    > identifying why my bundle had failed, and where to look for information
    > about the failures. Including a script then expecting STDERR from that
    > script to tell me about the failure seems like an impossible luxury at
    > this point For now, I'd like to just experiment.
    > What is your workflow for testing new bundles?
    It depends. If it fails, then I first look at the ZCM agent log.
    Anders Gustafsson (NKP)
    The Aaland Islands (N60 E20)

  • Create test env from production system

    Hi,
    Apologies in advance; I am new to this area of Oracle Applications 11.5.10 and I need help with a couple of questions.
    We have a production system with Oracle Applications 11.5.10: the APPS tier on a 32-bit Red Hat Linux EE 4 server, and the 10.2.0.3 DB tier on a 64-bit Red Hat Linux EE 4 server.
    We got two new servers for a test Oracle Applications 11.5.10 environment, with the same OS versions as production:
    the first test server is 32-bit for the APPS tier,
    the second test server is 64-bit for the DB tier.
    Now we need to create a test environment, the same as production.
    Questions:
    1) What is the best way to create the same environment in this situation (32-bit app tier, 64-bit database tier)? I suppose that cloning using the Rapid Clone procedure isn't possible on these clean machines.
    2) If I need to install the Oracle Applications 11.5.10 software first on both machines, is it possible to use the 32-bit rapidwiz installer (existing in the stage) to create the test DB tier on the 64-bit node, or not?
    I have read in the document Oracle Applications 11.5.10 - Installation Update Notes for Linux x86
    You can only install Oracle Applications on an x86-64 architecture server if the operating system is 32-bit Linux or Windows. If your operating system is 64-bit, contact your operating system vendor to obtain a 32-bit operating system before installing Oracle Applications.
    3) How do I know from which stage production was created? When I try to create the stage for my test environment using the perl command perl /mnt/cdrom/Disk1/rapidwiz/adautostg.pl
    I get these options:
    1 - to choose Oracle Applications
    2 - to choose Oracle Applications with NLS
    3 - to choose Oracle Database technology stack (RDBMS)
    4 - to choose Oracle Applications database (Databases)
    5 - to choose Oracle Applications technology stack (Tools)
    6 - to choose APPL_TOP
    7 - to choose National Language Support (NLS) Languages
    Because I haven't seen an oraNLS directory, is option 1 a good choice for the stage?
    Thanks, and sorry because I am new to this area.
    Regards

    Hi,
    1) What is the best way to create the same environment in this situation (32-bit app tier, 64-bit database tier)? I suppose that cloning using the Rapid Clone procedure isn't possible on these clean machines.
    Use Rapid Clone.
    Rapid Clone Documentation Resources, Release 11i and 12 [ID 799735.1]
    FAQ: Cloning Oracle Applications Release 11i [ID 216664.1]
    2) If I need to install the Oracle Applications 11.5.10 software first on both machines, is it possible to use the 32-bit rapidwiz installer (existing in the stage) to create the test DB tier on the 64-bit node, or not?
    Type "linux32 bash". See this thread for details:
    How to install 11i on Red Hat Linux 64 bit
    You can only install Oracle Applications on an x86-64 architecture server if the operating system is 32-bit Linux or Windows. If your operating system is 64-bit, contact your operating system vendor to obtain a 32-bit operating system before installing Oracle Applications.
    What is the database version?
    To migrate the database from 32-bit to 64-bit you need to follow the steps in these docs (depends on your version).
    Using Oracle Applications with a Split Configuration Database Tier on Oracle 9i Release 2 [ID 304489.1]
    Using Oracle Applications with a Split Configuration Database Tier on Oracle 10g Release 2 [ID 369693.1]
    Using Oracle EBS with a Split Configuration Database Tier on 11gR2 [ID 946413.1]
    3) How do I know from which stage production was created? When I try to create the stage for my test environment using the perl command perl /mnt/cdrom/Disk1/rapidwiz/adautostg.pl
    I get these options:
    1 - to choose Oracle Applications
    2 - to choose Oracle Applications with NLS
    3 - to choose Oracle Database technology stack (RDBMS)
    4 - to choose Oracle Applications database (Databases)
    5 - to choose Oracle Applications technology stack (Tools)
    6 - to choose APPL_TOP
    7 - to choose National Language Support (NLS) Languages
    Because I haven't seen an oraNLS directory, is option 1 a good choice for the stage?
    oraNLS is only required when you want to install additional languages in addition to the base English one. If you have no additional languages installed, you can skip it.
    Please run md5sum as per (MD5 Checksums for 11i10.2 Rapid Install Media [ID 316843.1]) to verify the integrity of the stage area directory before you run Rapid Install.
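    If it helps, here is a small sketch of that verification step (Python is used purely for illustration; the stage path is a placeholder, and note 316843.1 remains the authoritative checksum list to compare against):

    import hashlib
    import os

    def md5_of(path, chunk_size=1024 * 1024):
        # Stream the file so large stage pieces don't need to fit in memory.
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def checksum_stage(stage_dir="/stage11i"):  # placeholder stage path
        # Print "md5  relative/path" for every file under the stage area.
        for root, _dirs, files in os.walk(stage_dir):
            for name in sorted(files):
                full = os.path.join(root, name)
                print(md5_of(full), os.path.relpath(full, stage_dir))

    if __name__ == "__main__":
        checksum_stage()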
    Thanks,
    Hussein

  • Best Practice Question on Heartbeat Network

    After running 3.0.3 a few weeks in production, we are wondering if we set up our Heartbeat /Servers correctly.
    We have 2 servers in our Production Server pool. Our LAN, a 192.168.x.x network, has the Virtual IP of the Cluster (heartbeat), the 2 main IP addresses of the servers, and a NIC assigned to each guest. All of this has been configured on the same network. Over the weekend, I wanted to separate the Heartbeat onto a new network, but when trying to add to the pool I received:
    Cannot add server: ovsx.mydomain.com, to pool: mypool. Server Mgt IP address: 192.168.x.x, is not on same subnet as pool VIP: 192.168.y.y
    Currently, I only have one router, which translates our WAN to our LAN of 192.168.x.x. I thought the heartbeat would be strictly internal, would not need to be routed anywhere, and could just be set up as a separate VLAN, which is why I created 192.168.y.y. I know that the servers can have multiple IP addresses, and I have 3 networks added to my OVM servers: 192.168.x.x, 192.168.y.y and 192.168.z.z. The y and z networks are not pingable from anything but the servers themselves or one of the guests that I have assigned to that network. I cannot ping them directly from our office network, even through the VPN, which only gives us access to 192.168.x.x.
    I guess I can change my Server Mgt IP away from 192.168.x.x to 192.168.y.y, but can I do that without reinstalling the VM server? How have others structured their networks, especially relating to the heartbeat?
    Is there any documentation/guides that would describe how to set up the networks properly relating to the heartbeat?
    Thanks for any help!!

    Hello user,
    In order to change your environment, what you can do is go to the Hardware tab -> Network. From here you can create new networks and also change, via the Edit this Network pencil icon, which networks should manage which roles (i.e. Virtual Machine, Cluster Heartbeat, etc.). In my past experience, I've had issues changing the cluster heartbeat once it has been set. If you have issues changing it via the OVM Manager, one thing you could do is change it manually in the /etc/ocfs2/cluster.conf file. Also, if the OVM Manager does let you change it successfully, verify it within cluster.conf to ensure it actually applied your change; this is where that setting is stored. However, doing it manually can be tricky, because OVM has a tendency to revert its changes back to the original state, say after a reboot, and I'm not even sure manual changes are supported. Ideally, when setting up an OVM environment, best practice would be to separate your networks as much as possible, i.e. public network, private network, management network, cluster heartbeat network, and a live migration network if you do a lot of live migrating (otherwise you can probably place that with, say, the management network).
    Hope that helps,
    Roger

  • Best practice for the test environment & DBA plan activities documents

    Dear all,
    In our company, we did the hardware sizing.
    We have three environments (Test/Development, Training, Production).
    But the test environment has fewer servers than the production environment.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this, or any PDF files that could help me?)
    Also, can I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • What is the best practice to deploy the SharePoint site from test to production environment?

    We are beginning new SharePoint 2010 and 2013 development projects, and will soon be developing new features, lists, workflows, customizations to the SharePoint site, and customizations to list forms. We would like to put good practices (that will help in deployment) in place before going ahead with development.
    What is the best way to go about deploying my site from Development to Production?
    I am using Visual Studio 2012 and also have Designer 2013...
    I have already read that this can be done through PowerShell, also through Visual Studio, and also via Designer. But at this point I am confused as to which are the best practices, specifically for lists, configurations, workflows, site customizations, Visual Studio development features, customization to list forms, etc. You can also provide me references to links/ebooks covering this topic.
    Thanks in advance for any help.

    Hi Nachiket,
    You can follow the approach below, where the environments have been built in a similar fashion:
    http://thesharepointfarm.com/sharepoint-test-environments/
    If you have less data, then you can use http://spdeploymentwizard.codeplex.com/
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/b0bdb2ec-4005-441a-a233-7194e4fef7f7/best-way-to-replicate-production-sitecolletion-to-test-environment?forum=sharepointadminprevious
    For custom solutions like workflows, you can always build WSP packages and deploy them across the environments using PowerShell scripts.
    Hope this helps.
    My blog: http://www.sharepoint-journey.com
    Hi, can you answer me specifically with regard to the following:
    lists
    configurations
    workflows
    site customizations like changes to css/masterpage
    Visual studio webparts
    customization to list forms
    Thanks.

  • Best Practice - Hardware requirements for exchange test environment

    Hi Experts,
    I'm new to Exchange and I want to have a test environment for learning, testing, patches and updates.
    In our environment we have Exchange 2010 and 2013 in co-existence, and I need a similar scenario in my test environment.
    I was thinking of using an isolated (not domain-joined) high-end workstation laptop (quad-core i7, 32 GB RAM, 1 TB SSD) to implement the environment on, but management refused and replied "do it on one of the free servers within the live production environment at the Data Center"!
    I'm afraid that doing so could corrupt the production environment through a mistake in my configuration; I'm not enough of an Exchange expert to revert back if something went wrong.
    Is there a documented Microsoft recommendation on how and where to do this, so that I can send it to them?
    Or could someone help with the best practice on where to host my test environment and how to set it up?
    Many Thanks
    Mohamed Ibrahim

    I think this may be useful: it's their official test lab setup guide.
    http://social.technet.microsoft.com/wiki/contents/articles/15392.test-lab-guide-install-exchange-server-2013.aspx
    Also, your spec should be fine as long as you run the VMs within their means.

  • Create a test environment from production for Oracle AP Imaging (SOA, IPM, UCM and Capture managed servers in WebLogic): what is the best way to clone the environment? Can I just tar the WebLogic file system?

    Hi, we need to create a test environment from our production for Oracle AP Imaging. We have SOA, IPM, UCM and Capture managed servers in our WebLogic domain.
    Can anyone tell me the best way to clone the application to a different environment? The test and production systems are on different physical servers.
    Can I just tar the WebLogic file system, untar it on the new server, and make the necessary changes?
    Can anyone share their experiences and how-tos with me?
    Thanks in advance.
    Katherine

    Hi Katherine,
    Yes and no. You need the WebLogic + SOA files as well as the database schemas (soa_infra, mds, ...).
    Please refer to the AMIS Blog: https://technology.amis.nl/2011/08/11/clone-your-oracle-fmw-soa-suite-11g/
    HTH
    Borys
