Need info on DDL and Data propagation

Greetings Guys,
     I have a requirement to move data and DDL from a lower environment to production. This would be done on some frequency, say weekly, to deploy the latest code for the application. I would like to know what techniques
and tools are available in SQL Server. I am not sure I can set up replication for all the tables in the database, because if the database ever has to be restored from backup, I suspect fixing the replication problems will itself become a big task. Currently we use
MERGE statements to move data between the environments and the Red Gate SQL Compare API to move DDL. Let me know if there are any other ways to do this.
Environment:
SQL SERVER 2008 R2
WINDOWS 2008 R2
With regards,
Gopinath.

You can also create an SSDT database project and publish the changes using it.
See:
http://www.techrepublic.com/blog/data-center/auto-deploy-and-version-your-sql-server-database-with-ssdt/
http://blogs.msdn.com/b/ssdt/archive/2013/08/12/optimizing-scripts-for-faster-incremental-deployment.aspx
http://schottsql.blogspot.in/2012/11/ssdt-publishing-your-project.html
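If you go the SSDT route, the project builds to a .dacpac that can also be published from the command line, which suits a weekly scheduled deployment. A sketch of assembling that call in Python; /Action:Publish, /SourceFile, /TargetServerName and /TargetDatabaseName are standard SqlPackage parameters, but the file path and server/database names below are placeholders you would replace:

```python
def build_publish_command(dacpac_path, server, database):
    """Assemble a SqlPackage publish command. Paths and names are placeholders."""
    return [
        "SqlPackage.exe",
        "/Action:Publish",
        f"/SourceFile:{dacpac_path}",
        f"/TargetServerName:{server}",
        f"/TargetDatabaseName:{database}",
    ]

cmd = build_publish_command(r"C:\build\App.dacpac", "PRODSQL01", "AppDb")
print(" ".join(cmd))
# In a real deployment job you would execute it, e.g. subprocess.run(cmd, check=True)
```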
Please Mark This As Answer if it solved your issue
Please Vote This As Helpful if it helps to solve your issue
Visakh

Similar Messages

  • SQL Dev 1.5.4+: Scripting DDL and data dumps?

    In SQL Dev 1.5.4, can I script a DDL and data dump? If not, what about 2.0? If not 2.0, has anyone requested this functionality so I can vote for it? I find it frustrating that, while doing a Database Export, I can't even pre-declare (e.g. save) the set of objects I want to dump; sometimes, you want to selectively dump, and it's a pain to hunt and peck and select just those you want to dump. Easy if you want to dump everything but 2-3 objects in a large schema. Not so easy if you only need, say, 20 out of 100 objects to be dumped (e.g. for domain or configuration tables--some subset of the whole schema).
    I'm really enjoying SQL Developer 1.5.4, by the way. Despite its flaws, I'm pretty happy with it. Looking forward to 2.0 and beyond. Good work, SQL Dev team.
    Thanks very much.
    Dana

    They're all command line tools, so they can all be wrapped up in a batch or shell script. Bummer you can't access them... Hope you find a better solution.

    Thanks K. I should be getting Oracle 10g Express Edition on my desktop soon; critical, because we don't have full access to the Development instance. It's like putting changes through a straw over to the DBAs. I'm not sure why Development is locked down to developers, but that's the way it is.
    Any chance that Oracle 10g Express Edition comes with scriptable Data Pump binaries? Will still need authorization, but maybe that's one way to go. I hate trying to write my own Data Pump in Python or any other language. It seems a bit absurd to me, but I suppose there are reasons.
    Dana

  • BUG: Export DDL and Data fails for mixed case table/column names

    Hi there,
    I have found a bug in SQL Developer. See details below.
    Description:
    When the "Export DDL and Data" function is used on a table/columns not named in UPPERCASE, the SQL generated by SQL Developer is invalid.
    Steps to reproduce:
    - open SQL Developer, connect to DB
    - make a table named "lowerCase" (in double quotes, so it won't be automatically changed to capital letters)
    - you may also add some columns, for example "lowerCol1", "UpCol2", ALLUPCOL3
    - add some data rows to the table
    - choose Tools -> Export DDL and Data
    - check exporting of tables and data, on "filter" tabs choose your "lowerCase" table
    - press "Apply"
    Error:
    Generated SQL contains invalid INSERTs: mixed-case table and column names are referenced without the obligatory double quotes, which yields an error when the generated script is executed (see below; the relevant line is the INSERT).
    -- DDL for Table lowerCase
    CREATE TABLE "DBO_HT"."lowerCase"
    (     "lowerCol1" VARCHAR2(100),
         "UpCol2" VARCHAR2(100),
         "ALLUPCOL3" VARCHAR2(100)
    );
    -- DATA FOR TABLE lowerCase
    -- FILTER = none used
    -- INSERTING into lowerCase
    Insert into lowerCase (lowerCol1,UpCol2,ALLUPCOL3) values ('lc','uc','auc');
    -- END DATA FOR TABLE lowerCase
    Remarks
    SQL Developer: version 1.2.1, build MAIN-32.13
    Oracle DBs: 9.2 & Express
    OS: Windows 2000 Professional
    If you need any more details/testing, let me know. I'd really appreciate a quick patch for this issue...
    Alternatively, do you know of any other simple way of copying a single database (it's called a schema in Oracle, right?) from one computer to another? Possibly something as simple as detaching -> copying -> reattaching the mdf (data) files in SQL Server... I thought that this "Export DDL & Data" function would do, but as you can see I couldn't use it.
    I just need a simple solution that works: one operation on the source to dump everything, get the resulting files to the other computer, and one operation to have it running there... I think that such a scenario is very basic, yet I just can't achieve it and I am simply not allowed to spend more time on it (read: our test project fails, my company rejects my "lobbying" and stays with MSSQL :/ )
    Thanks a lot & bye
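    Until this is patched, one workaround is to generate the INSERTs yourself with every identifier double-quoted. A small Python sketch of the quoting rule, using the table and columns from the reproduction above (string values only, for brevity):

```python
def quote_ident(name):
    """Wrap an identifier in double quotes so mixed-case names survive.
    Embedded double quotes are doubled, per SQL rules."""
    return '"' + name.replace('"', '""') + '"'

def insert_stmt(table, columns, values):
    """Build one INSERT with every identifier quoted (string values only)."""
    cols = ",".join(quote_ident(c) for c in columns)
    vals = ",".join("'" + v.replace("'", "''") + "'" for v in values)
    return f"INSERT INTO {quote_ident(table)} ({cols}) VALUES ({vals});"

print(insert_stmt("lowerCase", ["lowerCol1", "UpCol2", "ALLUPCOL3"],
                  ["lc", "uc", "auc"]))
```

    With the quotes in place, the generated INSERT runs against the "lowerCase" table as created.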

    Thanks for your reply.
    ad. 1)
    You're right. I just wanted to give some very short feedback on my experiences with SQL Developer, so I didn't think starting new threads would be necessary, but as I was writing it became much bigger than I initially planned - sorry about that. I will make proper threads as soon as possible. Having "Edit post" button on this forum would also be useful.
    ad. 2)
    Generally, you're right - in most cases it's true that "switching DBMS is a major commitment" and "you will produce terrible code" if you don't learn the new one.
    However, I think that you miss one part of the market here - the market that I think Express is also targeted at. I'd call it the "fire&forget databases" market; MySQL comes to mind as possibly the most common solution here. It's the rather small systems, possibly web-accessed, whose data-throughput requirements are rather modest; the point is to store data at all, and not necessarily in the fastest way, because given the amount of data involved, even on low-end hardware it will work well enough. What's important here is general ease of use - how easy it is to set up such a system, connect and access data, and develop software using it; how much maintenance is needed and how easy that maintenance is; and how easy the most common development tasks are: creating a DB, moving a DB from test to production server, etc. There, "how easy" directly translates to "how much time we need to set it up", which translates to "how much the development will cost".
    Considering current technology, switching the DBMS in such systems is not necessarily a major commitment, and believe me that you will not produce terrible code. In many cases it's as simple as changing a switch in your ORM toolkit: hibernate.dialect = Hibernate.Dialect.OracleDialect vs MySQLDialect vs MsSql2005Dialect
    Therefore, in some parts of the market it's easy to switch DBMS, even on a project-by-project basis. The reason to switch will appear when another DBMS makes life easier => development faster. From that point of view, I can understand my colleagues giving me an embarrassed look and saying "come on, I won't read all these docs just to have the db copied to the test server". And it doesn't mean "they are not willing to learn anything new"; it's just that they feel such a basic task should have a self-explanatory solution that doesn't require mastering any special knowledge. And if they get such simple solutions somewhere else, it costs them nothing to change the Hibernate dialect.
    I think Oracle did a great job introducing Express to this "fire&forget" market. The installation is a snap, it just works out of the box with nothing serious to configure, the opposite of what I remember from installing and working on Oracle 9 a few years ago. In some places it's still "you need to start SQL*Plus and enter this script", but it's definitely less than before. I also find SQL Developer a great tool; it can do most of what we need to do with the DB, and it's much better and more pleasant to use than the Oracle 9 tools. Still, a few basic things require too much hassle, and I'd say taking your schema to another machine is one of them. So I think that, if you do it well, the "schema copy wizard" you mentioned might be very helpful. If I were to give any general advice for the Express line of DB/tools, I'd say "make things simple" - make it "a DB you can't see".
    That's, IMHO, the way to attract more Express users.

  • Need Info on RDA-enabled data source based on FM

    Hi,
    I need Info on RDA-enabled data source based on Function Module.
    How to implement it?
    Thanks & Regards,
    Rashmi.

    Hi Rashmi
    Check this link
    http://help.sap.com/saphelp_nw70/helpdata/EN/52/777e403566c65de10000000a155106/frameset.htm
    [under the tab Transferring Transaction Data from Source Systems (RDA)]
    http://help.sap.com/saphelp_nw70/helpdata/EN/3f/548c9ec754ee4d90188a4f108e0121/frameset.htm
    Regards
    Jagadish

  • Exporting Application w/ DDL and data

    I can export an Application.
    I can export the DDL for that application.
    I cannot export the data for that application, unless I am not following the proper procedure.
    Note that I did use "with supporting objects" option.
    Anyone know why ?
    Thanks in advance,
    Anon
    Edited by: Anon on Aug 2, 2010 11:20 AM
    Edited by: Anon on Aug 2, 2010 11:27 AM

    Supporting Objects allow you to add ddl and dml to your application so that you can export as one file and then import and install in one step. To get your data, I would suggest using SQL Developer and extracting your data as insert statements. That can then be loaded into your supporting objects (after your ddl).
    Here is a link to a 3.2 tutorial that explains more about Supporting Objects.
    -- Sharon

  • Need info about HR realted data in webdynpro

    Hi experts,
           I am very new to using the HR module in Web Dynpro ABAP.
    I want to extract HR data in a Web Dynpro view.
    Where should I give the logical database PNP?
    The INFOTYPES keyword is not accepted in Web Dynpro coding.
    Please give me all the info you can.

    Hi Srinivas
    We cannot use an LDB in Web Dynpro programming.
    The alternative for fetching data from infotypes is: create a class in SE24 with one method, and in that method call HR_READ_INFOTYPE to get the data from whatever infotype you want.
    After that, call the method of the class by creating a reference object to that class.
    Thanks and Regards
    Tulasi Palnati

  • 9i report services need info. on CGI and RWSERVLET

    We are trying to use Reports 9i against Forms 6i. I need info on how we can differentiate the two types of service (whether it is CGI or rwservlet).
    Setup1:
    1. Run rwserver -install server=TESTCGI
    2. This creates the service and the testcgi.conf file
    3. Make the compatibility to 6i in rwservlet.properties
    4. In tnsnames specify the host=host1 port=1949
    (Is this all that I need for CGI?)
    Setup2:
    1. Run rwserver -install server=TESTSERVLET
    2. This creates the service and the testservlet.conf file
    3. Make the service manual so that it doesn't startup on its own.
    4. Make the compatibility to 6i in rwservlet.properties
    5. Also specify server_name=TESTSERVLET
    6. Now start the service by typing http://host:8888/reports/rwservlet
    (This sometimes starts the service (I can see an icon). Is this the CGI service that is started by the servlet? Sometimes it can't start the service, giving a port binding error. If I start the TESTSERVLET service from the Services window and then type the URL http://host:8888/reports/rwservlet/showenv, it works fine.
    I don't get any port bind error.)
    7. In this case do I also need a tnsnames entry? If so, what should the port be?
    8. Does the servlet internally call CGI, and is this how it works?
    9. Why is it that it sometimes can't start the service automatically when started via the servlet?
    I am a little confused about both architectures. Any help will be appreciated.

    See comments in line:
    1. Are the report server and the listener (the one usually on port 1949) different processes altogether?
    The 9i Reports Server will start up a proxy server in a separate process if
    <compatible version="6i"/>
    is set in the server config file <server_name>.conf. The proxy server, which runs in a different process, will listen on the port.
    2. If I issue the command rwserver server=TESTSERVER (I have no tnsnames entry for this server at the moment in the tnsnames.ora file in the 9iR2 home), does it start the report server alone, or the report server and the listener process?
    rwserver server=TESTSERVER will start the 9i TESTSERVER, but will only start the proxy server if <compatible version="6i"/> is set. If <compatible version="6i"/> is set but there is no tnsnames entry for this server, the proxy will not be able to start, because it does not know which port to listen on.
    3. If the above command starts the listener process by itself, which port does it use? (We had the option to specify the port in previous releases, but not anymore, I guess. It used to be something like <executable> port=<portno> in previous releases.)
    Again, the port is defined in tnsnames.ora.
    4. I assumed that specifying the entry for the report server in tnsnames.ora before starting the server would start the listener process on the appropriate port.
    a. I made an entry in tnsnames.ora as SUPPORTSERVER port=1949.
    b. I ran the command rwserver server=SUPPORTSERVER.
    c. Did a netstat -a on the machine. Didn't see any process listening on port 1949.
    So my question is: how and where do we specify the listening port? If in tnsnames.ora, why didn't I see any listener process on 1949?
    Specifying the port in tnsnames.ora by itself will not start the proxy server; you also need to uncomment <compatible version="6i"/> in $OH/reports/conf/<server_name>.conf. So make sure both this and the port number are set correctly.
    5. How do I determine which client I am using? Is it like below:
    a. If I use only the following URL
    http://<web_server>.<domain_name>:<port>/<alias>/rwservlet?<parameters>
    am I making use of the servlet/OC4J? Does this mean the servlet has to be used only as a URL?
    Yes, you are running the report using the Reports servlet. A servlet by its nature needs to be invoked in a web environment.
    b. If I use the following URL
    http://<web_server>.<domain_name>:<port>/<alias>/rwcgi.exe?<parameters>
    am I making use of CGI? Does this mean CGI has to be used only as a URL?
    Yes, you are running the report using CGI. A CGI program by its nature needs to be invoked in a web environment.
    However, you don't have to run them from a URL. You can have your own program call the Reports servlet/CGI and then return the result.
    c. If RUN_REPORT_OBJECT uses ZRC, am I not using the servlet/OC4J? Since the documentation says the servlet is more stable than CGI, I want to use the servlet/OC4J. Does this mean I should use only the above URL and not RUN_REPORT_OBJECT?
    RUN_REPORT_OBJECT will not go through the servlet. You can use a SHOW_DOCUMENT call to send requests to the servlet.
    Hope this helps,
    -Jeff

  • Variant to data: Need info regarding type of data

    Hello,
    What I want to do is very basic. I will acquire the image using a CCD camera. The image is raw data: a 2D array of type long (4 bytes per element) or integer (2 bytes per element). I can even save the image as a .tiff file. Using Variant To Data, I am converting the image so that LabVIEW can handle it (I am not sure whether I really need this) before I write that data to some file and save it. I have to define a data type for Variant To Data. I tried to define the data type; however, I am confused. Would anybody give me some suggestions?
    Thanks a lot,
    Dushyant

    Hrm, you'll need to be more specific about your actual question in order for us to give you a good answer, but, without more information, I'd say you should just define the type as a 2-d array of either longs or words. (i.e. get an array constant, place a numeric constant inside of it, right click the numeric constant, and select Representation->Long or Representation->Word then wire the constant to the Type input of the Variant to Data).
    If you're dealing with variants, I'd guess you are probably calling an ActiveX control or server to communicate with your camera?
    Regards,
    Ryan K.
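    The "pick the right type, then reinterpret the raw data" step that Variant To Data performs has a rough standard-library analogue in Python, sketched here for the 4-byte integer ("long") case with a made-up 2 x 3 buffer:

```python
import struct

def bytes_to_2d(raw, rows, cols, fmt="<i"):
    """Reinterpret a flat byte buffer as a rows x cols list-of-lists.
    fmt "<i" = little-endian 4-byte int (the 'long' case);
    use "<h" for 2-byte integers."""
    size = struct.calcsize(fmt)
    assert len(raw) == rows * cols * size, "buffer size must match shape"
    flat = [struct.unpack_from(fmt, raw, i * size)[0] for i in range(rows * cols)]
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

# A tiny 2 x 3 "image" packed as six 4-byte ints.
raw = struct.pack("<6i", 10, 20, 30, 40, 50, 60)
print(bytes_to_2d(raw, 2, 3))  # [[10, 20, 30], [40, 50, 60]]
```

    As in LabVIEW, picking the wrong element width (2-byte vs 4-byte) silently changes the shape and the values, which is why the type constant has to match how the camera actually packs its pixels.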

  • Need Info on sysidtools and /etc/.sysIDtool.state

    I am trying to locate someone or some document that can explain how sysidnet, sysidsys, and the associated tools behave, as well as the /etc/.sysIDtool.state file that controls them.
    In particular, what, specifically, does each line in /etc/.sysIDtool.state control?
    How can I stop sysidnet from querying DNS and reconfiguring the ethernet interface?
    Suninstall seems to use syidnet and sysidsys to collect its sysidconfig parameters. Can I make suninstall use different IP parameters for the installed Solaris than those that are in effect during the install?
    I would be happiest with some documentation on the intended function of the sysidtool suite, with an explanation of each switch in /etc/.sysIDtool.state . I have seen the comments in the sysIDtool.state file, but they leave some obvious questions unanswered.
    This inquiry is languishing as hotline case number 62646363.
    Thank you,
    .mike

    Hi DivRoz,
    SQL Server Business Intelligence includes PowerPivot, Power View, SQL Server Analysis Services, SQL Server Integration Services, SQL Server Reporting Services, SQL Server Master Data Services, SQL Server Data Warehousing, Data Mining, and more. For more
    information about the BI forums, please refer to the following forum:
    https://social.technet.microsoft.com/Forums/en-US/home?forum=sqlmds%2Csqlintegrationservices%2Csqlreportingservices%2Csqlanalysisservices%2Csqlkjpowerpivotforexcel%2Csqldatawarehousing%2Csqldatamining%2Csqlkjpowerpointforsharepoint&filter=alltypes&sort=lastpostdesc
    As to the recommended books and resources, please refer to the following documents:
    Books Online for SQL Server 2014
    SQL Server Business Intelligence Tips and Tricks
    SSIS
    http://bi-polar23.blogspot.com/2007/12/books-books-books.html
    http://technet.microsoft.com/en-us/library/ms141026.aspx
    http://www.sqlis.com/sqlis/
    SSRS
    http://social.msdn.microsoft.com/Forums/en/sqlreportingservices/thread/9796a72f-c991-4d67-86fa-c178fcf5e051/
    http://social.msdn.microsoft.com/Forums/en-US/sqlreportingservices/thread/fb259210-f2ee-4111-91ce-186e37ff64ad
    SSAS
    http://www.ssas-info.com/analysis-services-books
    Thanks,
    Katherine Xiong
    Katherine Xiong
    TechNet Community Support

  • Need Help With OMF and DAT SPDIF

    Hi,
    I need to digitise a recording of a ballet performance (symphony orchestra & ambiance). The recording was done on DAT and the Final Cut edit will be OMF. I will get OMF and audio files + DAT tape.
    My questions are:
    1. If I hire a timecoded DAT player, how can I sync it up to Logic?
    2. I use an MBox V1 and it has SPDIF. Can I use that to sync?
    3. Will Logic import OMF from Final Cut?
    and finally...
    4. Can I 'render' the final mix and save that as both OMF as well as have a stereo bounce?
    Any advice would be greatly appreciated.
    ken

    I'm sure there will be others with more knowledge of such things...
    ...but one way would be to use RandomAccessReadWriter()
    HTH
    Poot

  • Need info about CPU and HDD for Tecra 8200

    Hi guys, I'm new and have just bought the above laptop. However, I want a faster CPU and a bigger hard drive (10 GB at the moment, after 40 GB if possible), and my CPU is an Intel Pentium III 750 MHz. Could someone please point me in the right direction of where I can get a 40 GB hard drive for the right sort of money, and what is the fastest CPU I can go for?
    Thanks for your help .

    Hi
    The fact is that the Tecra 8200 was delivered with different CPUs:
    PIII-M (Mobile) 750MHz;
    PIII-M (Mobile) 850MHz;
    PIII-M (Mobile) 900MHz;
    PIII-M (Mobile) 1.0GHz
    In this case the fastest CPU which you can use is a PIII 1GhZ.
    However, changing the CPU is not easy, and you should not change anything if you have no experience.
    Important: You will lose the warranty if you open the notebook.
    Furthermore, in my opinion you also need a high-performance cooling module, because the new CPU will produce more heat.
    I have also found information that this unit was delivered with 10GB, 20GB, and 30GB HDDs. I think you will have no problem using a 40GB HDD. You can order a compatible one from a Toshiba service partner.

  • Need info about LAN and WLAN cards on Satellite L300-12P

    Hi,
    Can anyone tell me which network-adapter and which WLAN-adapter is used in the
    Satellite L300-12p notebook and in the L300-17I notebook?
    The best info would be the PCI\VEN and DEV -ID shown in Windows device-manager.
    Can anyone help me?
    Greetings
    Mike

    Hi
    The LAN adapter is a Realtek RTL8102 Family PCI-E Fast Ethernet NIC (NDIS 6.0)
    The WLAN is a Realtek RTL8187B Wireless 802.11 b/g 54Mbps USB 2.0 Network Adapter
    Hope this helps you because other details are not known to me.
    Greets

  • Help me I need info on viruses and spyware for iphones

    A weird and suspicious ad popped up on my phone and locked it until I pressed OK, after which I immediately shut my phone down. Is it possible I could have got a virus or spyware? If so, how could I know I had it, and how can I remove it? I was on Putlocker when the ad came up.

    Firefox does not use .po files for localization. Please see https://wiki.mozilla.org/L10n:Localization_Process for info about localization. https://groups.google.com/forum/#!forum/mozilla.dev.l10n is a mailing list for localizers.

  • Tools - Export DDL (and data) mishandles timestamps

    I used SQLDeveloper to try to do a quick export of some test data. Here is a sample of part of an insert statement created by the export data option:
    to_timestamp('2005-12-31 00:00:00.0','DD-MON-RR HH.MI.SS.FF AM')
    This clearly will not work, as the timestamp was retrieved in a format different from the format used in the to_timestamp() function. I tried to fix it, but I found that the utility switched from format to format. Some insert statements were consistent, some formatted the actual data differently, others used different format strings. It looked as though someone had gone to each developer and told them to write some code to create insert statements for their tables, and each one came back using a different format. Some were able to code valid SQL statements while others weren't. Unfortunately this was all done by one click. I cannot fathom how you created such a mess, because not only is it wrong (sometimes), it is inconsistent. I would guess someone created a hash table of available formats and randomly picked from it.
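    For what it's worth, the mismatch is easy to see if you line the literal up against the mask: '2005-12-31 00:00:00.0' is ISO-style, while 'DD-MON-RR HH.MI.SS.FF AM' expects something like '31-DEC-05 12.00.00.0 AM'. The same rule can be demonstrated in Python, where strptime plays the role of to_timestamp (the format directives differ from Oracle's, but the point, that mask and literal must agree, is the same):

```python
from datetime import datetime

literal = "2005-12-31 00:00:00.0"

# Mask that matches the literal's actual layout -- parses fine.
ok = datetime.strptime(literal, "%Y-%m-%d %H:%M:%S.%f")
print(ok.year, ok.month, ok.day)  # 2005 12 31

# Mask analogous to 'DD-MON-RR HH.MI.SS.FF AM' -- wrong layout, fails.
try:
    datetime.strptime(literal, "%d-%b-%y %I.%M.%S.%f %p")
except ValueError as e:
    print("mismatch:", e)
```

    A consistent exporter would emit both halves from the same template, so the literal and the mask can never drift apart.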

    There is actually no error; we are using dbms_metadata to see if there are constraints or ref_constraints on the table, and this is the way it tells us there are none. We should be capturing this message better and not sending it to the user; I will fix that in the next release.
    However, this error should not be affecting your export. If it is, or if you see missing information in the generated export file, please reply.

  • Need info on CRM and APO from a Basis admin perspective

    Hi.
    I am an SAP R/3 administrator and I am going to change my job to a company where they are using APO and CRM.
    Can you guys give me some information on what a Basis person does, such as the routine tasks/programs/performance tuning/debugging he does or needs to do, to support a CRM / APO liveCache environment?
    Thanks
    Amit

    Please let me know if you get any info.
