Request database

Hello,
I have to write a SELECT against the database for these fields:
VBRK (FKART, ERDAT, VBELN)
MAKT (MAKTX)
VBPA (NAME1)
VBRP (VRKME, FKIMG, POSNR, VBELN)
I have to do this SELECT using the VBELN from a SELECT-OPTIONS
and store the result in an internal table.
Is it possible to join 5 tables, and how can I do this?
best regards

Here is a test program for the joins:
tables: mara, marc, mard, makt.

data: begin of itab occurs 0,
        matnr like mara-matnr,
        mtart like mara-mtart,
        meins like mara-meins,
        werks like marc-werks,
        pstat like marc-pstat,
        labst like mard-labst,
        lgort like mard-lgort,
        maktx like makt-maktx,
      end of itab.

select-options: s_matnr for mara-matnr.

start-of-selection.
  select a~matnr
         a~mtart
         a~meins
         b~werks
         b~pstat
         c~lgort
         c~labst
    into corresponding fields of table itab
    from mara as a
    inner join marc as b on a~matnr = b~matnr
    inner join mard as c on a~matnr = c~matnr
                        and b~werks = c~werks
    where a~matnr in s_matnr.

  if sy-subrc = 0.
    loop at itab.
      select single maktx from makt into itab-maktx
        where matnr = itab-matnr
          and spras = sy-langu.
      modify itab.
      write: / itab-matnr, itab-mtart, itab-meins, itab-werks,
               itab-pstat, itab-lgort, itab-labst, itab-maktx.
    endloop.
  endif.
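
And here is a sketch for the billing-document request itself, joining five tables. Two assumptions worth flagging: NAME1 is not a field of VBPA, so the partner name is read from KNA1 via VBPA-KUNNR, and the partner is taken to be the sold-to party ('AG') at header level; adjust the partner function to your needs. MAKT is joined directly, so no SELECT SINGLE inside a loop is needed.

tables: vbrk.

data: begin of itab2 occurs 0,
        vbeln like vbrk-vbeln,
        fkart like vbrk-fkart,
        erdat like vbrk-erdat,
        posnr like vbrp-posnr,
        fkimg like vbrp-fkimg,
        vrkme like vbrp-vrkme,
        maktx like makt-maktx,
        name1 like kna1-name1,
      end of itab2.

select-options: s_vbeln for vbrk-vbeln.

start-of-selection.
  select a~vbeln a~fkart a~erdat
         b~posnr b~fkimg b~vrkme
         c~maktx
         e~name1
    into corresponding fields of table itab2
    from vbrk as a
    inner join vbrp as b on a~vbeln = b~vbeln
    inner join makt as c on b~matnr = c~matnr
                        and c~spras = sy-langu
    inner join vbpa as d on a~vbeln = d~vbeln
    inner join kna1 as e on d~kunnr = e~kunnr
    where a~vbeln in s_vbeln
      and d~parvw = 'AG'        "sold-to party; adjust as needed
      and d~posnr = '000000'.   "header-level partner only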

Similar Messages

  • How to Request Database Schema from a Linked DB

    How does one request a Database Schema to be added to your workspace when that schema is in a linked DB?

    You can't do that.
    Scott

  • Crystal Report requests Database logon if SAP BW is datasource

    Hi all
    I experience the following:
    1. SSO from SAP Portal to BOE works: I can view Sample Crystal Reports based on the BO-iView-template without additional login
    2. SSO from SAP Portal to SAP BW works: a) I can view custom BW Web queries without additional login and b) the connection tests on the system object in SAP Portal run fine
    3. I assume SSO between BOE and BW works because I can view Xcelsius dashboards based on BW queries within SAP Portal without additional login
    Nevertheless I am presented with a login screen ("The report you requested requires further information. -
    Database Logon") if I try to view a Crystal report based on a BW query, no matter whether I start the report from InfoView, SAPGUI or SAP Portal. If I enter my SAP credentials in the "Database Logon" window, the report runs fine.
    SAP Authentication in BOE seems to work (I can import users & roles). In all 3 systems I have at least Admin or equivalent permissions. If I change the Database Configuration for this report within InfoView ("Use SSO"), it does not make any change (I also read somewhere a reply from I. Hilgefort that this is not necessary if the BW publisher service is installed correctly).
    We use SAP BW 701.06, BOE XI31FP3.4 and SAP Portal 701SP6
    I publish the report from Crystal Reports 2008 SP2 with the SAP-Menu -> Save Report.
    I logon to InfoView-App with my SAP-BW credentials.
    Any help appreciated!
    BR
    Kanan

    The real solution is different; my other post is only a workaround:
    1. SNC settings in CMC were not correct (certificate path, path to sapcrypto.dll)
    2. Relevant services have to run under the local username
    3. The username has to be added to the PSE
    See here: https://websmp130.sap-ag.de/sap/support/notes/1364536

  • ESS Leave Request database - Archiving or Maintenance

    Hi
    I understand the leave request details are stored in the following tables:
    PTREQ_HEADER (Request Header)
    PTREQ_ITEMS (Request Items)
    PTREQ_ACTOR (Request Participant)
    PTREQ_NOTICE (Note for Request)
    PTREQ_ATTABSDATA (Request Data for Attendances/Absences)
    Over a period of time, the size must build up. In an organization of 200,000 employees, there must be a need for keeping the size of this database within manageable limits. I could not find any documentation on archiving or deleting the entries in this database. There is a report RPTARQDBDEL, but according to the documentation this is a troubleshooting tool and must be used for individual cases only and with extreme caution.
    Can anyone please advise whether there are any tools/best practices for maintaining the size of this database periodically? As there are several tables involved, one would like to avoid having to write a custom program for this.

    There is no standard archiving procedure for the leave request tables; you would, however, use PU22 etc.
    Usually no deletion should be done, as it might lead to inconsistencies.
    So the best option is to use the report RPTARQDBDEL, which ensures consistency is maintained while deleting the records;
    you can delete the old records using this.
    You need to be careful while purging the data from these tables.

  • 15105: User uname requesting database creation is not the instance admin

    Migrating from TimesTen 6.0.8 to 11.2 seems to have introduced restrictions on creating and destroying datastores: this can now only be done by the instance administrator. Is this a hard restriction? Is there any way to allow anyone in the admin group to do this? We have a number of users within a development group who up until now have been able to run these commands; now it seems they need to use ssh script execution or similar, which is more config/setup.

    There have been many, many functionality and behaviour changes between TimesTen 6.0.8 and 11.2.1. Some of these occurred in 7.0 and many more in 11.2.1. In general it is not trivial to upgrade from 6.0 (or 7.0) to 11.2.1 without considering the impact of these changes. I'd recommend that you carefully study the TimesTen 11.2.1 and 7.0 release notes and 'behaviorchanges.txt' documents to see details of everything that has changed. I have also summarised the changes in a whitepaper which I am happy to send you.
    With regard to your specific question; no, it is not possible to override this behaviour. This change is part of the new security infrastructure in TimesTen 11.2.1.
    regards,
    Chris

  • Problem rename sharepoint 2010 search service application admin database

    Hi all,
    I have a problem that hopefully someone has an answer to. I am not too familiar with SharePoint, so please excuse my ignorance.
    We have SharePoint 2010 on a Windows 2008 R2 server. Everything seems to work fine, but as you know, the default database names are horrendous. I have managed to rename all of them except for the "Search Service Application" admin
    database.
    The default is: Search_Service_Application_DB_<guid>
    The other 2 databases (crawl and property) were renamed without a problem.
    We are following the article from TechNet on how to rename the search service admin DB (http://technet.microsoft.com/en-nz/library/ff851878%28en-us%29.aspx). It says to enter the following command:
    $searchapp | Set-SPEnterpriseSearchServiceApplication -DatabaseName "new database name" -DatabaseServer "dbserver"
    However, I get an error about the identity being null. No big deal, I add the -Identity switch and the name of my search service application. But the real problem is the error it throws:
    Set-SPEnterpriseSearchServiceApplication : The requested database move was aborted as the associated search application is not paused.
    At line:1 char:54
    + $searchapp | Set-SPEnterpriseSearchServiceApplication <<<<  -Identity "Search Service Application" -DatabaseName "SharePoint2010_Search_Service_Application_DB" -DatabaseServer "dbserver"
        + CategoryInfo          : InvalidData: (Microsoft.Offic...viceApplication:SetSearchServiceApplication) [Set-SPEnterpriseSearchServiceApplication], InvalidOperationException
        + FullyQualifiedErrorId : Microsoft.Office.Server.Search.Cmdlet.SetSearchServiceApplication
    When I look at the crawling content sources, I see "Local SharePoint Sites" and its status is Idle. I even looked at this article on how to pause the search, to no avail: http://technet.microsoft.com/en-us/library/ee808864.aspx
    Does someone know how I can rename my Search Service Application admin database properly? Or at least "pause" that service so I can rename it?
    Thank you all in advance

    If you want to have no GUIDs for your search admin DB, I recommend you check out this script :)
    Just delete your search service application (assuming you have just started).
    Alpesh

  • Deleting/dropping/removing database using Enterprise Manager  (12c)

    Hi,
    I have run some searches and reviewed the documentation, but I haven't found any way to drop/remove/delete databases using 12c EM.
    We have multi-instance environments (one physical server with several instances/databases). And from time to time we have to remove one or more instances when they are decommissioned.
    So far we are using DBCA, but I wonder if there is any way to do the same with EM "out of the box"...
    Could someone point me in the right direction?
    Thanks in advance,
    Francisco Palomares

    Hello Francisco,
    the only "out of the box" methods to delete/remove a database would be:
    - Create a "User defined procedure" to do the job - Checkout the "Lifecycle Management" guide in the documentation
    - When using the Self Service Portal as part of the Cloud Management feature, users are able to request a database to be created. When requesting a database, the user can also indicate for what period he/she needs it, i.e. the moment the database needs to be decommissioned. In that case not only is a job scheduled to create the requested database, but a job is also scheduled to remove the database after the indicated time frame has passed. Check the Cloud Management Guide for this.
    - You could of course also think about doing this using a custom made script that you could schedule from the EM job system.
    Well enough material to think about ;-)
    Regards
    Rob
    http://oemgc.wordpress.com

  • Using Single Datasource to Access Multiple Databases

    Hi,
    We would like to know the pros and cons of accessing multiple
    databases through a single datasource, versus accessing each
    database through its own datasource. Our environment includes
    multiple web servers w/ the latest version of ColdFusion MX 7,
    clustered through a load balancer. Each web server has 800+ dsns
    pointing to different SQL databases on the same SQL server. We have
    noticed that the ColdFusion administrator is taking a long time to
    display or verify all datasources and sometimes it even times out.
    Another problem is that sometimes the neo-query file gets corrupted
    (for unknown reasons) which results in the deletion of one, or
    more, or all datasources on the web server.
    Because of the issues above we are researching the
    possibility of removing most of the datasources, and then accessing
    each database through a single bridge datasource. In that regard we
    plan to change our queries by inserting the sql db name and user in
    front of each table in the query such as:
    <cfquery name="query" datasource="single_dsn_name">
    select * from [#dbname#].dbo.tableName
    </cfquery>
    In the example above, obviously #dbname# would be a variable
    that will hold the name of the requested database. The code above
    would similarly apply to queries using, update, insert and join
    words.
    Are there any limitations or negatives, from a scalability,
    performance, and reliability perspective, in implementing the above
    scenario versus having one datasource for each database?
    Also, if there is a better way of accomplishing this, we
    would love to hear about it.

    Here is my opinion, because I work with both schemas. The
    main advantage of using one datasource for all DBs in a SQL Server is
    the simplicity of administration.
    But the main disadvantage is security: because you are using
    a single user to access all DBs on a server, you don't have
    isolation, and a user who knows your schema can access data of
    other DBs that he should not be authorized to see.
    Another issue is if a user must access 2 different DBs with
    different permissions (one DB read-only and the other read/write):
    you'll have to create another datasource, user, etc. for it.
    But the decision depends on the environment. If you are a
    hosting company, I would use one datasource per user or DB. If the
    servers and DBs belong to the same company, I could use one datasource
    for each SQL server.
    Best regards

  • FEATURE REQUEST: After Effects style Placeholders and Proxies

    If you feel the need for an efficient proxy workflow, welcome to the club, and don't hesitate to submit a feature request.
    Here is the feature request text, adjusted to fit in fewer than 2000 characters:
    ***After Effects style Placeholders and Proxies*** 
    It would be nice if PrPro offered AE style Placeholders and Proxies workflow. There are cases, where they are extremely useful, e.g.:
    1. Huge modern formats and resource-hungry codecs. Not all machines can easily handle 2K or 4K footage; some can't get real-time playback even with AVCHD. Not to mention that the CinemaDNG importer was discontinued partly because of the inability to get real-time playback in PrPro. The issue can be resolved by rendering previews, but that's not always the most efficient workflow.
    2. In multicam editing, the issues mentioned above increase dramatically. Rendering previews for every camera angle is simply impossible. Although PrPro offers an offline clips workflow, an editor can't easily see whether a clip is currently linked to the source footage or a proxy, and switching between sources and proxies involves several steps every time one needs to switch: selecting assets, making them offline, right-clicking again to link to other media, locating files on disk; while in After Effects it's just one click once a proxy is set.
    3. Adobe Dynamic Link. PrPro communicates with AE projects via a single instance of headless AE, which creates a bottleneck and entails the need to render DIs for complex comps. AE allows setting DIs as proxies and hence enjoying the best of both worlds: instant switching between the DI and the dynamically linked comp, with no need to replace anything in the PrPro timeline. But with hundreds of dynamically linked comps the PrPro timeline becomes unresponsive and takes forever to render (on my rig, a test 30 min sequence built out of 935 dynamically linked comps, each just a source footage in its own comp and hence the equivalent of rendered DIs set as proxies, takes around 27 hours to render, while a 30 min sequence built out of the same 935 source footages renders in real time). Meanwhile, PrPro doesn't currently allow linking offline dynamically linked comps to rendered DIs.

    As the person who maintains the feature request database for After Effects, I can confirm that the number of times an item has been requested is a major factor when we consider what to work on next. So are other factors, such as how hard the feature is to implement and maintain, how much testing is involved (often a larger concern than programming time), whether the request conflicts with something else that we are already working on, and so on. It is true that we consider not just raw numbers of requests but the details of who is making the requests, e.g. whether the requests are coming from animators or compositors, beginners or experts.
    I can only speak for the After Effects team, but the Premiere Pro team works much the same way (which is unsurprising, since we are in the same group and have overlapping team members).
    We also try to give some visibility into the most requested items with posts like this:
    http://blogs.adobe.com/aftereffects/2012/12/top-feature-requests-for-after-effects-in-2012.html

  • Performance optimization during database selection.

    hi gurus,
    Please, can anyone explain this:
    Strong knowledge of performance optimization during database selection.
    regards,
    praveen

    Hi Praveen,
    Performance Notes 
    1. Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
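    To make this concrete, a minimal sketch against the SAP demo table SFLIGHT:
      data: begin of itab occurs 0,
              carrid like sflight-carrid,
              connid like sflight-connid,
              fldate like sflight-fldate,
            end of itab.
      " Let the database filter; only the required rows are transferred.
      select carrid connid fldate
        from sflight
        into table itab
        where carrid = 'LH'.
      " Avoid reading everything and filtering afterwards in ABAP:
      " select carrid connid fldate from sflight into table itab.
      " delete itab where carrid <> 'LH'.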
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
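    For example (a sketch on SFLIGHT; the threshold is arbitrary):
      data: begin of itab occurs 0,
              carrid  like sflight-carrid,
              connid  like sflight-connid,
              sum_occ like sflight-seatsocc,
            end of itab.
      " GROUP BY summarizes in the database; HAVING then restricts the groups.
      select carrid connid sum( seatsocc )
        from sflight
        into table itab
        group by carrid connid
        having sum( seatsocc ) > 100.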
    Effect
    If you use the WHERE and HAVING clauses correctly:
    •     There are no more physical I/Os in the database than necessary
    •     No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    •     The CPU usage of the database host is minimized
    •     The network load is reduced, since only the data that is required by the application is transferred to the application server.
      Minimize the Amount of Data Transferred 
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
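    Hedged sketches of both additions on SFLIGHT:
      data: itab like sflight occurs 0 with header line,
            t_carrid like sflight-carrid occurs 0 with header line.
      " Transfer at most 100 lines from the database.
      select * from sflight up to 100 rows
        into table itab
        where carrid = 'LH'.
      " Let the database remove duplicates before transferring.
      select distinct carrid from sflight into table t_carrid.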
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
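    For instance, a sketch reading two columns of a single line into a variable list:
      data: v_price  like sflight-price,
            v_fldate like sflight-fldate.
      " Few columns, little data: a variable list avoids the name-comparison
      " overhead of INTO CORRESPONDING FIELDS.
      select single price fldate
        from sflight
        into (v_price, v_fldate)
        where carrid = 'LH' and connid = '0400' and fldate = sy-datum.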
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
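    A sketch: three aggregate values cross the network instead of all matching lines:
      data: v_cnt type i,
            v_min like sflight-price,
            v_max like sflight-price.
      select count( * ) min( price ) max( price )
        from sflight
        into (v_cnt, v_min, v_max)
        where carrid = 'LH'.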
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
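    For example (sketch on SFLIGHT):
      " Change only the required column of the relevant line;
      " no prior SELECT into a work area is needed.
      update sflight
        set seatsocc = seatsocc + 1
        where carrid = 'LH' and connid = '0400' and fldate = sy-datum.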
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple SELECT loop is more efficient.
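    For example, one array operation instead of single-row operations in a loop (sketch):
      data: itab like sflight occurs 0 with header line.
      " ... fill itab ...
      " One database call for all lines; the same pattern works for
      " INSERT ... FROM TABLE, UPDATE ... FROM TABLE and DELETE ... FROM TABLE.
      modify sflight from table itab.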
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
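    A sketch of a join over the demo tables SPFLI and SFLIGHT that lists only the needed columns:
      data: begin of itab occurs 0,
              carrid like spfli-carrid,
              connid like spfli-connid,
              cityto like spfli-cityto,
              fldate like sflight-fldate,
            end of itab.
      select a~carrid a~connid a~cityto b~fldate
        from spfli as a inner join sflight as b
             on a~carrid = b~carrid
            and a~connid = b~connid
        into table itab.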
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
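    For instance, a hedged sketch that keeps the whole evaluation in the database: flights for which at least one booking exists:
      data: itab like sflight occurs 0 with header line.
      select * from sflight as a
        into table itab
        where exists ( select * from sbook
                         where carrid = a~carrid
                           and connid = a~connid
                           and fldate = a~fldate ).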
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
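    A minimal sketch on the demo tables, including the empty-table guard this technique always needs:
      data: t_spfli   like spfli occurs 0 with header line,
            t_sflight like sflight occurs 0 with header line.
      select * from spfli into table t_spfli
        where cityfrom = 'FRANKFURT'.
      " With an empty driver table, FOR ALL ENTRIES would select every
      " line of SFLIGHT, so guard against it.
      if not t_spfli[] is initial.
        select * from sflight into table t_sflight
          for all entries in t_spfli
          where carrid = t_spfli-carrid
            and connid = t_spfli-connid.
      endif.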
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
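    A minimal sketch of the cursor technique:
      data: c  type cursor,
            wa like sflight.
      open cursor c for
        select * from sflight where carrid = 'LH'.
      do.
        fetch next cursor c into wa.
        if sy-subrc <> 0.
          exit.
        endif.
        " ... process wa ...
      enddo.
      close cursor c.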
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
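    For example, a condition that covers all fields of the primary index of SFLIGHT:
      data: wa like sflight.
      select single * from sflight into wa
        where carrid = 'LH'
          and connid = '0400'
          and fldate = sy-datum.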
    Use Positive Conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outside of conditions. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
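    For instance (sketch):
      data: itab like sflight occurs 0 with header line.
      " Instead of: where carrid = 'LH' or carrid = 'AA'
      select * from sflight into table itab
        where carrid in ('LH', 'AA').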
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system. 
    Reduce the Database Load 
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends on its technical attributes in the ABAP Dictionary. There are three buffering types:
    •     Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    •     Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    •     Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1.     An ABAP program requests data from a buffered table.
    2.     The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3.     If the data exists in the buffer, it is passed to the program. If the table has not yet been buffered, the request is passed on to the database.
    4.     The database server passes the data to the application server, which places it in the table buffer.
    5.     The data is passed to the program.
    When you change a buffered table, the following happens:
    1.     The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2.     All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3.     Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    •     Tables that are read very frequently
    •     Tables that are changed very infrequently
    •     Relatively small tables (few lines, few columns, or short columns)
    •     Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    •     The BYPASSING BUFFER addition in the FROM clause
    •     The DISTINCT addition in the SELECT clause
    •     Aggregate expressions in the SELECT clause
    •     Joins in the FROM clause
    •     The IS NULL condition in the WHERE clause
    •     Subqueries in the WHERE clause
    •     The ORDER BY clause
    •     The GROUP BY clause
    •     The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
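    To make this concrete, a hedged sketch on the buffered table T001 (company codes):
      data: wa_t001 like t001,
            t_t001  like t001 occurs 0 with header line.
      " Normally served from the table buffer.
      select single * from t001 into wa_t001 where bukrs = '1000'.
      " Bypasses the buffer because of the ORDER BY clause.
      select * from t001 into table t_t001 order by butxt.
      " Explicit bypass when the very latest data is required.
      select single * from t001 bypassing buffer
        into wa_t001
        where bukrs = '1000'.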
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database tables other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
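    For example (sketch):
      data: itab like sflight occurs 0 with header line.
      " Read unsorted, then sort in the ABAP program ...
      select * from sflight into table itab
        where carrid = 'LH'.
      sort itab by fldate descending.
      " ... rather than forcing a database sort that may not match an index:
      " select * from sflight into table itab
      "   where carrid = 'LH' order by fldate.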
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database, otherwise, they can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes 
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following diagram illustrates the interaction between the screen and the ABAP processors when an application program is running.
    Database Interface
    The database interface provides the following services:
    •     Establishing and terminating connections between the work process and the database.
    •     Access to database tables
    •     Access to R/3 Repository objects (ABAP programs, screens and so on)
    •     Access to catalog information (ABAP Dictionary)
    •     Controlling transactions (commit and rollback handling)
    •     Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW that bundle the database operations resulting from the dialog in a database LUW for processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server 
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that can run an application. Work processes are components that are able to execute an application (that is, one dialog step each). Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program. This needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is the data relevant to the current state of the program that is running.  A mapping process projects the required context for a dialog step from shared memory into the address of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes. Furthermore, it is not restricted by the R/3 system architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
           1.      The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
           2.      The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
           3.      While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
           4.      After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
           5.      While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    •        A dialog step from a program is assigned to a single work process for execution.
    •        The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    •        A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically-associated database operations is called an SAP LUW. Unlike a database LUW, a SAP LUW includes all of the dialog steps in a logical unit, including the database update.
    Happy Reading...
    shibu

  • EHP5 and leave request

    Hi
    I have seen the new functionality around mass approval of leave requests in EHP5.  Unfortunately only on paper. My question is:
    The list that is presented to the manager: Does it represent records from the leave request database, or does it represent individual workflow task - or a combinaton of the 2?
    Thanks in advance
    Kirsten

    Yes, it does involve WF as well; it also reads the sent status of the requests from the leave request database and displays it to the intended approver.
    I.e.:
    http://help.sap.com/erp2005_ehp_05/helpdata/en/e6/c4fb025b4542fcaa6241e9e5060323/frameset.htm
    For the leave request application to be called, we specify the parameters
    in transaction SWFVISU through which the application is called when
    processed from the universal worklist.
    I request you to use the standard leave request task TS21500003 and please
    replace the existing visualization parameter DYNPARAM from
    WI_ID=${item.externalId} to
    LRF_REQUEST_ID=${item.REQUESTID}. This can be done from transaction
    SWFVISU.

  • Requesting second schema for my workspace

    Hi,
    I have a workspace on apex.oraclecorp.com. The workspace has one DB schema, which was created while requesting the workspace.
    Now I want to have second DB schema in my workspace. But I couldn't find any option to create a new DB schema in the workspace.
    Is there any process to request second DB schema in a workspace?
    Thanks & Regards,
    Harish

    Hi,
    You cannot create new schemas yourself.
    I do not know if this works and whether Oracle will give you another schema.
    Try requesting a schema from
    Home>Administration>Manage Services>Request Database Schema
    Br, Jari

  • What went wrong ?

    Dear all,
    I'm trying to figure out what went wrong with my database.
    During the higher load the users were suddenly getting DBIF_RSQL_SQL_ERROR, no matter what they wanted to do. When checking system I found many messages like:
    Database error -1000 at OPC access to table REPOLOAD
    > POS(1) Too many lock requests
    > Include ??? line 0000.
    Run-time error "DBIF_REPO_SQL_ERROR" occurred
    Run-time error "DBIF_RSQL_SQL_ERROR" occurred
    Database error -1000 at OPC
    > POS(1) Too many lock requests
    Database error -1000 at OPC
    > POS(1) Too many lock requests
    Database error -1000 at OPC
    > POS(1) Too many lock requests
    Database error -1000 at EXE
    > POS(1) Too many lock requests
    According to documentation I found error -1000:
    1000: Too many lock requests
    Explanation
    There are too many locks or lock requests.
    User Request
    You can try to repeat the SQL statement at a later time or cancel the transaction. If this situation occurs frequently, then the MAXLOCKS general database parameter is too small and should be increased
    Database kernel showed following messages:
    2009-12-07 16:04:18  8126 ERR     5 Catalog  Catalog update failed,IFR_ERROR=-1000,DESCRIPTION=UNDEFINED
    2009-12-07 16:05:13  8170 ERR     5 Catalog  Catalog update failed,IFR_ERROR=-1000,DESCRIPTION=UNDEFINED
    2009-12-07 16:10:13  8182 ERR     5 Catalog  Catalog update failed,IFR_ERROR=-1000,DESCRIPTION=UNDEFINED
    2009-12-07 16:15:09  8229 ERR     5 Catalog  Catalog update failed,IFR_ERROR=-1000,DESCRIPTION=UNDEFINED
    2009-12-07 16:15:13  8123 ERR     5 Catalog  Catalog update failed,IFR_ERROR=-1000,DESCRIPTION=UNDEFINED
    So do you think it is just about the MAXLOCKS parameter, or could the catalog cache sizing also be wrong?
    Version : MaxDB 7.6.6.7.
    Thank you for your comments.
    Pavol

    > Database error -1000 at OPC
    > > POS(1) Too many lock requests
    > 1000: Too many lock requests
    > Explanation
    > There are too many locks or lock requests.
    > 2009-12-07 16:04:18  8126 ERR     5 Catalog  Catalog update failed,IFR_ERROR=-1000,DESCRIPTION=UNDEFINED
    > So do you think it is just about MAXLOCKS parameter or also with catalog cache sizing could be wrong ?
    > Version : MaxDB 7.6.6.7.
    Hi there,
    it's really just about the MAXLOCKS setting.
    As you can see, the "Catalog update failed" message also points to error no. -1000.
    It's the same "too many lock requests" error as before.
    Having a MAXLOCKS value of 1.000.000 or even 3.000.000 is not unusual with MaxDB!
    regards,
    Lars

  • How to select the data efficiently from the table

    hi everyone,
      I need some help selecting data from the FAGLFLEXA table. I have to select many amounts for different groups of G/L accounts
    (the groups are predefined here, and each contains a set of G/L account numbers).
    If I run a separate select for each group, it will be a performance issue. What should I do to avoid this? Can anyone suggest a method or a sample query so that I can perform the task efficiently?

    Hi,
    1. Select once and keep the data in an internal table.
    2. Avoid SELECT inside LOOP ... ENDLOOP.
    3. Try to use FOR ALL ENTRIES.
    Check the details below.
    Hi Praveen,
    Performance Notes
    1. Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
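    To make this concrete, here is a minimal sketch against the standard demo table sflight (table, field names, and values are only illustrative): the WHERE clause restricts the lines read, and the HAVING clause restricts the grouped result.
    DATA: BEGIN OF wa,
            carrid   TYPE sflight-carrid,
            connid   TYPE sflight-connid,
            seatsocc TYPE sflight-seatsocc,
          END OF wa,
          itab LIKE STANDARD TABLE OF wa.
    SELECT carrid connid SUM( seatsocc )
      FROM sflight
      INTO TABLE itab
      WHERE carrid = 'LH'                 " filter lines in the database
      GROUP BY carrid connid
      HAVING SUM( seatsocc ) > 100.       " filter the aggregated groups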
    Effect
    If you use the WHERE and HAVING clauses correctly:
    • There are no more physical I/Os in the database than necessary
    • No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    • The CPU usage of the database host is minimized
    • The network load is reduced, since only the data that is required by the application is transferred to the application server.
    Minimize the Amount of Data Transferred
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
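    As a hedged illustration of both additions (sflight again; the row limit and the date are arbitrary):
    DATA: BEGIN OF wa,
            carrid TYPE sflight-carrid,
            connid TYPE sflight-connid,
          END OF wa,
          itab LIKE STANDARD TABLE OF wa.
    SELECT DISTINCT carrid connid
      FROM sflight UP TO 100 ROWS          " at most 100 lines are transferred
      INTO TABLE itab
      WHERE fldate >= '20090101'.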
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
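    For small result sets, a hedged sketch of an explicit field list combined with a matching variable list in the INTO clause (this is a SELECT loop, closed by ENDSELECT):
    DATA: lv_carrid TYPE sflight-carrid,
          lv_connid TYPE sflight-connid.
    SELECT carrid connid
      FROM sflight
      INTO (lv_carrid, lv_connid)          " only these two columns are transferred
      WHERE carrid = 'LH'.
      " ... process one line per loop pass ...
    ENDSELECT.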
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
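    A minimal sketch, assuming the same demo table: the database computes the aggregates, and only the two result values are transferred.
    DATA: lv_count TYPE i,
          lv_sum   TYPE sflight-seatsocc.
    SELECT COUNT( * ) SUM( seatsocc )
      FROM sflight
      INTO (lv_count, lv_sum)
      WHERE carrid = 'LH'.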
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
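    For example, a hedged sketch that changes a single column via SET instead of overwriting the whole line from a work area (the key values are invented):
    UPDATE sflight
      SET seatsocc = seatsocc + 1          " only this column is changed
      WHERE carrid = 'LH'
        AND connid = '0400'
        AND fldate = '20090101'.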
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple SELECT loop is more efficient.
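    As an illustration, a minimal sketch of an array insert instead of single-record inserts in a loop (itab must have the line type of the database table):
    DATA itab TYPE STANDARD TABLE OF sflight.
    " ... fill itab ...
    INSERT sflight FROM TABLE itab.        " one data transfer for all lines
    " avoid: LOOP AT itab INTO wa. INSERT sflight FROM wa. ENDLOOP.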
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
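    A hedged sketch of the text-table case mentioned above, using a left outer join so that materials without a text in the logon language are still returned (mara and makt are the standard material and material-text tables):
    DATA: BEGIN OF wa,
            matnr TYPE mara-matnr,
            maktx TYPE makt-maktx,
          END OF wa,
          itab LIKE STANDARD TABLE OF wa.
    SELECT a~matnr b~maktx
      FROM mara AS a LEFT OUTER JOIN makt AS b
           ON a~matnr = b~matnr
          AND b~spras = sy-langu
      INTO TABLE itab.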
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
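    A minimal hedged sketch of a subquery in the WHERE clause, using the demo tables sflight and spfli (the departure city is arbitrary):
    DATA itab TYPE STANDARD TABLE OF sflight.
    SELECT *
      FROM sflight AS f
      INTO TABLE itab
      WHERE EXISTS ( SELECT *
                       FROM spfli
                       WHERE carrid   = f~carrid
                         AND connid   = f~connid
                         AND cityfrom = 'FRANKFURT' ).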
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
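    A hedged sketch of this technique with mara and marc; note the guard, because FOR ALL ENTRIES with an empty driver table would select every line of the database table:
    DATA: itab_mara TYPE STANDARD TABLE OF mara,
          itab_marc TYPE STANDARD TABLE OF marc.
    SELECT * FROM mara INTO TABLE itab_mara
      WHERE mtart = 'FERT'.
    IF NOT itab_mara[] IS INITIAL.         " never run FOR ALL ENTRIES on an empty table
      SELECT * FROM marc INTO TABLE itab_marc
        FOR ALL ENTRIES IN itab_mara
        WHERE matnr = itab_mara-matnr.
    ENDIF.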
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive Conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this is OR expressions at the outermost level of a condition. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
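    For instance, a hedged sketch that replaces an OR on an index-relevant column with an IN list (marc and its plant column werks serve as the example):
    DATA itab TYPE STANDARD TABLE OF marc.
    " instead of: ... WHERE werks = '1000' OR werks = '2000'
    SELECT * FROM marc INTO TABLE itab
      WHERE werks IN ('1000', '2000').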
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
    Reduce the Database Load
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends on its technical attributes in the ABAP Dictionary. There are three buffering types:
    • Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    • Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    • Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1. An ABAP program requests data from a buffered table.
    2. The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3. If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4. The database server passes the data to the application server, which places it in the table buffer.
    5. The data is passed to the program.
    When you change a buffered table, the following happens:
    1. The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2. All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3. Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    • Tables that are read very frequently
    • Tables that are changed very infrequently
    • Relatively small tables (few lines, few columns, or short columns)
    • Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    • The BYPASSING BUFFER addition in the FROM clause
    • The DISTINCT addition in the SELECT clause
    • Aggregate expressions in the SELECT clause
    • Joins in the FROM clause
    • The IS NULL condition in the WHERE clause
    • Subqueries in the WHERE clause
    • The ORDER BY clause
    • The GROUP BY clause
    • The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
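    As a hedged illustration of the first addition in the list above (T001 is a standard buffered table; the company code is invented):
    DATA wa TYPE t001.
    " may be satisfied from the application server's table buffer:
    SELECT SINGLE * FROM t001 INTO wa
      WHERE bukrs = '0001'.
    " forces a database access, bypassing the buffer:
    SELECT SINGLE * FROM t001 BYPASSING BUFFER INTO wa
      WHERE bukrs = '0001'.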
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database systems other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
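    A minimal sketch, assuming the index situation described above makes the database sort unattractive: read without ORDER BY, then sort on the application server.
    DATA itab TYPE STANDARD TABLE OF sflight.
    SELECT * FROM sflight INTO TABLE itab
      WHERE carrid = 'LH'.                 " no ORDER BY sent to the database
    SORT itab BY connid fldate.            " sort in ABAP instead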
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database, otherwise, they can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following diagram illustrates the interaction between the screen processor and the ABAP processor when an application program is running.
    Database Interface
    The database interface provides the following services:
    • Establishing and terminating connections between the work process and the database.
    • Access to database tables
    • Access to R/3 Repository objects (ABAP programs, screens and so on)
    • Access to catalog information (ABAP Dictionary)
    • Controlling transactions (commit and rollback handling)
    • Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW that bundle the database operations resulting from the dialog in a database LUW for processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes: components that execute an application, one dialog step at a time. Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program, which needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes; moreover, it is not restricted by the R/3 system architecture. Each user can also run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1. The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2. The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3. While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4. After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5. While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    • A dialog step from a program is assigned to a single work process for execution.
    • The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    • A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically-associated database operations is called an SAP LUW. Unlike a database LUW, a SAP LUW includes all of the dialog steps in a logical unit, including the database update.
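    As a hedged sketch of one such bundling technique, an update function module is registered with IN UPDATE TASK and triggered by COMMIT WORK (the function module Z_UPDATE_FLIGHT and its parameter are hypothetical):
    DATA ls_flight TYPE sflight.
    " dialog step: register an update request; nothing is written yet
    CALL FUNCTION 'Z_UPDATE_FLIGHT' IN UPDATE TASK   " hypothetical update module
      EXPORTING
        is_flight = ls_flight.
    " ... further dialog steps may register more update requests ...
    COMMIT WORK.        " all registered updates now run together in one database LUW
    " ROLLBACK WORK instead would discard the registered update requests.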
    Happy Reading...
    shibu

  • Fatal error while executing the DQS installer on SQL Server 2014

    Hi all.
    I am receiving the following error when attempting to install DQS on the following platform:
    Windows Server 2012
    Microsoft SQL Server 2014 - 12.0.2000.8 (X64)   Feb 20 2014 20:04:26   Copyright (c) Microsoft Corporation  Enterprise Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: )
    (no entries in event viewer)
    Error is:
    [4/15/2015 8:45:04 AM] Fatal error while executing the DQS installer.
    Microsoft.Ssdqs.Proxy.ImportExport.ImportExportException: Error occurred in a server proxy call during the import/export process. ---> Microsoft.Ssdqs.Proxy.ImportExport.ImportProcessFailedException: System.NullReferenceException: Object reference not set
    to an instance of an object.
       at System.Security.Cryptography.CryptoStream..ctor(Stream stream, ICryptoTransform transform, CryptoStreamMode mode)
       at Microsoft.Ssdqs.ImportExportManagement.ImportExport.ImportExportReader..ctor(Stream stream)
       at Microsoft.Ssdqs.ImportExportManagement.ImportExport.ImportExportManager.Import(Stream input)
       at Microsoft.Ssdqs.ImportExportManagement.Calibrator.ImportKnowledgebaseCalibrator.Calibrate(IMasterContext masterContext, CalibrationMode calibrationMode, ConfigurationDomParameter calibratorConfiguration)
       at Microsoft.Ssdqs.Core.Service.Calibration.Impl.ExecuteCalibratorFlow.Process(IMasterContext context)
       --- End of inner exception stack trace ---
       at Microsoft.Ssdqs.Proxy.ImportExport.ImportAsProcessManager.WaitUntillProcessCompletes(Int64 processId, Int64 knowledgebaseId, ImportExportCancellationToken cancelToken)
       at Microsoft.Ssdqs.Proxy.ImportExport.ImportAsProcessManager.KnowledgebaseImport(String kbName, String kbDescription, String fileName, ImportExportCancellationToken cancelToken)
       at Microsoft.Ssdqs.DqsInstaller.Logic.Actions.LoadOutOfTheBoxDataAction.Execute()
       at Microsoft.Ssdqs.DqsInstaller.Logic.ActionExecuter.ExecuteAllActions()
       at Microsoft.Ssdqs.DqsInstaller.Logic.Installer.Main(String[] args)
    Thanks for any advice you can give.
    Full log below:
    Microsoft (R) DQS Installer Command Line Tool
    Copyright (c) 2014 Microsoft. All rights reserved.
    [4/15/2015 7:58:20 AM] DQS Installer started. Installation log will be written to C:\Program Files\Microsoft SQL Server\MSSQL12.DW01\MSSQL\Log\DQS_install.log
    [4/15/2015 7:58:20 AM] Setting the collation to default value: SQL_Latin1_General_CP1_CI_AS
    [4/15/2015 7:58:20 AM] Using instance: DW01, catalog: DQS.
    [4/15/2015 7:58:20 AM] Executing action: Validate collation argument.
    [4/15/2015 7:58:20 AM] Action 'Validate collation argument' finished successfully.
    [4/15/2015 7:58:20 AM] Executing action: Check whether system reboot is pending.
    [4/15/2015 7:58:20 AM] Action 'Check whether system reboot is pending' finished successfully.
    [4/15/2015 7:58:20 AM] Executing action: Create data quality event source.
    [4/15/2015 7:58:20 AM] Action 'Create data quality event source' finished successfully.
    [4/15/2015 7:58:20 AM] Executing action: Request Database Master Key password from user..
    Microsoft (R) DQS Installer Command Line Tool
    Copyright (c) 2014 Microsoft. All rights reserved.
    [4/15/2015 8:39:59 AM] DQS Installer started. Installation log will be written to C:\Program Files\Microsoft SQL Server\MSSQL12.DW01\MSSQL\Log\DQS_install.log
    [4/15/2015 8:39:59 AM] Parsing DqsInstaller command line arguments.
    [4/15/2015 8:39:59 AM] Setting the catalog to default value: DQS
    [4/15/2015 8:39:59 AM] Setting the collation to default value: SQL_Latin1_General_CP1_CI_AS
    [4/15/2015 8:39:59 AM] Using instance: DEV01, catalog: DQS.
    [4/15/2015 8:39:59 AM] Executing action: Validate collation argument.
    [4/15/2015 8:39:59 AM] Action 'Validate collation argument' finished successfully.
    [4/15/2015 8:39:59 AM] Executing action: Check whether system reboot is pending.
    [4/15/2015 8:39:59 AM] Action 'Check whether system reboot is pending' finished successfully.
    [4/15/2015 8:39:59 AM] Executing action: Create data quality event source.
    [4/15/2015 8:39:59 AM] Action 'Create data quality event source' finished successfully.
    [4/15/2015 8:39:59 AM] Executing action: Request Database Master Key password from user..
    [4/15/2015 8:41:05 AM] Action 'Request Database Master Key password from user.' finished successfully.
    [4/15/2015 8:41:05 AM] Executing action: Approve removal of data quality services previous schema.
    [4/15/2015 8:41:05 AM] Action 'Approve removal of data quality services previous schema' finished successfully.
    [4/15/2015 8:41:05 AM] Executing action: Load Installation Scripts.
    [4/15/2015 8:41:05 AM] Extracting script to: C:\Users\SqlServiceAcct\AppData\Local\Temp\3mo2vwbu.xch
    [4/15/2015 8:41:06 AM] Extracting script: db\create_core_db_objects.sql
    [4/15/2015 8:41:06 AM] Extracting script: db\create_logic_db_objects.sql
    [4/15/2015 8:41:06 AM] Extracting script: db\create_transient_db_objects.sql
    [4/15/2015 8:41:06 AM] Extracting script: db\static_data.sql
    [4/15/2015 8:41:06 AM] Extracting script: db\drop_dq_databases.sql
    [4/15/2015 8:41:06 AM] Extracting script: db\db_version.sql
    [4/15/2015 8:41:06 AM] Extracting script: helper\DeleteSchemaDs.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\drop_all_assemblies.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\create_databases.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\drop_databases.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\drop_all_tables.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\drop_database_properties.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\master_create.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\master_recreate.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\register_assemblies.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\register_assemblies_tsql.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\register_dq_assemblies.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\create_service_broker_objects.sql
    [4/15/2015 8:41:06 AM] Extracting script: sql\drop_service_broker_objects.sql
    [4/15/2015 8:41:06 AM] Extracting script: db\set_dq_databases_single_user.sql
    [4/15/2015 8:41:06 AM] Extracting script: db\set_dq_databases_multi_user.sql
    [4/15/2015 8:41:06 AM] Extracting script: db\upgrade_all_versions.sql
    [4/15/2015 8:41:06 AM] Extracting script: recreate_schema.bat
    [4/15/2015 8:41:06 AM] Extracting script: upgrade_schema.bat
    [4/15/2015 8:41:06 AM] Extracting script: upgrade_version_tables.bat
    [4/15/2015 8:41:06 AM] Extracting script: drop_databases.cmd
    [4/15/2015 8:41:06 AM] Extracting script: register_dlls.cmd
    [4/15/2015 8:41:06 AM] Extracting script: unregister_dlls.cmd
    [4/15/2015 8:41:06 AM] Extracting script: assembly_paths_retail.txt
    [4/15/2015 8:41:06 AM] Extracting script: DQS_Data.dqs
    [4/15/2015 8:41:06 AM] Extracting script: DefaultKbs.xml
    [4/15/2015 8:41:06 AM] Total scripts extracted: 31
    [4/15/2015 8:41:06 AM] Action 'Load Installation Scripts' finished successfully.
    [4/15/2015 8:41:06 AM] Executing action: Create data quality schema.
    [4/15/2015 8:41:06 AM] Script: 'recreate_schema.bat CMTSQLSVR04\DEV01 DQS SQL_Latin1_General_CP1_CI_AS'
    [4/15/2015 8:41:06 AM] .  Trying to connect using Windows Authentication and db name...
    [4/15/2015 8:41:06 AM] Run create_databases.sql to create Data Quality Service databases if they do not exist.
    [4/15/2015 8:41:07 AM] 
    [4/15/2015 8:41:07 AM]  --> Running File: create_databases.sql
    [4/15/2015 8:41:07 AM] ---[ Creating databases ]---
    [4/15/2015 8:41:07 AM] Changed database context to 'master'.
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --> Running File: drop_dq_database.sql
    [4/15/2015 8:41:10 AM] ---[ Dropping Data Quality Databases ]---
    [4/15/2015 8:41:10 AM] Creating DQS databases (DQS_MAIN, DQS_PROJECTS, DQS_STAGING_DATA)...
    [4/15/2015 8:41:10 AM] Configuring DQS databases
    [4/15/2015 8:41:10 AM] Configuration option 'clr enabled' changed from 0 to 1. Run the RECONFIGURE statement to install.
    [4/15/2015 8:41:10 AM] Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
    [4/15/2015 8:41:10 AM] Configuration option 'xp_cmdshell' changed from 0 to 1. Run the RECONFIGURE statement to install.
    [4/15/2015 8:41:10 AM] Creating Master Key for database DQS_MAIN...
    [4/15/2015 8:41:10 AM] Creating Module Signing Certificate for database DQS_MAIN...
    [4/15/2015 8:41:10 AM] Exporting Certificate from DQS_MAIN
    [4/15/2015 8:41:10 AM] Importing Certificate into DQS_PROJECTS
    [4/15/2015 8:41:10 AM] Configuration option 'xp_cmdshell' changed from 1 to 0. Run the RECONFIGURE statement to install.
    [4/15/2015 8:41:10 AM] Creating DQS roles
    [4/15/2015 8:41:10 AM] Run master_recreate.sql to recreate Data Quality Service Main and Projects Databases and drop all tables including known temporary tables.
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --> Running master_recreate.sql
    [4/15/2015 8:41:10 AM] ---[ Recreate DQS Main and Projects Databases ]---
    [4/15/2015 8:41:10 AM] Changed database context to 'DQS_STAGING_DATA'.
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --> Running File: drop_database_properties.sql
    [4/15/2015 8:41:10 AM] ---[ Dropping Database Properties]---
    [4/15/2015 8:41:10 AM] Changed database context to 'DQS_PROJECTS'.
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --> Running File: drop_all_tables.sql
    [4/15/2015 8:41:10 AM] ---[ Dropping all SSDQS Database objects and KB schemas ]---
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --[ Drop all Symmetric Keys ]
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --[ Drop all Certificates ]
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --[ Drop all FK constraints ]
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --[ Drop all views ]
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --[ Drop all tables and synonyms ]
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --[ Drop all database triggers ]
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --[ Drop all stored procedures in Knowledge Base schemas ]
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --[ Drop all types in Knowledge Base schemas ]
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --[ Drop all Knowledge Base schemas ]
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --> Running File: drop_database_properties.sql
    [4/15/2015 8:41:10 AM] ---[ Dropping Database Properties]---
    [4/15/2015 8:41:10 AM] Changed database context to 'DQS_MAIN'.
    [4/15/2015 8:41:10 AM] 
    [4/15/2015 8:41:10 AM]  --> Running File: drop_all_tables.sql
    [4/15/2015 8:41:10 AM] ---[ Dropping all SSDQS Database objects and KB schemas ]---
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --[ Drop all Symmetric Keys ]
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --[ Drop all Certificates ]
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --[ Drop all FK constraints ]
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --[ Drop all views ]
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --[ Drop all tables and synonyms ]
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --[ Drop all database triggers ]
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --[ Drop all stored procedures in Knowledge Base schemas ]
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --[ Drop all types in Knowledge Base schemas ]
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --[ Drop all Knowledge Base schemas ]
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --> Running File: drop_database_properties.sql
    [4/15/2015 8:41:11 AM] ---[ Dropping Database Properties]---
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --> Running master_create.sql
    [4/15/2015 8:41:11 AM] ---[ Creating and Populating Main DQS DB ]---
    [4/15/2015 8:41:11 AM] Changed database context to 'DQS_STAGING_DATA'.
    [4/15/2015 8:41:11 AM] Changed database context to 'DQS_PROJECTS'.
    [4/15/2015 8:41:11 AM] Changed database context to 'DQS_MAIN'.
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --> Running File: create_core_db_objects.sql
    [4/15/2015 8:41:11 AM] ---[  Creating Certificates  ]---
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM] ---[  Creating Symmetric Keys  ]---
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM] ---[  Creating Tables  ]---
    [4/15/2015 8:41:11 AM] Creating Table:     A_TEST   
    [4/15/2015 8:41:11 AM] Creating Table:     A_TEST2   
    [4/15/2015 8:41:11 AM] Creating Table:     A_FLOW   
    [4/15/2015 8:41:11 AM] Creating Table:     A_FLOW_ANSWER   
    [4/15/2015 8:41:11 AM] Creating Table:     A_EXECUTION_UNIT
    [4/15/2015 8:41:11 AM] Creating Table:     A_CODE_GROUP   
    [4/15/2015 8:41:11 AM] Creating Table:     A_CODE_MEMBER   
    [4/15/2015 8:41:11 AM] Creating Table:     A_CONFIGURATION   
    [4/15/2015 8:41:11 AM] Creating Table:     A_PROCESS   
    [4/15/2015 8:41:11 AM] Creating Table:     A_SERVICE_BROKER_TASK   
    [4/15/2015 8:41:11 AM] Creating Table:     A_KNOWLEDGEBASE   
    [4/15/2015 8:41:11 AM] Creating Table:     A_KNOWLEDGEBASE_AUDIT   
    [4/15/2015 8:41:11 AM] Creating Table:     A_KNOWLEDGEBASE_ACTIVITY   
    [4/15/2015 8:41:11 AM] Creating Table:     A_PROFILING_ACTIVITY_ARCHIVE   
    [4/15/2015 8:41:11 AM] Creating Table:     A_CLIENT_SESSION   
    [4/15/2015 8:41:11 AM] Creating Table:     A_REFERENCE_DATA_PROVIDER   
    [4/15/2015 8:41:11 AM] Creating Table:     A_REFERENCE_DATA_PROVIDER_SCHEMA   
    [4/15/2015 8:41:11 AM] Creating Table:     A_REFERENCE_DATA_CACHE   
    [4/15/2015 8:41:11 AM] Creating Table:     A_REFERENCE_DATA_CACHE_SUGGESTION   
    [4/15/2015 8:41:11 AM] Creating Table:     A_REFERENCE_DATA_CACHE_SUGGESTION_PARSED   
    [4/15/2015 8:41:11 AM] Creating Table:     A_REFERENCE_DATA_AUDIT   
    [4/15/2015 8:41:11 AM] Creating Table:     A_IMPORTED_PROJECT   
    [4/15/2015 8:41:11 AM] Creating Table:     A_SPELLER_DICTIONARY_VALUE   
    [4/15/2015 8:41:11 AM] Creating Table:     A_SPECIAL_CHARACTER_RULE   
    [4/15/2015 8:41:11 AM] Creating Table:     A_PERSISTENT_CACHE   
    [4/15/2015 8:41:11 AM] Creating Table:     A_TEMPLATE_OBJECTS   
    [4/15/2015 8:41:11 AM] Creating Table:     A_TEMPLATE_FOREIGN_KEYS   
    [4/15/2015 8:41:11 AM] Creating Table:     A_TEMPLATE_COLUMNS   
    [4/15/2015 8:41:11 AM] ---[ Creating Views ]---
    [4/15/2015 8:41:11 AM] Creating View:      V_A_FLOW
    [4/15/2015 8:41:11 AM] Creating View:      V_A_KNOWLEDGEBASE
    [4/15/2015 8:41:11 AM] Creating View:      V_A_KNOWLEDGEBASE_AUDIT
    [4/15/2015 8:41:11 AM] Creating View:     V_A_REFERENCE_DATA_PROVIDER   
    [4/15/2015 8:41:11 AM] Changed database context to 'DQS_PROJECTS'.
    [4/15/2015 8:41:11 AM] Changed database context to 'DQS_MAIN'.
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --> Running File: static_data.sql
    [4/15/2015 8:41:11 AM] ---[ Inserting Data ]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert Cleansing configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert Logging settings section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert Logging settings section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert Association configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert DataValueService configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert Association rules configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert IndexService configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert Knowledgebase configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert Notification configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert Interactive Cleansing configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Reference Data configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert client paging configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Insert client process sampling configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Matching configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Profiling configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Process configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[FileStorageManager configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[ImportExportManager configuration section]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Reference Data Add provider]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Reference Data Add provider schema]---
    [4/15/2015 8:41:11 AM] ---[Add Special characters for term normalization]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] 
    [4/15/2015 8:41:11 AM]  --> Running File: db_version.sql
    [4/15/2015 8:41:11 AM] ---[ Running version related work ]---
    [4/15/2015 8:41:11 AM] Changed database context to 'DQS_MAIN'.
    [4/15/2015 8:41:11 AM] Creating Table:     A_DB_VERSION_INFO   
    [4/15/2015 8:41:11 AM] Creating Table:     A_DB_VERSION   
    [4/15/2015 8:41:11 AM] Creating Table:     A_DB_VERSION_UPGRADE_SCRIPTS   
    [4/15/2015 8:41:11 AM] Creating Table:     A_DB_VERSION_UPGADE_SCRIPT_PATH   
    [4/15/2015 8:41:11 AM] Creating view:     V_DB_VERSION 
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (0 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (0 rows affected)
    [4/15/2015 8:41:11 AM] ---[Add DB Version information]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (0 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (0 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (0 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Add DB Version upgrade scripts]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Add DB Version upgrade scripts path]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] ---[Add DB Version upgrade scripts path]---
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] (1 rows affected)
    [4/15/2015 8:41:11 AM] Changed database context to 'DQS_PROJECTS'.
    [4/15/2015 8:41:11 AM] Creating Table:     A_DB_VERSION in PROJECTS database 
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM]
    [4/15/2015 8:41:11 AM] Action 'Create data quality schema' finished successfully.
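    At this point the DQS_MAIN and DQS_PROJECTS schemas exist, so a quick sanity check is possible before assembly registration starts. A minimal sketch in T-SQL (the A_ prefix is taken from the table names logged above; the exact table count varies by version):

        USE DQS_MAIN;
        -- List the tables the installer just created; in this log they all
        -- carry the A_ prefix (escaped so _ is not treated as a wildcard).
        SELECT name
        FROM sys.tables
        WHERE name LIKE 'A\_%' ESCAPE '\'
        ORDER BY name;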
    [4/15/2015 8:41:11 AM] Executing action: Register data quality assemblies and stored procedures.
    [4/15/2015 8:41:11 AM] Script: 'register_dlls.cmd CMTSQLSVR04\DEV01 DQS'
    [4/15/2015 8:41:12 AM] .  Trying to connect using Windows Authentication and db name...
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]  --> Running File: drop_all_assemblies.sql
    [4/15/2015 8:41:12 AM] ---[ Dropping all SSDQS Executable Objects ]---
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]  --> Drop all SSDQS Types
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]  --> Drop all SSDQS Assemblies
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]  --> Drop all SSDQS Schemas
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]  --> Drop all CLR Assemblies
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]  --> Drop DQ startup stored procedure DQInitDQS_MAIN
    [4/15/2015 8:41:12 AM] Changed database context to 'master'.
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]  --> Running File: drop_service_broker_objects.sql
    [4/15/2015 8:41:12 AM] ---[ Dropping all SSDQS Service Broker Objects ]---
    [4/15/2015 8:41:12 AM] Changed database context to 'DQS_MAIN'.
    [4/15/2015 8:41:12 AM] Dropping Service Broker Services.
    [4/15/2015 8:41:12 AM] Dropping Service Broker Queues.
    [4/15/2015 8:41:12 AM] Dropping Service Broker Contracts.
    [4/15/2015 8:41:12 AM] Dropping Service Broker Messages.
    [4/15/2015 8:41:12 AM] Completed - Service Broker objects dropped.
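    The drop/create pairing above makes the broker setup idempotent. To see which Service Broker objects exist in DQS_MAIN at any point, the generic catalog views are enough (nothing DQS-specific assumed here):

        USE DQS_MAIN;
        SELECT name FROM sys.services;        -- Service Broker services
        SELECT name FROM sys.service_queues;  -- and their backing queues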
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]  --> Running register_assemblies.sql
    [4/15/2015 8:41:12 AM] ---[ Registering Assemblies ]---
    [4/15/2015 8:41:12 AM] Changed database context to 'DQS_MAIN'.
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]  --> Registering Assemblies
    [4/15/2015 8:41:12 AM]      The following Warnings are benign to DQSInstaller and may be ignored.
    [4/15/2015 8:41:12 AM] 
    [4/15/2015 8:41:12 AM]      * Register .NET dependency assemblies
    [4/15/2015 8:41:25 AM] Warning: The Microsoft .NET Framework assembly 'system.management, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:25 AM] Warning: The Microsoft .NET Framework assembly 'system.configuration.install, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted
    environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:25 AM] Warning: The Microsoft .NET Framework assembly 'system.runtime.serialization, version=4.0.0.0, culture=neutral, publickeytoken=b77a5c561934e089, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted
    environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:25 AM] Warning: The Microsoft .NET Framework assembly 'smdiagnostics, version=4.0.0.0, culture=neutral, publickeytoken=b77a5c561934e089, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:25 AM] Warning: The Microsoft .NET Framework assembly 'system.windows.forms, version=4.0.0.0, culture=neutral, publickeytoken=b77a5c561934e089, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:25 AM] Warning: The Microsoft .NET Framework assembly 'system.drawing, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:25 AM] Warning: The Microsoft .NET Framework assembly 'accessibility, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:25 AM] Warning: The Microsoft .NET Framework assembly 'system.runtime.serialization.formatters.soap, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the
    SQL Server hosted environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:25 AM] Warning: The Microsoft .NET Framework assembly 'microsoft.jscript, version=10.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:26 AM] Warning: The Microsoft .NET Framework assembly 'system.management.instrumentation, version=4.0.0.0, culture=neutral, publickeytoken=b77a5c561934e089, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server
    hosted environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:27 AM] Warning: The Microsoft .NET Framework assembly 'system.messaging, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:27 AM] Warning: The Microsoft .NET Framework assembly 'system.directoryservices, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted
    environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.enterpriseservices, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=amd64.' you are registering is not fully tested in the SQL Server hosted
    environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.runtime.remoting, version=4.0.0.0, culture=neutral, publickeytoken=b77a5c561934e089, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted
    environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.web, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=amd64.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'microsoft.build.framework, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted
    environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.xaml, version=4.0.0.0, culture=neutral, publickeytoken=b77a5c561934e089, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.runtime.caching, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'microsoft.build.utilities.v4.0, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server
    hosted environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.directoryservices.protocols, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server
    hosted environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.design, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.drawing.design, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.web.regularexpressions, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted
    environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'microsoft.build.tasks.v4.0, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted
    environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM] Warning: The Microsoft .NET Framework assembly 'system.serviceprocess, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:44 AM]      * Register Microsoft.Practices assemblies
    [4/15/2015 8:41:47 AM] Warning: The Microsoft .NET Framework assembly 'microsoft.csharp, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:47 AM] Warning: The Microsoft .NET Framework assembly 'system.dynamic, version=4.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment
    and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.
    [4/15/2015 8:41:47 AM] 
    [4/15/2015 8:41:47 AM]      The preceding Warnings are benign to DQSInstaller and may be ignored.
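    For context, each registration that produces one of these warnings boils down to a standard CREATE ASSEMBLY statement. A hedged sketch of the pattern (the file path here is an assumption for illustration, not taken from register_assemblies.sql):

        USE DQS_MAIN;
        -- Load one .NET DLL into the database; DQS assemblies are registered
        -- with elevated permissions, hence the warnings about untested hosting.
        CREATE ASSEMBLY [Microsoft.Ssdqs.Infra]
        FROM 'C:\Program Files\Microsoft SQL Server\MSSQL12.DEV01\MSSQL\Binn\Microsoft.Ssdqs.Infra.dll'
        WITH PERMISSION_SET = UNSAFE;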
    [4/15/2015 8:41:48 AM] 
    [4/15/2015 8:41:48 AM]  --> Running register_dq_assemblies.sql
    [4/15/2015 8:41:48 AM] ---[  Registering Data Quality assemblies ]---
    [4/15/2015 8:41:49 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.infra, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:50 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.core, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:51 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.dataservice, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:51 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.referencedata, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:52 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.index, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:52 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.associationrules, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:53 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.datavalueservice, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:54 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.domainrules, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:54 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.cleansing, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:55 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.association, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:55 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.flow, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:56 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs.matching, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:57 AM] Warning: The SQL Server client assembly 'microsoft.ssdqs, version=12.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil.' you are registering is not fully tested in SQL Server hosted environment.
    [4/15/2015 8:41:57 AM] Resource file was not copied
    [4/15/2015 8:41:57 AM] 
    [4/15/2015 8:41:57 AM]  --> Running register_assemblies_tsql.sql
    [4/15/2015 8:41:57 AM] ---[ Registering Assemblies for TSQL]---
    [4/15/2015 8:41:57 AM] Changed database context to 'master'.
    [4/15/2015 8:41:57 AM] Changed database context to 'DQS_MAIN'.
    [4/15/2015 8:41:57 AM] 
    [4/15/2015 8:41:57 AM]      * Clear the Code Member Table A_CODE_MEMBER.
    [4/15/2015 8:41:57 AM]
    [4/15/2015 8:41:57 AM] (0 rows affected)
    [4/15/2015 8:41:57 AM] 
    [4/15/2015 8:41:57 AM]      * Clear the Code Group Table A_CODE_GROUP.
    [4/15/2015 8:41:57 AM]
    [4/15/2015 8:41:57 AM] (0 rows affected)
    [4/15/2015 8:41:57 AM] 
    [4/15/2015 8:41:57 AM]      * Register assemblies T-SQL executable objects
    [4/15/2015 8:41:57 AM]         - Creating assemblies T-SQL registeration stored procedure.
    [4/15/2015 8:41:57 AM] 
    [4/15/2015 8:41:57 AM]      * Creating the internal_core schema
    [4/15/2015 8:41:57 AM]         - Registering Microsoft.Ssdqs.Infra T-SQL executable objects.
    [4/15/2015 8:42:05 AM]         - Registering Microsoft.Ssdqs.Core T-SQL executable objects.
    [4/15/2015 8:42:11 AM]         - Registering Microsoft.Ssdqs.DataService T-SQL executable objects.
    [4/15/2015 8:42:16 AM]         - Registering Microsoft.Ssdqs.ReferenceData T-SQL executable objects.
    [4/15/2015 8:42:21 AM]         - Registering Microsoft.Ssdqs.Index T-SQL executable objects.
    [4/15/2015 8:42:26 AM]         - Registering Microsoft.Ssdqs.Cleansing T-SQL executable objects.
    [4/15/2015 8:42:32 AM]         - Registering Microsoft.Ssdqs.Association T-SQL executable objects.
    [4/15/2015 8:42:37 AM]         - Registering Microsoft.Ssdqs.Flow T-SQL executable objects.
    [4/15/2015 8:42:42 AM]         - Registering Microsoft.Ssdqs T-SQL executable objects.
    [4/15/2015 8:42:51 AM]         - Registering Microsoft.Ssdqs.DataValueService T-SQL executable objects.
    [4/15/2015 8:42:56 AM]         - Registering Microsoft.Ssdqs.DomainRules T-SQL executable objects.
    [4/15/2015 8:43:01 AM]         - Registering Microsoft.Ssdqs.AssociationRules T-SQL executable objects.
    [4/15/2015 8:43:06 AM]         - Registering Microsoft.Ssdqs.Matching T-SQL executable objects.
    [4/15/2015 8:43:12 AM]         - Creating refresh assemblies stored procedure.
    [4/15/2015 8:43:12 AM] Changed database context to 'master'.
    [4/15/2015 8:43:12 AM]         - Creating refresh assemblies helper linked server.
    [4/15/2015 8:43:12 AM]         - Creating and registering [dbo].[DQInitDQS_MAIN] (calls internal_core.InitServer) as startup stored procedure.
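    The last step above is what makes DQS initialize itself whenever the instance restarts. In plain T-SQL, flagging a procedure in master as a startup procedure is done with sp_procoption; a minimal sketch of the equivalent call (the procedure name comes from the log line above, the rest is standard syntax):

        USE master;
        -- Startup procedures must live in master; SQL Server runs them
        -- automatically each time the instance starts.
        EXEC sp_procoption @ProcName   = N'dbo.DQInitDQS_MAIN',
                           @OptionName = 'startup',
                           @OptionValue = 'on';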
    [4/15/2015 8:43:12 AM] 
    [4/15/2015 8:43:12 AM]  --> Running File: create_service_broker_objects.sql
    [4/15/2015 8:43:12 AM] ---[ Registering Service Broker Objects ]---
    [4/15/2015 8:43:12 AM] Changed database context to 'DQS_MAIN'.
    [4/15/2015 8:43:12 AM]         - Creating SB dispatcher stored procedure, messages and contract
    [4/15/2015 8:43:13 AM]         - Creating parallel execution SB queues and services
    [4/15/2015 8:43:13 AM]         - Creating calibration SB queues and services
    [4/15/2015 8:43:13 AM]         - Creating parallel calibration SB queues and services
    [4/15/2015 8:43:13 AM]
    [4/15/2015 8:43:13 AM]
    [4/15/2015 8:43:13 AM] Successfully registered all assemblies.
    [4/15/2015 8:43:13 AM]
    [4/15/2015 8:43:13 AM]
    [4/15/2015 8:43:13 AM] Action 'Register data quality assemblies and stored procedures' finished successfully.
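    The queues and services created in this action only work if Service Broker is enabled on the database. A generic check (not part of the installer output):

        SELECT name, is_broker_enabled
        FROM sys.databases
        WHERE name IN ('DQS_MAIN', 'DQS_PROJECTS');
        -- is_broker_enabled should be 1 for DQS_MAIN.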
    [4/15/2015 8:43:13 AM] Executing action: Set product version property.
    [4/15/2015 8:43:13 AM] Action 'Set product version property' finished successfully.
    [4/15/2015 8:43:13 AM] Executing action: Create MDS user (if MDS login exists).
    [4/15/2015 8:43:13 AM] Action 'Create MDS user (if MDS login exists)' finished successfully.
    [4/15/2015 8:43:13 AM] Executing action: Load out of the box data.
    [4/15/2015 8:43:22 AM] Started loading knowledgebase 'DQS Data'
    [4/15/2015 8:45:04 AM] Starting installation rollback...
    [4/15/2015 8:45:04 AM] Installation rollback completed successfully.
    [4/15/2015 8:45:04 AM] Fatal error while executing the DQS installer.
    Microsoft.Ssdqs.Proxy.ImportExport.ImportExportException: Error occurred in a server proxy call during the import/export process. ---> Microsoft.Ssdqs.Proxy.ImportExport.ImportProcessFailedException: System.NullReferenceException: Object reference not set
    to an instance of an object.
       at System.Security.Cryptography.CryptoStream..ctor(Stream stream, ICryptoTransform transform, CryptoStreamMode mode)
       at Microsoft.Ssdqs.ImportExportManagement.ImportExport.ImportExportReader..ctor(Stream stream)
       at Microsoft.Ssdqs.ImportExportManagement.ImportExport.ImportExportManager.Import(Stream input)
       at Microsoft.Ssdqs.ImportExportManagement.Calibrator.ImportKnowledgebaseCalibrator.Calibrate(IMasterContext masterContext, CalibrationMode calibrationMode, ConfigurationDomParameter calibratorConfiguration)
       at Microsoft.Ssdqs.Core.Service.Calibration.Impl.ExecuteCalibratorFlow.Process(IMasterContext context)
       --- End of inner exception stack trace ---
       at Microsoft.Ssdqs.Proxy.ImportExport.ImportAsProcessManager.WaitUntillProcessCompletes(Int64 processId, Int64 knowledgebaseId, ImportExportCancellationToken cancelToken)
       at Microsoft.Ssdqs.Proxy.ImportExport.ImportAsProcessManager.KnowledgebaseImport(String kbName, String kbDescription, String fileName, ImportExportCancellationToken cancelToken)
       at Microsoft.Ssdqs.DqsInstaller.Logic.Actions.LoadOutOfTheBoxDataAction.Execute()
       at Microsoft.Ssdqs.DqsInstaller.Logic.ActionExecuter.ExecuteAllActions()
       at Microsoft.Ssdqs.DqsInstaller.Logic.Installer.Main(String[] args)
    Microsoft (R) DQS Installer Command Line Tool
    Copyright (c) 2014 Microsoft. All rights reserved.

    Thank you for the quick response.
    Unfortunately, running cmd as admin had no effect on the result: same exact error.
    After the rollback indicated in the log above, only the DQS_STAGING_DATA database remains on the SQL Server instance.
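    For what it's worth, the stack trace pins the failure inside the CryptoStream constructor while the installer imports the default 'DQS Data' knowledge base, which is why everything created earlier gets rolled back. Before re-running DQSInstaller it is worth confirming the state the rollback left behind; an illustrative T-SQL check (not an official fix):

        -- Only DQS_STAGING_DATA should remain after the rollback described above.
        SELECT name FROM sys.databases WHERE name LIKE 'DQS%';

        -- DQS needs CLR integration enabled; run_value should come back as 1.
        EXEC sp_configure 'clr enabled';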

Maybe you are looking for

  • How do I create an Integration Domain with 3 servers?

    Hi, I would like to create a WLI domain with three servers: one for the administration console; one for WLI; and the last to deploy EJB Session beans (which are the services called by WLI). To create the domain, I use

  • Error while creating the backup directory

    I bought a Time Capsule yesterday. As with all of Apple's networking products, setup was a breeze and took about 10 minutes. I have two MacBooks running on the network now. I set Time Machine to start working on both of them (one of them has used T

  • Using UMS on a Glassfish cluster - how to support session replication?

    Hello, I have deployed the imqums.war in a Glassfish cluster but cannot use session replication. The UMS uses sid as the client session identifier. If the client session is created on node 1, when the call hits node 2 it gets: com.sun.messagi

  • Executing sqlplus / sqlldr commands from Java code

    Hi, I have my application on one server (Tomcat) and the Oracle server is installed on another server, i.e. both are on different machines. Now I want to run the sqlplus / sqlldr command on the Oracle server from my Java code. Again, my script for the loader command is i

  • Older version of Elements 8 won't install?

    I recently purchased a version of Elements 8 on an internet site. Turns out the serial number on the DVD is already registered. I can't install the software. No setup file launches, no autorun, no *.exe file that I can find. I'll get my money back I