Alerts not stored in SecMon during database compaction period

I used the idsDbCompact utility to compact the SecMon database which took a few hours. After the compaction, I cannot find in the database any alerts generated by the sensors during the compaction period. Is there a way to recover/retrieve those alerts?
Thanks.

During the compaction period, the database and the other CiscoWorks services are unavailable, so none of the events generated by the sensors during that period were logged; there is nothing to recover. See the link below for more information.
http://www.cisco.com/en/US/products/sw/cscowork/ps3991/products_user_guide_chapter09186a0080325c8f.html#wp367698
Faris

Similar Messages

  • Image path not stored in SQL database

    Hello,
    I have read here on the forum how to upload an image to a server and store its path in a database. The image uploads correctly to the right folder on my server, but the image path does not get stored in my SQL database (not locally hosted). I receive the message "The file has been uploaded, and your information has been added to the directory", followed by the error "Column 'image' cannot be null".
    My database has the following columns:
    id
    datum
    image
    sectie
    My code is as follows:
    <?php require_once('Connections/dbTroch.php'); ?>
    <?php
    if (!function_exists("GetSQLValueString")) {
    function GetSQLValueString($theValue, $theType, $theDefinedValue = "", $theNotDefinedValue = "")
    {
      if (PHP_VERSION < 6) {
        $theValue = get_magic_quotes_gpc() ? stripslashes($theValue) : $theValue;
      }
      $theValue = function_exists("mysql_real_escape_string") ? mysql_real_escape_string($theValue) : mysql_escape_string($theValue);
      switch ($theType) {
        case "text":
          $theValue = ($theValue != "") ? "'" . $theValue . "'" : "NULL";
          break;   
        case "long":
        case "int":
          $theValue = ($theValue != "") ? intval($theValue) : "NULL";
          break;
        case "double":
          $theValue = ($theValue != "") ? doubleval($theValue) : "NULL";
          break;
        case "date":
          $theValue = ($theValue != "") ? "'" . $theValue . "'" : "NULL";
          break;
        case "defined":
          $theValue = ($theValue != "") ? $theDefinedValue : $theNotDefinedValue;
          break;
      }
      return $theValue;
    }
    }
    mysql_select_db($database_dbTroch, $dbTroch);
    $query_rs_aanbod = "SELECT * FROM tblSlideshow ORDER BY id ASC";
    $rs_aanbod = mysql_query($query_rs_aanbod, $dbTroch) or die(mysql_error());
    $row_rs_aanbod = mysql_fetch_assoc($rs_aanbod);
    $totalRows_rs_aanbod = mysql_num_rows($rs_aanbod);
    $editFormAction = $_SERVER['PHP_SELF'];
    if (isset($_SERVER['QUERY_STRING'])) {
      $editFormAction .= "?" . htmlentities($_SERVER['QUERY_STRING']);
    }
    $target = "images/slides/";  //This is the directory where images will be saved// 
    $target = $target . basename( $_FILES['image']['name']); //change the image and name to whatever your database fields are called//
    if ((isset($_POST["MM_insert"])) && ($_POST["MM_insert"] == "add-photos-aanbod")) {
      $insertSQL = sprintf("INSERT INTO tblSlideshow (image, sectie) VALUES (%s, %s)",
                           GetSQLValueString($_POST['file'], "text"),
                           GetSQLValueString($_FILES['image']['name'], "text"));
    //This code writes the photo to the server//
    if (move_uploaded_file($_FILES['image']['tmp_name'], $target)) {
    //And confirms it has worked//
      echo "The file ". basename( $_FILES['uploadedfile']['name']). " has been uploaded, and your information has been added to the directory";
    } else {
    //Gives error if not correct//
      echo "Sorry, there was a problem uploading your file.";
    }
      mysql_select_db($database_dbTroch, $dbTroch);
      $Result1 = mysql_query($insertSQL, $dbTroch) or die(mysql_error());
    }
    ?>
    <!doctype html>
    <html>
    <head>
    <meta charset="utf-8" />
      <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Troch Project Solutions - Admin - Toevoegen</title>
      <link rel="stylesheet" href="css/foundation.css" />
      <link rel="stylesheet" href="css/layout.css" />
      <!-- Fonts
      ================================================== -->
      <script type="text/javascript" src="//use.typekit.net/vob8gxg.js"></script>
      <script type="text/javascript">try{Typekit.load();}catch(e){}</script>
      <!-- jQuery
      ================================================== -->
      <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
      <script src="js/vendor/modernizr.js"></script>
    </head>
    <body>
        <div class="row">
          <div class="large-8 medium-8 small-8 large-centered medium-centered small-centered columns intro">
              <h2 class="subheader text-center">Admin</h2>
              <p><a>Log uit</a></p>
              <p><a href="admin.php">Terug naar Admin menu</a>
              <h4>image toevoegen naar aanbod slideshows:</h4>
              <form action="<?php echo $row_rs_aanbod['']; ?>" method="POST" name="add-photos-aanbod" id="add-photos-aanbod" enctype="multipart/form-data">
              <table>
                    <tbody>
                         <tr>
                              <td><label for="image">Kies foto:</label></td>
                              <td><input name="image" type="file" id="image" value="<?php echo $row_rs_aanbod['image']; ?>" /></td>
                         </tr>
                         <tr>
                              <td>Sectie:</td>
                              <td><select name="sectie" id="sectie" option value="sectie">
                                <?php
    do { 
    ?>
                                <option value="<?php echo $row_rs_aanbod['sectie']?>"><?php echo $row_rs_aanbod['sectie']?></option>
                                <?php
    } while ($row_rs_aanbod = mysql_fetch_assoc($rs_aanbod));
      $rows = mysql_num_rows($rs_aanbod);
      if($rows > 0) {
          mysql_data_seek($rs_aanbod, 0);
          $row_rs_aanbod = mysql_fetch_assoc($rs_aanbod);
      }
    ?>
                              </select></td> 
                        </tr>
                        <tr>
                            <td><input type="Submit" name="Add" id="add" value="Toevoegen" /></td>
                        </tr>
                </tbody>
            </table>
            <input type="hidden" name="MM_insert" value="add-photos-aanbod" />
        </form>
          </div><!-- end large-8 -->
        </div><!-- end row -->
    <script src="js/vendor/jquery.js"></script>
    <script src="/js/vendor/fastclick.js"></script>
    <script src="js/foundation.min.js"></script>
    <script>
                $(document).foundation();
            </script>
    </body>
    </html>
    <?php
    mysql_free_result($rs_aanbod);
    ?>
    I cannot work out what is wrong and I would appreciate any help on this. Thanks

    Your form field name ("image") and the array variable you pass to the INSERT ($_POST['file']) do not match:
    <td><input name="image" type="file" id="image" value="<?php echo $row_rs_aanbod['image']; ?>" /></td>
    GetSQLValueString($_POST['file'], "text"),

  • BPEL Workflow email not stored in database!

    Hi,
    I'm using SOA Suite 10.1.3.4. I have a worklist application where notifications are sent to assignees and approvers when the action is Assign or Approve. The mails are delivered as expected, but there is no trace of the emails that were sent. I checked WFNOTIFICATION in the ORABPEL schema; the tables are empty.
    Could anyone help me fix this issue?
    1. Why are the email notifications that were sent not stored in the database?
    2. Is there any other means by which we can track the emails that were sent?
    3. Do we need to configure the SOA server to store the email notifications in the database?
    Thanks in advance.
    Regards,
    Pradeep

    Hi Pradeep,
    I am also facing the same problem. Please let me know if you get any update regarding this.
    thanks,
    Hari

  • Data is not stored in database

    Dear all,
    I have a creation page on which, by default, five fields should be displayed like USD 0.00,
    and those items should also be stored in the DB.
    (USD comes from the functional currency, which comes from the set of books.)
    When I use a message styled text item and set a value for those fields, it is not stored in the DB.
    Later I want to update those values.
    Regards
    Sreekanth

    Hi,
    You can create a rowLayout and then create a rawText item and a messageStyledText item in it.
    For the rawText item, set the text to "USD" as the page loads,
    and for the messageStyledText item simply use the code for currency formatting,
    so it will display as USD 0.00 and will store only 0.00.
    Thanks,
    Gaurav

  • Difficulty compressing a huge database - compact command fails

    I am trying to compress a 419 GB database, following the instructions in the compression example described here: 
    http://msdn.microsoft.com/en-us/library/ms190257(v=sql.90).aspx
    I have set the filegroup to read-only, and taken the database offline.  I logged into the server that hosts the file, and ran this from a command prompt:
    ===================================================== 
    F:\Database>compact /c CE_SecurityLogsArchive_DataSecondary.ndf
     Compressing files in F:\Database\
    CE_SecurityLogsArchive_DataSecondary.ndf [ERR]
    CE_SecurityLogsArchive_DataSecondary.ndf: Insufficient system resources exist to
     complete the requested service.
    0 files within 1 directories were compressed.
    0 total bytes of data are stored in 0 bytes.
    The compression ratio is 1.0 to 1.
    F:\Database>
    ===============================================
    As you can see, it gave me an error: "Insufficient system resources exist to complete the requested service."  The drive has 564 GB free, so I doubt that is the issue.  Here are some specs on the server:
    MS Windows Server 2003 R2, Enterprise x64 Edition, SP2
    Intel Xeon E7420 CPU @ 2.13 GHz; 8 logical processors (8 physical)
    7.99 GB RAM
    Any suggestions how to handle this?  I really need to get this huge file compressed, and this method seems appealing if I can make it work, because you can supposedly continue to query from the database even though it has been compressed.

    I hope that you observed that this operation is only supported on read-only filegroups. Don't do this if you plan to write to the tables in this filegroup.
    I don't know the fine print of the COMPACT command, and a Windows forum is probably a better place to ask. But my guess is that the command works in memory only, in which case 419 GB is far more than a mouthful for a machine with 8 GB of memory.
    There are compression facilities in SQL Server itself, but you need Enterprise Edition for this. (And SQL 2008 or later.) But they are likely to give better effect than NTFS compression. And you don't have to set the filegroup as read-only.
    By the way, while 419 GB is not exactly a small database, it does not qualify as huge. There are databases out there a thousand times that size.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Performance optimization during database selection.

    Hi gurus,
    Could anyone please explain what is meant by this:
    "Strong knowledge of performance optimization during database selection."
    Regards,
    Praveen

    Hi Praveen,
    Performance Notes 
    1. Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
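    For illustration only (not part of the original note), a minimal classic Open SQL sketch; it assumes the standard flight demo table SFLIGHT and uses made-up values such as 'LH':
      " Good: the database returns only the rows that are actually needed
      DATA lt_flights TYPE STANDARD TABLE OF sflight.
      SELECT * FROM sflight
        INTO TABLE lt_flights
        WHERE carrid = 'LH'
          AND fldate >= '20240101'.

      " Bad: everything is transferred and then filtered in ABAP
      DATA lt_all TYPE STANDARD TABLE OF sflight.
      SELECT * FROM sflight INTO TABLE lt_all.
      DELETE lt_all WHERE carrid <> 'LH' OR fldate < '20240101'.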
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
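    As a hedged sketch of how WHERE, GROUP BY and HAVING work together (again assuming SFLIGHT; the threshold of 500 is arbitrary):
      DATA: lv_carrid TYPE sflight-carrid,
            lv_avg    TYPE f.
      " WHERE restricts the rows, GROUP BY summarizes them, HAVING restricts the groups
      SELECT carrid AVG( price )
        FROM sflight
        INTO (lv_carrid, lv_avg)
        WHERE fldate >= '20240101'
        GROUP BY carrid
        HAVING AVG( price ) > 500.
        WRITE: / lv_carrid, lv_avg.
      ENDSELECT.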
    Effect
    If you use the WHERE and HAVING clauses correctly:
    •     There are no more physical I/Os in the database than necessary
    •     No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    •     The CPU usage of the database host is minimized
    •     The network load is reduced, since only the data that is required by the application is transferred to the application server.
      Minimize the Amount of Data Transferred 
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
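    A small sketch of both additions, assuming SFLIGHT (the limit of 10 rows is arbitrary):
      " Only the first 10 matching rows are transferred to the application server
      DATA lt_flights TYPE STANDARD TABLE OF sflight.
      SELECT * FROM sflight
        INTO TABLE lt_flights
        UP TO 10 ROWS
        WHERE carrid = 'LH'.

      " DISTINCT removes duplicates in the database, not in the ABAP program
      DATA lt_carrids TYPE STANDARD TABLE OF sflight-carrid.
      SELECT DISTINCT carrid FROM sflight
        INTO TABLE lt_carrids.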
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
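    For illustration (assuming SFLIGHT), a field list with a matching target structure instead of SELECT *:
      TYPES: BEGIN OF ty_flight,
               carrid TYPE sflight-carrid,
               connid TYPE sflight-connid,
               price  TYPE sflight-price,
             END OF ty_flight.
      DATA lt_flights TYPE STANDARD TABLE OF ty_flight.
      " Only the three columns that are needed are transferred
      SELECT carrid connid price
        FROM sflight
        INTO TABLE lt_flights
        WHERE carrid = 'LH'.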
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
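    A minimal sketch (assuming SFLIGHT): the aggregation runs in the database, and only the single result row is transferred:
      DATA: lv_count TYPE i,
            lv_max   TYPE sflight-price,
            lv_min   TYPE sflight-price.
      SELECT COUNT( * ) MAX( price ) MIN( price )
        FROM sflight
        INTO (lv_count, lv_max, lv_min)
        WHERE carrid = 'LH'.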
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
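    As an illustrative sketch (assuming SFLIGHT; the key values and the new price are made up):
      " Only the listed column is changed, and only for the rows selected by WHERE
      DATA lv_new_price TYPE sflight-price VALUE '499.00'.
      UPDATE sflight
        SET price = lv_new_price
        WHERE carrid = 'LH'
          AND connid = '0400'.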
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple SELECT loop is more efficient.
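    For illustration (not from the original note), an array INSERT, assuming the standard demo table SBOOK and made-up key values:
      DATA: lt_bookings TYPE STANDARD TABLE OF sbook,
            ls_booking  TYPE sbook.
      ls_booking-carrid = 'LH'.
      ls_booking-connid = '0400'.
      ls_booking-fldate = '20240101'.
      ls_booking-bookid = '00000001'.
      APPEND ls_booking TO lt_bookings.
      " One INSERT for the whole internal table instead of one INSERT per line;
      " UPDATE ... FROM TABLE and DELETE ... FROM TABLE work the same way
      INSERT sbook FROM TABLE lt_bookings.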
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
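    A hedged sketch of an inner join (assuming the demo tables SCARR and SFLIGHT):
      TYPES: BEGIN OF ty_result,
               carrname TYPE scarr-carrname,
               connid   TYPE sflight-connid,
               fldate   TYPE sflight-fldate,
             END OF ty_result.
      DATA lt_result TYPE STANDARD TABLE OF ty_result.
      " Carrier name and flight data are read in a single SELECT
      SELECT c~carrname f~connid f~fldate
        FROM scarr AS c
        INNER JOIN sflight AS f ON c~carrid = f~carrid
        INTO TABLE lt_result
        WHERE c~carrid = 'LH'.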
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
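    A small sketch of a subquery (assuming SCARR and SFLIGHT; the date is arbitrary):
      " The subquery is evaluated entirely in the database
      DATA lt_carriers TYPE STANDARD TABLE OF scarr.
      SELECT * FROM scarr
        INTO TABLE lt_carriers
        WHERE EXISTS ( SELECT * FROM sflight
                         WHERE carrid = scarr~carrid
                           AND fldate >= '20240101' ).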
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
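    A minimal FOR ALL ENTRIES sketch (assuming SCARR and SFLIGHT):
      DATA: lt_carriers TYPE STANDARD TABLE OF scarr,
            lt_flights  TYPE STANDARD TABLE OF sflight.
      SELECT * FROM scarr INTO TABLE lt_carriers.
      " Guard against an empty driver table, which would select everything
      IF lt_carriers IS NOT INITIAL.
        SELECT * FROM sflight
          INTO TABLE lt_flights
          FOR ALL ENTRIES IN lt_carriers
          WHERE carrid = lt_carriers-carrid.
      ENDIF.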
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
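    For example (assuming SFLIGHT, whose key fields besides the client are CARRID, CONNID and FLDATE; the values are made up):
      " All key fields are checked for equality and combined with AND
      DATA ls_flight TYPE sflight.
      SELECT SINGLE * FROM sflight
        INTO ls_flight
        WHERE carrid = 'LH'
          AND connid = '0400'
          AND fldate = '20240101'.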
    Use Positive Conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
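    A tiny sketch of inverting a negative condition (assuming SFLIGHT; the price limit is arbitrary):
      " Instead of ... WHERE NOT ( price > 1000 )
      " formulate the condition positively:
      DATA lt_cheap TYPE STANDARD TABLE OF sflight.
      SELECT * FROM sflight
        INTO TABLE lt_cheap
        WHERE price <= 1000.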
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outside of conditions. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
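    For illustration (assuming SFLIGHT; the carrier codes are made up), an OR chain on an indexed column rewritten as an IN list:
      " Instead of ... WHERE carrid = 'LH' OR carrid = 'UA' OR carrid = 'AA'
      DATA lt_flights TYPE STANDARD TABLE OF sflight.
      SELECT * FROM sflight
        INTO TABLE lt_flights
        WHERE carrid IN ('LH', 'UA', 'AA').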
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system. 
    Reduce the Database Load 
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends on its technical attributes in the ABAP Dictionary. There are three buffering types:
    •     Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    •     Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    •     Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1.     An ABAP program requests data from a buffered table.
    2.     The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3.     If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4.     The database server passes the data to the application server, which places it in the table buffer.
    5.     The data is passed to the program.
    When you change a buffered table, the following happens:
    1.     The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2.     All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3.     Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    •     Tables that are read very frequently
    •     Tables that are changed very infrequently
    •     Relatively small tables (few lines, few columns, or short columns)
    •     Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    •     The BYPASSING BUFFER addition in the FROM clause
    •     The DISTINCT addition in the SELECT clause
    •     Aggregate expressions in the SELECT clause
    •     Joins in the FROM clause
    •     The IS NULL condition in the WHERE clause
    •     Subqueries in the WHERE clause
    •     The ORDER BY clause
    •     The GROUP BY clause
    •     The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
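    As a rough illustration (T001 is mentioned above as a typically buffered table; the company code '1000' is just a placeholder):
      DATA ls_t001 TYPE t001.
      " This read can be satisfied from the table buffer
      SELECT SINGLE * FROM t001 INTO ls_t001 WHERE bukrs = '1000'.
      " The BYPASSING BUFFER addition forces a read from the database
      SELECT SINGLE * FROM t001 BYPASSING BUFFER INTO ls_t001 WHERE bukrs = '1000'.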
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database tables other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
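    A short sketch of this (assuming SFLIGHT):
      " Read without ORDER BY, then sort on the application server
      DATA lt_flights TYPE STANDARD TABLE OF sflight.
      SELECT * FROM sflight
        INTO TABLE lt_flights
        WHERE carrid = 'LH'.
      SORT lt_flights BY fldate price.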
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database; otherwise, it can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes 
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following screen illustrates the interaction between the screen and the ABAP processors when an application program is running.
    Database Interface
    The database interface provides the following services:
    •     Establishing and terminating connections between the work process and the database.
    •     Access to database tables
    •     Access to R/3 Repository objects (ABAP programs, screens and so on)
    •     Access to catalog information (ABAP Dictionary)
    •     Controlling transactions (commit and rollback handling)
    •     Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW that bundles the database operations resulting from the dialog into a database LUW for processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server 
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that can run an application. Work processes are components that are able to execute an application (that is, one dialog step each). Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program. This needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is, the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address space of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes, and it is not restricted by the R/3 System architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
           1.      The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
           2.      The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
           3.      While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
           4.      After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
           5.      While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    •        A dialog step from a program is assigned to a single work process for execution.
    •        The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    •        A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically-associated database operations is called an SAP LUW. Unlike a database LUW, a SAP LUW includes all of the dialog steps in a logical unit, including the database update.
    Happy Reading...
    shibu

  • MCOD installation fails during Database instance phase "CREATE TABLESPACES"

    We are installing Solution Manager 3.2 using the MCOD option on Windows 2003/Oracle. The installation fails during the database instance phase at the "CREATE TABLESPACES" step without any error. The interesting thing is that there is no error message in the installation log or the Oracle alert log.
    Installation Details
    First instance: SMD / Instance number: 00 / schema: sapsmd / Database SID: SMD
    Second instance: SMQ / Instance number: 01 / schema: sapsmq / Database SID: SMD
    Did any one come across this issue?  Any help is greatly appreciated.
    Thanks

    Hi,
    Did you check the space utilization on the drives? Normally, when an installation fails, there will be some indication of the cause in the log files written in the sapinst directory. If it is failing in the tablespace creation phase, that means the data files cannot be created.
    It depends on the database you use: data files (Oracle), containers (DB2).
    Do check in the sapdata directories whether the data files have been created.
    If it is not giving you any error, then the installation might have got stuck because of memory problems during the "create tablespace" phase.
    Regards
    Ashly.

  • Selective XML Index feature is not supported for the current database version, SQL Server Extended Events, Optimizing Reading from XML column datatype

    Team, thanks for looking into this.
    As a last resort for optimizing my stored procedure (below), I wanted to create a selective XML index (normal XML indexes don't seem to improve performance as needed), but I keep getting this error within my stored proc: "Selective XML Index feature is not supported for the current database version."
    However, EXECUTE sys.sp_db_selective_xml_index; returns 1, stating that selective XML indexes are enabled on my current database.
    Is there ANY alternative way I can optimize the stored proc below?
    Thanks in advance for your response(s) !
    /****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    -- EXEC [dbo].[MN_Process_DDLSchema_Changes]
    ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
    AS
    BEGIN
    SET NOCOUNT ON --Does'nt have impact ( May be this wont on SQL Server Extended events session's being created on Server(s) , DB's )
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    select getdate() as getdate_0
    DECLARE @XML XML , @Prev_Insertion_time DATETIME
    -- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
    SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
    -- PRINT '1'
    CREATE TABLE #Temp
    (
    EventName VARCHAR(100),
    Time_Stamp_EE DATETIME,
    ObjectName VARCHAR(100),
    ObjectType VARCHAR(100),
    DbName VARCHAR(100),
    ddl_Phase VARCHAR(50),
    ClientAppName VARCHAR(2000),
    ClientHostName VARCHAR(100),
    server_instance_name VARCHAR(100),
    ServerPrincipalName VARCHAR(100),
    nt_username varchar(100),
    SqlText NVARCHAR(MAX)
    )
    CREATE TABLE #XML_Hold
    (
    ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK necessity for Indexing on XML Col
    BufferXml XML
    )
    select getdate() as getdate_01
    INSERT INTO #XML_Hold (BufferXml)
    SELECT
    CAST(target_data AS XML) AS BufferXml -- Buffer Storage from SQL Extended Event(s) , Looks like there is a limitation with xml size ?? Need to re-search .
    FROM sys.dm_xe_session_targets xet
    INNER JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
    WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session being created withing SQL Server Extended Events
    --RETURN
    --SELECT * FROM #XML_Hold
    select getdate() as getdate_1
    -- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
    CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
    FOR
    (
    PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
    )
    --RETURN
    --CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
    --SELECT GETDATE() AS GETDATE_2
    -- RYelugu 03/10/2015 -Creating secondary XML index doesnt make significant improvement at Query Optimizer , Instead creation takes more time , Only primary should be good here
    --CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index , --There should exists a Primary for a secondary creation
    --USING XML INDEX [IX_XML_Hold]
    ---- FOR VALUE
    -- --FOR PROPERTY
    -- FOR PATH
    --SELECT GETDATE() AS GETDATE_3
    --PRINT '2'
    -- RETURN
    SELECT GETDATE() GETDATE_3
    INSERT INTO #Temp
    (
    EventName ,
    Time_Stamp_EE ,
    ObjectName ,
    ObjectType,
    DbName ,
    ddl_Phase ,
    ClientAppName ,
    ClientHostName,
    server_instance_name,
    nt_username,
    ServerPrincipalName ,
    SqlText
    )
    SELECT
    p.q.value('@name[1]','varchar(100)') AS eventname,
    p.q.value('@timestamp[1]','datetime') AS timestampvalue,
    p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
    p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
    p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
    p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
    p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
    p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
    p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
    p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
    p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
    p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
    FROM #XML_Hold
    CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
    WHERE -- Ryelugu 03/05/2015 - Perf Optimize - Filtering the Buffered XML so as not to lookup at previoulsy loaded records into stage table
    p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
    AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every Event records a begin version and a commit version into Buffer ( XML ) we need the committed version
    AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records being caprutred by Replication Monitor ??
    SELECT GETDATE() GETDATE_4
    -- SELECT * FROM #TEMP
    -- SELECT COUNT(*) FROM #TEMP
    -- SELECT GETDATE()
    -- RETURN
    -- PRINT '3'
    --RETURN
    INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
    (
    [UserName]
    ,[DbName]
    ,[ObjectName]
    ,[client_app_name]
    ,[ClientHostName]
    ,[ServerName]
    ,[SQL_TEXT]
    ,[EE_Time_Stamp]
    ,[Event_Name]
    )
    SELECT
    CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
    ELSE T.nt_username
    END
    ,T.DbName
    ,T.objectname
    ,T.clientappname
    ,t.ClientHostName
    ,T.server_instance_name
    ,T.sqltext
    ,T.Time_Stamp_EE
    ,T.eventname
    FROM
    #TEMP T
    /** -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from BUFFER or on XML
    -- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
    WHERE ddl_Phase ='Commit'
    AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND ObjectName NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance Optimize
    AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
    WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu: the server name needs to be added to the XML ( events in session )
    AND MN.[DbName] = T.DbName
    AND MN.[Event_Name] = T.EventName
    AND MN.[ObjectName]= T.ObjectName
    AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
    AND MN.[SQL_TEXT] = T.SqlText -- Ryelugu 03/05/2015 - This is a comparison metric as well, but a decision is needed on
    -- the performance factor here; will take advice from Lance on whether comparing on varchar(max) is a viable idea
    )
    */
    --SELECT GETDATE()
    --PRINT '4'
    --RETURN
    SELECT
    top 100
    [EE_Time_Stamp]
    ,[ServerName]
    ,[DbName]
    ,[Event_Name]
    ,[ObjectName]
    ,[UserName]
    ,[SQL_TEXT]
    ,[client_app_name]
    ,[Created_Date]
    ,[ClientHostName]
    FROM
    [dbo].[MN_DDLSchema_Changes_log]
    ORDER BY [EE_Time_Stamp] desc
    -- select getdate()
    -- ** DELETE EVENTS after logging into Physical table
    -- NEED TO identify whether this @XML can be updated into a physical system table such that previously loaded events are left untouched
    -- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
    -- SELECT @XML
    SELECT GETDATE() GETDATE_5
    END
    GO
    Rajkumar Yelugu
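    For reference, here is a minimal sketch of the kind of Extended Events session such a collector could read; the events and actions match the ones shredded in the query above, while the session name, max_memory and startup option are assumptions to adjust for your environment:
    CREATE EVENT SESSION [DDL_Schema_Changes] ON SERVER
    ADD EVENT sqlserver.object_created
        (ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_name,
                 sqlserver.nt_username, sqlserver.server_instance_name,
                 sqlserver.server_principal_name, sqlserver.sql_text)),
    ADD EVENT sqlserver.object_altered
        (ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_name,
                 sqlserver.nt_username, sqlserver.server_instance_name,
                 sqlserver.server_principal_name, sqlserver.sql_text)),
    ADD EVENT sqlserver.object_deleted
        (ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_name,
                 sqlserver.nt_username, sqlserver.server_instance_name,
                 sqlserver.server_principal_name, sqlserver.sql_text))
    ADD TARGET package0.ring_buffer (SET max_memory = 4096) -- KB; the query above reads this ring buffer
    WITH (STARTUP_STATE = ON);
    GO
    ALTER EVENT SESSION [DDL_Schema_Changes] ON SERVER STATE = START;
    GO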

    @@Version :
    Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
        May 14 2014 18:34:29
        Copyright (c) Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    (1 row(s) affected)
    The compatibility level is set to 110.
    One of the documented limitations states: XML columns with a depth of more than 128 nested nodes.
    How do I verify this? Thanks.
    Rajkumar Yelugu
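    One way to see the 128-level limit in action is simply to build an XML value nested deeper than 128 levels and try to cast it; a minimal T-SQL sketch (the element name and depth are arbitrary):
    DECLARE @depth int = 130; -- anything above 128 should fail
    DECLARE @open  nvarchar(max) = REPLICATE(CAST(N'<a>'  AS nvarchar(max)), @depth);
    DECLARE @close nvarchar(max) = REPLICATE(CAST(N'</a>' AS nvarchar(max)), @depth);
    BEGIN TRY
        DECLARE @x xml = CAST(@open + @close AS xml);
        PRINT 'Cast succeeded - depth is within the limit';
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE(); -- expect an XML parsing error complaining about the nesting depth
    END CATCH;
    The event XML shredded by the procedure above is only a few levels deep, so in practice this limit is unlikely to affect the collector itself.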

  • Enterprise manager is not able to connect to database instance

    Hi
    I am having a problem with Oracle EM and would very much appreciate any help.
    I installed Oracle on XP, which itself runs in a Virtual PC on an XP host. Everything is fine, I can connect to the database with SQL*Plus, and there were no errors during installation.
    But the problem is that when I try to connect to the database with EM, it says:
    Enterprise Manager is not able to connect to the database instance. The state of the components is listed below.
    Can anyone help me?

    user10637311 wrote:
    If you want OEM for your DB, the existing DB should have been created by DBCA; if the DB was created manually, you have to configure it for EM with
    "emca -config dbcontrol db -repos create". In both cases the listener should be running.
    emctl start dbconsole -> starts the dbconsole service
    emctl status dbconsole -> shows the status of the service
    I am sorry, I was looking at $ORACLE_HOME\admin. In the proper path, both listener.ora and tnsnames.ora do exist.
    emca -config dbcontrol db -repos create :
    <last part>:
    INFO: This operation is being logged at C:\oracle\product\10.2.0\db_1\cfgtoollogs\emca\orcl\emca_2010-02-18_02-01-40-AM.log.
    Feb 18, 2010 2:02:46 AM oracle.sysman.emcp.util.DBControlUtil stopOMS
    INFO: Stopping Database Control (this may take a while) ...
    Feb 18, 2010 2:03:19 AM oracle.sysman.emcp.EMReposConfig createRepository
    INFO: Creating the EM repository (this may take a while) ...
    Feb 18, 2010 2:03:19 AM oracle.sysman.emcp.EMReposConfig invoke
    SEVERE: Error creating the repository
    Feb 18, 2010 2:03:19 AM oracle.sysman.emcp.EMReposConfig invoke
    INFO: Refer to the log file at C:\oracle\product\10.2.0\db_1\cfgtoollogs\emca\orcl\emca_repos_create_<date>.log for more details.
    Feb 18, 2010 2:03:19 AM oracle.sysman.emcp.EMConfig perform
    SEVERE: Error creating the repository
    Refer to the log file at C:\oracle\product\10.2.0\db_1\cfgtoollogs\emca\orcl\emca_2010-02-18_02-01-40-AM.log for more details.
    Could not complete the configuration. Refer to the log file at C:\oracle\product\10.2.0\db_1\cfgtoollogs\emca\orcl\emca_2010-02-18_02-01-40-AM.log for more details.
    C:\oracle\product\10.2.0\db_1\cfgtoollogs\emca\orcl\emca_repos_create_<date>.log:
    Check if repos user already exists.
    old 6: WHERE username=UPPER('&EM_REPOS_USER');
    new 6: WHERE username=UPPER('SYSMAN');
    old 8: IF ( '&EM_CHECK_TYPE' = 'EXISTS') THEN
    new 8: IF ( 'NOT_EXISTS' = 'EXISTS') THEN
    old 11: raise_application_error(-20000, '&EM_REPOS_USER does not exists..');
    new 11: raise_application_error(-20000, 'SYSMAN does not exists..');
    old 14: ELSIF ( '&EM_CHECK_TYPE' = 'NOT_EXISTS' ) THEN
    new 14: ELSIF ( 'NOT_EXISTS' = 'NOT_EXISTS' ) THEN
    old 17: raise_application_error(-20001, '&EM_REPOS_USER already exists..');
    new 17: raise_application_error(-20001, 'SYSMAN already exists..');
    old 21: raise_application_error(-20002, 'Invalid Check type &EM_CHECK_TYPE');
    new 21: raise_application_error(-20002, 'Invalid Check type NOT_EXISTS');
    DECLARE
    ERROR at line 1:
    ORA-20001: SYSMAN already exists..
    ORA-06512: at line 17
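    The repository creation fails because a SYSMAN schema is still present from an earlier attempt. The supported route is to drop and recreate it with emca ("emca -deconfig dbcontrol db -repos drop", then "emca -config dbcontrol db -repos create"). If the emca drop itself fails, here is a rough SQL*Plus cleanup sketch, connected as SYS; the object names are the usual 10g DB Control ones and may vary by version:
    DROP USER sysman CASCADE;
    DROP USER mgmt_view CASCADE;
    DROP ROLE mgmt_user;
    DROP PUBLIC SYNONYM mgmt_target_blackouts;
    DROP PUBLIC SYNONYM setemviewusercontext;
    After the cleanup, rerun "emca -config dbcontrol db -repos create" and then start the console with "emctl start dbconsole".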

  • Unable to retrieve nametab info for logic table BSEG during Database Export

    Hi,
    Our aim is to migrate to new hardware by doing a database export of the existing system (Unicode) and importing it on the new hardware.
    I am doing the database export on SAP 4.7 SR1, HP-UX, Oracle 9i (Unicode system), and during the export "Post Load Processing" phase I got the error shown in SAPCLUST.log:
    more SAPCLUST.log
    /sapmnt/BIA/exe/R3load: START OF LOG: 20090216174944
    /sapmnt/BIA/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#20
    $ SAP
    /sapmnt/BIA/exe/R3load: version R6.40/V1.4 [UNICODE]
    Compiled Aug 13 2007 16:20:31
    /sapmnt/BIA/exe/R3load -ctf E /nas/biaexp2/DATA/SAPCLUST.STR /nas/biaexp2/DB/DDLORA.TPL /SAPinst_DIR/SAPCLUST.TSK ORA -l /SAPinst_DIR/SAPCLUST.log
    /sapmnt/BIA/exe/R3load: job completed
    /sapmnt/BIA/exe/R3load: END OF LOG: 20090216174944
    /sapmnt/BIA/exe/R3load: START OF LOG: 20090216182102
    /sapmnt/BIA/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#20
    $ SAP
    /sapmnt/BIA/exe/R3load: version R6.40/V1.4 [UNICODE]
    Compiled Aug 13 2007 16:20:31
    /sapmnt/BIA/exe/R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DIR/SAPCLUST.log -stop_on_error
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
    (GSI) INFO: dbname   = "BIA20071101021156                                                                               
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "tinsp041                                                    
    (GSI) INFO: sysname  = "HP-UX"
    (GSI) INFO: nodename = "tinsp041"
    (GSI) INFO: release  = "B.11.11"
    (GSI) INFO: version  = "U"
    (GSI) INFO: machine  = "9000/800"
    (GSI) INFO: instno   = "0020293063"
    (EXP) TABLE: "AABLG"
    (EXP) TABLE: "CDCLS"
    (EXP) TABLE: "CLU4"
    (EXP) TABLE: "CLUTAB"
    (EXP) TABLE: "CVEP1"
    (EXP) TABLE: "CVEP2"
    (EXP) TABLE: "CVER1"
    (EXP) TABLE: "CVER2"
    (EXP) TABLE: "CVER3"
    (EXP) TABLE: "CVER4"
    (EXP) TABLE: "CVER5"
    (EXP) TABLE: "DOKCL"
    (EXP) TABLE: "DSYO1"
    (EXP) TABLE: "DSYO2"
    (EXP) TABLE: "DSYO3"
    (EXP) TABLE: "EDI30C"
    (EXP) TABLE: "EDI40"
    (EXP) TABLE: "EDIDOC"
    (EXP) TABLE: "EPIDXB"
    (EXP) TABLE: "EPIDXC"
    (EXP) TABLE: "GLS2CLUS"
    (EXP) TABLE: "IMPREDOC"
    (EXP) TABLE: "KOCLU"
    (EXP) TABLE: "PCDCLS"
    (EXP) TABLE: "REGUC"
    myCluster (55.16.Exp): 1557: inconsistent field count detected.
    myCluster (55.16.Exp): 1558: nametab says field count (TDESCR) is 305.
    myCluster (55.16.Exp): 1561: alternate nametab says field count (TDESCR) is 304.
    myCluster (55.16.Exp): 1250: unable to retrieve nametab info for logic table BSEG   
    myCluster (55.16.Exp): 8033: unable to retrieve nametab info for logic table BSEG   
    myCluster (55.16.Exp): 2624: failed to convert cluster data of cluster item.
    myCluster: RFBLG      *003**IN07**0001100000**2007*
    myCluster (55.16.Exp): 318: error during conversion of cluster item.
    myCluster (55.16.Exp): 319: affected physical table is RFBLG.
    (CNV) ERROR: data conversion failed.  rc = 2
    (RSCP) WARN: env I18N_NAMETAB_TIMESTAMPS = IGNORE
    (DB) INFO: disconnected from DB
    /sapmnt/BIA/exe/R3load: job finished with 1 error(s)
    /sapmnt/BIA/exe/R3load: END OF LOG: 20090216182145
    /sapmnt/BIA/exe/R3load: START OF LOG: 20090217115935
    /sapmnt/BIA/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#20
    $ SAP
    /sapmnt/BIA/exe/R3load: version R6.40/V1.4 [UNICODE]
    Compiled Aug 13 2007 16:20:31
    /sapmnt/BIA/exe/R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DIR/SAPCLUST.log -stop_on_error
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
    (GSI) INFO: dbname   = "BIA20071101021156                                                                               
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "tinsp041                                                    
    (GSI) INFO: sysname  = "HP-UX"
    (GSI) INFO: nodename = "tinsp041"
    (GSI) INFO: release  = "B.11.11"
    (GSI) INFO: version  = "U"
    (GSI) INFO: machine  = "9000/800"
    (GSI) INFO: instno   = "0020293063"
    myCluster (55.16.Exp): 1557: inconsistent field count detected.
    myCluster (55.16.Exp): 1558: nametab says field count (TDESCR) is 305.
    myCluster (55.16.Exp): 1561: alternate nametab says field count (TDESCR) is 304.
    myCluster (55.16.Exp): 1250: unable to retrieve nametab info for logic table BSEG   
    myCluster (55.16.Exp): 8033: unable to retrieve nametab info for logic table BSEG   
    myCluster (55.16.Exp): 2624: failed to convert cluster data of cluster item.
    myCluster: RFBLG      *003**IN07**0001100000**2007*
    myCluster (55.16.Exp): 318: error during conversion of cluster item.
    myCluster (55.16.Exp): 319: affected physical table is RFBLG.
    (CNV) ERROR: data conversion failed.  rc = 2
    (RSCP) WARN: env I18N_NAMETAB_TIMESTAMPS = IGNORE
    (DB) INFO: disconnected from DB
    /sapmnt/BIA/exe/R3load: job finished with 1 error(s)
    /sapmnt/BIA/exe/R3load: END OF LOG: 20090217115937
    The main error is "unable to retrieve nametab info for logic table BSEG".
    Your reply to this issue is highly appreciated
    Thanks
    Sunil

    Hello,
    according to this output:
    /sapmnt/BIA/exe/R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DIR/SAPCLUST.log -stop_on_error
    you are doing the export with a non-Unicode SAP codepage. The codepage has to be 4102/4103 (see note #552464 for details). There is a screen in the sapinst dialogues that allows changing the codepage; 1100 is the default in some sapinst versions.
    Best Regards,
    Michael

  • Data not storing in custom infotype built with tab strips

    Hi All,
    I have created a custom infotype 9030 with screen 2000, which has tab strips.
    I have built the screens referring to P9030-ZZTEST, but the data is not being stored in the database.
    Where am I going wrong? Please let me know what should be done in this case.
    Regards
    Satish.v

    Hi,
    Check out the link below:
    <link to blacklisted site removed by moderator>
    Shailaja Ainala.
    Edited by: Thomas Zloch on Jan 28, 2012 9:06 PM

  • ORA-01180: can not create datafile 1 during RMAN restore.

    Hello,
    I am trying to refresh one of our QA environments and I am getting this error message:
    RMAN> run
    2> {
    3> allocate channel c1 device type disk;
    4> allocate channel c2 device type disk;
    5> restore database;
    6> recover database;
    7> }
    allocated channel: c1
    channel c1: SID=5 device type=DISK
    allocated channel: c2
    channel c2: SID=131 device type=DISK
    Starting restore at 08-NOV-12
    using channel ORA_DISK_1
    creating datafile file number=1 name=+DATA1/alephpr/datafile/system.269.722874729
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 11/08/2012 16:27:40
    ORA-01180: can not create datafile 1
    ORA-01110: data file 1: '+DATA1/alephpr/datafile/system.269.722874729'
    I created a new database from scratch with the same name as in Production; later on I will rename it to the right one. I started the steps for refreshing the PRD copy:
    RMAN> shutdown immediate
    using target database control file instead of recovery catalog
    database dismounted
    Oracle instance shut down
    RMAN> startup nomount
    connected to target database (not started)
    Oracle instance started
    Total System Global Area 534462464 bytes
    Fixed Size 2228200 bytes
    Variable Size 176160792 bytes
    Database Buffers 348127232 bytes
    Redo Buffers 7946240 bytes
    RMAN> set dbid=3573460394
    executing command: SET DBID
    RMAN> restore controlfile from '/restorealeph/c-3573460394-20121106-01';
    Starting restore at 08-NOV-12
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=130 device type=DISK
    channel ORA_DISK_1: restoring control file
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
    output file name=+DATA1/alephpr/controlfile/current.260.798747585
    output file name=+FLASH/alephpr/controlfile/current.276.798747585
    Finished restore at 08-NOV-12
    RMAN> alter database mount;
    database mounted
    released channel: ORA_DISK_1
    Once the control file is restored, I need to crosscheck, delete expired and catalog the backups at the new server:
    RMAN> crosscheck backup;
    Crosschecked 48 objects
    RMAN> delete noprompt expired backup;
    Deleted 48 EXPIRED objects
    RMAN> list backup summary;
    specification does not match any backup in the repository
    I now need to catalog the backups we transferred from Prod into the QA server's directory /restorealeph:
    RMAN> catalog start with '/restorealeph/';
    searching for all files that match the pattern /restorealeph/
    List of Files Unknown to the Database
    =====================================
    File Name: /restorealeph/9cnpm51u_1_1
    File Name: /restorealeph/9enpm6lj_1_1
    File Name: /restorealeph/c-3573460394-20121107-00
    File Name: /restorealeph/c-3573460394-20121107-01
    File Name: /restorealeph/c-3573460394-20121106-01
    Do you really want to catalog the above files (enter YES or NO)? YES
    cataloging files...
    cataloging done
    List of Cataloged Files
    =======================
    File Name: /restorealeph/9cnpm51u_1_1
    File Name: /restorealeph/9enpm6lj_1_1
    File Name: /restorealeph/c-3573460394-20121107-00
    File Name: /restorealeph/c-3573460394-20121107-01
    File Name: /restorealeph/c-3573460394-20121106-01
    RMAN> list backup summary;
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
    4097    B  F  A DISK        07-NOV-12       1       1       NO         BACKUP_ALEPHPR.TUR_110712030000
    4098    B  A  A DISK        07-NOV-12       1       1       NO         BACKUP_ALEPHPR.TUR_110712030000
    RMAN> list backup tag="BACKUP_ALEPHPR.TUR_110712030000";
    List of Backup Sets
    ===================
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    4097    Full    178.04G    DISK        00:00:00     07-NOV-12
            BP Key: 4097   Status: AVAILABLE  Compressed: NO  Tag: BACKUP_ALEPHPR.TUR_110712030000
            Piece Name: /restorealeph/9cnpm51u_1_1
      List of Datafiles in backup set 4097
      File LV Type Ckp SCN    Ckp Time  Name
      1       Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/system.269.722874729
      2       Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/sysaux.266.722874731
      3       Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/undotbs1.289.722874727
      4       Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/undotbs2.257.722874727
      5       Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/users.298.722874731
      6       Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/audit.299.723372305
      7       Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/case_datos.260.723372307
      8       Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/case_indices.261.723372307
      9       Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/data_hist.262.723372309
      10      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/data_l.264.723372319
      11      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/data_long.265.723372349
      12      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/data_md.270.723372355
      13      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/data_sm.271.723372369
      14      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/data_xl.272.723372375
      15      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/index_l.273.723372401
      16      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/index_md.274.723372427
      17      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/index_sm.275.723372455
      18      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/index_xl.276.723372473
      19      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/tools.278.723372501
      26      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/data_sm.300.736088959
      27      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/dev_disco_ptm5_meta.301.746385117
      28      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/dev_disco_pstore.302.746385119
      29      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/dev_disco_ptm5_cache.304.746385121
      30      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/undotbs1.305.796391931
      31      Full 6919879655786 07-NOV-12 +DATA1/alephpr/datafile/undotbs2.306.796392185
    BS Key  Size       Device Type Elapsed Time Completion Time
    4098    16.89G     DISK        00:00:00     07-NOV-12
            BP Key: 4098   Status: AVAILABLE  Compressed: NO  Tag: BACKUP_ALEPHPR.TUR_110712030000
            Piece Name: /restorealeph/9enpm6lj_1_1
      List of Archived Logs in backup set 4098
      Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
      1    35722   6919870864350 06-NOV-12 6919871887697 06-NOV-12
      1    35723   6919871887697 06-NOV-12 6919872372211 06-NOV-12
      1    35724   6919872372211 06-NOV-12 6919872410158 06-NOV-12
      1    35725   6919872410158 06-NOV-12 6919872447301 06-NOV-12
      1    35726   6919872447301 06-NOV-12 6919872503332 06-NOV-12
      1    35727   6919872503332 06-NOV-12 6919872551564 06-NOV-12
      1    35728   6919872551564 06-NOV-12 6919872603881 06-NOV-12
      1    35729   6919872603881 06-NOV-12 6919872655942 06-NOV-12
      1    35730   6919872655942 06-NOV-12 6919872698722 06-NOV-12
      1    35731   6919872698722 06-NOV-12 6919872741655 06-NOV-12
      1    35732   6919872741655 06-NOV-12 6919872782284 06-NOV-12
      1    35733   6919872782284 06-NOV-12 6919872872302 06-NOV-12
      1    35734   6919872872302 06-NOV-12 6919872910206 06-NOV-12
      1    35735   6919872910206 06-NOV-12 6919872945577 06-NOV-12
      1    35736   6919872945577 06-NOV-12 6919872980056 06-NOV-12
      1    35737   6919872980056 06-NOV-12 6919873013411 06-NOV-12
      1    35738   6919873013411 06-NOV-12 6919873050761 06-NOV-12
      1    35739   6919873050761 06-NOV-12 6919873084996 06-NOV-12
      1    35740   6919873084996 06-NOV-12 6919873122049 06-NOV-12
      1    35741   6919873122049 06-NOV-12 6919873521767 06-NOV-12
      1    35742   6919873521767 06-NOV-12 6919873952773 06-NOV-12
      1    35743   6919873952773 06-NOV-12 6919874258549 06-NOV-12
      1    35744   6919874258549 06-NOV-12 6919874472213 06-NOV-12
      1    35745   6919874472213 06-NOV-12 6919874744856 06-NOV-12
      1    35746   6919874744856 06-NOV-12 6919875113086 06-NOV-12
      1    35747   6919875113086 06-NOV-12 6919875733337 06-NOV-12
      1    35748   6919875733337 06-NOV-12 6919876139061 06-NOV-12
      1    35749   6919876139061 06-NOV-12 6919876707162 06-NOV-12
      1    35750   6919876707162 06-NOV-12 6919877706313 06-NOV-12
      1    35751   6919877706313 06-NOV-12 6919877919039 06-NOV-12
      1    35752   6919877919039 06-NOV-12 6919878024429 06-NOV-12
      1    35753   6919878024429 06-NOV-12 6919878107673 06-NOV-12
      1    35754   6919878107673 06-NOV-12 6919878258511 06-NOV-12
      1    35755   6919878258511 06-NOV-12 6919878308336 06-NOV-12
      1    35756   6919878308336 06-NOV-12 6919878424419 06-NOV-12
      1    35757   6919878424419 06-NOV-12 6919878488485 06-NOV-12
      1    35758   6919878488485 06-NOV-12 6919878827092 06-NOV-12
      1    35759   6919878827092 06-NOV-12 6919879350098 07-NOV-12
      1    35760   6919879350098 07-NOV-12 6919879675556 07-NOV-12
      2    35949   6919870864360 06-NOV-12 6919871494640 06-NOV-12
      2    35950   6919871494640 06-NOV-12 6919871887487 06-NOV-12
      2    35951   6919871887487 06-NOV-12 6919872410655 06-NOV-12
      2    35952   6919872410655 06-NOV-12 6919872552468 06-NOV-12
      2    35953   6919872552468 06-NOV-12 6919872698940 06-NOV-12
      2    35954   6919872698940 06-NOV-12 6919872872690 06-NOV-12
      2    35955   6919872872690 06-NOV-12 6919872980371 06-NOV-12
      2    35956   6919872980371 06-NOV-12 6919873085902 06-NOV-12
      2    35957   6919873085902 06-NOV-12 6919873569082 06-NOV-12
      2    35958   6919873569082 06-NOV-12 6919873949096 06-NOV-12
      2    35959   6919873949096 06-NOV-12 6919874404640 06-NOV-12
      2    35960   6919874404640 06-NOV-12 6919875011814 06-NOV-12
      2    35961   6919875011814 06-NOV-12 6919875631429 06-NOV-12
      2    35962   6919875631429 06-NOV-12 6919876324885 06-NOV-12
      2    35963   6919876324885 06-NOV-12 6919876363526 06-NOV-12
      2    35964   6919876363526 06-NOV-12 6919876748508 06-NOV-12
      2    35965   6919876748508 06-NOV-12 6919877741784 06-NOV-12
      2    35966   6919877741784 06-NOV-12 6919878108943 06-NOV-12
      2    35967   6919878108943 06-NOV-12 6919878424477 06-NOV-12
      2    35968   6919878424477 06-NOV-12 6919879012111 06-NOV-12
      2    35969   6919879012111 06-NOV-12 6919879260589 07-NOV-12
      2    35970   6919879260589 07-NOV-12 6919879350086 07-NOV-12
      2    35971   6919879350086 07-NOV-12 6919879464935 07-NOV-12
      2    35972   6919879464935 07-NOV-12 6919879548399 07-NOV-12
      2    35973   6919879548399 07-NOV-12 6919879675564 07-NOV-12
    RMAN> list incarnation;
    List of Database Incarnations
    DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
    1       1       ALEPHPR  3573460394       PARENT  1          13-MAY-10
    2       2       ALEPHPR  3573460394       PARENT  2229467    28-JUN-10
    3       3       ALEPHPR  3573460394       CURRENT 6918261828355 26-SEP-12
    The ASM structure is created...
    ASMCMD> lsdg
    State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
    MOUNTED EXTERN N 512 4096 1048576 509282 507421 0 507421 0 N DATA1/
    MOUNTED EXTERN N 512 4096 1048576 50641 42850 0 42850 0 N FLASH/
    ASMCMD> cd DATA1/ALEPHPR/DATAFILE
    ASMCMD> pwd
    +DATA1/ALEPHPR/DATAFILE
    Both source and target databases are 11.2.0.2 PSU 6 running on Linux x64. I cannot use RMAN DUPLICATE since there is no visibility between the environments (PROD and QA in this case).
    Any idea?
    Thanks
    Martin
    Edited by: martin.morono on Nov 8, 2012 11:19 AM
    Edited by: martin.morono on Nov 8, 2012 11:49 AM

    Thanks Levi,
    I slightly modified your script to recatalog the backup pieces, since they are not stored at the same location in PR and QA.
    No luck. The error message is different but the result is the same... it keeps failing.
    RMAN> run {
    2> allocate channel c1 device type disk;
    3> allocate channel c2 device type disk;
    4> restore controlfile from '/restorealeph/c-3573460394-20121107-01';
    5> startup mount;
    6> catalog start with '/restorealeph/';
    7> restore database from tag 'BACKUP_ALEPHPR.TUR_110712030000';
    8> }
    allocated channel: c1
    channel c1: SID=191 device type=DISK
    allocated channel: c2
    channel c2: SID=131 device type=DISK
    Starting restore at 09 NOV 2012 13:11:09
    channel c2: skipped, AUTOBACKUP already found
    channel c1: restoring control file
    channel c1: restore complete, elapsed time: 00:00:15
    output file name=+DATA1/alephpr/controlfile/current.260.798747585
    output file name=+FLASH/alephpr/controlfile/current.276.798747585
    Finished restore at 09 NOV 2012 13:11:24
    database is already started
    database mounted
    Starting implicit crosscheck backup at 09 NOV 2012 13:11:31
    Crosschecked 52 objects
    Finished implicit crosscheck backup at 09 NOV 2012 13:11:39
    Starting implicit crosscheck copy at 09 NOV 2012 13:11:39
    Crosschecked 2 objects
    Finished implicit crosscheck copy at 09 NOV 2012 13:11:40
    searching for all files in the recovery area
    cataloging files...
    cataloging done
    List of Cataloged Files
    =======================
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_2_seq_1.279.795017193
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_2_seq_34950.273.795014469
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_1_seq_1.281.795017413
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_1_seq_2.283.795017519
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_1_seq_3.290.795018411
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_1_seq_4.291.795018559
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_1_seq_5.292.795018707
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_1_seq_6.293.795018811
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_1_seq_7.294.795018899
    File Name: +flash/ALEPHPR/ARCHIVELOG/2012_09_26/thread_1_seq_8.295.795020509
    File Name: +flash/ALEPHPR/CONTROLFILE/Current.268.798725123
    searching for all files that match the pattern /restorealeph/
    List of Files Unknown to the Database
    =====================================
    File Name: /restorealeph/9cnpm51u_1_1
    File Name: /restorealeph/9enpm6lj_1_1
    File Name: /restorealeph/c-3573460394-20121107-00
    File Name: /restorealeph/c-3573460394-20121107-01
    File Name: /restorealeph/c-3573460394-20121106-01
    Do you really want to catalog the above files (enter YES or NO)? YEs
    cataloging files...
    cataloging done
    List of Cataloged Files
    =======================
    File Name: /restorealeph/9cnpm51u_1_1
    File Name: /restorealeph/9enpm6lj_1_1
    File Name: /restorealeph/c-3573460394-20121107-00
    File Name: /restorealeph/c-3573460394-20121107-01
    File Name: /restorealeph/c-3573460394-20121106-01
    Starting restore at 09 NOV 2012 13:11:48
    released channel: c1
    released channel: c2
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 11/09/2012 13:11:48
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 31 found to restore
    RMAN-06023: no backup or copy of datafile 30 found to restore
    RMAN-06023: no backup or copy of datafile 29 found to restore
    RMAN-06023: no backup or copy of datafile 28 found to restore
    RMAN-06023: no backup or copy of datafile 27 found to restore
    RMAN-06023: no backup or copy of datafile 26 found to restore
    RMAN-06023: no backup or copy of datafile 19 found to restore
    RMAN-06023: no backup or copy of datafile 18 found to restore
    RMAN-06023: no backup or copy of datafile 17 found to restore
    RMAN-06023: no backup or copy of datafile 16 found to restore
    RMAN-06023: no backup or copy of datafile 15 found to restore
    RMAN-06023: no backup or copy of datafile 14 found to restore
    RMAN-06023: no backup or copy of datafile 13 found to restore
    RMAN-06023: no backup or copy of datafile 12 found to restore
    RMAN-06023: no backup or copy of datafile 11 found to restore
    RMAN-06023: no backup or copy of datafile 10 found to restore
    RMAN-06023: no backup or copy of datafile 9 found to restore
    RMAN-06023: no backup or copy of datafile 8 found to restore
    RMAN-06023: no backup or copy of datafile 7 found to restore
    RMAN-06023: no backup or copy of datafile 6 found to restore
    RMAN-06023: no backup or copy of datafile 5 found to restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    RMAN-06023: no backup or copy of datafile 3 found to restore
    RMAN-06023: no backup or copy of datafile 2 found to restore
    RMAN-06023: no backup or copy of datafile 1 found to restore
    RMAN>
    Just in case, I re-ran this script including the crosscheck backup and delete noprompt expired backup steps before restoring, and the error messages were the same.
    Thanks again for your help.
    Regards.
    Martin
    Edited by: martin.morono on Nov 9, 2012 7:21 AM
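    For diagnosing the RMAN-06023/ORA-01180 combination in a restore like this, it can help to confirm that the current incarnation in the restored controlfile actually covers the SCN of the cataloged backup; a rough SQL*Plus sketch with the database mounted (the checkpoint SCN 6919879655786 comes from the backup listing above):
    -- Which incarnation is CURRENT, and at which SCN did it start?
    SELECT incarnation#, resetlogs_change#, resetlogs_time, status
    FROM   v$database_incarnation
    ORDER  BY incarnation#;
    -- Which checkpoint SCN does the cataloged backup of datafile 1 carry?
    SELECT bs.completion_time, bdf.file#, bdf.checkpoint_change#
    FROM   v$backup_datafile bdf
    JOIN   v$backup_set bs ON bs.set_stamp = bdf.set_stamp AND bs.set_count = bdf.set_count
    WHERE  bdf.file# = 1;
    If the backup belongs to an earlier incarnation than the one marked CURRENT, RMAN ignores it and reports "no backup or copy found"; in that case a RESET DATABASE TO INCARNATION would be needed before the restore.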

  • How to configure email Alerts in OEM Cloud 12c for Database Servers up/down

    Hi everybody,
    How to configure email Alerts in OEM Cloud 12c for Database Servers up/down status?
    Regards,
    Miguel Vega

    Hi Miguel Vega,
    Information regarding the notifications:
    ==============================
    Configuring notification rules in 12c is different from earlier releases.
    The concept and function of notification rules has been replaced with a two-tier system consisting of Incident Rules and Incident Rule Sets :
    1. Incident Rules: These operate at the lowest level of granularity (on discrete events) and perform the same role as notification rules in earlier releases.
    By using incident rules, you can automate the response to incoming incidents and their updates.
    A rule contains a set of automated actions to be taken on specific events, incidents or problems.
    The actions taken are for example : sending e-mails, creating incidents, updating incidents, and creating tickets.
    2. Incident Rule Set: A rule set is a collection of rules that applies to a common set of objects, for example, targets, jobs, and templates.
    To help you to achieve the Notification Rules configuration, refer those notes :
    How To Configure Notification Rules in 12c Enterprise Manager Cloud Control? Doc ID 1368036.1
    EM12c How to Add and Configure Email Addresses to EM Administrators and Update the Notification Schedule? Doc ID 1368262.1
    EM12c How to Subscribe or Unsubscribe for Email Notification for an Incident Rule Set? Doc ID 1389460.1
    EM 12c How to Configure Notifications for Job Executions? Doc ID 1386816.1
    Best Regards,
    Venkat

  • Error during database load in ECC 5.0

    Hello All,
    I am trying to install ECC 5.0 with Oracle 10g. During the database load phase it gives an error; when I looked in the log file I found this:
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: START OF LOG: 20090611085559
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: version R6.40/V1.4
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe -ctf I E:/sap5/51031332_1/EXP1/DATA/SAP0000.STR C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/DDLORA.TPL C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/SAP0000.TSK ORA -l C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/SAP0000.log -o D
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: job completed
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: END OF LOG: 20090611085559
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: START OF LOG: 20090611103425
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: version R6.40/V1.4
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe -dbcodepage 1100 -i C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/SAP0000.cmd -l C:\Program Files\sapinst_instdir\ECC50\SYSTEM\ABAP\ORA\NUC\DB/SAP0000.log -stop_on_error
    DbSl Trace: OCI-call 'OCISessionBegin' failed: rc = 1034
    DbSl Trace: CONNECT failed with sql error '1034'
    DbSl Trace: OCI-call 'OCISessionBegin' failed: rc = 1034
    DbSl Trace: CONNECT failed with sql error '1034'
    (DB) ERROR: db_connect rc = 256
    DbSl Trace: OCI-call 'OCISessionBegin' failed: rc = 1034
    DbSl Trace: CONNECT failed with sql error '1034'
    DbSl Trace: OCI-call 'OCISessionBegin' failed: rc = 1034
    DbSl Trace: CONNECT failed with sql error '1034'
    (DB) ERROR: DbSlErrorMsg rc = 99
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    C:\usr\sap\EPT\SYS\exe\run/R3load.exe: END OF LOG: 20090611103428
    Can anybody help me sort this out?
    Any help is highly appreciated.
    Thanks & Regards
    Subhash
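    The repeated "OCISessionBegin failed: rc = 1034" (ORA-01034: ORACLE not available) means R3load could not attach to a running instance. As a hedged sketch, a quick SQL*Plus check on the database host (with ORACLE_SID set to EPT, connected as SYSDBA) before restarting the load:
    SELECT instance_name, status, database_status FROM v$instance;
    -- Expected: STATUS = 'OPEN'. If the instance is down or only MOUNTED, start or open it,
    -- e.g. STARTUP; or ALTER DATABASE OPEN;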

    First of all, thanks for helping me.
    I have tried this:
    Connected to an idle instance.
    SQL> STARTUP MOUNT
    ORACLE instance started.
    Total System Global Area  205520896 bytes
    Fixed Size                  1248116 bytes
    Variable Size             117441676 bytes
    Database Buffers           83886080 bytes
    Redo Buffers                2945024 bytes
    Database mounted.
    SQL> SET AUTORECOVERY ON
    SQL> RECOVER DATABASE
    ORA-00279: change 262364 generated at 06/11/2009 09:43:56 needed for thread 1
    ORA-00289: suggestion : C:\ORACLE\EPT\ORAARCH\EPTARCHARC00052_0689209324.001
    ORA-00280: change 262364 for thread 1 is in sequence #52
    ORA-00308: cannot open archived log
    'C:\ORACLE\EPT\ORAARCH\EPTARCHARC00052_0689209324.001'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    But I am not able to see any archive files in the oraarch folder.
    Regards
    Subhash
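    If archive log sequence 52 really no longer exists anywhere (oraarch, backups), the only way forward with this restore is incomplete recovery. A hedged SQL*Plus sketch; note that it discards all redo after the last log that could be applied, so it is only acceptable if you can re-run the load afterwards:
    STARTUP MOUNT
    RECOVER DATABASE UNTIL CANCEL;
    -- When prompted for the missing ...ARC00052... log, type CANCEL.
    ALTER DATABASE OPEN RESETLOGS;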

  • Stored procedures in PV database

    I want to create PL/SQL stored procedures in a Primavera database (Oracle 8.1.7) using the JDBC-ODBC bridge driver, which in turn uses the system DSN PrimaveraSDK_PE (installed by the standard Primavera client).
    Whenever I try creating one, I get an error message saying there is a syntax error near the word PROCEDURE.
    The same error occurs when I try creating a table.
    Following are my connection parameters:
    odbcURL = "jdbc:odbc:PrimaveraSDK_PE"
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
              con1 = DriverManager.getConnection(odbcURL, "admin", "admin");
    I can create the procedures using the Oracle OCI driver, but with that driver I do not have access to all fields, so the error says "invalid column".
    following are connection parameters:
    String driverName = "oracle.jdbc.driver.OracleDriver";
         Class.forName(driverName);
              con = DriverManager.getConnection("jdbc:oracle:oci8:@sapxi","privuser","privuser");
    Since I cannot see all the fields of the database through PL/SQL, I think this approach will not work. To see all the fields of the database I must access it using the DSN only; even the administrative login is not able to show me all the fields. This is the way they have designed it.
    So what is wrong with creating stored procedures in the Oracle database through the JDBC-ODBC bridge driver using the system DSN?
    Please help

    Thanks for your reply!!!
    Here is the code!
    import java.sql.*;

    class Try
    {
        static String odbcURL = "jdbc:odbc:PrimaveraSDK_PE";

        // Open a connection through the JDBC-ODBC bridge and the PrimaveraSDK_PE system DSN.
        static public synchronized Connection getConnection()
                throws SQLException, ClassNotFoundException
        {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            return DriverManager.getConnection(odbcURL, "admin", "admin");
        }

        public static void main(String[] args) throws Exception
        {
            try
            {
                Connection con = getConnection();

                // This SELECT works fine through the bridge driver.
                String actionQuery = "select wbs_id, wbs_concat_name, wbs_short_name, wbs_name from projwbs where wbs_id = 3986";
                PreparedStatement prepStmt = con.prepareStatement(actionQuery);
                ResultSet rs = prepStmt.executeQuery();
                int wbsID = 0;
                while (rs.next())
                {
                    wbsID = rs.getInt(1);
                    System.out.println("rs not null");
                    System.out.print(" " + wbsID);
                    System.out.print(" " + rs.getString(2));
                    System.out.print(" " + rs.getString(3));
                    System.out.print(" " + rs.getString(4));
                }
                rs.close();
                prepStmt.close();

                // Creating the stored procedure - this is the statement that fails.
                Statement stmt = con.createStatement();
                String procedure = "CREATE OR REPLACE PROCEDURE FirstProc AS BEGIN SELECT PROJ_ID FROM PROJECT; END; ";
                stmt.executeUpdate(procedure);
                System.out.println("Procedure created or updated successfully!");
                stmt.close();
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }
    and the complete stack trace!!!
    rs not null
    3986 EN EN ABCfghjkl
    java.sql.SQLException: [ATI][OpenRDA ODBC]Syntax error in SQL statement. syntax error line 1 at or after token <OR>.
         at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
         at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
         at sun.jdbc.odbc.JdbcOdbc.SQLExecDirect(Unknown Source)
         at sun.jdbc.odbc.JdbcOdbcStatement.execute(Unknown Source)
         at sun.jdbc.odbc.JdbcOdbcStatement.executeUpdate(Unknown Source)
         at pack1.Try.main(Try.java:134)
    The first two lines show the valid output for the select query, which is absolutely correct, but the CREATE PROCEDURE statement throws the errors shown above.
    One more important thing: I am not able to view all the columns of table projwbs (it is part of the Primavera SDK) through SQL*Plus even with administrative rights,
    yet I can access all the fields through the JDBC-ODBC bridge driver in Java.
    Is there any specific security or access control
    so that users are able to access certain columns through JDBC only and not through SQL*Plus?
    Please help!
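    Two separate issues seem to be mixed here. The "[OpenRDA ODBC] syntax error ... at or after token <OR>" suggests the ODBC layer itself rejects the statement rather than passing it through to Oracle, so CREATE OR REPLACE may simply not be supported over that DSN. Independently, the procedure body would not compile in Oracle either, because a bare SELECT inside PL/SQL needs an INTO clause or a cursor. A minimal corrected body (assuming PROJECT.PROJ_ID exists), which could be submitted through the OCI driver or SQL*Plus:
    CREATE OR REPLACE PROCEDURE firstproc AS
      v_proj_count NUMBER;
    BEGIN
      -- A bare SELECT is not allowed in PL/SQL; select into a variable (or open a cursor).
      SELECT COUNT(proj_id) INTO v_proj_count FROM project;
    END firstproc;
    /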
