How to improve SQL table performance

Hi,
I have a table with 5 lakh (500,000) records, and SELECT operations against it are very slow. The table is in third normal form and has a primary key and an index. Is there any other way to improve table performance?

Generally, consider indexing the ON clause and WHERE clause columns.
Also make sure the WHERE clause predicates are SARGable:
http://www.sqlusa.com/bestpractices/sargable/
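For example, a predicate that wraps the indexed column in a function is not SARGable and forces a scan, while comparing the bare column to a range allows an index seek. A minimal sketch, assuming a hypothetical dbo.Orders table with an index on OrderDate:
-- Not SARGable: the function on the column blocks an index seek
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2014;
-- SARGable rewrite: the bare column is compared to a date range
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20140101' AND OrderDate < '20150101';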
Optimization: http://www.sqlusa.com/articles/query-optimization/
Kalman Toth Database & OLAP Architect
SQL Server 2014 Database Design
New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014

Similar Messages

How to improve SQL performance/access speed by altering session parameters

Dear friends,
How can I improve SQL performance/access speed by altering the session parameters, without altering indexes or the SQL expression?
Regards

    One can try:
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3696883368520
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5180609822543
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/memory.htm#sthref497
    But make sure you understand the caveats you can run into!
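For instance, optimizer and sort settings can be changed for just the current session, without touching indexes or SQL. A minimal sketch (parameter values are purely illustrative, and the caveats in the links above apply):
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 10485760; -- 10 MB; only honored under the MANUAL policy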
    It would be better to post the outcome of select * from v$version; first.
An execution plan would also be nice to see.
    See:
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting

How to integrate a SQL table with the cache

    Hi,
In a normal ASP.NET application we use
SqlCacheDependency to integrate SQL Server with the cache in ASP.NET, so that any change in a SQL table row replaces the cached data with the latest data.
How can the same be achieved with the Azure cache?
We need to integrate SQL Server with the Azure cache so that any change in a SQL table row replaces the cached data with the latest data.

    Hi,
Caching in Azure is not fundamentally different from ASP.NET; please see
http://msdn.microsoft.com/en-us/library/windowsazure/gg278356.aspx for more details. Azure provides multiple types of persistent storage that can be leveraged for caching (Azure SQL Database, Azure Table Storage, Azure Blob Storage, etc.). I would suggest
you read this article (http://www.dnnsoftware.com/blog/cid/425642/Understanding-Windows-Azure-Caching-for-building-high-performance-Websites),
because once we know where the cached data lives, we can sync up the data as expected.
    Best Regards

  • How to access SQL tables from WebDynPro ABAP application ?

    Hi,
I am trying a scenario where I need to send a user ID to a SQL Server table (update/modify/delete) from a Web Dynpro ABAP application.
Basically, I am trying to find out:
a) How to open a SQL connection from ABAP code within a Web Dynpro ABAP application
b) What the ways to do it are (by code or any other API/mechanism)
I would appreciate it if anybody knows this.
    Thanks
    Praveen

    Hi,
The EXEC CONNECT ... statement is usually used in procedural ABAP code; for this you can refer to the ABAPDOCU.
I don't have any sample code on the classes I listed; try checking them for their parameters and methods.
In WD for Java, we have those connection classes to connect to any database server.
Or try to create an RFC with DESTINATION for this.
Please check out this link:
    Pull data from another r3 server using abap dynpro
    Regards
    Lekha

How to export SQL table data to Excel/PDF using a stored procedure?

Hi,
I have one table named "Customer" in SQL Server 2008 R2, and that table holds 1 lakh (100,000) rows. Now I want to send this table's data, with its columns, to Excel/PDF using a stored procedure.
I have tried this using xp_cmdshell, but it is not working for me.
Ultimately I want the data in both Excel and PDF.
Please guide me.

    You can actually run an Excel Macro to grab the data from the SQL Server DB.  Below are some samples.
    Sub ADOExcelSQLServer()
    ' Carl SQL Server Connection
    ' FOR THIS CODE TO WORK
    ' In VBE you need to go Tools References and check Microsoft Active X Data Objects 2.x library
    Dim Cn As ADODB.Connection
    Dim Server_Name As String
    Dim Database_Name As String
    Dim User_ID As String
    Dim Password As String
    Dim SQLStr As String
    Dim rs As ADODB.Recordset
    Set rs = New ADODB.Recordset
    Server_Name = "Excel-PC\SQLEXPRESS" ' Enter your server name here
    Database_Name = "Northwind" ' Enter your database name here
    User_ID = "" ' enter your user ID here
    Password = "" ' Enter your password here
    SQLStr = "SELECT * FROM dbo.Orders" ' Enter your SQL here
    Set Cn = New ADODB.Connection
    Cn.Open "Driver={SQL Server};Server=" & Server_Name & ";Database=" & Database_Name & _
    ";Uid=" & User_ID & ";Pwd=" & Password & ";"
    rs.Open SQLStr, Cn, adOpenStatic
    ' Dump to spreadsheet
    With Worksheets("sheet1").Range("a1:z500") ' Enter your sheet name and range here
    .ClearContents
    .CopyFromRecordset rs
    End With
    ' Tidy up
    rs.Close
    Set rs = Nothing
    Cn.Close
    Set Cn = Nothing
    End Sub
Sub ADOExcelSQLServer2() ' Second sample; renamed so both subs can coexist in one module
    Dim Cn As ADODB.Connection
    Dim Server_Name As String
    Dim Database_Name As String
    Dim User_ID As String
    Dim Password As String
    Dim SQLStr As String
    Dim rs As ADODB.Recordset
    Set rs = New ADODB.Recordset
    Server_Name = "LAPTOP\SQL_EXPRESS" ' Enter your server name here
    Database_Name = "Northwind" ' Enter your database name here
    User_ID = "" ' enter your user ID here
    Password = "" ' Enter your password here
    SQLStr = "SELECT * FROM Orders" ' Enter your SQL here
    Set Cn = New ADODB.Connection
    Cn.Open "Driver={SQL Server};Server=" & Server_Name & ";Database=" & Database_Name & _
    ";Uid=" & User_ID & ";Pwd=" & Password & ";"
    rs.Open SQLStr, Cn, adOpenStatic
    With Worksheets("Sheet1").Range("A2:Z500")
    .ClearContents
    .CopyFromRecordset rs
    End With
    rs.Close
    Set rs = Nothing
    Cn.Close
    Set Cn = Nothing
    End Sub
    Sub TestMacro()
    ' Create a connection object.
    Dim cnPubs As ADODB.Connection
    Set cnPubs = New ADODB.Connection
    ' Provide the connection string.
    Dim strConn As String
    'Use the SQL Server OLE DB Provider.
    strConn = "PROVIDER=SQLOLEDB;"
'Connect to the Northwind database on the local server.
    strConn = strConn & "DATA SOURCE=(local);INITIAL CATALOG=NORTHWIND.MDF;"
    'Use an integrated login.
    strConn = strConn & " INTEGRATED SECURITY=sspi;"
    'Now open the connection.
    cnPubs.Open strConn
    ' Create a recordset object.
    Dim rsPubs As ADODB.Recordset
    Set rsPubs = New ADODB.Recordset
    With rsPubs
    ' Assign the Connection object.
    .ActiveConnection = cnPubs
    ' Extract the required records.
    .Open "SELECT * FROM Categories"
    ' Copy the records into cell A1 on Sheet1.
    Sheet1.Range("A1").CopyFromRecordset rsPubs
    ' Tidy up
    .Close
    End With
    cnPubs.Close
    Set rsPubs = Nothing
    Set cnPubs = Nothing
    End Sub
    Finally, you can run a SProc in SQL Server, from Excel, if you want the SProc to do the export.
    Option Explicit
    Sub Working2()
    'USE [Northwind]
    'GO
    'DECLARE @return_value int
    'EXEC @return_value = [dbo].[TestNewProc]
    ' @ShipCountry = NULL
    'SELECT 'Return Value' = @return_value
    'GO
    Dim con As Connection
    Dim rst As Recordset
    Dim strConn As String
    Set con = New Connection
    strConn = "Provider=SQLOLEDB;"
    strConn = strConn & "Data Source=LAPTOP\SQL_EXPRESS;"
    strConn = strConn & "Initial Catalog=Northwind;"
    strConn = strConn & "Integrated Security=SSPI;"
    con.Open strConn
    'Put a country name in Cell E1
    Set rst = con.Execute("Exec dbo.TestNewProc '" & ActiveSheet.Range("E1").Text & "'")
    'The total count of records is returned to Cell A5
    ActiveSheet.Range("A5").CopyFromRecordset rst
    rst.Close
    con.Close
    End Sub
    Sub Working()
    'USE [Northwind]
    'GO
    'DECLARE @return_value int
    'EXEC @return_value = [dbo].[Ten Most Expensive Products]
    'SELECT 'Return Value' = @return_value
    'GO
    Dim con As Connection
    Dim rst As Recordset
    Set con = New Connection
    con.Open "Provider=SQLOLEDB;Data Source=LAPTOP\SQL_EXPRESS;Initial Catalog=Northwind;Integrated Security=SSPI;"
    Set rst = con.Execute("Exec dbo.[Ten Most Expensive Products]")
    'Results of SProc are returned to Cell A1
    ActiveSheet.Range("A1").CopyFromRecordset rst
    rst.Close
    con.Close
    End Sub
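Alternatively, if the export must be driven from the database side, the stored procedure can shell out to bcp. A minimal sketch, assuming xp_cmdshell has been enabled, with an illustrative path and the instance name from the samples above (bcp produces delimited text that Excel opens; a native .xlsx or PDF still needs a client-side tool such as the macros above):
EXEC master..xp_cmdshell
'bcp "SELECT * FROM Northwind.dbo.Orders" queryout "C:\Temp\Orders.csv" -c -t, -T -S LAPTOP\SQL_EXPRESS';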
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

  • How to improve SQL Query

Hi, I have to search for records in a table that consists of more than 75 million rows. I have tried indexing the table.
My problem is that the query executes very slowly. How can I increase performance so that I get the query result quickly?
    here is my explain plan :
    PLAN_TABLE_OUTPUT
    Plan hash value: 2042796762
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 12 | 45118 (10)| 00:09:02 |
    |* 1 | TABLE ACCESS FULL| EID_DATA | 1 | 12 | 45118 (10)| 00:09:02 |
    Predicate Information (identified by operation id):
    1 - filter(RAWTOHEX("UID_DATA")='80297CFA353B0488')
    13 rows selected
    Thank you..

    Hi,
if you have an index on UID_DATA, the query cannot benefit from it because the column is wrapped in the RAWTOHEX function. Creating a function-based index on RAWTOHEX(UID_DATA) should help. If this function call wasn't added to the query explicitly, then you should work out why it showed up there and how to get rid of it.
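A minimal sketch of such a function-based index (the index name is illustrative):
CREATE INDEX eid_data_uid_hex_idx ON eid_data (RAWTOHEX(uid_data));
-- Refresh optimizer statistics so the new index can be costed:
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EID_DATA', cascade => TRUE);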
    Next time you post a similar problem, please don't forget to include the query's text.
    Best regards,
    Nikolay

How to improve SQL Server query response time

    I have a table that contains 60 million records with the following structure
    1. SEQ_ID (Bigint),
    2. SRM_CLIENT_ENTITIES_SEQ_ID (Bigint),
    3. CUS_ENTITY_DATA_SEQ_ID (Bigint),
    4. SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID (Bigint),
    5. ATTRIBUTE_DATETIME (DateTime),
    6. ATTRIBUTE_DECIMAL (Decimal(18,2)),
    7. ATTRIBUTE_STRING (nvarchar(255)),
    8. ATTRIBUTE_BOOLEAN (Char(1)),
    9. SRM_CLIENTS_SEQ_ID (Bigint)
    Clustered index with key SEQ_ID
Non-unique non-clustered indexes: I have the following four composite indexes:
a. SRM_CLIENTS_SEQ_ID, SRM_CLIENT_ENTITIES_SEQ_ID, SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_DATETIME
b. SRM_CLIENTS_SEQ_ID, SRM_CLIENT_ENTITIES_SEQ_ID, SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_DECIMAL
c. SRM_CLIENTS_SEQ_ID, SRM_CLIENT_ENTITIES_SEQ_ID, SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_STRING
d. SRM_CLIENTS_SEQ_ID, SRM_CLIENT_ENTITIES_SEQ_ID, SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_BOOLEAN
The problem is that when I execute a simple query over this table, it does not return the results in an acceptable time.
    Query:
    SELECT CUS_ENTITY_DATA_SEQ_ID FROM dbo.CUS_PIVOT_NON_UNIQUE_INDEXES WHERE SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID = 51986 AND ATTRIBUTE_DECIMAL = 4150196
    Execution Time : 2 seconds
    Thanks

Did you look at the execution plan?
The query may not use any of the indexes: the clustered index is on SEQ_ID, and none of the non-clustered indexes start with SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID
or ATTRIBUTE_DECIMAL.
The order of the columns in an index matters. Just for testing (if it is not a production environment), create an NCI on SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID
and ATTRIBUTE_DECIMAL and check.
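A minimal sketch of that test index (the name is illustrative; the INCLUDE column makes the index covering for the query above, avoiding key lookups):
CREATE NONCLUSTERED INDEX IX_AttrSeq_Decimal
ON dbo.CUS_PIVOT_NON_UNIQUE_INDEXES (SRM_CLIENT_ENTITY_ATTRIBUTES_SEQ_ID, ATTRIBUTE_DECIMAL)
INCLUDE (CUS_ENTITY_DATA_SEQ_ID);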
    Please use Marked as Answer if my post solved your problem and use
    Vote As Helpful if a post was useful.

  • How to improve Query performance on large table in MS SQL Server 2008 R2

I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning the table into filegroups, or splitting the table into multiple smaller tables?

    Hi bala197164,
First, I want to point out that both partitioning the table across filegroups and splitting the table into multiple smaller tables can improve query performance, and each fits a different situation.
For example, suppose a table has one hundred columns and some columns are not directly related to the table's subject: say a userinfo table that stores user information but also has address_street, address_zip, and address_province columns. We can create a new table named Address and add a foreign key in the userinfo table referencing it. In this situation, splitting the large table into smaller, individual tables means queries that access only a fraction of the data can run faster because there is less data to scan.
The other situation is when the table's records can be grouped easily, for example by a year column that stores the product release date; here we can partition the table across filegroups to improve query performance.
Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more detail, please refer to the following documents:
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
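For instance, range partitioning by year might look like the following minimal sketch (function, scheme, table, and filegroup names are all illustrative, and the filegroups must already exist in the database):
CREATE PARTITION FUNCTION pf_ByYear (int)
AS RANGE RIGHT FOR VALUES (2011, 2012, 2013);
CREATE PARTITION SCHEME ps_ByYear
AS PARTITION pf_ByYear TO (fg_old, fg_2011, fg_2012, fg_2013);
CREATE TABLE dbo.ProductRelease
(
    ProductId   int NOT NULL,
    ReleaseYear int NOT NULL
) ON ps_ByYear (ReleaseYear);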
    Allen Li
    TechNet Community Support

  • How to use PL/SQL table

    Hi all,
can you suggest how I can use PL/SQL tables in the query below to increase performance?
    DECLARE
        TYPE cur_typ IS REF CURSOR;
        c           cur_typ;
        total_val varchar2(1000);
        sql_stmt varchar2(1000);
        freeform_name NUMBER;
        freeform_id NUMBER;
        imgname_rec EMC_FTW_PREVA.EMC_Image_C_Mungo%rowtype;
        imgval_rec  EMC_FTW_PREVA.EMC_Content_C_Mungo%rowtype;
        CURSOR imgname_cur IS
            select * from EMC_FTW_PREVA.EMC_Image_C_Mungo
            where cs_ownerid in (
                        select id from EMC_FTW_PREVA.EMC_Image_C
                        where updateddate > '01-JUN-13'
                        and path is not null
                        and createddate != updateddate)
            and cs_attrid = (select id from EMC_FTW_PREVA.EMC_ATTRIBUTE where name = 'Image_Upload');
    BEGIN
        OPEN imgname_cur;
        LOOP
          FETCH imgname_cur INTO imgname_rec;
          EXIT WHEN imgname_cur%NOTFOUND;
          total_val := 'EMC_Image_C_' || imgname_rec.cs_ownerid;
          sql_stmt := 'SELECT instr(textvalue,''' || total_val || '''), cs_ownerid FROM EMC_FTW_PREVA.EMC_Content_C_Mungo a Where cs_attrid = (select id from EMC_FTW_PREVA.EMC_ATTRIBUTE where name = ' || '''' || 'Body_freeform' || '''' || ')';
            OPEN c FOR sql_stmt;
            LOOP
              FETCH c INTO freeform_id,freeform_name;
              EXIT WHEN c%NOTFOUND;
                                      IF freeform_id > 0 THEN
                dbms_output.put_line (imgname_rec.cs_ownerid || ',' || total_val || ',' || freeform_id || ',' || freeform_name);
                                      END IF;
            END LOOP;
            CLOSE c;     
       END LOOP;
       CLOSE imgname_cur;
    END;
    Thanks in Advance.

can you suggest how I can use PL/SQL tables in the query below to increase performance?
There would be absolutely no point at all in improving the performance of code that has NO benefit.
The only result of executing that code is to possibly produce some lines of output AFTER the entire procedure is finished:
    dbms_output.put_line (imgname_rec.cs_ownerid || ',' || total_val || ',' || freeform_id || ',' || freeform_name);
So first you need to explain:
1. What PROBLEM are you trying to solve?
2. Why are you trying to use PL/SQL code to solve it?
3. Why are you using 'slow-by-slow' (row-by-row) processing and then, for each row, opening a new cursor to query more data?
You should be using a single query rather than two nested cursors. But that begs the question of what the code is even supposed to be doing, since the only output goes to a memory buffer.
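If the goal really is to list, for each image, the content rows whose textvalue mentions it, then a single statement can replace both cursors. A minimal sketch under that assumption, using the names from the posted code:
SELECT img.cs_ownerid,
       'EMC_Image_C_' || img.cs_ownerid AS total_val,
       INSTR(con.textvalue, 'EMC_Image_C_' || img.cs_ownerid) AS freeform_id,
       con.cs_ownerid AS freeform_name
FROM   EMC_FTW_PREVA.EMC_Image_C_Mungo img
JOIN   EMC_FTW_PREVA.EMC_Content_C_Mungo con
  ON   INSTR(con.textvalue, 'EMC_Image_C_' || img.cs_ownerid) > 0
WHERE  img.cs_attrid = (SELECT id FROM EMC_FTW_PREVA.EMC_ATTRIBUTE WHERE name = 'Image_Upload')
AND    con.cs_attrid = (SELECT id FROM EMC_FTW_PREVA.EMC_ATTRIBUTE WHERE name = 'Body_freeform')
AND    img.cs_ownerid IN (SELECT id FROM EMC_FTW_PREVA.EMC_Image_C
                          WHERE  updateddate > DATE '2013-06-01'
                          AND    path IS NOT NULL
                          AND    createddate != updateddate);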

  • How to improve speed of queries that use ORM one table per concrete class

    Hi,
Many tools that do ORM (Object Relational Mapping), like Castor, Hibernate, TopLink, and JPOX, have a one-table-per-concrete-class feature that maps objects to the following structure:
CREATE TABLE ABSTRACTPRODUCT (
    ID VARCHAR(8) NOT NULL,
    DESCRIPTION VARCHAR(60) NOT NULL,
    PRIMARY KEY(ID)
);
CREATE TABLE PRODUCT (
    ID VARCHAR(8) NOT NULL REFERENCES ABSTRACTPRODUCT(ID),
    CODE VARCHAR(10) NOT NULL,
    PRICE DECIMAL(12,2),
    PRIMARY KEY(ID)
);
CREATE UNIQUE INDEX iProduct ON Product(code);
CREATE TABLE BOOK (
    ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
    AUTHOR VARCHAR(60) NOT NULL,
    PRIMARY KEY (ID)
);
CREATE TABLE COMPACTDISK (
    ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
    ARTIST VARCHAR(60) NOT NULL,
    PRIMARY KEY(ID)
);
Is there a way to improve queries like
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        pd.code like '101%'
    or like this:
    SELECT
        pd.code CODE,   
        abpd.description DESCRIPTION,
        DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
    FROM
        ABSTRACTPRODUCT abpd,
        PRODUCT pd,
        BOOK bk,
        COMPACTDISK cd
    WHERE
        pd.id = abpd.id AND
        bk.id(+) = abpd.id AND
        cd.id(+) = abpd.id AND
        abpd.description like '%STARS%' AND
        pd.price BETWEEN 1 AND 10
Think of a table with many rows: is there something inside MaxDB to improve this type of query? Some annotations in SQL? Or a way to declare tables that extend another by PK? On other databases I managed this using materialized views, but I think this could be faster just using the PK; am I wrong? Is the better option to consolidate all the tables into one table? What is the impact on database size of this consolidation?
Note: with consolidation I will lose the NOT NULL constraints on the database side.
    thanks for any insight.
    Clóvis

Hi Lars,
I don't understand why the optimizer picks that index for TM in the execution plan rather than using the join via the KEY column; note the WHERE clause is "TM.OID = MF.MY_TIPO_MOVIMENTO", on the key column, yet the optimizer uses an index whose indexed column is ID_SYS, which isn't and can't be a primary key because it is not UNIQUE. The index columns follow:
    indexes of TipoMovimento
INDEXNAME        COLUMNNAME        SORT  COLUMNNO  DATATYPE  LEN  INDEX_USED  FILESTATE  DISABLED
ITIPOMOVIMENTO   TIPO              ASC   1         VARCHAR   2    220546      OK         NO
ITIPOMOVIMENTO   ID_SYS            ASC   2         CHAR      6    220546      OK         NO
ITIPOMOVIMENTO   MY_CONTA_DEBITO   ASC   3         CHAR      8    220546      OK         NO
ITIPOMOVIMENTO   MY_CONTA_CREDITO  ASC   4         CHAR      8    220546      OK         NO
ITIPOMOVIMENTO1  ID_SYS            ASC   1         CHAR      6    567358      OK         NO
ITIPOMOVIMENTO2  DESCRICAO         ASC   1         VARCHAR   60   94692       OK         NO
After I created the index iTituloCobrancaX7 on TituloCobranca(OID, DATA_VENCIMENTO) in a backup instance, I was surprised by the following explain plan:
OWNER  TABLENAME       COLUMN_OR_INDEX   STRATEGY                          PAGECOUNT
       TC              ITITULOCOBRANCA1  RANGE CONDITION FOR INDEX         5368
                       DATA_VENCIMENTO   (USED INDEX COLUMN)
       MF              OID               JOIN VIA KEY COLUMN               9427
       TM              OID               JOIN VIA KEY COLUMN               22
                                         TABLE HASHED
       PS              OID               JOIN VIA KEY COLUMN               1350
       BOL             OID               JOIN VIA KEY COLUMN               497
                                         NO TEMPORARY RESULTS CREATED
       JDBC_CURSOR_19                    RESULT IS COPIED, COSTVALUE IS 988
Note that now the optimizer picks the index ITITULOCOBRANCA1 as I expected. If I drop the new index iTituloCobrancaX7, the optimizer still produces this execution plan, and the query executes in 110 ms. With that great news I did the same thing on the production system, but there the execution plan doesn't change, and I still get a long execution time, this time 413516 ms. Maybe the problem is how the optimizer measures my tables.
I checked in DBAnalyser that the problem is the catalog cache hit rate (we discussed this at [catalog cache hit rate, how to increase?|;
) and the low selectivity of this SQL command. To achieve better selectivity I would need an index on MF.MY_SACADO, MF.TIPO and TC.DATA_VENCIMENTO, as explained in previous posts; since this type of index isn't possible inside MaxDB, I have no way to speed up this type of query without changing the table structure.
Can the MaxDB developers build this type of index, or is there no plan for such a feature?
If not, I must create another schema and consolidate tables to speed up the queries in my system, but with this consolidation I will get more overhead. I must solve the low selectivity, because I think that as the data in the tables grows, the query will become impossible. I see that CREATE INDEX supports FUNCTION; maybe a FUNCTION that joins data from two tables could solve this?
The instance configuration is:
    Machine:
    Version:       '64BIT Kernel'
    Version:       'X64/LIX86 7.6.03   Build 007-123-157-515'
    Version:       'FAST'
    Machine:       'x86_64'
    Processors:    2 ( logical: 8, cores: 8 )
    data volumes:
    ID     MODE     CONFIGUREDSIZE     USABLESIZE     USEDSIZE     USEDSIZEPERCENTAGE     DROPVOLUME     TOTALCLUSTERAREASIZE     RESERVEDCLUSTERAREASIZE     USEDCLUSTERAREASIZE     PATH     
    1     NORMAL     4194304          4194288          379464          9               NO          0               0               0               /db/SPDT/data/data01.dat     
    2     NORMAL     4194304          4194288          380432          9               NO          0               0               0               /db/SPDT/data/data02.dat     
    3     NORMAL     4194304          4194288          379184          9               NO          0               0               0               /db/SPDT/data/data03.dat     
    4     NORMAL     4194304          4194288          379624          9               NO          0               0               0               /db/SPDT/data/data04.dat     
    5     NORMAL     4194304          4194288          380024          9               NO          0               0               0               /db/SPDT/data/data05.dat
    log volumes:
    ID     CONFIGUREDSIZE     USABLESIZE     PATH               MIRRORPATH
    1     51200          51176          /db/SPDT/log/log01.dat     ?
    parameters:
    KERNELVERSION                         KERNEL    7.6.03   BUILD 007-123-157-515
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    _SERVERDB_FOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      ISO
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         2
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   /db/SPDT/log/log01.dat
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   6400
    DATA_VOLUME_NAME_0005                 /db/SPDT/data/data05.dat
    DATA_VOLUME_NAME_0004                 /db/SPDT/data/data04.dat
    DATA_VOLUME_NAME_0003                 /db/SPDT/data/data03.dat
    DATA_VOLUME_NAME_0002                 /db/SPDT/data/data02.dat
    DATA_VOLUME_NAME_0001                 /db/SPDT/data/data01.dat
    DATA_VOLUME_TYPE_0005                 F
    DATA_VOLUME_TYPE_0004                 F
    DATA_VOLUME_TYPE_0003                 F
    DATA_VOLUME_TYPE_0002                 F
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0005                 524288
    DATA_VOLUME_SIZE_0004                 524288
    DATA_VOLUME_SIZE_0003                 524288
    DATA_VOLUME_SIZE_0002                 524288
    DATA_VOLUME_SIZE_0001                 524288
    DATA_VOLUME_MODE_0005                 NORMAL
    DATA_VOLUME_MODE_0004                 NORMAL
    DATA_VOLUME_MODE_0003                 NORMAL
    DATA_VOLUME_MODE_0002                 NORMAL
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    LOG_MIRRORED                          NO
    MAXVOLUMES                            14
    LOG_IO_BLOCK_COUNT                    8
    DATA_IO_BLOCK_COUNT                   64
    BACKUP_BLOCK_CNT                      64
    _DELAY_LOGWRITER                      0
    LOG_IO_QUEUE                          50
    _RESTART_TIME                         600
    MAXCPU                                8
    MAX_LOG_QUEUE_COUNT                   0
    USED_MAX_LOG_QUEUE_COUNT              8
    LOG_QUEUE_COUNT                       1
    MAXUSERTASKS                          500
    _TRANS_RGNS                           8
    _TAB_RGNS                             8
    _OMS_REGIONS                          0
    _OMS_RGNS                             7
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        8
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    _ROW_RGNS                             8
    RESERVEDSERVERTASKS                   16
    MINSERVERTASKS                        28
    MAXSERVERTASKS                        28
    _MAXGARBAGE_COLL                      1
    _MAXTRANS                             4008
    MAXLOCKS                              120080
    _LOCK_SUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       180
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    _IOPROCS_PER_DEV                      2
    _IOPROCS_FOR_PRIO                     0
    _IOPROCS_FOR_READER                   0
    _USE_IOPROCS_ONLY                     NO
    _IOPROCS_SWITCH                       2
    LRU_FOR_SCAN                          NO
    _PAGE_SIZE                            8192
    _PACKET_SIZE                          131072
    _MINREPLY_SIZE                        4096
    _MBLOCK_DATA_SIZE                     32768
    _MBLOCK_QUAL_SIZE                     32768
    _MBLOCK_STACK_SIZE                    32768
    _MBLOCK_STRAT_SIZE                    16384
    _WORKSTACK_SIZE                       8192
    _WORKDATA_SIZE                        8192
    _CAT_CACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      131072
    INIT_ALLOCATORSIZE                    262144
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    _TASKCLUSTER_01                       tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
    _TASKCLUSTER_02                       ti,100*dw;63*us;
    _TASKCLUSTER_03                       equalize
    _DYN_TASK_STACK                       NO
    _MP_RGN_QUEUE                         YES
    _MP_RGN_DIRTY_READ                    DEFAULT
    _MP_RGN_BUSY_WAIT                     DEFAULT
    _MP_DISP_LOOPS                        2
    _MP_DISP_PRIO                         DEFAULT
    MP_RGN_LOOP                           -1
    _MP_RGN_PRIO                          DEFAULT
    MAXRGN_REQUEST                        -1
    _PRIO_BASE_U2U                        100
    _PRIO_BASE_IOC                        80
    _PRIO_BASE_RAV                        80
    _PRIO_BASE_REX                        40
    _PRIO_BASE_COM                        10
    _PRIO_FACTOR                          80
    _DELAY_COMMIT                         NO
    _MAXTASK_STACK                        512
    MAX_SERVERTASK_STACK                  500
    MAX_SPECIALTASK_STACK                 500
    _DW_IO_AREA_SIZE                      50
    _DW_IO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    _FBM_LOW_IO_RATE                      10
    CACHE_SIZE                            262144
    _DW_LRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    _DATA_CACHE_RGNS                      64
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              64
    SEQUENCE_CACHE                        1
    _IDXFILE_LIST_SIZE                    2048
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    _READAHEAD_BLOBS                      32
    CLUSTER_WRITE_THRESHOLD               80
    CLUSTERED_LOBS                        NO
    RUNDIRECTORY                          /var/opt/sdb/data/wrk/SPDT
    OPMSG1                                /dev/console
    OPMSG2                                /dev/null
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        2
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        20
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       5369
    EXTERNAL_DUMP_REQUEST                 NO
    _AK_DUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    _UTILITY_PROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    _BACKUP_HISTFILE                      dbm.knl
    _BACKUP_MED_DEF                       dbm.mdf
    _MAX_MESSAGE_FILES                    0
    _SHMKERNEL                            44601
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-03 23:12:55
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     /var/opt/sdb/data/wrk/SPDT/DIAGHISTORY
    _DIAG_SEM                             1
    SHOW_MAX_STACK_USE                    NO
    SHOW_MAX_KB_STACK_USE                 NO
    LOG_SEGMENT_SIZE                      2133
    _COMMENT                              
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    OFFICIAL_NODE                         
    UKT_CPU_RELATIONSHIP                  NONE
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    30
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       YES
    USE_OPEN_DIRECT_FOR_BACKUP            NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    JOIN_TABLEBUFFER                      128
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             YES
    SHAREDSQL_CLEANUPTHRESHOLD            25
    SHAREDSQL_COMMANDCACHESIZE            262144
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    FORBID_LOAD_BALANCING                 YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    ENABLE_CHECK_INSTANCE                 YES
    RTE_TEST_REGIONS                      0
    HASHED_RESULTSET                      YES
    HASHED_RESULTSET_CACHESIZE            262144
    CHECK_HASHED_RESULTSET                0
    AUTO_RECREATE_BAD_INDEXES             NO
    AUTHENTICATION_ALLOW                  
    AUTHENTICATION_DENY                   
    TRACE_AK                              NO
    TRACE_DEFAULT                         NO
    TRACE_DELETE                          NO
    TRACE_INDEX                           NO
    TRACE_INSERT                          NO
    TRACE_LOCK                            NO
    TRACE_LONG                            NO
    TRACE_OBJECT                          NO
    TRACE_OBJECT_ADD                      NO
    TRACE_OBJECT_ALTER                    NO
    TRACE_OBJECT_FREE                     NO
    TRACE_OBJECT_GET                      NO
    TRACE_OPTIMIZE                        NO
    TRACE_ORDER                           NO
    TRACE_ORDER_STANDARD                  NO
    TRACE_PAGES                           NO
    TRACE_PRIMARY_TREE                    NO
    TRACE_SELECT                          NO
    TRACE_TIME                            NO
    TRACE_UPDATE                          NO
    TRACE_STOP_ERRORCODE                  0
    TRACE_ALLOCATOR                       0
    TRACE_CATALOG                         0
    TRACE_CLIENTKERNELCOM                 0
    TRACE_COMMON                          0
    TRACE_COMMUNICATION                   0
    TRACE_CONVERTER                       0
    TRACE_DATACHAIN                       0
    TRACE_DATACACHE                       0
    TRACE_DATAPAM                         0
    TRACE_DATATREE                        0
    TRACE_DATAINDEX                       0
    TRACE_DBPROC                          0
    TRACE_FBM                             0
    TRACE_FILEDIR                         0
    TRACE_FRAMECTRL                       0
    TRACE_IOMAN                           0
    TRACE_IPC                             0
    TRACE_JOIN                            0
    TRACE_KSQL                            0
    TRACE_LOGACTION                       0
    TRACE_LOGHISTORY                      0
    TRACE_LOGPAGE                         0
    TRACE_LOGTRANS                        0
    TRACE_LOGVOLUME                       0
    TRACE_MEMORY                          0
    TRACE_MESSAGES                        0
    TRACE_OBJECTCONTAINER                 0
    TRACE_OMS_CONTAINERDIR                0
    TRACE_OMS_CONTEXT                     0
    TRACE_OMS_ERROR                       0
    TRACE_OMS_FLUSHCACHE                  0
    TRACE_OMS_INTERFACE                   0
    TRACE_OMS_KEY                         0
    TRACE_OMS_KEYRANGE                    0
    TRACE_OMS_LOCK                        0
    TRACE_OMS_MEMORY                      0
    TRACE_OMS_NEWOBJ                      0
    TRACE_OMS_SESSION                     0
    TRACE_OMS_STREAM                      0
    TRACE_OMS_VAROBJECT                   0
    TRACE_OMS_VERSION                     0
    TRACE_PAGER                           0
    TRACE_RUNTIME                         0
    TRACE_SHAREDSQL                       0
    TRACE_SQLMANAGER                      0
    TRACE_SRVTASKS                        0
    TRACE_SYNCHRONISATION                 0
    TRACE_SYSVIEW                         0
    TRACE_TABLE                           0
    TRACE_VOLUME                          0
    CHECK_BACKUP                          NO
    CHECK_DATACACHE                       NO
    CHECK_KB_REGIONS                      NO
    CHECK_LOCK                            NO
    CHECK_LOCK_SUPPLY                     NO
    CHECK_REGIONS                         NO
    CHECK_TASK_SPECIFIC_CATALOGCACHE      NO
    CHECK_TRANSLIST                       NO
    CHECK_TREE                            NO
    CHECK_TREE_LOCKS                      NO
    CHECK_COMMON                          0
    CHECK_CONVERTER                       0
    CHECK_DATAPAGELOG                     0
    CHECK_DATAINDEX                       0
    CHECK_FBM                             0
    CHECK_IOMAN                           0
    CHECK_LOGHISTORY                      0
    CHECK_LOGPAGE                         0
    CHECK_LOGTRANS                        0
    CHECK_LOGVOLUME                       0
    CHECK_SRVTASKS                        0
    OPTIMIZE_AGGREGATION                  YES
    OPTIMIZE_FETCH_REVERSE                YES
    OPTIMIZE_STAR_JOIN                    YES
    OPTIMIZE_JOIN_ONEPHASE                YES
    OPTIMIZE_JOIN_OUTER                   YES
    OPTIMIZE_MIN_MAX                      YES
    OPTIMIZE_FIRST_ROWS                   YES
    OPTIMIZE_OPERATOR_JOIN                YES
    OPTIMIZE_JOIN_HASHTABLE               YES
    OPTIMIZE_JOIN_HASH_MINIMAL_RATIO      1
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_MINSIZE        1000000
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_QUAL_ON_INDEX                YES
    DDLTRIGGER                            YES
    SUBTREE_LOCKS                         NO
    MONITOR_READ                          2147483647
    MONITOR_TIME                          2147483647
    MONITOR_SELECTIVITY                   0
    MONITOR_ROWNO                         0
    CALLSTACKLEVEL                        0
    OMS_RUN_IN_UDE_SERVER                 NO
    OPTIMIZE_QUERYREWRITE                 OPERATOR
    TRACE_QUERYREWRITE                    0
    CHECK_QUERYREWRITE                    0
    PROTECT_DATACACHE_MEMORY              NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FILEDIR_SPINLOCKPOOL_SIZE             10
    TRANS_HISTORY_SIZE                    0
    TRANS_THRESHOLD_VALUE                 60
    ENABLE_SYSTEM_TRIGGERS                YES
    DBFILLINGABOVELIMIT                   70L80M85M90H95H96H97H98H99H
    DBFILLINGBELOWLIMIT                   70L80L85L90L95L
    LOGABOVELIMIT                         50L75L90M95M96H97H98H99H
    AUTOSAVE                              1
    BACKUPRESULT                          1
    CHECKDATA                             1
    EVENT                                 1
    ADMIN                                 1
    ONLINE                                1
    UPDSTATWANTED                         1
    OUTOFSESSIONS                         3
    ERROR                                 3
    SYSTEMERROR                           3
    DATABASEFULL                          1
    LOGFULL                               1
    LOGSEGMENTFULL                        1
    STANDBY                               1
    USESELECTFETCH                        YES
    USEVARIABLEINPUT                      NO
    UPDATESTAT_PARALLEL_SERVERS           0
    UPDATESTAT_SAMPLE_ALGO                1
    SIMULATE_VECTORIO                     IF_OPEN_DIRECT_OR_RAW_DEVICE
    COLUMNCOMPRESSION                     YES
    TIME_MEASUREMENT                      NO
    CHECK_TABLE_WIDTH                     NO
    MAX_MESSAGE_LIST_LENGTH               100
    SYMBOL_RESOLUTION                     YES
    PREALLOCATE_IOWORKER                  NO
    CACHE_IN_SHARED_MEMORY                NO
    INDEX_LEAF_CACHING                    2
    NO_SYNC_TO_DISK_WANTED                NO
    SPINLOCK_LOOP_COUNT                   30000
    SPINLOCK_BACKOFF_BASE                 1
    SPINLOCK_BACKOFF_FACTOR               2
    SPINLOCK_BACKOFF_MAXIMUM              64
    ROW_LOCKS_PER_TRANSACTION             50
    USEUNICODECOLUMNCOMPRESSION           NO
About sending you the data from the tables: I don't have permission to do that. All the data is in a production system, and the customer doesn't give me the rights to send any information. Sorry about that.
    best regards
    Clóvis

How to read an Excel file in a document library and store the Excel content in a SQL table

    Hello,
Can anyone tell me how to read an Excel file present in a document library and store the Excel content in a SQL table?
Please let me know the ways to achieve this. Feel free to give your suggestions.
    Thanks,
    Cool Developer

    Hi!
I wrote this code because I did not find any solutions on the net for this; you can try it. :)
System.Data.OleDb.OleDbConnection ExcelConnection = null;
FileMode fileMode;
string filePath = ConfigurationManager.AppSettings["TempLoaction"] + "\\" + fileName;
using (SPSite _site = new SPSite(SPContext.Current.Web.Url))
using (SPWeb _web = _site.OpenWeb())
{
    string docLibrary = ConfigurationManager.AppSettings["DocumentLibrary"];
    SPFile _file = _web.GetFile("/" + docLibrary + "/" + fileName);
    fileMode = FileMode.Create;
    // Copy the document-library file to a temporary location on disk
    byte[] byteArray = _file.OpenBinary();
    MemoryStream dataStream = new MemoryStream(byteArray);
    Stream stream = dataStream;
    using (FileStream fs = File.Open(filePath, fileMode))
    {
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) != 0)
            fs.Write(buffer, 0, bytesRead);
        fs.Close();
    }
    try
    {
        // Create the connection string (Jet provider; Sheet1 is the sheet name)
        string ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source='" + filePath + "';Extended Properties=Excel 5.0";
        // Create the connection
        ExcelConnection = new System.Data.OleDb.OleDbConnection(ConnectionString);
        // Create the query: "Select * ..." reads the entire sheet
        string ExcelQuery = "Select * from [Sheet1$]";
        // Create the command
        System.Data.OleDb.OleDbCommand ExcelCommand = new System.Data.OleDb.OleDbCommand(ExcelQuery, ExcelConnection);
        // Open the connection
        ExcelConnection.Open();
        // Create a reader
        System.Data.OleDb.OleDbDataReader ExcelReader = ExcelCommand.ExecuteReader();
        // For each row after the header row...
        while (ExcelReader.Read())
        {
            // ...read the row's columns here (e.g. ExcelReader.GetValue(0))
            // and insert them into your SQL table.
        }
    }
    finally
    {
        if (ExcelConnection != null) ExcelConnection.Close();
    }
}
    thanks,
    kshitij

How to delete the table contents before inserting records into a SQL table?

    Hello Experts,
I have a scenario in which I have to pick up some records from an SAP RFC and insert them into a SQL table.
I know how to build this scenario, but the problem is that before inserting we first have to zap (empty) the SQL table and then insert the new records. One more twist: the triggering happens on the SAP side with a sender RFC. If it had been triggered from the SQL side, I could have written a stored procedure/trigger and called it before the sender JDBC communication channel picks up the triggering event from the SQL side.
So how can this scenario be done in XI, first deleting all the records of the SQL table and then inserting the new records, without using BPM?
    Regards,
    Umesh

Hi Umesh,
You can achieve this by writing a SQL query at the message mapping level.
Refer to this link:
http://help.sap.com/saphelp_nw04/helpdata/en/b0/676b3c255b1475e10000000a114084/frameset.htm
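In plain SQL terms, the mapping has to make the JDBC receiver execute a DELETE before the INSERTs for each message; a minimal sketch of the effective statements (table and column names are illustrative):
DELETE FROM target_table;
INSERT INTO target_table (user_id, user_name) VALUES (?, ?);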
Regards.

How can I read a file with ASCII special characters into a SQL table using SSIS 2008?

I've tried everything to read this file and am getting nowhere. How can I read this file and load it into a SQL table?
RS - ASCII 30 (Record Separator)
GS - ASCII 29 (Group Separator)
    Thank you for your assistance - Covi
    Mark Covian

We can use a Script Component as a source/transformation to read the text file and assign the contents to a string, split the string by Chr(30) (i.e., RS), and finally store the pieces in an array or write them to the output buffer of the Script Component.
For an example of how to use a Script Component, refer to this link:
http://social.technet.microsoft.com/Forums/en-US/6ff2007d-d246-4107-b77c-624781baab38/how-to-use-substring-in-derived-column-in-ssis?forum=sqlintegrationservices
    Regards, RSingh

How to get a value from an API which returns a parameter in a PL/SQL table?

    Hello
I have the workflow API below, which returns info in a PL/SQL table. I want to get the value of the 'USER_ORIG_SYSTEM_ID' column. How can I get that value into a local variable?
    Please help.
    Thanks
    Avalon
    Wf_Directory.GetRoleInfo2
    Syntax
    procedure GetRoleInfo2
    (Role in varchar2,
    Role_Info_Tbl out wf_directory.wf_local_roles_tbl_type);
    Description
    Returns the following information about a role in a PL/SQL table:
    • Name
    • Display name
    • Description
    • Notification preference (’QUERY’, ’MAILTEXT’, ’MAILHTML’,
    ’MAILATTH’, ’MAILHTM2’, ’SUMMARY’, or, for Oracle
    Applications only, ’SUMHTML’)
    • Language
    • Territory
    • E–mail address
    • Fax
    • Status
    • Expiration date
    • Originating system
    • Originating system ID
    • Parent originating system
    • Parent originating system ID
    • Owner tag
    • Standard Who columns
    *******************************************************

Create a variable RoleXXX of type wf_directory.wf_local_roles_tbl_type,
call the procedure GetRoleInfo2('TEST_ROLE', RoleXXX),
and then read the field from the returned row, e.g. RoleXXX(1).USER_ORIG_SYSTEM_ID.
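A minimal PL/SQL sketch of that sequence (the field is assumed here to be orig_system_id, matching the "Originating system ID" item in the description above; verify the record's field names in your release):
DECLARE
  role_info_tbl    wf_directory.wf_local_roles_tbl_type;
  l_orig_system_id NUMBER;
BEGIN
  wf_directory.GetRoleInfo2('TEST_ROLE', role_info_tbl);
  -- The OUT parameter is a PL/SQL table; the role's data is in row 1
  l_orig_system_id := role_info_tbl(1).orig_system_id;
  dbms_output.put_line('Originating system ID: ' || l_orig_system_id);
END;
/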

How to use a temporary table in PL/SQL

In MySQL there is no temporary table concept.
So for intermediate calculations I have created a table as below:
create table SequenceTempTable (
    SessionId VARCHAR(50),
    Sequence VARCHAR(500),
    CreatedDate DATE
) ENGINE=MEMORY;
Whenever I invoke an SP, I load data into SequenceTempTable using a session id, as below:
CREATE PROCEDURE `GetSequence`(
    IN Start_Date VARCHAR(25),
    IN End_Date VARCHAR(25)
)
BEGIN
    DECLARE v_SessionId VARCHAR(50);
    SELECT UUID() INTO v_SessionId;
    INSERT INTO SequenceTempTable VALUES (v_SessionId, '1,2,5,3', NOW());
    -- required code
    DELETE FROM SequenceTempTable WHERE SessionId = v_SessionId;
    COMMIT;
END;
i.e., I have created the table once as a temporary table,
I load data into it using the session id, and once the session-specific intermediate computation is done,
I delete the session-specific data.
Could you give me examples of how to use a temporary table in PL/SQL code with respect to my example above?
I have gone through creating a temporary table, but I am stuck on its use in PL/SQL. I mean to say: is there any need to create the table in advance, before invoking the SP?
And one more thing: the MySQL temp table I created uses the MEMORY engine, i.e., the table always lives in memory, so there is no need to write data to disk.
Regards
    Sanjeev

    Hi Sanjeev
    Read about GTT here
    http://www.oracle-base.com/articles/8i/TemporaryTables.php
A GTT always contains just session-specific data.
In case you want to reuse the GTT's rows within the same session across commits, create it with the option
ON COMMIT PRESERVE ROWS;
or, if it is used just once per transaction, you can use
ON COMMIT DELETE ROWS;
Do remember that with a GTT, the data of one session cannot be accessed from another session.
Also, you can do away with the DELETE FROM the GTT if it is not used again in the same session.
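A minimal sketch of the Oracle equivalent of the SequenceTempTable above (the GTT is created once, in advance, before the SP is ever invoked; no SessionId column is needed because each session sees only its own rows):
CREATE GLOBAL TEMPORARY TABLE SequenceTempTable (
    Sequence    VARCHAR2(500),
    CreatedDate DATE
) ON COMMIT PRESERVE ROWS;
-- Inside the SP, just insert and read; the rows are private to the session
INSERT INTO SequenceTempTable VALUES ('1,2,5,3', SYSDATE);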
    Regards
    Arun
