SQL, PL/SQL Training Centres in Hyderabad

Hi,
I am looking for SQL, PL/SQL institutes in Hyderabad. Can anyone suggest the best institute?
Regards,
Manju

Hi:
You can get training at the places mentioned below. The first is near Polaris, Begumpet.
SQL Star International Limited
4, 1st Floor, Motilal Nehru Nagar, Begumpet
Hyderabad
Andhra Pradesh - 500016
India
Tel : 04027763125 9885199529
Fax : 04027761921
SQL Star International Limited
504, Maitrivanam, HUDA Complex,
Software Technology Park, Ameerpet,
Hyderabad-500 038.
Ph 91-040-296313 / 3743583
Fax 294687 E-mail :[email protected]
cheers,
Manish

Similar Messages

• Oracle Apps DBA Training Centres in Hyderabad.

    Hi All,
Could you please guide me on where I can get good, hands-on training in 11i Oracle Apps DBA? I am currently working as a core DBA.
    I appreciate your help.
    Regards
    Shalini

    Shalini,
Oracle offers quite a few training courses. It depends on where you are:
    http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=3
    Jim P.

  • Training Centers in Hyderabad

    Hi
Any suggestions on institutes in Hyderabad for certification preparation on Microsoft Windows 7 and Server 2008?
I am planning to write the certs by next month.
    rahul

    Hi,
Here are some links for training centres in Hyderabad, India, for Windows 7 and Windows Server 2008. You will also find a lot on MSN Search.
    Synergetics India Pvt. Ltd.
    http://hyderabad.olx.in/windows-7-training-in-hyderabad-from-synergetics-india-iid-67197859
    Zoom Technologies Course
    http://www.zoomgroup.com/training/india/mcse.asp and Contact us
    http://www.zoomgroup.com/zoom_tech/contact/index.asp
Self-Paced Training from Microsoft:
http://www.microsoft.com/learning/en/us/training/windows.aspx for Windows 7; similarly, you can find training for Windows Server 2008.
Koenig Solutions
Deccansoft Solutions http://www.deccansoft.com/ContactUs.aspx : Here you have an online chat option and can chat with a representative to book a slot if they provide training for Windows 7 and Windows Server 2008 (not certain, but they are best known for .NET training).
    IIHT Training Institute - Hyderabad
    http://yellowpages.sulekha.com/hyderabad/iiht-computer-education-ameerpet-hyderabad_contact-address.htm?business=pmp
For Windows 7 I think you have:
70-680 MCTS: Installation and Configuration of Windows 7 (Microsoft Press book for 70-680)
70-685 MCITP: Desktop Technician
70-686 MCITP: Desktop Administrator
Please mark as answer if it helps you, or unpropose as answer if it does not.
    Thanks Rehan Bharucha - The Tech Robot (MCTS, MCITP, MCPD, MCT, MCC)

  • The Best SQL Server DBA Training at DBA School In Hyderabad

    SQL Server DBA Training at DBA School
Faculty: working in a top MNC.
    Fees: Only 5000/- (batch max. 10 members)
    Duration: 1 month.
SQL Server Course Contents:
    1.     Introduction to SQL Server 2005
    2.     Roles and Responsibilities of DBA
    3.     SQL Server 2005 Licensing and Pricing
    4.     System Design and Architecture
    5.     Overview of Database and Database Snapshots
    6.     Overview of SQL Server Objects
    7.     Transactions
    8.     SQL Server Partitions
    9.     Managing Security
    10.     Backup Fundamentals
    11.     Restoring Data
    12.     Replication
    13.     Disaster Recovery Solutions
    14.     Performance Tuning and Troubleshooting
    15.     DB Mail
Mr. Satya Seelam has 8 years of experience in the United States. You can get practical knowledge from experts. They are providing:
    Excellent and Certified training
    Very Good Lab Facility,
    Real time demos,
    case studies and projects.
Those who trained here have been placed both abroad and domestically.
By Satya Seelam, 8 years of experience in the United States
    Contact Us
    DBA School
    E - Block, Flat No - 508
    Keerthi Apartments,
    Yosufguda Road(Opposite Huda Maitrivanam),
    DTDC Lane(First Left of Yosufguda Road),
    Yellareddy Guda, Ameerpet,
    Hyderabad, Pin: 500016
    Email: [email protected] Mobile: +91-9966293445 Phone: +91-40-30629104, 04066446847
    VISIT: www.hyddbatraining.com/beta (Use Mozilla for opening the website.)

    You've got to be kidding.
    From what I've seen of your graduates they are barely able to recognize a command line.
    Anyone that pays you money to take one of your classes would get more for their hard earned rupees by just tossing them into the Ganges.
    At least someone downstream would benefit.
    You should apologize for your misrepresentation and your spam.

  • SQL 2005 Administration Training (SAP)

    Does anyone know when SAP plans to come out with SQL 2005 Administration Training courses (in the USA)?
I thought I might have seen a German version of a class (AD655), but I haven't seen any taught in the US (still SQL 2000).

    Hello Michael,
    Please review http://www.microsoft-sap.com/events.aspx
    Thanks
    N.P.C

  • SQL Fundamentals I : oe_main.sql , hr_main.sql, ...

    Hi forum,
I just bought "SQL Fundamentals I" from McGraw Hill. This book mentions several schemas and creation scripts like hr_main.sql,
oe_main.sql, etc. Unfortunately I cannot find these scripts for download and they are not on the accompanying CD. In the meantime
I set up a 10g Express database on my notebook for training purposes. The HR schema and the tables in question can
be found there. Does anyone know where to find the scripts necessary for studying this book?
    Kind regards
    oeggert2706

user11128142 wrote: (question quoted above)
HR and OE etc. are standard schemas that come as part of the database installation. Not sure about the Express Edition but, as Oracle provides them as example schemas, I don't see why they wouldn't be included with that version.
    http://download.oracle.com/docs/cd/B12037_01/server.101/b10771/scripts003.htm
    http://download.oracle.com/docs/cd/B13789_01/server.101/b10771/installation002.htm
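As a quick way to check this on a 10g Express instance, here is a minimal sketch (assuming a SYSTEM or other DBA connection; whether OE ships with Express Edition is not confirmed here):
-- check whether the sample schemas are present
select username, account_status
  from dba_users
 where username in ('HR', 'OE');
-- the sample schemas are normally installed locked; unlock HR before using it
alter user hr identified by hr account unlock;
On a full (non-Express) installation, the creation scripts such as hr_main.sql normally live under $ORACLE_HOME/demo/schema, as described in the linked documentation.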

  • PL/SQL vs SQL

    What are the fundamental differences between PL/SQL and SQL?

    Just to expand on what was said.
    PL/SQL is a formal procedural language based on Ada (Ada and Pascal are very similar). PL/SQL is in that respect equivalent to C, Pascal, Visual Basic etc.
    What makes PL/SQL different (a 4GL instead of a 3GL) is that it is specifically designed to deal with data processing problems in Oracle using SQL.
    In 3GLs you need to use SQL pre-compilers or special classes (wrapping the database's SQL call interface) to talk SQL to the database. E.g. Pro*C precompiler for use with C. The TQuery class in Delphi that wraps the Oracle Call Interface (OCI) into an object class.
    PL/SQL goes further as it allows you to natively use SQL inside the language, as if it is part of the language. The PL compiler/PL engine is however clever enough to recognise SQL statements and do the complex stuff needed to make a SQL call to the SQL engine. It handles bind variables for you. It handles SQL data types for you. It does the SQL engine call for you. It fetches the data from the SQL engine for you. Etc.
Using SQL natively in the PL language is what makes PL/SQL so powerful. It blurs the line between having to deal with two separate languages - a procedural (and object-oriented) programming language and the SQL language.
This blurring does have its cons. Developers often fail to recognise just what is PL and what is SQL in terms of doing a proper program design and implementation. Or they treat PL/SQL as something different from Java, Delphi or C/C++. A programming language is a programming language; Programming 101 fundamentals apply, irrespective of the language. Period.
For example, they use PL/SQL cursor fetch loops to process SQL data in a row-by-row fashion, instead of using SQL to do that work. SQL is by far superior in this regard. Or they use SQL (i.e. SELECT func() INTO var FROM dual) to assign values from functions to PL/SQL variables. Why use SQL to perform this variable assignment when dealing with PL variables?
There are numerous brain farts from developers in this respect. Not understanding PL/SQL. Not even bothering to familiarise themselves with the Oracle manuals on the language. Which is a pity, as this results in crappy code and a developer that fails to understand Oracle. Worse, developers start to dislike Oracle because it "does not work properly", due to their utter failure to grasp the concepts of the database and the language.
    However, if you understand what PL is and what SQL is in PL/SQL, and you treat both languages correctly, no other language on this planet allows you to process Oracle data more effectively. Fact: PL/SQL scales and performs better than whatever Java architecture and code you can deploy on an application tier.
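To illustrate the two anti-patterns above, here is a minimal sketch; the employees table and the tax_rate() function are hypothetical and not from this thread:
-- row-by-row fetch loop: PL/SQL doing work the SQL engine does better
begin
  for r in (select emp_id, salary from employees) loop
    update employees
       set bonus = r.salary * 0.1
     where emp_id = r.emp_id;
  end loop;
end;
/
-- the same work as a single SQL statement
update employees set bonus = salary * 0.1;
-- needless SELECT ... INTO ... FROM dual for a simple assignment
declare
  l_rate number;
begin
  select tax_rate('ZA') into l_rate from dual;  -- extra call to the SQL engine
  l_rate := tax_rate('ZA');                     -- direct PL/SQL assignment, no SQL call needed
end;
/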

  • How to resolve most of the Oracle SQL , PL/SQL Performance issues with help of quick Checklist/guidelines ?

Please go through the checklist/guidelines below to identify and resolve any performance issue in no time.
Checklist for Quick Performance Problem Resolution
·         Get the trace, code and other information for the given performance case:
          - latest code from the production environment
          - trace (SQL queries, statistics, row source operations with row counts, explain plan, all wait events)
          - program parameters and their frequently used values
          - run frequency of the program
          - existing run-time/response time in production
          - business purpose
·         Identify the most time-consuming SQL, taking more than 60% of program time, using trace and code analysis
·         Check that all mandatory parameters/bind variables map directly to index columns of large transaction tables without any functions
·         Identify the most time-consuming operation(s) using the Row Source Operation section
·         Study the program parameter inputs directly mapped to the SQL
·         Identify all input bind parameters used by the SQL
·         Is the SQL query returning a large number of records for the given inputs?
·         What are the large tables, and which of their columns are mapped to input parameters?
·         Which operation scans the highest number of records in the Row Source Operation/Explain Plan?
·         Is the Oracle cost-based optimizer using the right driving table for the given SQL?
·         Check the time-consuming index on the large table and measure index selectivity
·         Study the WHERE clause for input parameters mapped to tables and their columns to find the correct/optimal usage of indexes
·         Is the correct index being used for all large tables?
·         Is there any full table scan on large tables?
·         Is there any unwanted table being used in the SQL?
·         Evaluate join conditions on large tables and their columns
·         Is the FTS on a large table due to the use of non-indexed columns?
·         Is there any implicit or explicit conversion preventing an index from being used?
·         Are the statistics of all large tables up to date?
    Quick Resolution tips
1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
    2) Use Data Caching Technique/Options to cache static data
3) Use pipelined table functions whenever possible
    4) Use Global Temporary Table, Materialized view to process complex records
5) Avoid multiple network trips for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
6) Use an EXTERNAL TABLE to build an interface rather than creating a custom table and program to load and validate the data
    7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
    8) Follow Oracle PL/SQL Best Practices
    9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
10) Avoid costly full table scans on big transaction tables with huge data volumes
    11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
    12) Review Join condition on existing query explain plan
    13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
    14) Avoid applying SQL functions on index columns
    15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
    Thanks
    Praful

    I understand you were trying to post something helpful to people, but sorry, this list is appalling.
1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
    No, use pure SQL.
    2) Use Data Caching Technique/Options to cache static data
    No, use pure SQL, and the database and operating system will handle caching.
3) Use pipelined table functions whenever possible
    No, use pure SQL
    4) Use Global Temporary Table, Materialized view to process complex records
    No, use pure SQL
5) Avoid multiple network trips for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
    No, use pure SQL
6) Use an EXTERNAL TABLE to build an interface rather than creating a custom table and program to load and validate the data
    Makes no sense.
    7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
    What about using the execution trace?
    8) Follow Oracle PL/SQL Best Practices
    Which are?
    9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
    You mean design your database and queries properly?  And table scanning is not always bad.
10) Avoid costly full table scans on big transaction tables with huge data volumes
    It depends if that is necessary or not.
    11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
    No, consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan.  There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of data, as well as the volumes of data being searched and the most common search requirements.
    12) Review Join condition on existing query explain plan
    Well, if you don't have your join conditions right then your query won't work, so that's obvious.
    13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
    No.  Oracle recommends you do not use hints for query optimization (it says so in the documentation).  Only certain hints such as APPEND etc. which are more related to certain operations such as inserting data etc. are acceptable in general.  Oracle recommends you use the query optimization tools to help optimize your queries rather than use hints.
    14) Avoid applying SQL functions on index columns
    Why?  If there's a need for a function based index, then it should be used.
    15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
    See 13.
    In short, there are no silver bullets for dealing with performance.  Each situation is different and needs to be evaluated on its own merits.
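To make tip 1 above and the reply's "use pure SQL" point concrete, here is a minimal sketch; staging_rows and target_rows are hypothetical tables, not anything from this thread:
-- BULK COLLECT with LIMIT plus FORALL: far better than a row-by-row loop
declare
  type t_ids  is table of number;
  type t_amts is table of number;
  l_ids  t_ids;
  l_amts t_amts;
  cursor c is select id, amount from staging_rows;
begin
  open c;
  loop
    fetch c bulk collect into l_ids, l_amts limit 1000;
    exit when l_ids.count = 0;
    forall i in 1 .. l_ids.count
      insert into target_rows (id, amount) values (l_ids(i), l_amts(i));
  end loop;
  close c;
  commit;
end;
/
-- the pure SQL alternative the reply recommends, when no per-row logic is needed
insert into target_rows (id, amount)
select id, amount from staging_rows;
commit;
When the transformation can be expressed in SQL, the single INSERT ... SELECT avoids the PL/SQL engine entirely, which is the reply's point.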

  • SQL to SQL Append

    Hi,
I have an interface to load data from tables into a table on the same Oracle server. The target table is truncated every time the interface runs and the data is just inserted (no updates). I read that I can use "IKM SQL to SQL Append" to avoid extra loading phases (like flow tables, create index, etc.). I put the source table and the datastore table in the same model, selected "Staging Area Different from Target" and selected the appropriate staging area. The problem is that I cannot select the "IKM SQL to SQL Append" in the flow tab.
Please can you help me with that? I need this option to improve performance and space; every time this interface runs it loads approx. 3 million records, about 9 times with different conditions.
    Thanks.
    Juan Carlos Lopez
Strategic Account Sales Consultant
    Oracle Venezuela

Hi Juan, how are you?
That is very simple...
Just import (or use, if it is already imported) the KM Control Append and change the option "Flow Control" to No.
Also, keep the staging area on the target.
I believe that will solve your problem because it will generate just one step to load the data...
If you have big problems with this, it is very simple to change a KM...
Regards!
    Cezar Santos

  • XML parsing with SQL/PL-SQL

    Hi,
My question is about how an XML message can best be parsed using SQL/PL-SQL.
The scenario is as follows. The XML message is stored in a CLOB; only some of its data needs to be extracted; there are six different types of XML structures; the size of each XML is about 50 lines (maximum depth level is 3); the data could be written in English, Greek, French, German or Russian; this is going to be done every hour, and the parsing is going to run against approx. 3,000 records.
    In the development, I need to take into consideration performance. We are using Oracle 10, but we could migrate to Oracle 11 if necessary.
    Apologies for this basic question but I have never done XML parsing in SQL/PL-SQL before.
    Thank you.
    PS I have copied this question to the XML forum.
    Edited by: user3112983 on May 19, 2010 3:30 PM
    Edited by: user3112983 on May 19, 2010 3:39 PM

user3112983 wrote: (scenario quoted above)
Parsing is done using the XMLTYPE data type (object class) in Oracle.
    Something as follows:
    SQL> create table xml_doc( id number, doc clob );
    Table created.
    SQL>
    SQL> insert into xml_doc values( 1, '<root><row><name>John</name></row><row><name>Jack</name></row></root>' );
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>
    SQL> declare
      2          rawXml  xml_doc.doc%type;
      3          xml     xmltype;
      4  begin
      5          -- get the raw XML (as a CLOB)
      6          select doc into rawXml from xml_doc where id = 1;
      7
      8          -- parse it
      9          xml := new xmltype( rawXml );  
    10         -- process the XML...
    11  end;
    12  /
    PL/SQL procedure successfully completed.
    SQL>The variable xml in the sample code is the XML DOM object. XML functions can be used against it (e.g. to extract values in a tabular row and column structure).
    Note that the CLOB needs to contain a valid XML. An XML containing XML fragments is not valid and cannot be parsed. E.g.
    SQL> declare
      2          xml     xmltype;
      3  begin
      4          -- attemp to parse fragments
      5          xml := new xmltype( '<row><name>John</name></row>  <data><column>Name</column></data>' );
      6  end;
      7  /
    declare
    ERROR at line 1:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00245: extra data after end of document
    Error at line 1
    ORA-06512: at "SYS.XMLTYPE", line 301
ORA-06512: at line 5
This XML contains 2 fragments: a row structure and a data structure. It is not a valid XML and as such cannot be parsed. If a root tag is used to encapsulate these 2 fragments, then it will be a valid XML structure.
In the development, I need to take into consideration performance. We are using Oracle 10, but we could migrate to Oracle 11 if necessary.
I have not run into any XML performance problems specifically, and I am using it extensively. Even large XMLs (tens of thousands of elements) parse pretty fast.

  • Dilemma of an OCA (SQL, PL/SQL) with 4 years work-ex

    Dear all,
I am an OCA (SQL, PL/SQL) working on an enhancement/production support project (Tech: Java and Oracle, Func: Insurance) at my firm. I am doing quite well here and keep updating myself using the Oracle documentation and the application functionality. But in the long term I am confused about my career path: what should I do next? Should I upgrade myself with a certification in advanced PL/SQL, should I move towards DBA activities, or should I learn Java to become a software architect?
Personally, I have a great interest in Oracle database design and implementation; what can be a career path for a software architect?
    Please help.

OracleDeft wrote: (question quoted above)
Bringing your PL/SQL up to OCP by studying for and taking 1z0-146 is probably a straightforward and low-impact decision.
It may not, however, be the route you wish to take your career, but it is relatively low cost and good gain.
My impression is you're not currently into Java... learning it from the bottom up may be a hard process, and even having learnt Java, that in itself does not make one a software architect.
Take a browse through all the Oracle certifications at [http://www.oracle.com/education/certification] ... certifications ... view all certifications ... but remember not all Oracle products/technologies have associated certifications.
    Consider SOA or BIEE or Oracle Application Express as well
    -

  • Convert Oracle SQL line to SQL Server SQL

    Hi All,
    I'm trying to convert this Oracle SQL to SQL that will work with SQL Server 2008 and am having some trouble with the MOD function. Anyone have any SQL Server chops?
    Oracle SQL
    TBL_MAIN.AuthoredDate- MOD (TBL_MAIN.AuthoredDate - to_date ('2012-01-01', 'fxYYYY-MM-DD'), 7) + 6 AS "Week Ending"
    SQL Server Code
TBL_MAIN.AuthoredDate - ((Cast(TBL_MAIN.AuthoredDate as DATE) % CAST('2012-01-01' AS DATE)), 7) + 6 AS "Week Ending"
Thanks,
John
    Edited by: Johnbr (Oracle10G) on Mar 1, 2013 8:41 AM

    ahhh.. I just had to change my thinking... I got it now:
    SQL Server SQL
    DATEADD(dd, 7-(DATEPART(dw, TBL_MAIN.AuthoredDate)), TBL_MAIN.AuthoredDate) AS 'Week Ending'
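For reference, a minimal sketch in Oracle SQL (with hypothetical sample dates) of what the original expression computes: the Saturday ending the week anchored on Sunday, 2012-01-01, which the DATEADD/DATEPART version above should reproduce under SQL Server's default DATEFIRST setting.
-- both rows should return 02-MAR-2013, the Saturday that ends that week
select d,
       d - mod(d - date '2012-01-01', 7) + 6 as week_ending
  from (select date '2013-03-01' d from dual union all   -- a Friday
        select date '2013-03-02' d from dual);           -- a Saturday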

  • How to make code standardization in oracle 10 in sql/pl-sql

Can anybody help with how to establish coding standards in Oracle 10g for SQL/PL-SQL?

Refer to this link for the download:
    http://www.itap.purdue.edu/ea/data/standards/plsql.cfm

  • Oracle 10g DB - Sql & Pl/sql fundamenals

    All,
Is there any major change in the SQL & PL/SQL fundamentals from Oracle 9i to Oracle 10g? If anybody has a document on the enhanced features in Oracle 10g SQL fundamentals, please let me know...
    regards,
    sen

    How about PL/SQL Enhancements?
    C.

  • Please help urgent in PL/SQL or SQL

I have a table like:
TIMESTAMP              SID
11/12/2008 1:25:02 PM  10
11/12/2008 1:25:02 PM  20
11/12/2008 1:25:02 PM  30
11/12/2008 1:30:02 PM  10
11/12/2008 1:30:02 PM  40
11/12/2008 1:35:00 PM  40
11/12/2008 1:35:00 PM  50
11/12/2008 1:35:00 PM  60
11/12/2008 1:35:00 PM  70
You can assume that for the first timestamp entry, all SIDs are new.
E.g. the 1:25:02 timestamp has new SIDs (10, 20, 30).
Compare those SIDs with the SIDs of the next timestamp:
e.g. the 1:25:02 timestamp has SID 10 and the 1:30:02 timestamp also has SID 10, so 10 is an existing SID.
The 1:30:02 timestamp does not have 20 or 30 from the 1:25:02 timestamp, so SIDs 20 and 30 are deleted.
The 1:30:02 timestamp has 40, so 40 is a new SID.
Then compare the second timestamp (1:30:02) to the third timestamp (1:35:00), and so on.
NOTE: see the thread "need help in PL/SQL or SQL".
The query from that thread gives:
TIMESTAMP              New SID  Existing SID  Deleted SID
11/12/2008 1:25:02 PM  3        0             0
11/12/2008 1:30:02 PM  1        1             2
11/12/2008 1:35:00 PM  3        1             1
But the expected output I want is:
TIMESTAMP              New SID   Existing SID  Deleted SID
11/12/2008 1:25:02 PM  10,20,30  0             0
11/12/2008 1:30:02 PM  40        10            20,30
11/12/2008 1:35:00 PM  50,60,70  40            10
Can anybody help, please?

    alter session set nls_date_format = 'MM/DD/YYYY HH:MI:SS PM'
    with t as (
               select '11/12/2008 1:25:02 PM' tstamp,10 sid  from dual union all
               select '11/12/2008 1:25:02 PM',20 from dual union all
               select '11/12/2008 1:25:02 PM',30 from dual union all
               select '11/12/2008 1:30:02 PM',10 from dual union all
               select '11/12/2008 1:30:02 PM',40 from dual union all
               select '11/12/2008 1:35:00 PM',40 from dual union all
               select '11/12/2008 1:35:00 PM',50 from dual union all
               select '11/12/2008 1:35:00 PM',60 from dual union all
           select '11/12/2008 1:35:00 PM',70 from dual
          )
select  tstamp,
            ltrim(replace(sys_connect_by_path(case new when 1 then sid else -1 end,','),',-1'),',') "New SID",
            ltrim(replace(sys_connect_by_path(case existing when 1 then sid else -1 end,','),',-1'),',')"Existing SID",
            ltrim(replace(sys_connect_by_path(case deleted when 1 then sid else -1 end,','),',-1'),',')"Deleted SID"
      from  (
             select  tstamp,
                     sid,
                     grp,
                     new,
                     existing,
                     deleted,
                     row_number() over(partition by grp order by sid nulls last) rn
               from  (
                       select  tstamp,
                               sid,
                               -- group number based on timestamp
                               dense_rank() over(order by tstamp) grp,
                               -- Check if sid is new sid (not present in previous group)
                               case when lag(tstamp) over(partition by sid order by tstamp) is null then 1 else 0 end new,
                               -- Check if sid is existing sid (present in previous group)
                               case when lag(tstamp) over(partition by sid order by tstamp) is null then 0 else 1 end existing,
                               0 deleted
                         from  t
                      union all
                       -- List of sid's not present in a group but present in a previous group
                       select  null tstamp,
                               sid,
                               grp + 1 grp,
                               0 new,
                               0 existing,
                               1 deleted
                         from  (
                                select  sid,
                                        grp,
                                        -- Check if sid is present in next group (1 - present, 0 - not present).
                                        case lead(grp) over(partition by sid order by grp)
                                          when grp + 1 then 1
                                          else 0
                                        end in_next_grp,
                                         -- last group number
                                        max(grp) over() max_grp
                                  from  (
                                         select  tstamp,
                                                 sid,
                                                 -- group number based on timestamp
                                                 dense_rank() over(order by tstamp) grp
                                         from  t
                                        )
                               )
                         where in_next_grp = 0
                           and grp < max_grp
                     )
            )
      where connect_by_isleaf = 1 -- we are only interested in a leaf row which represents complete branch
      start with rn = 1 -- start with first row in a group
      connect by rn = prior rn + 1 and grp = prior grp -- traverse through each sid in a group including deleted
      order by tstamp
    SQL> alter session set nls_date_format = 'MM/DD/YYYY HH:MI:SS PM'
      2  /
    Session altered.
    SQL> with t as (
      2             select '11/12/2008 1:25:02 PM' tstamp,10 sid  from dual union all
      3             select '11/12/2008 1:25:02 PM',20 from dual union all
      4             select '11/12/2008 1:25:02 PM',30 from dual union all
      5             select '11/12/2008 1:30:02 PM',10 from dual union all
      6             select '11/12/2008 1:30:02 PM',40 from dual union all
      7             select '11/12/2008 1:35:00 PM',40 from dual union all
      8             select '11/12/2008 1:35:00 PM',50 from dual union all
      9             select '11/12/2008 1:35:00 PM',60 from dual union all
    10             select '11/12/2008 1:35:00 PM',70 from dual
    11            )
    12  select  tstamp,
    13          ltrim(replace(sys_connect_by_path(case new when 1 then sid else -1 end,','),',-1'),',') "New SID",
    14          ltrim(replace(sys_connect_by_path(case existing when 1 then sid else -1 end,','),',-1'),',')"Existing SID",
    15          ltrim(replace(sys_connect_by_path(case deleted when 1 then sid else -1 end,','),',-1'),',')"Deleted SID"
    16    from  (
    17           select  tstamp,
    18                   sid,
    19                   grp,
    20                   new,
    21                   existing,
    22                   deleted,
    23                   row_number() over(partition by grp order by sid nulls last) rn
    24             from  (
    25                     select  tstamp,
    26                             sid,
    27                             -- group number based on timestamp
    28                             dense_rank() over(order by tstamp) grp,
    29                             -- Check if sid is new sid (not present in previous group)
    30                             case when lag(tstamp) over(partition by sid order by tstamp) is null then 1 else 0 end new,
    31                             -- Check if sid is existing sid (present in previous group)
    32                             case when lag(tstamp) over(partition by sid order by tstamp) is null then 0 else 1 end existing,
    33                             0 deleted
    34                       from  t
    35                    union all
    36                     -- List of sid's not present in a group but present in a previous group
    37                     select  null tstamp,
    38                             sid,
    39                             grp + 1 grp,
    40                             0 new,
    41                             0 existing,
    42                             1 deleted
    43                       from  (
    44                              select  sid,
    45                                      grp,
    46                                      -- Check if sid is present in next group (1 - present, 0 - not present).
    47                                      case lead(grp) over(partition by sid order by grp)
    48                                        when grp + 1 then 1
    49                                        else 0
    50                                      end in_next_grp,
    51                                       -- last group number
    52                                      max(grp) over() max_grp
    53                                from  (
    54                                       select  tstamp,
    55                                               sid,
    56                                               -- group number based on timestamp
    57                                               dense_rank() over(order by tstamp) grp
    58                                         from  t
    59                                      )
    60                             )
    61                       where in_next_grp = 0
    62                         and grp < max_grp
    63                   )
    64          )
    65    where connect_by_isleaf = 1 -- we are only interested in a leaf row which represents complete branch
    66    start with rn = 1 -- start with first row in a group
    67    connect by rn = prior rn + 1 and grp = prior grp -- traverse through each sid in a group including deleted
    68    order by tstamp
    69  /
    TSTAMP                New SID              Existing SID         Deleted SID
    11/12/2008 1:25:02 PM 10,20,30
    11/12/2008 1:30:02 PM 40                   10                   20,30
    11/12/2008 1:35:00 PM 50,60,70             40                   10
SQL>
SY.
