Table Test Case in OFT

Hello All,
Why and where do we use Table Test Cases?
In what scenarios would we use them?
Thanks in advance.

Hi
Let's say you have an HTML table in the web page and you need to verify its contents: make sure that the contents of the cells are numbers, dates, names, etc. That is where you would use a Table Test to verify table structure and data.
Regards
Alex

Similar Messages

  • Problem at the time of Implementing Text Matching Test Case in OFT

    Hi,
    I have added a Text Matching test case on the login screen of the application for the username and password. If the test case fails on that screen, it should not be allowed to go any further in the application.
    Currently, at the time of playback it is allowed to continue, and the Result Report simply shows the case as failed.

    To be specific: suppose at the time of recording I enter the username as abcd and the password as 123.
    Once the recording is done, I insert a Text Matching test case for both the username and password, with the condition on the username that the selected text should be present as "def", and the test case fails.
    So I want to know: if the test case fails on the login screen, should playback still move forward?

  • Simple test case with NL and table order .

    Hi,
    I did some tests on my 9.2.0.8 and have a few questions:
    SQL> select count(*)  from p;
      COUNT(*)
          2000
    SQL> select count(*)  from c;
      COUNT(*)
          1000
    SQL> select count(*) , id from p group by id having count(*) > 1;
    no rows selected
    SQL> select count(*) , id from c group by id having count(*) > 1;
      COUNT(*)         ID
           100         10
    SQL> desc p
               Name       Type
        1      ID         NUMBER
        2      FILLER     VARCHAR2(100)
    SQL> desc c
               Name       Type
        1      ID         NUMBER
        2      FILLER     VARCHAR2(100)
    Got 10046 traces:
    case A
    select /*+ use_nl(p) leading(c) */ *
    from
    p , c where p.id = c.id and c.id in (10)
    Rows     Row Source Operation
        100  TABLE ACCESS BY INDEX ROWID P
        201   NESTED LOOPS
        100    TABLE ACCESS BY INDEX ROWID C
        100     INDEX RANGE SCAN C_ID (object id 411255)
        100    INDEX RANGE SCAN P_ID (object id 411256)
    Case B optimal
    select /*+ use_nl(c) leading(p) */ *
    from
    p , c where p.id = c.id and c.id in (10)
    Rows     Row Source Operation
        100  TABLE ACCESS BY INDEX ROWID C
        102   NESTED LOOPS
          1    TABLE ACCESS BY INDEX ROWID P
          1     INDEX RANGE SCAN P_ID (object id 411256)
        100    INDEX RANGE SCAN C_ID (object id 411255)
    So it's a simple nested loop with postponed inner table access.
    Why, in the row source operation, have we got 102 rows at the NESTED LOOPS level? (Does that mean the NL was executed 102 times?)
    And why 201 in the other case?
    Regards
    GregG

    I am not sure about the calculation behind those A-Rows figures, but in both cases the NL operation executes only once (while accessing the inner row source 100 times in the first case and once in the second), which is as expected.
    A test case closer to the OP's is:
    SQL> select * from v$version ;
    BANNER                                                                                                                                                                    
    Oracle Database 10g Release 10.2.0.5.0 - Production                                                                                                                       
    PL/SQL Release 10.2.0.5.0 - Production                                                                                                                                    
    CORE     10.2.0.5.0     Production                                                                                                                                                
    TNS for Linux: Version 10.2.0.5.0 - Production                                                                                                                            
    NLSRTL Version 10.2.0.5.0 - Production                                                                                                                                    
    SQL> create table p nologging as select level as id, cast(dbms_random.string('a', 100) as varchar2(100)) as filler from dual connect by level <= 2000 ;
    Table created.
    SQL> exec dbms_stats.gather_table_stats(user, 'P') ;
    PL/SQL procedure successfully completed.
    SQL> create index p_id on p(id) nologging ;
    Index created.
    SQL> select count(*)  from p;
      COUNT(*)                                                                                                                                                                
          2000                                                                                                                                                                
    SQL> select count(*) , id from p group by id having count(*) > 1;
    no rows selected
    SQL> create table c nologging as select level as id, cast(dbms_random.string('a', 100) as varchar2(100)) as filler from dual connect by level <= 900 union all select 10, cast(dbms_random.string('a', 100) as varchar2(100)) as filler from dual connect by level <= 99 ;
    Table created.
    SQL> select count(*) , id from c group by id having count(*) > 1;
      COUNT(*)         ID                                                                                                                                                     
           100         10                                                                                                                                                     
    SQL> exec dbms_stats.gather_table_stats(user, 'C') ;
    PL/SQL procedure successfully completed.
    SQL> create index c_id on c(id) nologging ;
    Index created.
    SQL> select /*+ use_nl(p) leading(c) gather_plan_statistics */ * from p , c where p.id = c.id and c.id in (10) ;
            ID FILLER                                                                                                       ID                                                
    FILLER                                                                                                                                                                    
            10 opKRJynLxjeCiOScvOklQBXfpnfgvlhHNLzlKKrFaNzQLODKSnKMxpzecqyFkVSLvdosZJhWckBcQbpIaqttahlqBxrugKQVrnIk         10                                                
    zrGZSmUFXNyNMOViUYSvPDdfznSlMvaFnQakopPtcBvXQkWmMlWCnrPyeZLfhuLLeYyAEkcwZNSfoASLYpoAnpESqlQWkaEGatXV                                                                      
            10 opKRJynLxjeCiOScvOklQBXfpnfgvlhHNLzlKKrFaNzQLODKSnKMxpzecqyFkVSLvdosZJhWckBcQbpIaqttahlqBxrugKQVrnIk         10                                                
    hKtrWPCfAmWWLGMXfwHCusSwVpehEnZdxYPLouIuBlMMiSKlIJWwklZCAXZaCbIxKlhzBVRhhTPdLcheyAdoYyfxwomqWRrMXuMk                                                                      
            10 opKRJynLxjeCiOScvOklQBXfpnfgvlhHNLzlKKrFaNzQLODKSnKMxpzecqyFkVSLvdosZJhWckBcQbpIaqttahlqBxrugKQVrnIk         10                                                
    ncSqclZvOGgyXDPaaouGaUqXmJtFNbNyFzUalDknEMvTsBRwGmTxOCIalLvqMnuTFBZJGzNfBqaSVHUtvNDceVZqKQQyqeGKOUdz                                                                      
    100 rows selected.
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST')) ;
    PLAN_TABLE_OUTPUT                                                                                                                                                         
    SQL_ID  1f55m4rabtu3h, child number 0                                                                                                                                     
    select /*+ use_nl(p) leading(c) gather_plan_statistics */ * from p , c where p.id = c.id and                                                                              
    c.id in (10)                                                                                                                                                              
    Plan hash value: 2553281496                                                                                                                                               
    | Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |                                                                 
    |   0 | SELECT STATEMENT              |      |      0 |        |      0 |00:00:00.01 |       0 |      0 |                                                                 
    |   1 |  TABLE ACCESS BY INDEX ROWID  | P    |      1 |      1 |    100 |00:00:00.01 |     112 |      2 |                                                                 
    |   2 |   NESTED LOOPS                |      |      1 |      1 |    201 |00:00:00.02 |     110 |      2 |                                                                 
    |   3 |    TABLE ACCESS BY INDEX ROWID| C    |      1 |      1 |    100 |00:00:00.01 |       7 |      1 |                                                                 
    |*  4 |     INDEX RANGE SCAN          | C_ID |      1 |      1 |    100 |00:00:00.01 |       3 |      1 |                                                                 
    |*  5 |    INDEX RANGE SCAN           | P_ID |    100 |      1 |    100 |00:00:00.01 |     103 |      1 |                                                                 
    Predicate Information (identified by operation id):                                                                                                                       
       4 - access("C"."ID"=10)                                                                                                                                                
       5 - access("P"."ID"=10)                                                                                                                                                
    24 rows selected.
    SQL> select /*+ use_nl(c) leading(p) gather_plan_statistics */ * from p , c where p.id = c.id and c.id in (10) ;
            ID FILLER                                                                                                       ID                                                
    FILLER                                                                                                                                                                    
            10 opKRJynLxjeCiOScvOklQBXfpnfgvlhHNLzlKKrFaNzQLODKSnKMxpzecqyFkVSLvdosZJhWckBcQbpIaqttahlqBxrugKQVrnIk         10                                                
    zrGZSmUFXNyNMOViUYSvPDdfznSlMvaFnQakopPtcBvXQkWmMlWCnrPyeZLfhuLLeYyAEkcwZNSfoASLYpoAnpESqlQWkaEGatXV                                                                      
            10 opKRJynLxjeCiOScvOklQBXfpnfgvlhHNLzlKKrFaNzQLODKSnKMxpzecqyFkVSLvdosZJhWckBcQbpIaqttahlqBxrugKQVrnIk         10                                                
    hKtrWPCfAmWWLGMXfwHCusSwVpehEnZdxYPLouIuBlMMiSKlIJWwklZCAXZaCbIxKlhzBVRhhTPdLcheyAdoYyfxwomqWRrMXuMk                                                                      
            10 opKRJynLxjeCiOScvOklQBXfpnfgvlhHNLzlKKrFaNzQLODKSnKMxpzecqyFkVSLvdosZJhWckBcQbpIaqttahlqBxrugKQVrnIk         10                                                
    ncSqclZvOGgyXDPaaouGaUqXmJtFNbNyFzUalDknEMvTsBRwGmTxOCIalLvqMnuTFBZJGzNfBqaSVHUtvNDceVZqKQQyqeGKOUdz                                                                      
    100 rows selected.
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST')) ;
    PLAN_TABLE_OUTPUT                                                                                                                                                         
    SQL_ID  7hvf1zvsvfhdp, child number 0                                                                                                                                     
    select /*+ use_nl(c) leading(p) gather_plan_statistics */ * from p , c where p.id =                                                                                       
    c.id and c.id in (10)                                                                                                                                                     
    Plan hash value: 2133717140                                                                                                                                               
    | Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |                                                                          
    |   0 | SELECT STATEMENT              |      |      0 |        |      0 |00:00:00.01 |       0 |                                                                          
    |   1 |  TABLE ACCESS BY INDEX ROWID  | C    |      1 |      1 |    100 |00:00:00.01 |      11 |                                                                          
    |   2 |   NESTED LOOPS                |      |      1 |      1 |    102 |00:00:00.01 |       7 |                                                                          
    |   3 |    TABLE ACCESS BY INDEX ROWID| P    |      1 |      1 |      1 |00:00:00.01 |       4 |                                                                          
    |*  4 |     INDEX RANGE SCAN          | P_ID |      1 |      1 |      1 |00:00:00.01 |       3 |                                                                          
    |*  5 |    INDEX RANGE SCAN           | C_ID |      1 |      1 |    100 |00:00:00.01 |       3 |                                                                          
    Predicate Information (identified by operation id):                                                                                                                       
       4 - access("P"."ID"=10)                                                                                                                                                
       5 - access("C"."ID"=10)                                                                                                                                                
    24 rows selected.
    SQL> drop table p purge ;
    Table dropped.
    SQL> drop table c purge ;
    Table dropped.
    SQL> spool off

  • Test case upload in solar02 transaction using function module

    I have a requirement to upload the test case and test case name in transaction SOLAR02.
    I want the name of a function module that will take the project name, business scenarios, business processes,
    test case type, test case and test case name as input.

    Hello Vinod, see the following code (I assume you have item data in columns A and B starting from row 4, and header data in row 2):
        Set rfcctl = CreateObject("sap.functions")
        Set conn = rfcctl.Connection
        conn.Client = "<client>"
        conn.hostname = "<server>"
        conn.user = "<username>"
        conn.Language = "<lang>"
        conn.password = "<password>"
        conn.SystemNumber = "<system number>"
        If conn.Logon(0, True) Then
           Set rfc = rfcctl.Add("PROCESS_MESS_UPLOAD")
           Set item = rfc.Tables("MSHD").Rows.Add   ' header row
           item("WERK") = Range("A2").Value
           item("MSCLA") = Range("B2").Value
           item("SEDAT") = Range("C2").Value
           'add all necessary table columns
            i = 4
            While Range("A" & i).Value <> ""
                Set item = rfc.Tables("MSEL").Rows.Add
                item("ATNAM") = Range("A" & i).Value
                item("ATWRT") = Range("B" & i).Value
                'add all necessary table columns
                i = i + 1
            Wend
            If rfc.Call Then
              'CHECK FOR SUCCESS OR ERRORS
            Else
                MsgBox "Call error", vbOKOnly
                Exit Sub
            end if
        Else
            MsgBox "Logon error", vbOKOnly
            Exit Sub
        End If
        Set rfcctl = Nothing
        Set conn = Nothing
    Note: I haven't used this FM before, so I'm guessing which table fields may be useful to you
    Cheers
    Michael

  • How to change the status of test cases in Test Plan from Design to Ready using Excel VBA

    HI,
    How to change the status of test cases in Test Plan from Design to Ready using Excel VBA

    Thanks Florin,
    Your piece of code helped a lot; it was very helpful in changing the status of the work item to "READY" for all users of the work item.
    Points have been rewarded for your help.
    Process: We achieved this using Work Item Exits, with the "AFTER_EXECUTION" method.
    Note: The exit is executed only if the "exit_cancelled" statement is present/used in the work item method; if not, control never reaches the exit code. I'm unable to find the reason for this. Florin, can you please explain this point?
    Please check the link for adding the code in Work Item Exits.
    http://wiki.sdn.sap.com/wiki/display/ABAP/ProgramExitsIn+Workflow
    Please find the Code:
    method IF_SWF_IFS_WORKITEM_EXIT~EVENT_RAISED.
    * Get the context of the workitem
      me->wi_context = im_workitem_context.
    * After execution of the workitem call the method AFTER_EXECUTION
      if im_event_name eq swrco_event_after_execution.
        me->after_execution( ).
      endif.
    endmethod.
    METHOD AFTER_EXECUTION.
    * This method acts as the Event Handler for SWRCO_EVENT_AFTER_EXECUTION
      DATA: LCL_L_WID TYPE SWW_WIID,
            L_STATUS TYPE SWR_WISTAT-STATUS,
            L_NEW_STATUS  TYPE SWR_WISTAT,
            L_SWR_MESSAG  TYPE STANDARD TABLE OF SWR_MESSAG,
            L_SWR_MSTRUC  TYPE STANDARD TABLE OF SWR_MSTRUC.
    * Get work item
      CALL METHOD WI_CONTEXT->GET_WORKITEM_ID
        RECEIVING
          RE_WORKITEM = LCL_L_WID.
      L_STATUS = 'READY'.
      CALL FUNCTION 'SAP_WAPI_SET_WORKITEM_STATUS'
        EXPORTING
          WORKITEM_ID    = LCL_L_WID
          STATUS         = L_STATUS
          USER           = SY-UNAME
          LANGUAGE       = SY-LANGU
          DO_COMMIT      = 'X'
        IMPORTING
          NEW_STATUS     = L_NEW_STATUS
         RETURN_CODE    = SY-SUBRC
        TABLES
          MESSAGE_LINES  = L_SWR_MESSAG
          MESSAGE_STRUCT = L_SWR_MSTRUC.
      IF SY-SUBRC EQ 0.
      ENDIF.
    ENDMETHOD.
    Thank You Once Again,
    Ajay Kumar Chippa

  • How to copy test cases from one project to another?

    Currently we have several projects within the Oracle Test Manager suite, with test cases in each of them. I need to be able to copy some of the test cases to other projects, or at least select them to run under a new project instead of retyping them every time. Is there a way to do this? I see that I can copy and paste within a project, but I have not been able to find a way to do it across projects. Maybe there is a way to export and then import? Any ideas would be helpful.
    Thanks

    Hi
    You can always export a Test Summary table report to XLS and import it into the new project; however, that will not give you results or test steps, but at least it is a start.
    Also, there is something called Data Links; have a look at the help file. You may be able to use that when creating test cases: you could create the test case in CSV first and then upload it to multiple projects using the data link.
    Hope this helps
    Alex

  • [svn] 889: Add test case for BLZ-82 where HttpService should return multiple headers with the same name .

    Revision: 889
    Author: [email protected]
    Date: 2008-03-21 13:08:05 -0700 (Fri, 21 Mar 2008)
    Log Message:
    Add test case for BLZ-82 where HttpService should return multiple headers with the same name.
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-82
    Added Paths:
    blazeds/trunk/qa/apps/qa-regress/remote/MultipleHeadersTest.jsp
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/proxyService/httpservice/MultiHeaderTest.mxml

    Hi again,
    this may be old news to some people, but I just realized we can have the desired benefits I originally listed (encapsulation, reuse, maintainability, security) TODAY by using pipelined functions and using the table() function in Apex report region queries.
    So the report query basically becomes, for example (if get_employees is a pipelined function)
    select * from table(my_package.get_employees(:p1_deptno))
    The only downside compared to a (weakly typed) sys_refcursor is that you have to define the type you are returning in your package spec (or as an SQL type). So it's a bit more coding, but it's still worth it for the other benefits it provides.
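    For anyone who hasn't built one before, here is a minimal sketch of what such a pipelined function could look like (the package name matches the example query above, but the EMP table, its columns and the p_deptno parameter are just placeholders, not something from the original post):
    create or replace package my_package as
      type emp_rec is record (empno number, ename varchar2(30));
      type emp_tab is table of emp_rec;
      function get_employees(p_deptno in number) return emp_tab pipelined;
    end my_package;
    /
    create or replace package body my_package as
      function get_employees(p_deptno in number) return emp_tab pipelined is
        l_rec emp_rec;
      begin
        -- any procedural logic, security checks, etc. can live here
        for r in (select empno, ename from emp where deptno = p_deptno) loop
          l_rec.empno := r.empno;
          l_rec.ename := r.ename;
          pipe row (l_rec);
        end loop;
        return;
      end get_employees;
    end my_package;
    /
    The Apex report region query then stays exactly as shown above: select * from table(my_package.get_employees(:p1_deptno)).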
    I like Apex even better now! :-)
    - Morten

  • Interesting test case

    Hi,
    I played a little on my test machine and got interesting results, so if someone could explain to me what happens here I would be grateful.
    It is obvious that some data corruption happened, but it is still an interesting situation.
    OS: Linux 32bit
    Oracle: 10.2.0.2.0
    TEST CASE:
    1. Created tablespace and one table in that tablespace:
    SQL> create tablespace test_tbs datafile '/oradata/tbs01.dbf' size 75M autoextend on next 10M maxsize 512M;
    Tablespace created.
    SQL> create table objtab tablespace test_tbs as select * from dba_objects where 1=2;
    Table created.
    2. Made two more copies of tbs01.dbf datafile:
    SQL> !cp tbs01.dbf tbs02.dbf
    SQL> !cp tbs01.dbf tbs03.dbf
    3. Insert some rows into table objtab:
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> /
    50417 rows created.
    SQL> /
    50417 rows created.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from objtab;
    COUNT(*)
    151251
    4. Deleted tbs01.dbf:
    SQL> !rm tbs01.dbf
    5. Insert still works:
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> commit;
    Commit complete.
    6. Renamed tbs02.dbf to tbs01.dbf:
    SQL> !mv tbs02.dbf tbs01.dbf
    7. Inserted new rows:
    SQL> select count(*) from objtab;
    COUNT(*)
    302502
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> commit;
    Commit complete.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> commit;
    Commit complete.
    8. Check size of datafile:
    SQL> !du -hs tbs01.dbf
    *96M tbs01.dbf*
    SQL> select count(*) from objtab;
    COUNT(*)
    756255
    9. Deleted datafile tbs01.dbf and renamed tbs03.dbf to tbs01.dbf:
    SQL> !rm tbs01.dbf
    SQL> !mv tbs03.dbf tbs01.dbf
    10. Inserted more rows and executed "alter system checkpoint":
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> insert into objtab select * from dba_objects;
    50417 rows created.
    SQL> commit;
    Commit complete.
    SQL> alter system checkpoint;
    System altered.
    SQL> select count(*) from objtab;
    COUNT(*)
    907506
    11. When I check the size of tbs01.dbf it is smaller than before, despite the additional rows I inserted. How come? Where are all these rows stored?
    SQL> !du -hs tbs01.dbf
    *86M tbs01.dbf*
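    (A side note on step 11: on Linux, rm only removes the directory entry; the Oracle background processes still hold the original tbs01.dbf open, so the new rows are most likely still going into that unlinked inode, while du above is measuring the renamed copy. A rough way to confirm this from the same session - the PID below is of course hypothetical:)
    SQL> !ps -ef | grep ora_dbw0
    (note the database writer's PID, say 12345)
    SQL> !ls -l /proc/12345/fd | grep tbs01
    (the originally opened datafile should still be listed here, marked "(deleted)")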
    12. Now I try to take the tablespace offline and get errors in the alert log:
    SQL> alter tablespace test_tbs offline normal;
    alter tablespace test_tbs offline normal
    ERROR at line 1:
    ORA-00603: ORACLE server session terminated by fatal error
    ALERT LOG
    Errors in file /oracle/admin/um/udump/um_ora_501.trc:
    ORA-00600: internal error code, arguments: [krhpfh_03-1208], [fno =], [6], [fecpc =], [4], [fhcpc =], [3], []
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-01122: database file 6 failed verification check
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-01208: data file is an old version - not accessing current version
    Fri May 29 09:01:31 2009
    ORA-600 signalled during: alter tablespace test_tbs offline normal...
    Fri May 29 09:01:31 2009
    Errors in file /oracle/admin/um/udump/um_ora_501.trc:
    ORA-00600: internal error code, arguments: [krhpfh_03-1208], [fno =], [6], [fecpc =], [4], [fhcpc =], [3], []
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-00600: internal error code, arguments: [krhpfh_03-1208], [fno =], [6], [fecpc =], [4], [fhcpc =], [3], []
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-01122: database file 6 failed verification check
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-01208: data file is an old version - not accessing current version
    Fri May 29 09:01:37 2009
    Errors in file /oracle/admin/um/udump/um_ora_501.trc:
    ORA-00600: internal error code, arguments: [krhpfh_03-1208], [fno =], [6], [fecpc =], [4], [fhcpc =], [3], []
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-00600: internal error code, arguments: [krhpfh_03-1208], [fno =], [6], [fecpc =], [4], [fhcpc =], [3], []
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-00600: internal error code, arguments: [krhpfh_03-1208], [fno =], [6], [fecpc =], [4], [fhcpc =], [3], []
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-01122: database file 6 failed verification check
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-01208: data file is an old version - not accessing current version
    Fri May 29 09:01:43 2009
    Errors in file /oracle/admin/um/udump/um_ora_501.trc:
    ORA-00600: internal error code, arguments: [krhpfh_03-1208], [fno =], [6], [fecpc =], [4], [fhcpc =], [3], []
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-00600: internal error code, arguments: [krhpfh_03-1208], [fno =], [6], [fecpc =], [4], [fhcpc =], [3], []
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-00600: internal error code, arguments: [krhpfh_03-1208], [fno =], [6], [fecpc =], [4], [fhcpc =], [3], []
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-01122: database file 6 failed verification check
    ORA-01110: data file 6: '/oradata/tbs01.dbf'
    ORA-01208: data file is an old version - not accessing current version
    13. One more check:
    SQL> select tablespace_name, status from dba_tablespaces;
    TABLESPACE_NAME STATUS
    SYSTEM ONLINE
    SYSAUX ONLINE
    USERS ONLINE
    UNDOTBS2 ONLINE
    TMP ONLINE
    TEST_TBS ONLINE
    7 rows selected.
    SQL> select count(*) from objtab;
    COUNT(*)
    907506
    14. Restart database:
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup;
    ORACLE instance started.
    Total System Global Area 1224736768 bytes
    Fixed Size 1267188 bytes
    Variable Size 1006635532 bytes
    Database Buffers 201326592 bytes
    Redo Buffers 15507456 bytes
    Database mounted.
    Database opened.
    15. Trying to count(*) from the table says that the object no longer exists:
    SQL> select count(*) from objtab;
    select count(*) from objtab
    ERROR at line 1:
    ORA-08103: object no longer exists
    SQL> select tablespace_name, status from dba_tablespaces;
    TABLESPACE_NAME STATUS
    SYSTEM ONLINE
    SYSAUX ONLINE
    USERS ONLINE
    UNDOTBS2 ONLINE
    TMP ONLINE
    TEST_TBS ONLINE
    7 rows selected.
    16. If the object does not exist, how come I get these results (probably leftovers in the data dictionary)?
    SQL> desc objtab;
    Name Null? Type
    OWNER VARCHAR2(30)
    OBJECT_NAME VARCHAR2(128)
    SUBOBJECT_NAME VARCHAR2(30)
    OBJECT_ID NUMBER
    DATA_OBJECT_ID NUMBER
    OBJECT_TYPE VARCHAR2(19)
    CREATED DATE
    LAST_DDL_TIME DATE
    TIMESTAMP VARCHAR2(19)
    STATUS VARCHAR2(7)
    TEMPORARY VARCHAR2(1)
    GENERATED VARCHAR2(1)
    SECONDARY VARCHAR2(1)
    SQL> create table objtab tablespace test_tbs as select * from dba_objects;
    create table objtab tablespace test_tbs as select * from dba_objects
    ERROR at line 1:
    ORA-00955: name is already used by an existing object
    17. Trying to drop the object (can't drop it):
    SQL> drop table objtab;
    drop table objtab
    ERROR at line 1:
    ORA-08103: object no longer exists
    18. Clean up:
    SQL> drop tablespace test_tbs including contents and datafiles;
    Tablespace dropped.
    Best Regards,
    Marko

    Hi Uwe,
    as I said before, my intention was to understand the behavior of Oracle and Linux in a scenario like this.
    I thought about what would happen if some Linux admin moves a live datafile to another location by mistake and, after 5-10 minutes, when he realizes his mistake, moves the datafile back to its old (original) location.
    Will Oracle report any errors in the alert log? (In my test case I didn't receive any message in the alert log.)
    How will I know (as a DBA) what was done with this datafile if the Linux admin does not say anything to me?
    Is any damage done to the database? (Probably.)
    etc...
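    For what it is worth, one way a DBA can spot this kind of swap after the fact (just a sketch, not a complete health check) is to compare the checkpoint information the control file holds for the file with what is recorded in the datafile header itself - the ORA-01208 above is essentially Oracle detecting exactly that mismatch:
    SQL> select file#, checkpoint_change# from v$datafile where file# = 6;
    SQL> select file#, checkpoint_change# from v$datafile_header where file# = 6;
    If the header value lags behind the control file entry, the file on disk is an old copy. DBVERIFY (dbv file=/oradata/tbs01.dbf) can also be run against the file to check it block by block.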
    When I am on my test machine I like to do all kinds of stuff and try anything that comes to mind. It isn't important to me whether this scenario has any connection to real-world problems. I enjoy doing it, so I like to spend some of my time on these test cases.
    Anyway thanks for your comment.
    Regards,
    Marko

  • Report available for test cases (manual test script files uploaded)?

    We upload our test script files to the "Test Case" tab and assign each one a status. We are hoping to have a report that lists all test case files and their statuses.
    In SOLAR_EVAL, there is a standard SAP report for test cases. However, the field "Status" for the test case file is not included in this report.
    Is there any report available that would give us this status? If not, which table does this data come from? Is there any way to build a simple SQVI query to extract the list of test cases and their statuses?
    Any help is highly appreciated!

    Hi Jo,
    It's not entirely clear whether you are looking for 'Status' values of the test case document itself OR for 'Status' values of the test(s) conducted on such test case documents.
    I believe the following two SOLAR_EVAL reports should cover either case:
    (a) SOLAR_EVAL -> Project -> Configuration -> Assignments -> Documentation (the program called is SAPLSPROJECT_SOLAR_DOC_EVAL_IM); tick only 'Test Cases' and switch off the other tabs
    (b) SOLAR_EVAL -> Project -> Test -> Test Plan Status Analysis
    Best regards,
    Srini

  • Test case: unusual locking problem or expected behaviour?

    I have tried the following test case on both 9.0.1 and 10.2.0. The problem I am seeing here is that a table receives an exclusive lock that doesn't get trapped by a FOR UPDATE NOWAIT condition.
    Test case setup
    create table x (
    f1 number not null,
    f2 varchar2(100) );
    create table y (
    f1 number not null,
    f2 number not null,
    f3 number,
    f4 varchar2(100) );
    alter table x add constraint pk_x
    primary key (f1);
    alter table y add constraint pk_y
    primary key (f1);
    /*** This is a self-referential integrity check ****/
    alter table y add constraint fk_y
    foreign key (f3)
    references y ( f1 );
    create or replace trigger trig_y
    before insert on y
    for each row
    begin
    update x
    set f2 = 'trig test ' || to_char(sysdate,'ddmmyyyy hhmiss')
    where f1 = :new.f2;
    end;
    insert into x values (1,'test 1');
    insert into x values (2,'test 2');
    insert into x values (3,'test 3');
    insert into y values (2,2,2,'y test 2');
    insert into y values (3,3,3,'y test 3');
    commit;
    Test case actions
    This requires 3 independent sessions to be started.
    * SESSION 1 *
    select * from x
    where f1 = 1
    for update nowait;
    * SESSION 2 *
    insert into y values (1,1,1,'test');
    -- This session waits because of the trigger that is attempting to update the
    -- same row that is locked in session 1.
    * SESSION 3 *
    select *
    from y
    where f1 = 2
    for update of f1 nowait;
    -- The row lock succeeds.
    -- Now update the primary key column in Y.
    update y
    set f1 = 2
    where f1 = 2;
    -- This update statement waits because of a lock. Why is this as the row
    -- has been successfully locked by the FOR UPDATE?
    -- Remove the foreign key constraint from table Y and try again. This time
    -- the update will not wait but will complete successfully.
    Is this expected behaviour, or a bug in self-referential integrity checks, or in
    all foreign keys? The reason this came about in our application is that Forms attempts to update every column in a block regardless of whether all values in the block have changed. This includes the primary key columns.
    We have worked around this issue for now by setting the 'update changed columns only' property on blocks in the forms.

    No. All you are doing there is stating your intention of later updating the selected, locked rows.
    You may not even update any rows, if the program logic decides that way. Your actual update is the only point where the foreign key validation will be applied. It cannot be done at the time of the FOR UPDATE select, since the database does not know what the new value is going to be when you eventually update, so it is not possible to check.
    Also, note that your statement did NOT fail. It was just trying to validate your foreign key, and in that process it wants to make sure no one else makes changes. The statement was waiting for the resource to become free; it DID NOT FAIL (no error was raised).
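    If you want to see who is actually holding things up while session 3 waits, one quick check on 10g and later (not part of the original test case) is the blocking_session column:
    SQL> select sid, blocking_session, event from v$session where blocking_session is not null;
    This lists every waiting session together with the SID it is blocked behind.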

  • Test cases in Solution manager

    I have uploaded two test case documents under 'Test case' tab for a Business Process in the transaction solar02.
    But those two test documents do not appear in STWB_WORK for that business scenario.
    Any advice?

    Hi,
    Perhaps this might be of interest to you, if it happens that an inconsistency is affecting the availability of your test documents.
    You can use the program safely, as it does not do any harm to your projects (it does not delete anything; it only adds some entries to a table if, under any circumstances, they are missing).
    The use of the program is quite simple.
    Start transaction SE38, enter the program GHE_CHECK_TTREEI_ENTRIES and hit Execute (F8).
    In the next screen enter the project you would like to check, using the F4 help.
    There is also a field TEST. When you enter 'X' here, no modification will be done; only the check for missing entries is performed.
    If the program finds any missing entries, simply re-run the program without the 'X' in the TEST field and the program corrects the inconsistency.
    Hope this helps.
    Cheers
    SH

  • Changing template for Test Case Description

    Hi All,
    Greetings...
    In one of my projects, I want to customize the template provided by SAP for the test case description.
    We can add a test case description while creating a Manual Test Case (Tx: STWB_TC), a Test Script and a Test Configuration (Tx: SECATT).
    I want to replace the default template with a custom template.
    We tried to update it in SOLAR_PROJECT_ADMIN --> select Project --> Project Standard --> Project Templates --> Documentation Types: we selected TD1, Test Case Description, in the Complete Directory of Document Types and, using Document Template --> Change and Import, we tried to update the template. In the document type it has been changed, but the change is not reflected in the other transactions.
    Please help me to update the test case description.
    Thanks in Advance
    Saman

    The template is stored in the table DOKTL with documentation type TX, object TEMPLATE_CATE, language EN.
    Update the last version...
    That's it.

  • Data Guard Test cases

    Hi All, we are in the process of implementing a Data Guard (physical standby) setup. Everything has been done. Now we need some good test cases to test our setup. Can you please provide some input on testing this?

    A simple test would be to create a few test tables with some data on the primary and see whether they are applied at your DR site by Data Guard.
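    A minimal sketch of that kind of smoke test (the table name is just an example). On the primary:
    SQL> create table dg_test (id number, created date);
    SQL> insert into dg_test values (1, sysdate);
    SQL> commit;
    SQL> alter system switch logfile;
    Then on the physical standby, pause managed recovery and open it read only to verify the rows arrived:
    SQL> alter database recover managed standby database cancel;
    SQL> alter database open read only;
    SQL> select * from dg_test;
    Afterwards restart recovery with "alter database recover managed standby database disconnect from session".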
    For testing failover and failback, or switchover and switchback, you would need to go through the Oracle documentation or refer to the following link.
    http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_SwitchoverFailoverBestPractices.pdf
    sbs

  • Please help with better sql plan - full test case

    Hello,
    here is my test case:
    SQL> create table ib_auth_devices(dv_id number(12) primary key , dv_cl_id number(12));
    Table created.
    SQL> create table ib_clients (cl_id number(12) primary key);
    Table created.
    SQL> alter table ib_auth_devices add constraint fk1 foreign key(dv_cl_id) references ib_clients(cl_id);
    Table altered.
    SQL> create table ib_tokens (to_dv_id number(12) primary key);
    Table created.
    SQL> alter table ib_tokens add constraint to_dv_id foreign key(to_dv_id) references ib_auth_devices(dv_id);
    Table altered.
    SQL> create table ib_auth_cards(au_dv_id number(12) primary key);
    Table created.
    SQL>  alter table ib_auth_cards add constraint  au_dv_id foreign key(au_dv_id) references ib_auth_devices(dv_id);
    Table altered.
    SQL> insert into ib_clients values(1);
    1 row created.
    SQL> insert into ib_clients values(2);
    1 row created.
    SQL> insert into ib_clients values(3);
    1 row created.
    SQL> insert into ib_clients values(4);
    1 row created.
    SQL> insert into ib_clients values(5);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> insert into ib_auth_devices values(1 , 1);
    1 row created.
    SQL> insert into ib_auth_devices values(2 , 2);
    1 row created.
    SQL>  insert into ib_auth_devices values(3,3);
    1 row created.
    SQL> insert into ib_auth_devices values(4,4);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> insert into ib_tokens values(1);
    1 row created.
    SQL> insert into ib_tokens values(2);
    1 row created.
    SQL> insert into ib_tokens values(3);
    1 row created.
    SQL> insert into ib_auth_cards values(1);
    1 row created.
    SQL> insert into ib_auth_cards values(2);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select cl_id from ib_clients;
         CL_ID
             1
             2
             3
             4
             5
    SQL> select cl_id from ib_clients cli , ib_auth_devices ad
      2  where
      3  cli.cl_id = ad.dv_cl_id;
         CL_ID
             1
             2
             3
             4
    SQL> select * from ib_tokens;
      TO_DV_ID
             1
             2
             3
    SQL> select * from ib_auth_cards;
      AU_DV_ID
             1
             2
    SQL> select * from ib_clients;
         CL_ID
             1
             2
             3
             4
             5
    SQL> select * from ib_auth_devices;
         DV_ID   DV_CL_ID
             1          1
             2          2
             3          3
             4          4
    SQL> exec dbms_stats.gather_table_stats(user , 'IB_AUTH_DEVICES' , cascade => true);
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user , 'IB_TOKENS'  , cascade => true);
    PL/SQL procedure successfully completed.
    SQL>  exec dbms_stats.gather_table_stats(user , 'IB_CLIENTS' , cascade => true);
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user , 'IB_AUTH_CARDS' ,  cascade => true);
    PL/SQL procedure successfully completed.
    SQL> l
      1  select cli.cl_id from ib_clients cli , ib_auth_devices ad,
      2          (select dv_cl_id as cl_id
      3            from ib_auth_cards, ib_auth_devices
      4            where
      5               au_dv_id = dv_id
      6            ) cards,
      7  (       select dv_cl_id as cl_id
      8            from ib_tokens, ib_auth_devices
      9            where
    10               to_dv_id = dv_id
    11           ) tokens
    12  where
    13  cli.cl_id = ad.dv_cl_id
    14  and cards.cl_id(+)= cli.cl_id
    15  and cards.cl_id is null
    16  and tokens.cl_id(+)= cli.cl_id
    17* and tokens.cl_id is null
    SQL> r
      1  select cli.cl_id from ib_clients cli , ib_auth_devices ad,
      2          (select dv_cl_id as cl_id
      3            from ib_auth_cards, ib_auth_devices
      4            where
      5               au_dv_id = dv_id
      6            ) cards,
      7  (       select dv_cl_id as cl_id
      8            from ib_tokens, ib_auth_devices
      9            where
    10               to_dv_id = dv_id
    11           ) tokens
    12  where
    13  cli.cl_id = ad.dv_cl_id
    14  and cards.cl_id(+)= cli.cl_id
    15  and cards.cl_id is null
    16  and tokens.cl_id(+)= cli.cl_id
    17* and tokens.cl_id is null
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=4 Bytes=128)
       1    0   FILTER
       2    1     HASH JOIN (OUTER)
       3    2       FILTER
       4    3         HASH JOIN (OUTER)
       5    4           NESTED LOOPS (Cost=1 Card=4 Bytes=24)
       6    5             TABLE ACCESS (FULL) OF 'IB_AUTH_DEVICES' (Cost=1
               Card=4 Bytes=12)
       7    5             INDEX (UNIQUE SCAN) OF 'SYS_C008299' (UNIQUE)
       8    4           VIEW (Cost=1 Card=2 Bytes=26)
       9    8             NESTED LOOPS (Cost=1 Card=2 Bytes=18)
      10    9               TABLE ACCESS (FULL) OF 'IB_AUTH_DEVICES' (Cost
              =1 Card=4 Bytes=24)
      11    9               INDEX (UNIQUE SCAN) OF 'SYS_C008303' (UNIQUE)
      12    2       VIEW (Cost=1 Card=3 Bytes=39)
      13   12         NESTED LOOPS (Cost=1 Card=3 Bytes=27)
      14   13           TABLE ACCESS (FULL) OF 'IB_AUTH_DEVICES' (Cost=1 C
              ard=4 Bytes=24)
      15   13           INDEX (UNIQUE SCAN) OF 'SYS_C008301' (UNIQUE)
    Statistics
              0  recursive calls
             12  db block gets
              9  consistent gets
              0  physical reads
              0  redo size
            364  bytes sent via SQL*Net to client
            431  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              8  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Any ideas about rewriting this query to achieve better performance?
    Oracle version 8.1.7
    Best Regards.
    Grzegorz

    The answer to this question totally depends on the real volumes you have in your tables. Your test case is probably not showing us these volumes.
    However, I see some needless table accesses, so it's probably safe to conclude that this rewrite will speed something up:
    SQL> create table ib_auth_devices(dv_id number(12) primary key , dv_cl_id number(12));
    Table created.
    SQL> create table ib_clients (cl_id number(12) primary key);
    Table created.
    SQL> alter table ib_auth_devices add constraint fk1 foreign key(dv_cl_id) references ib_clients(cl_id);
    Table altered.
    SQL> create table ib_tokens (to_dv_id number(12) primary key);
    Table created.
    SQL> alter table ib_tokens add constraint to_dv_id foreign key(to_dv_id) references ib_auth_devices(dv_id);
    Table altered.
    SQL> create table ib_auth_cards(au_dv_id number(12) primary key);
    Table created.
    SQL> alter table ib_auth_cards add constraint  au_dv_id foreign key(au_dv_id) references ib_auth_devices(dv_id);
    Table altered.
    SQL> insert into ib_clients values(1);
    1 row created.
    SQL> insert into ib_clients values(2);
    1 row created.
    SQL> insert into ib_clients values(3);
    1 row created.
    SQL> insert into ib_clients values(4);
    1 row created.
    SQL> insert into ib_clients values(5);
    1 row created.
    SQL> insert into ib_auth_devices values(1 , 1);
    1 row created.
    SQL> insert into ib_auth_devices values(2 , 2);
    1 row created.
    SQL> insert into ib_auth_devices values(3,3);
    1 row created.
    SQL> insert into ib_auth_devices values(4,4);
    1 row created.
    SQL> insert into ib_tokens values(1);
    1 row created.
    SQL> insert into ib_tokens values(2);
    1 row created.
    SQL> insert into ib_tokens values(3);
    1 row created.
    SQL> insert into ib_auth_cards values(1);
    1 row created.
    SQL> insert into ib_auth_cards values(2);
    1 row created.
    SQL> exec dbms_stats.gather_table_stats(user , 'IB_AUTH_DEVICES' , cascade => true);
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user , 'IB_TOKENS'  , cascade => true);
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user , 'IB_CLIENTS' , cascade => true);
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user , 'IB_AUTH_CARDS' ,  cascade => true);
    PL/SQL procedure successfully completed.
    SQL> set autotrace on explain
    SQL> select cli.cl_id
      2    from ib_clients cli
      3       , ib_auth_devices ad
      4       , ( select dv_cl_id as cl_id
      5             from ib_auth_cards
      6                , ib_auth_devices
      7            where au_dv_id = dv_id
      8         ) cards
      9       , ( select dv_cl_id as cl_id
    10             from ib_tokens
    11                , ib_auth_devices
    12            where to_dv_id = dv_id
    13         ) tokens
    14   where cli.cl_id = ad.dv_cl_id
    15     and cards.cl_id(+)= cli.cl_id
    16     and cards.cl_id is null
    17     and tokens.cl_id(+)= cli.cl_id
    18     and tokens.cl_id is null
    19  /
                                     CL_ID
                                         4
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=10 Card=4 Bytes=128)
       1    0   FILTER
       2    1     HASH JOIN (OUTER)
       3    2       FILTER
       4    3         HASH JOIN (OUTER)
       5    4           NESTED LOOPS (Cost=4 Card=4 Bytes=24)
       6    5             TABLE ACCESS (FULL) OF 'IB_AUTH_DEVICES' (Cost=3 Card=4 Bytes=12)
       7    5             INDEX (UNIQUE SCAN) OF 'SYS_C001354381' (UNIQUE) (Cost=1 Card=1 Bytes=3)
       8    4           VIEW (Cost=3 Card=2 Bytes=26)
       9    8             NESTED LOOPS (Cost=3 Card=2 Bytes=18)
      10    9               INDEX (FULL SCAN) OF 'SYS_C001354385' (UNIQUE) (Cost=2 Card=2 Bytes=6)
      11    9               TABLE ACCESS (BY INDEX ROWID) OF 'IB_AUTH_DEVICES' (Cost=2 Card=1 Bytes=6)
      12   11                 INDEX (UNIQUE SCAN) OF 'SYS_C001354380' (UNIQUE) (Cost=1 Card=1)
      13    2       VIEW (Cost=3 Card=3 Bytes=39)
      14   13         NESTED LOOPS (Cost=3 Card=3 Bytes=27)
      15   14           INDEX (FULL SCAN) OF 'SYS_C001354383' (UNIQUE) (Cost=2 Card=3 Bytes=9)
      16   14           TABLE ACCESS (BY INDEX ROWID) OF 'IB_AUTH_DEVICES' (Cost=2 Card=1 Bytes=6)
      17   16             INDEX (UNIQUE SCAN) OF 'SYS_C001354380' (UNIQUE) (Cost=1 Card=1)
    SQL> select cli.cl_id
      2    from ib_clients cli
      3       , ib_auth_devices ad
      4   where cli.cl_id = ad.dv_cl_id
      5     and not exists
      6         ( select 'dummy'
      7             from ib_auth_cards
      8            where au_dv_id = ad.dv_id
      9         )
    10     and not exists
    11         ( select 'dummy'
    12             from ib_tokens
    13            where to_dv_id = ad.dv_id
    14         )
    15  /
                                     CL_ID
                                         4
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=8 Card=1 Bytes=9)
       1    0   FILTER
       2    1     NESTED LOOPS (Cost=4 Card=1 Bytes=9)
       3    2       TABLE ACCESS (FULL) OF 'IB_AUTH_DEVICES' (Cost=3 Card=1 Bytes=6)
       4    2       INDEX (UNIQUE SCAN) OF 'SYS_C001354381' (UNIQUE) (Cost=1 Card=1 Bytes=3)
       5    1     INDEX (UNIQUE SCAN) OF 'SYS_C001354385' (UNIQUE) (Cost=1 Card=1 Bytes=3)
       6    1     INDEX (UNIQUE SCAN) OF 'SYS_C001354383' (UNIQUE) (Cost=1 Card=1 Bytes=3)
    Regards,
    Rob.

  • How to select test cases efficiently for a test package?

    Dear experts,
    I would like to ask whether you have found a way to administer the assignment of test cases to test packages and of test packages to testers. For regression tests this assignment remains relatively stable (often the key users).
    Do you know a way to administer this within Solution Manager, or do you recommend the old Excel table?

    Hello Ragu!
    Thank you for your answer! My question was about the organisational side; I didn't make that clear.
    I know how to generate a test package and how to assign a tester to a test package, but where do I get the information about which test packages I need and which tester to assign to which package?
    Maybe a good option is to assign the tester as a team member at the process step level. The assignments can then be listed with SOLAR_EVAL using "Assignments / Test Cases" with the option "Display Team Members". This list would help to generate the test packages. The selection of the test cases for the test package has to be done manually because there seems to be no filter for team members.
    Regards,
    Martin
