Database cursor in R/3

Can anyone tell me what a database cursor is?
I have come across the statement that "WAIT must not be used between Open SQL statements that open or close a database cursor."
Can someone explain this to me?

Hi,
Please follow the link below:
http://help.sap.com/saphelp_nw04/helpdata/EN/c6/13a228e9c5ca469fe7065387dc87be/content.htm
Thanks,
Neeraj

Similar Messages

  • Is the database cursor not working correctly?

    Hello, all
    I ran into a problem when retrieving all the key/data pairs from a database by using a cursor. After some debugging, I realized that the problem is due to the reuse of a DatabaseEntry object to get data from multiple entries. For example, suppose there are two records in a database:
    key | data
    1 "aaaaaaaa"
    2 "bbbb"
    If I want to know the length of the data for each key/data pair, I expect the result to be:
    the length of key 1's data is 8
    the length of key 2's data is 5
    However, when I use a cursor to retrieve the result back, it returns the following result:
    the length of key 1's data is 8
    the length of key 2's data is 8
    The code to reproduce the incorrect result is attached below. One way to fix this code is to create a new DatabaseEntry object within the loop. But the code below comes from the Berkeley DB documentation (http://www.oracle.com/technology/documentation/berkeley-db/db/gsg/JAVA/Positioning.html), so I think it is worth reporting this problem.
    OS: Ubuntu
    DB version: 4.7
    Programming language: Java
    import com.sleepycat.db.*;
    import java.io.*;
    public class TestDB {
         public static final int byteArrayToInt(byte[] b) {
              return (b[0] << 24)
                   + ((b[1] & 0xFF) << 16)
                   + ((b[2] & 0xFF) << 8)
                   + (b[3] & 0xFF);
         }
         public static final byte[] intToByteArray(int value) {
              return new byte[] {
                        (byte)(value >>> 24), (byte)(value >> 16 & 0xff), (byte)(value >> 8 & 0xff), (byte)(value & 0xff) };
         }
         public static void main(String[] args) {
              Database myDatabase = null;
              Cursor cursor = null;
              try {
                   // Open the database. Create it if it does not already exist.
                   DatabaseConfig dbConfig = new DatabaseConfig();
                   dbConfig.setAllowCreate(true);
                   dbConfig.setType(DatabaseType.HASH);
                   myDatabase = new Database("/tmp/cosmos/sampleDatabase.db",
                             null,
                             dbConfig);
                   // Insert two records whose data parts have different lengths.
                   int key1 = 0;
                   byte[] data1 = "aaaaaaaa".getBytes();
                   DatabaseEntry dbKey1 = new DatabaseEntry(intToByteArray(key1));
                   DatabaseEntry dbData1 = new DatabaseEntry(data1);
                   myDatabase.put(null, dbKey1, dbData1);
                   int key2 = 1;
                   byte[] data2 = "bbbbb".getBytes();
                   DatabaseEntry dbKey2 = new DatabaseEntry(intToByteArray(key2));
                   DatabaseEntry dbData2 = new DatabaseEntry(data2);
                   myDatabase.put(null, dbKey2, dbData2);
                   // Iterate over all records, reusing the same DatabaseEntry objects.
                   cursor = myDatabase.openCursor(null, null);
                   DatabaseEntry foundKey = new DatabaseEntry();
                   DatabaseEntry foundData = new DatabaseEntry();
                   while (cursor.getNext(foundKey, foundData, LockMode.DEFAULT) ==
                             OperationStatus.SUCCESS) {
                        int key = byteArrayToInt(foundKey.getData());
                        System.out.println("the length of key " + key + "'s data is "
                                  + foundData.getData().length);
                   }
              } catch (DatabaseException dbe) {
                   // Exception handling goes here
              } catch (FileNotFoundException fnfe) {
                   // Exception handling goes here
              } finally {
                   try {
                        cursor.close();
                        myDatabase.close();
                   } catch (DatabaseException e) {
                        e.printStackTrace();
                   }
              }
         }
    }
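    For reference, a minimal sketch of the alternative fix mentioned above (fresh entries allocated inside the loop of the same program); only the loop changes:
         // Alternative fix: allocate fresh DatabaseEntry objects on each pass,
         // so a previous (larger) backing array is never reused.
         while (true) {
              DatabaseEntry freshKey = new DatabaseEntry();
              DatabaseEntry freshData = new DatabaseEntry();
              if (cursor.getNext(freshKey, freshData, LockMode.DEFAULT)
                        != OperationStatus.SUCCESS) {
                   break;
              }
              int key = byteArrayToInt(freshKey.getData());
              System.out.println("the length of key " + key + "'s data is "
                        + freshData.getData().length);
         }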

    Hi,
    When a DatabaseEntry is reused during retrieval, we internally use realloc to reuse the memory, hence the behavior you've observed.
    You should use the DatabaseEntry.getSize() method to determine the byte size of the data array. This will give the correct results you're looking for, since after retrieving an entry we internally set the size of the retrieved DBT:
    System.out.println("the length of Key " + key + "'data is " + foundData.getSize());
    Regards,
    Andrei
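    Dropped into the loop of the program above, the corrected code looks like this (a sketch; only the length expression changes):
         // Reuse the entries, but report getSize(), which is set to the
         // size of the retrieved record on every successful call.
         while (cursor.getNext(foundKey, foundData, LockMode.DEFAULT) ==
                   OperationStatus.SUCCESS) {
              int key = byteArrayToInt(foundKey.getData());
              System.out.println("the length of key " + key + "'s data is "
                        + foundData.getSize());
         }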

  • How to use database cursor in ABAP

    Hi all,
    I want to establish a connection with another database system (Oracle) from SAP. Then I want to execute a cursor which is in a SQL package.
    The connection is made successfully, but I don't know how to execute the cursor from SAP.
    The SQL package's definition that I have is:
    The name of the package: pk_name
    The name of the cursor: cursor c_dos(dat varchar, num varchar) return T_TABLE.
    Thanks a lot for your help.

    Hi Sourav,
    Check the sample programs in SE38.
    In the program name field, enter BCALV* and press F4.
    BCALV_TEST_GRID_EVENTS
    It will give you a lot of example programs.
    Reward if found helpful.
    Regards,
    Boobalan Suburaj

  • JCo failing in keeping database CURSOR

    Hi All,
    I'm trying to fetch records from table BSID using FETCH / CURSOR, calling my RFC from Java code. But when it gets to the second iteration or call to the RFC, SAP loses track of the older CURSOR which the RFC opened in the first iteration. Can anybody tell me how to call my RFC so that I can continue my database access in SAP where I left off in the last iteration?
    Thanks in advance.
    Cheers!!!
    Kuldeep

    Include some of your code.
    I know that when you use java.sql.Connection with Statements and ResultSets, it acts the same way.
    If you are looping through one ResultSet, you cannot reuse the Statement that created the ResultSet.
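    A minimal sketch of that JDBC behavior (the in-memory H2 URL and table name are made up purely for illustration):
    import java.sql.*;

    public class StatementReuseDemo {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
                Statement stmt = conn.createStatement();
                stmt.execute("create table t(id int)");
                stmt.execute("insert into t values (1), (2)");
                ResultSet rs = stmt.executeQuery("select id from t");
                rs.next();
                // Re-executing the same Statement closes its open ResultSet,
                // much like the server-side CURSOR lost between RFC calls.
                stmt.executeQuery("select id from t");
                System.out.println(rs.isClosed()); // prints: true
            }
        }
    }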

  • Dynamic sql and cursors

    We are running an Oracle SQL procedure that uses a LOT of dynamic SQL. We are using a 3rd-party package (SQR) as a sort of shell to run the SQL procedure. The 3rd-party package passes an Oracle error back to us. This error says, in effect, that there are no inactive database cursors available and that the SQL program is too large to process. We conclude from this that we must increase one or more of the cursor parameters in init.ora (v$parameter). Is this the correct assumption? If not, does anyone know what we can do? We'd prefer not to break up the SQL procedure into smaller pieces.

    Increase the parameter for open cursors (OPEN_CURSORS).
    Check whether all cursors in your programs are closed in time, and if you are using ref cursors from front-ends (e.g. Java JDBC), make sure those front-ends close the ref cursors, too.
    If you want to decrease the size of procedures, get rid of comments, superfluous spaces, tabs, etc.
    Keep a commented version outside for documentation purposes.
    Hope this helps
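    A minimal sketch of closing a ref cursor promptly from a Java front-end (pkg.get_data is a hypothetical procedure returning a SYS_REFCURSOR; conn is an open Oracle Connection):
    import java.sql.*;
    import oracle.jdbc.OracleTypes;

    static void readAndCloseRefCursor(Connection conn) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("{ call pkg.get_data(?) }")) {
            cs.registerOutParameter(1, OracleTypes.CURSOR);
            cs.execute();
            try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                while (rs.next()) {
                    // process the row
                }
            } // closing the ResultSet here releases the server-side cursor
        }
    }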

  • Can I use Bulk Collect results as input parameter for another cursor

    MUSIC      ==> remote MUSIC_DB database; the MUSIC table has 60 million rows
    PRICE_DATA ==> remote PRICING_DB database; the PRICE_DATA table has 1 billion rows
    These two tables once existed in the same database, but the size of the database exceeded the available hardware size and hardware budget, so the PRICE_DATA table was moved to another Oracle database.  I need to create a single report that combines data from both of these tables, and a distributed join with a DRIVING_SITE hint will not work because both tables are too large to push to one DRIVING_SITE location, so I wrote this PLSQL block to process in small batches.
    QUESTION: how can I use bulk collect from one cursor and pass the bulk-collected information as input to a second cursor without specifically listing each cell of the PLSQL collection?  See the sample pseudo-code below; I am trying to find a more efficient way to code this than hard-coding 100 parameter names into the 2nd cursor.
    NOTE: below is truly pseudo-code. I had to change the names of everything to adhere to an NDA, but the code below works and is fast enough for my purposes; however, if I want to change from 100 input parameters to 200, I have to add more hard-coded values.  There has got to be a better way.
    DECLARE
         -- define cursor that retrieves distinct SONG_IDs from MUSIC table in remote music database
         CURSOR C_CURRENT_MUSIC
         IS
        select distinct SONG_ID
        from MUSIC@MUSIC_DB
        where PRODUCTION_RELEASE=1;
         /*  define a parameterized cursor that accepts 100 SONG_IDs and retrieves
              required pricing information */
         CURSOR C_get_music_price_data (
                   P_SONG_ID_001 NUMBER, P_SONG_ID_002 NUMBER, P_SONG_ID_003 NUMBER, P_SONG_ID_004 NUMBER, P_SONG_ID_005 NUMBER, P_SONG_ID_006 NUMBER, P_SONG_ID_007 NUMBER, P_SONG_ID_008 NUMBER, P_SONG_ID_009 NUMBER, P_SONG_ID_010 NUMBER,
                   P_SONG_ID_011 NUMBER, P_SONG_ID_012 NUMBER, P_SONG_ID_013 NUMBER, P_SONG_ID_014 NUMBER, P_SONG_ID_015 NUMBER, P_SONG_ID_016 NUMBER, P_SONG_ID_017 NUMBER, P_SONG_ID_018 NUMBER, P_SONG_ID_019 NUMBER, P_SONG_ID_020 NUMBER,
                   P_SONG_ID_021 NUMBER, P_SONG_ID_022 NUMBER, P_SONG_ID_023 NUMBER, P_SONG_ID_024 NUMBER, P_SONG_ID_025 NUMBER, P_SONG_ID_026 NUMBER, P_SONG_ID_027 NUMBER, P_SONG_ID_028 NUMBER, P_SONG_ID_029 NUMBER, P_SONG_ID_030 NUMBER,
                   P_SONG_ID_031 NUMBER, P_SONG_ID_032 NUMBER, P_SONG_ID_033 NUMBER, P_SONG_ID_034 NUMBER, P_SONG_ID_035 NUMBER, P_SONG_ID_036 NUMBER, P_SONG_ID_037 NUMBER, P_SONG_ID_038 NUMBER, P_SONG_ID_039 NUMBER, P_SONG_ID_040 NUMBER,
                   P_SONG_ID_041 NUMBER, P_SONG_ID_042 NUMBER, P_SONG_ID_043 NUMBER, P_SONG_ID_044 NUMBER, P_SONG_ID_045 NUMBER, P_SONG_ID_046 NUMBER, P_SONG_ID_047 NUMBER, P_SONG_ID_048 NUMBER, P_SONG_ID_049 NUMBER, P_SONG_ID_050 NUMBER,
                   P_SONG_ID_051 NUMBER, P_SONG_ID_052 NUMBER, P_SONG_ID_053 NUMBER, P_SONG_ID_054 NUMBER, P_SONG_ID_055 NUMBER, P_SONG_ID_056 NUMBER, P_SONG_ID_057 NUMBER, P_SONG_ID_058 NUMBER, P_SONG_ID_059 NUMBER, P_SONG_ID_060 NUMBER,
                   P_SONG_ID_061 NUMBER, P_SONG_ID_062 NUMBER, P_SONG_ID_063 NUMBER, P_SONG_ID_064 NUMBER, P_SONG_ID_065 NUMBER, P_SONG_ID_066 NUMBER, P_SONG_ID_067 NUMBER, P_SONG_ID_068 NUMBER, P_SONG_ID_069 NUMBER, P_SONG_ID_070 NUMBER,
                   P_SONG_ID_071 NUMBER, P_SONG_ID_072 NUMBER, P_SONG_ID_073 NUMBER, P_SONG_ID_074 NUMBER, P_SONG_ID_075 NUMBER, P_SONG_ID_076 NUMBER, P_SONG_ID_077 NUMBER, P_SONG_ID_078 NUMBER, P_SONG_ID_079 NUMBER, P_SONG_ID_080 NUMBER,
                   P_SONG_ID_081 NUMBER, P_SONG_ID_082 NUMBER, P_SONG_ID_083 NUMBER, P_SONG_ID_084 NUMBER, P_SONG_ID_085 NUMBER, P_SONG_ID_086 NUMBER, P_SONG_ID_087 NUMBER, P_SONG_ID_088 NUMBER, P_SONG_ID_089 NUMBER, P_SONG_ID_090 NUMBER,
                   P_SONG_ID_091 NUMBER, P_SONG_ID_092 NUMBER, P_SONG_ID_093 NUMBER, P_SONG_ID_094 NUMBER, P_SONG_ID_095 NUMBER, P_SONG_ID_096 NUMBER, P_SONG_ID_097 NUMBER, P_SONG_ID_098 NUMBER, P_SONG_ID_099 NUMBER, P_SONG_ID_100 NUMBER
         ) IS
         select vpc.SONG_ID, vpc.STOREFRONT_ID  -- (remaining select list elided in the original)
         from PRICE_DATA@PRICING_DB vpc
         where COUNTRY = 'USA'
         and START_DATE <= sysdate
         and END_DATE > sysdate
         and vpc.SONG_ID IN (
                   P_SONG_ID_001 ,P_SONG_ID_002 ,P_SONG_ID_003 ,P_SONG_ID_004 ,P_SONG_ID_005 ,P_SONG_ID_006 ,P_SONG_ID_007 ,P_SONG_ID_008 ,P_SONG_ID_009 ,P_SONG_ID_010,
                   P_SONG_ID_011 ,P_SONG_ID_012 ,P_SONG_ID_013 ,P_SONG_ID_014 ,P_SONG_ID_015 ,P_SONG_ID_016 ,P_SONG_ID_017 ,P_SONG_ID_018 ,P_SONG_ID_019 ,P_SONG_ID_020,
                   P_SONG_ID_021 ,P_SONG_ID_022 ,P_SONG_ID_023 ,P_SONG_ID_024 ,P_SONG_ID_025 ,P_SONG_ID_026 ,P_SONG_ID_027 ,P_SONG_ID_028 ,P_SONG_ID_029 ,P_SONG_ID_030,
                   P_SONG_ID_031 ,P_SONG_ID_032 ,P_SONG_ID_033 ,P_SONG_ID_034 ,P_SONG_ID_035 ,P_SONG_ID_036 ,P_SONG_ID_037 ,P_SONG_ID_038 ,P_SONG_ID_039 ,P_SONG_ID_040,
                   P_SONG_ID_041 ,P_SONG_ID_042 ,P_SONG_ID_043 ,P_SONG_ID_044 ,P_SONG_ID_045 ,P_SONG_ID_046 ,P_SONG_ID_047 ,P_SONG_ID_048 ,P_SONG_ID_049 ,P_SONG_ID_050,
                   P_SONG_ID_051 ,P_SONG_ID_052 ,P_SONG_ID_053 ,P_SONG_ID_054 ,P_SONG_ID_055 ,P_SONG_ID_056 ,P_SONG_ID_057 ,P_SONG_ID_058 ,P_SONG_ID_059 ,P_SONG_ID_060,
                   P_SONG_ID_061 ,P_SONG_ID_062 ,P_SONG_ID_063 ,P_SONG_ID_064 ,P_SONG_ID_065 ,P_SONG_ID_066 ,P_SONG_ID_067 ,P_SONG_ID_068 ,P_SONG_ID_069 ,P_SONG_ID_070,
                   P_SONG_ID_071 ,P_SONG_ID_072 ,P_SONG_ID_073 ,P_SONG_ID_074 ,P_SONG_ID_075 ,P_SONG_ID_076 ,P_SONG_ID_077 ,P_SONG_ID_078 ,P_SONG_ID_079 ,P_SONG_ID_080,
                   P_SONG_ID_081 ,P_SONG_ID_082 ,P_SONG_ID_083 ,P_SONG_ID_084 ,P_SONG_ID_085 ,P_SONG_ID_086 ,P_SONG_ID_087 ,P_SONG_ID_088 ,P_SONG_ID_089 ,P_SONG_ID_090,
                   P_SONG_ID_091 ,P_SONG_ID_092 ,P_SONG_ID_093 ,P_SONG_ID_094 ,P_SONG_ID_095 ,P_SONG_ID_096 ,P_SONG_ID_097 ,P_SONG_ID_098 ,P_SONG_ID_099 ,P_SONG_ID_100
                    )
         group by
               vpc.SONG_ID
              ,vpc.STOREFRONT_ID;
         TYPE SONG_ID_TYPE IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
         V_SONG_ID_ARRAY    SONG_ID_TYPE;
         v_commit_counter           NUMBER := 0;
    BEGIN
         /* open the cursor you intend to bulk collect from */
         OPEN C_CURRENT_MUSIC;
         LOOP
              /* in batches of 100, bulk collect SONG_IDs into a PLSQL table */
              FETCH C_CURRENT_MUSIC BULK COLLECT INTO V_SONG_ID_ARRAY LIMIT 100;
                   EXIT WHEN V_SONG_ID_ARRAY.COUNT = 0;
                   /* to avoid a NO DATA FOUND error when passing 100 parameters to the OPEN cursor: if the array
                      is not fully populated to 100, pad the array with nulls to fill up to 100 cells. */
                   IF (V_SONG_ID_ARRAY.COUNT >=1 and V_SONG_ID_ARRAY.COUNT <> 100) THEN
                        FOR j IN V_SONG_ID_ARRAY.COUNT+1..100 LOOP
                             V_SONG_ID_ARRAY(j) := null;
                        END LOOP;
                   END IF;
              /* pass a batch of 100 to the cursor that gets price information per SONG_ID and STOREFRONT_ID */
              FOR j IN C_get_music_price_data (
                        V_SONG_ID_ARRAY(1) ,V_SONG_ID_ARRAY(2) ,V_SONG_ID_ARRAY(3) ,V_SONG_ID_ARRAY(4) ,V_SONG_ID_ARRAY(5) ,V_SONG_ID_ARRAY(6) ,V_SONG_ID_ARRAY(7) ,V_SONG_ID_ARRAY(8) ,V_SONG_ID_ARRAY(9) ,V_SONG_ID_ARRAY(10) ,
                        V_SONG_ID_ARRAY(11) ,V_SONG_ID_ARRAY(12) ,V_SONG_ID_ARRAY(13) ,V_SONG_ID_ARRAY(14) ,V_SONG_ID_ARRAY(15) ,V_SONG_ID_ARRAY(16) ,V_SONG_ID_ARRAY(17) ,V_SONG_ID_ARRAY(18) ,V_SONG_ID_ARRAY(19) ,V_SONG_ID_ARRAY(20) ,
                        V_SONG_ID_ARRAY(21) ,V_SONG_ID_ARRAY(22) ,V_SONG_ID_ARRAY(23) ,V_SONG_ID_ARRAY(24) ,V_SONG_ID_ARRAY(25) ,V_SONG_ID_ARRAY(26) ,V_SONG_ID_ARRAY(27) ,V_SONG_ID_ARRAY(28) ,V_SONG_ID_ARRAY(29) ,V_SONG_ID_ARRAY(30) ,
                        V_SONG_ID_ARRAY(31) ,V_SONG_ID_ARRAY(32) ,V_SONG_ID_ARRAY(33) ,V_SONG_ID_ARRAY(34) ,V_SONG_ID_ARRAY(35) ,V_SONG_ID_ARRAY(36) ,V_SONG_ID_ARRAY(37) ,V_SONG_ID_ARRAY(38) ,V_SONG_ID_ARRAY(39) ,V_SONG_ID_ARRAY(40) ,
                        V_SONG_ID_ARRAY(41) ,V_SONG_ID_ARRAY(42) ,V_SONG_ID_ARRAY(43) ,V_SONG_ID_ARRAY(44) ,V_SONG_ID_ARRAY(45) ,V_SONG_ID_ARRAY(46) ,V_SONG_ID_ARRAY(47) ,V_SONG_ID_ARRAY(48) ,V_SONG_ID_ARRAY(49) ,V_SONG_ID_ARRAY(50) ,
                        V_SONG_ID_ARRAY(51) ,V_SONG_ID_ARRAY(52) ,V_SONG_ID_ARRAY(53) ,V_SONG_ID_ARRAY(54) ,V_SONG_ID_ARRAY(55) ,V_SONG_ID_ARRAY(56) ,V_SONG_ID_ARRAY(57) ,V_SONG_ID_ARRAY(58) ,V_SONG_ID_ARRAY(59) ,V_SONG_ID_ARRAY(60) ,
                        V_SONG_ID_ARRAY(61) ,V_SONG_ID_ARRAY(62) ,V_SONG_ID_ARRAY(63) ,V_SONG_ID_ARRAY(64) ,V_SONG_ID_ARRAY(65) ,V_SONG_ID_ARRAY(66) ,V_SONG_ID_ARRAY(67) ,V_SONG_ID_ARRAY(68) ,V_SONG_ID_ARRAY(69) ,V_SONG_ID_ARRAY(70) ,
                        V_SONG_ID_ARRAY(71) ,V_SONG_ID_ARRAY(72) ,V_SONG_ID_ARRAY(73) ,V_SONG_ID_ARRAY(74) ,V_SONG_ID_ARRAY(75) ,V_SONG_ID_ARRAY(76) ,V_SONG_ID_ARRAY(77) ,V_SONG_ID_ARRAY(78) ,V_SONG_ID_ARRAY(79) ,V_SONG_ID_ARRAY(80) ,
                        V_SONG_ID_ARRAY(81) ,V_SONG_ID_ARRAY(82) ,V_SONG_ID_ARRAY(83) ,V_SONG_ID_ARRAY(84) ,V_SONG_ID_ARRAY(85) ,V_SONG_ID_ARRAY(86) ,V_SONG_ID_ARRAY(87) ,V_SONG_ID_ARRAY(88) ,V_SONG_ID_ARRAY(89) ,V_SONG_ID_ARRAY(90) ,
                        V_SONG_ID_ARRAY(91) ,V_SONG_ID_ARRAY(92) ,V_SONG_ID_ARRAY(93) ,V_SONG_ID_ARRAY(94) ,V_SONG_ID_ARRAY(95) ,V_SONG_ID_ARRAY(96) ,V_SONG_ID_ARRAY(97) ,V_SONG_ID_ARRAY(98) ,V_SONG_ID_ARRAY(99) ,V_SONG_ID_ARRAY(100)        
              ) LOOP
                   /* do stuff with data from Song and Pricing Database coming from the two
                        separate cursors, then continue processing more rows... */
              END LOOP;
              /* commit after each batch of 100 SONG_IDs is processed */        
              COMMIT;
              EXIT WHEN C_CURRENT_MUSIC%NOTFOUND;  -- exit when there are no more rows to fetch from cursor
         END LOOP; -- bulk fetching loop
         CLOSE C_CURRENT_MUSIC; -- close cursor that was used in bulk collection
         /* commit rows */
         COMMIT; -- commit any remaining uncommitted data.
    END;

    I've got a problem when passing a VARRAY of numbers as a parameter to a remote cursor: it takes a super long time to run, sometimes not finishing even after an hour has passed.
    Continuing with my example in the original entry, I replaced the bulk collect into a PLSQL table collection with a VARRAY, and I bulk collect into the VARRAY. This is fast, and I know it works because I can DBMS_OUTPUT.PUT_LINE cells of the VARRAY, so I know it is getting populated correctly.  However, when I pass the VARRAY containing 100 cells populated with SONG_IDs as a parameter to the cursor, execution time is over an hour when I am expecting a few seconds.
    The code example below strips the problem down to its raw details. I skip the bulk collect and just manually populate a VARRAY with 100 SONG_ID values, then try to pass it as a parameter to a cursor, but the execution time of the cursor is unexpectedly long: over 30 minutes, sometimes longer, when I am expecting seconds.
    IMPORTANT: If I take the same 100 SONG_IDs and place them directly in the cursor query's WHERE IN clause, the SQL runs in under 5 seconds and returns the result.  Also, if I pass the 100 SONG_IDs as individual cells of a PLSQL table collection, it also runs fast.
    I thought that since the VARRAY is queried via a SELECT subquery it is evaluated locally while the cursor is remote, and that I had a distributed-query problem on my hands, so I put in the DRIVING_SITE hint to attempt to force the query against the VARRAY to run at the remote server before returning the result, but that didn't work either; I still got a slow response.
    Is something wrong with my code, or am I running into an Oracle problem that may require support to resolve?
    DECLARE
         /*  define a parameterized cursor that accepts a collection of SONG_IDs
             and retrieves the required pricing information */
         CURSOR C_get_music_price_data (
              p_array_song_ids SYS.ODCInumberList )
         IS
         select  /*+DRIVING_SITE(pd) */
      count(distinct pd.EVE_ID)
         from PRICE_DATA@PRICING_DB pd
         where pd.COUNTRY = 'USA'
         and pd.START_DATE <= sysdate
         and pd.END_DATE > sysdate
         and pd.SONG_ID IN
              ( select column_value from table(p_array_song_ids) )
         group by
               pd.SONG_ID
              ,pd.STOREFRONT_ID;
      V_ARRAY_SONG_IDS SYS.ODCInumberList := SYS.ODCInumberList();
    BEGIN
    V_ARRAY_SONG_IDS.EXTEND(100);
    V_ARRAY_SONG_IDS(  1 ) := 31135  ;
    V_ARRAY_SONG_IDS(  2 ) := 31140   ;
    V_ARRAY_SONG_IDS(  3 ) := 31142   ;
    V_ARRAY_SONG_IDS(  4 ) := 31144   ;
    V_ARRAY_SONG_IDS(  5 ) := 31146   ;
    V_ARRAY_SONG_IDS(  6 ) := 31148   ;
    V_ARRAY_SONG_IDS(  7 ) := 31150   ;
    V_ARRAY_SONG_IDS(  8 ) := 31152   ;
    V_ARRAY_SONG_IDS(  9 ) := 31154   ;
    V_ARRAY_SONG_IDS( 10 ) := 31156   ;
    V_ARRAY_SONG_IDS( 11 ) := 31158   ;
    V_ARRAY_SONG_IDS( 12 ) := 31160   ;
    V_ARRAY_SONG_IDS( 13 ) := 33598   ;
    V_ARRAY_SONG_IDS( 14 ) := 33603   ;
    V_ARRAY_SONG_IDS( 15 ) := 33605   ;
    V_ARRAY_SONG_IDS( 16 ) := 33607   ;
    V_ARRAY_SONG_IDS( 17 ) := 33609   ;
    V_ARRAY_SONG_IDS( 18 ) := 33611   ;
    V_ARRAY_SONG_IDS( 19 ) := 33613   ;
    V_ARRAY_SONG_IDS( 20 ) := 33615   ;
    V_ARRAY_SONG_IDS( 21 ) := 33617   ;
    V_ARRAY_SONG_IDS( 22 ) := 33630   ;
    V_ARRAY_SONG_IDS( 23 ) := 33632   ;
    V_ARRAY_SONG_IDS( 24 ) := 33636   ;
    V_ARRAY_SONG_IDS( 25 ) := 33638   ;
    V_ARRAY_SONG_IDS( 26 ) := 33640   ;
    V_ARRAY_SONG_IDS( 27 ) := 33642   ;
    V_ARRAY_SONG_IDS( 28 ) := 33644   ;
    V_ARRAY_SONG_IDS( 29 ) := 33646   ;
    V_ARRAY_SONG_IDS( 30 ) := 33648   ;
    V_ARRAY_SONG_IDS( 31 ) := 33662   ;
    V_ARRAY_SONG_IDS( 32 ) := 33667   ;
    V_ARRAY_SONG_IDS( 33 ) := 33669   ;
    V_ARRAY_SONG_IDS( 34 ) := 33671   ;
    V_ARRAY_SONG_IDS( 35 ) := 33673   ;
    V_ARRAY_SONG_IDS( 36 ) := 33675   ;
    V_ARRAY_SONG_IDS( 37 ) := 33677   ;
    V_ARRAY_SONG_IDS( 38 ) := 33679   ;
    V_ARRAY_SONG_IDS( 39 ) := 33681   ;
    V_ARRAY_SONG_IDS( 40 ) := 33683   ;
    V_ARRAY_SONG_IDS( 41 ) := 33685   ;
    V_ARRAY_SONG_IDS( 42 ) := 33700   ;
    V_ARRAY_SONG_IDS( 43 ) := 33702   ;
    V_ARRAY_SONG_IDS( 44 ) := 33704   ;
    V_ARRAY_SONG_IDS( 45 ) := 33706   ;
    V_ARRAY_SONG_IDS( 46 ) := 33708   ;
    V_ARRAY_SONG_IDS( 47 ) := 33710   ;
    V_ARRAY_SONG_IDS( 48 ) := 33712   ;
    V_ARRAY_SONG_IDS( 49 ) := 33723   ;
    V_ARRAY_SONG_IDS( 50 ) := 33725   ;
    V_ARRAY_SONG_IDS( 51 ) := 33727   ;
    V_ARRAY_SONG_IDS( 52 ) := 33729   ;
    V_ARRAY_SONG_IDS( 53 ) := 33731   ;
    V_ARRAY_SONG_IDS( 54 ) := 33733   ;
    V_ARRAY_SONG_IDS( 55 ) := 33735   ;
    V_ARRAY_SONG_IDS( 56 ) := 33737   ;
    V_ARRAY_SONG_IDS( 57 ) := 33749   ;
    V_ARRAY_SONG_IDS( 58 ) := 33751   ;
    V_ARRAY_SONG_IDS( 59 ) := 33753   ;
    V_ARRAY_SONG_IDS( 60 ) := 33755   ;
    V_ARRAY_SONG_IDS( 61 ) := 33757   ;
    V_ARRAY_SONG_IDS( 62 ) := 33759   ;
    V_ARRAY_SONG_IDS( 63 ) := 33761   ;
    V_ARRAY_SONG_IDS( 64 ) := 33763   ;
    V_ARRAY_SONG_IDS( 65 ) := 33775   ;
    V_ARRAY_SONG_IDS( 66 ) := 33777   ;
    V_ARRAY_SONG_IDS( 67 ) := 33779   ;
    V_ARRAY_SONG_IDS( 68 ) := 33781   ;
    V_ARRAY_SONG_IDS( 69 ) := 33783   ;
    V_ARRAY_SONG_IDS( 70 ) := 33785   ;
    V_ARRAY_SONG_IDS( 71 ) := 33787   ;
    V_ARRAY_SONG_IDS( 72 ) := 33789   ;
    V_ARRAY_SONG_IDS( 73 ) := 33791   ;
    V_ARRAY_SONG_IDS( 74 ) := 33793   ;
    V_ARRAY_SONG_IDS( 75 ) := 33807   ;
    V_ARRAY_SONG_IDS( 76 ) := 33809   ;
    V_ARRAY_SONG_IDS( 77 ) := 33811   ;
    V_ARRAY_SONG_IDS( 78 ) := 33813   ;
    V_ARRAY_SONG_IDS( 79 ) := 33815   ;
    V_ARRAY_SONG_IDS( 80 ) := 33817   ;
    V_ARRAY_SONG_IDS( 81 ) := 33819   ;
    V_ARRAY_SONG_IDS( 82 ) := 33821   ;
    V_ARRAY_SONG_IDS( 83 ) := 33823   ;
    V_ARRAY_SONG_IDS( 84 ) := 33825   ;
    V_ARRAY_SONG_IDS( 85 ) := 33839   ;
    V_ARRAY_SONG_IDS( 86 ) := 33844   ;
    V_ARRAY_SONG_IDS( 87 ) := 33846   ;
    V_ARRAY_SONG_IDS( 88 ) := 33848   ;
    V_ARRAY_SONG_IDS( 89 ) := 33850   ;
    V_ARRAY_SONG_IDS( 90 ) := 33852   ;
    V_ARRAY_SONG_IDS( 91 ) := 33854   ;
    V_ARRAY_SONG_IDS( 92 ) := 33856   ;
    V_ARRAY_SONG_IDS( 93 ) := 33858   ;
    V_ARRAY_SONG_IDS( 94 ) := 33860   ;
    V_ARRAY_SONG_IDS( 95 ) := 33874   ;
    V_ARRAY_SONG_IDS( 96 ) := 33879   ;
    V_ARRAY_SONG_IDS( 97 ) := 33881   ;
    V_ARRAY_SONG_IDS( 98 ) := 33883   ;
    V_ARRAY_SONG_IDS( 99 ) := 33885   ;
    V_ARRAY_SONG_IDS(100 ) := 33889  ;
         /* do stuff with data from Song and Pricing Database coming from the two
            separate cursors, then continue processing more rows... */
         FOR i IN C_get_music_price_data( v_array_song_ids ) LOOP
              -- (this is the loop where I pass in v_array_song_ids
              --  populated with only 100 cells and it runs forever)
         END LOOP;
    END;

  • Database open (recovery) taking too long

    Hi,
    I've been using your awesome BerkeleyDB Java Edition for a couple of years, and have been very happy with it.
    I am currently facing an issue with trying to open the database after a disk-full issue (which resulted in the database being unable to write, and hence not being closed properly).
    While recovery seems to be operating, it has been taking an inordinate amount of time: 16 hours so far. My database has data of around 200GB, which inflated to over 450GB during deletion of entries, hence gobbling up all free space on disk.
    My questions are:
    * Should I continue to wait for recovery?
    * Is there any chance that recovery is looping?
    * Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?
    Some other information that may help:
    * The recovery has decreased the size of the last significant file, and created 3 new files since it started running.
    * I have been monitoring the open files (using lsof), and they change every now and then to other files, though a good amount of the time is spent near the end of the database.
    Thus, I feel like recovery is running normally, just taking too long. Please let me know your opinion.
    A few other things I should mention regarding my issue:
    * The database was, until yesterday, running on BDB JE 3.3.75. After running several hours of recovery, I upgraded to 4.1.10 (since I read about a possible recovery looping bug in one of the versions).
    * Once 4.1.10 started recovery, it spat out errors regarding the last 2 files. Only on deleting those 2 files (the last being 0 bytes, the 2nd-last being about 5k) did the recovery start. Note that the older 3.3.75's recovery never complained about those files. I can post the errors here if relevant.
    * Some of the jdb files (about 500 out of the 47,000 files that make up the database) are 100 MB files, since I had experimented with larger file sizes for a few days, then reverted the setting.
    Would any of these above affect a successful recovery?
    My setup is:
    OS:Linux CentOS 5.2, 64-bit, kernel 2.6.18-92.el5
    JVM: Sun Java 1.6.0_20, 64-bit
    Memory: 16 GB RAM, of which 8 GB is allocated to the java process (-Xmx8000M -Xms8000M)
    BDB cache set to use 6GB RAM (envconfig.setCacheSize(6000000000))
    Only the BDB basic API is being used (Environment, Database, Cursor). We do not use the DPL or HA features.
    Awaiting your kind response,
    Sushant A

    Hi Sushant,
    * Should I continue to wait for recovery? * Is there any chance that recovery is looping?
    I'm not aware of a bug that would cause recovery to loop, however, you may want to take thread dumps to see if it is progressing. It isn't easy to tell, however, since each phase of recovery is in fact a loop. What you can tell easily from the thread dumps is whether recovery is blocked (completely stopped) for some reason. I don't know of a bug that would cause this, but it's something I would check for.
    Assuming it is not blocked, I suggest that you leave recovery running, and additionally (in parallel) try to obtain some information about your log. While recovery is running you can run the DbPrintLog utility, which does not itself run recovery. I suggest running the following command, which will tell us in general what your log looks like and in particular how far apart the checkpoints are:
    java -jar je-x.y.z.jar DbPrintLog -h <envHome> -S > <output>
    Please post the output.
    If checkpoints are not running in your application for some reason, or they are running very infrequently, this can cause VERY long recoveries. Unfortunately, you may have such a problem in your app and not be aware of it, until you crash and have to recover. To guard against this sort of thing in the future, you should keep an eye on the checkpoint frequency. EnvironmentStats.getNCheckpoints and getEndOfLog can together be used to tell how much log is written between checkpoints. We will also be able to see this from the DbPrintLog -S output.
    * Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?
    DbDump normally runs recovery. DbDump with the -r or -R option does not run recovery, but has other drawbacks. With -r, a large amount of memory may be necessary to dump an accurate representation of your data set. If this fails because you run out of memory, -R can be used, but this will dump multiple versions of each record and it will be up to you to interpret the output.
    If regular recovery does not succeed, then DbDump -r is the next thing to try.
    * Would any of these above affect a successful recovery?
    No, I don't believe so.
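    To keep an eye on checkpoint spacing as suggested, here is a minimal sketch using the JE stats API (the reporting arithmetic is an illustration, not from this thread):
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.StatsConfig;

    // Sample periodically: if the average log written per checkpoint grows
    // very large, recovery after a crash will take correspondingly long.
    static void reportCheckpointSpacing(Environment env) {
        EnvironmentStats stats = env.getStats(new StatsConfig());
        long nCheckpoints = stats.getNCheckpoints();
        long endOfLog = stats.getEndOfLog(); // current end-of-log position
        if (nCheckpoints > 0) {
            System.out.println("approx. log bytes per checkpoint: "
                    + endOfLog / nCheckpoints);
        }
    }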
    --mark

  • How to get the number of rows in a DB-Cursor

    When I open a database cursor, I do not know how many rows are stored in the cursor. To solve this problem I have to send a 'select count(1) ...' to get the number of rows before I set up the cursor. I think that this is not very clever and could possibly cause performance problems. But I need the number of rows to show the percentage of processed rows. Is there any other way to get the number of rows? Maybe by checking the cursor directly?
    Please help!
    Thanx a lot

    In order to find out how many rows are going to be processed, Oracle has to visit every row. So with a cursor, there is no property that will accurately reflect the number of rows until you get to the last one. That said, you could use
    select count(*) over() as row_count, <rest of your columns> FROM <your table>
    which will give you the total row count against each row in the result set. There are performance penalties involved, but they will be less than issuing the query twice, once to get the count and once to get the rows.
    Have a look on AskTom for some very useful info about all this.
    HTH
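    A sketch of driving a progress display from that analytic count over JDBC (conn is an open Connection; "my_table" and "payload" are placeholder names):
    import java.sql.*;

    static void processWithProgress(Connection conn) throws SQLException {
        String sql = "select count(*) over() as row_count, payload from my_table";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            int processed = 0;
            while (rs.next()) {
                long total = rs.getLong("row_count"); // same total on every row
                processed++;
                System.out.printf("processed %d of %d (%.0f%%)%n",
                        processed, total, 100.0 * processed / total);
            }
        }
    }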

  • Data Mapper to relational database

    I've been reading the "Patterns of Enterprise Application Architecture" book and it has opened my mind to a lot of different designs.
    I'm trying to design a large accounting system in OO. I'm unclear on how I should design the mapping between the domain logic and the database.
    Should the controller classes talk to the data mappers?
    With inheritance, if my classes don't directly relate to tables, should I have one data mapper or one for each class?
    When I'm using a data mapper and I select all from a table, does the mapper create multiple objects or does it return a result set?
    If it creates multiple objects, how does the domain logic sort this data?
    Do I really need a Unit of Work to manage my data mappers? How hard are they to create? Is there only one Unit of Work class for all data mappers?

    I've been reading the "Patterns of Enterprise Application Architecture" book and it has opened my mind to a lot of different designs.
    Usually the worst time to apply the stuff, because you're looking to use it before you really understand it. But there must be a first time for everything. Great book, though.
    I'm trying to design a large accounting system in OO.
    Wow, is this for personal education or a paid gig? I hope it's the former. If not, make sure that "build versus buy" is part of the decision. Would anybody write an accounting package in this day and age, with SAP, Oracle, QuickBooks, Great Plains, and a thousand others ready to buy?
    I'm unclear on how I should design the mapping between the domain logic and database. Should the controller classes talk to the data mappers?
    Depends on what you call controllers. I like to think of them as the classes that implement the service interfaces. The methods in the service interface are the use cases. I think the distinction between controller and service is very important, because a web-based app will certainly have a controller. If you fall into the trap of putting all the logic into that class, you can't use it WITHOUT the web controller. It also puts you right in line to turn this into a service-oriented architecture.
    The services will certainly call the persistence interface. Make sure you have one.
    With inheritance, if my classes don't directly relate to tables, should I have one data mapper or one for each class?
    That's one way to do it.
    When I'm using a data mapper and I select all from a table, does the mapper create multiple objects or does it return a result set?
    I think it ought to return a Collection of objects. You should never return a ResultSet. That should be instantiated and closed within the scope of the persistence layer. ResultSets are database cursors, a scarce resource. They should be kept open for the shortest time and scope possible.
    If it creates multiple objects, how does the domain logic sort this data?
    You won't ask for all of them unless you have something sensible to do with them. If you're referring to the literal "sort", it's an easier question: have the database do an ORDER BY in the SELECT. Hard to tell what you mean by this.
    Do I really need a Unit of Work to manage my data mappers?
    You do if you need ACID properties for transactions. So you're going to try to do all of this by hand? A hard job, indeed. You're talking about writing your own transaction manager.
    How hard are they to create? Is there only one Unit of Work class for all data mappers?
    I haven't read Fowler's book in a while, and I don't have it handy, but it's one Unit of Work per transaction when I think about the term. What you need is something that will demarcate transaction boundaries.
    If you weren't writing this yourself, you'd be using JTA (the Java Transaction API). That's what I'd recommend you do.
    But if this is an educational effort, knock yourself out.
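    A minimal sketch of a mapper that keeps the ResultSet inside the persistence layer and hands back a Collection (the Account class and table are invented for illustration):
    import java.sql.*;
    import java.util.*;
    import javax.sql.DataSource;

    // Minimal domain object for the sketch.
    class Account {
        final long id;
        final String name;
        Account(long id, String name) { this.id = id; this.name = name; }
    }

    public class AccountMapper {
        private final DataSource dataSource;

        public AccountMapper(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public List<Account> findAll() throws SQLException {
            List<Account> result = new ArrayList<>();
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement ps =
                     conn.prepareStatement("select id, name from account");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.add(new Account(rs.getLong("id"), rs.getString("name")));
                }
            } // the database cursor is closed before the collection is returned
            return result;
        }
    }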

  • Opened cursors

    Hello everybody,
    I need the number of open database cursors (opened with the OPEN CURSOR statement) in an ABAP process (program). Does anyone know if I can get this number in ABAP at runtime?
    For example, something like the VBLOG table cluster.
    Regards, Ioan.

    Hi, please go through the info and example below.
    << Moderator message - Cut and paste response from [Using a Cursor to Read Data|http://help.sap.com/SAPhelp_nw04/helpdata/en/fc/eb3b23358411d1829f0000e829fbfe/content.htm] removed. Plagiarism is not allowed in SCN >>
    Edited by: Rob Burbank on Feb 2, 2011 11:44 AM

  • SQLException database problem

            while (rec.next()) {
                    for (int i = 0; i < 13; i++) {
                         if (rec.getString(i + 1) != null) {
                              tempArray[i] = rec.getString(i + 1);
                         } else {
                              tempArray[i] = "";
                         }
                    }
            }
    This piece of code is giving me an SQL exception at runtime, saying no data is found. When all the fields in the row in question in my database are full, it runs OK, but when one field is empty, I get this exception. How can I get over this?
    thanks

    Store the result of getString(n) in a String variable. Check that for null, and then use it if it is not null. I think you might be having database-specific issues with re-calling getString(n). Remember, the ResultSet is basically a database cursor, so trying to re-fetch values may cause issues.
    That is one reason many implementations take all the data out of the ResultSet and place it into a two-dimensional array, a set of collections, or some combination thereof. That way, you can fetch all the data in one go and drop the connection to the database while processing in your domain model. You can use the data in memory in any fashion desired. The other main benefit is freeing up scarce database resources faster, at the expense of eating up some memory.
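    A minimal sketch of that copy-out pattern (the column count of 13 mirrors the snippet above):
    import java.sql.*;
    import java.util.*;

    // Copies every row into memory in one pass, calling getString(n)
    // exactly once per column; afterwards the ResultSet can be closed.
    static List<String[]> copyRows(ResultSet rec) throws SQLException {
        List<String[]> rows = new ArrayList<>();
        while (rec.next()) {
            String[] row = new String[13];
            for (int i = 0; i < 13; i++) {
                String value = rec.getString(i + 1); // read once
                row[i] = (value != null) ? value : "";
            }
            rows.add(row);
        }
        return rows;
    }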
    - Saish
    "My karma ran over your dogma." - Anon

  • Frequent process chain issue - Database selection was interrupted

    hi experts,
    We are running master data meta chains daily for PP, SD, MM, LO, FI, and PS, and every day 2-3 process chains fail: they extract only 50,000 records and the requests turn RED.
    If I check the long text of the error message, it gives the message below.
    Database selection was interrupted (see long text)
    Message no. RSBK072
    Diagnosis
    You are extracting data from a DataSource or an InfoProvider. The selection is made with an open database cursor. This cursor has been invalidated by one of the following ABAP statements:
    COMMIT WORK
    ROLLBACK WORK
    CALL TRANSACTION
    SUBMIT
    The ABAP statement responsible for termination probably ran in a transformation routine programmed by you.
    System Response
    Request processing has ended with errors.
    Procedure
    Change the code for the transformation in which one of the specified ABAP statements is executed. If you are using statement COMMIT WORK, replace it with:
    CALL FUNCTION 'DB_COMMIT'.
    If I delete the request from the target and repeat it, it works fine but takes a lot of time; however, there are no routines in the transformations.
    Please provide a solution to rectify this, as it is causing a lot of reporting issues.
    regards
    venuscm

    hi experts,
    The source is 0MAT_PLANT_TEXT and the target is 0MAT_PLANT.
    The InfoPackage load is very fast, but the DTP takes a lot of time if I execute it manually.
    The data packets are processed very fast, but after that the final steps at the bottom,
    "Set technical status to green" and
    "Set overall status to green", stay yellow for a very long time; because of this, the DTP load for the above master data takes a long time.
    How can this be resolved?
    Any solutions please.
    Regards
    venuscm
    Edited by: venu WAD on Oct 24, 2011 6:37 AM

  • Concurrent access of a primary record referenced by secondary database

    Hello,
    We need to implement the following trick:
    get a record from the secondary database, then update it in the primary database, so the key for the secondary database will be modified.
    We are facing a strange issue: when working in a multi-threaded environment, several threads can access the same record in the secondary database and update it, while only one thread should be allowed to do this.
    We are using LockMode.RMW in secondary cursor searches on the secondary database (cursor.getSearchRange); we assumed that this would lock the associated record in the primary database, but it seems it doesn't.
    Are we missing something?
    Thank you in advance!

    I have reproduced this, and the fix is well underway but not completely done.
    In the meantime, a workaround is to use a transaction. If you read via a secondary with an explicit transaction, the secondary and primary records will both be locked by the transaction.
    Mark
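    A minimal sketch of that transactional workaround against the Java base API (env, secDb, and primaryDb are assumed to be open and transactional; the entry contents are placeholders):
    import com.sleepycat.db.*;

    static void updateViaSecondary(Environment env, SecondaryDatabase secDb,
            Database primaryDb, DatabaseEntry secKey, DatabaseEntry newData)
            throws DatabaseException {
        Transaction txn = env.beginTransaction(null, null);
        try {
            DatabaseEntry primaryKey = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry();
            // Reading through the secondary inside the txn locks both the
            // secondary record and the associated primary record.
            if (secDb.get(txn, secKey, primaryKey, data, LockMode.RMW)
                    == OperationStatus.SUCCESS) {
                primaryDb.put(txn, primaryKey, newData); // may change the secondary key
            }
            txn.commit();
        } catch (DatabaseException e) {
            txn.abort();
            throw e;
        }
    }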

  • Problem with fetch cursor statement

    Hi,
    I am using the FETCH CURSOR statement to fetch data from a database table with a package size. For the fetched records I am doing parallel processing using the parallel processing framework in the banking system.
    The problem is that the first iteration works fine, but when it comes to FETCH NEXT CURSOR in the second iteration, the program dumps with 'CURSOR already closed'.
    I am not closing the cursor in the program, but somehow it gets closed somewhere in the standard function module which I used for parallel processing.
    I also used WITH HOLD along with OPEN CURSOR, but to no avail. Please let me know how to keep the cursor from being closed.
    Below is my code
    IF NOT l_tab_product IS INITIAL.
        OPEN CURSOR WITH HOLD lv_cursor FOR
          SELECT contract_int prodint cn_currency mig_grp
            FROM bca_contract
            FOR ALL ENTRIES IN l_tab_product
            WHERE prodint = l_tab_product-prodint
            AND   mig_grp IN s_migrp.
        DO.
          FETCH NEXT CURSOR lv_cursor
                INTO TABLE gt_cont
                PACKAGE SIZE lv_size.
          IF sy-subrc <> 0.
            CLOSE CURSOR lv_cursor.
            EXIT.
          ELSE.
            " parallel processing logic
          ENDIF.
        ENDDO.
    ENDIF.

    Using WITH HOLD will not ensure that the cursor does not get closed because of commits.
    The SAP documentation says:
    If the addition WITH HOLD is specified, the database cursor is not closed by a database commit executed using Native SQL. The addition does not have an influence, however, on implicit database commits or on any rollbacks, which always close the database cursor.
    You have to check the part written in your parallel processing logic.
    As Brad said, please do not bump your old threads like this.

  • Cursors using a function pointer iterator

    I'm implementing an iterator to go over the records from a Berkeley DB. However, it seems I need to set the DB_DBT_USERMEM flag before the call to cursor->get with DB_NEXT.  Doing it that way would make my iterator less cohesive, and I would have to implement multiple iterators, one for each data type I want to retrieve.
    Is there a way to have a generic iterator that can traverse structures without pointers, as well as basic types? Here's what I'm trying to achieve.
    #include <stdio.h>
    #include <string.h>
    #include <db.h>

    // let this function handle integers and use DB_DBT_USERMEM for memory alignment
    void integer_items(DBT key, DBT data) {
            int number = 0;
            data.data = &number;
            data.flags = DB_DBT_USERMEM;
            data.ulen = sizeof(int);
            printf("key is: %s, data is: %d\n", (char *) key.data, number);
    }

    // let this function handle pointer structs. No need for DB_DBT_USERMEM
    void ptr_struct_items(DBT key, DBT data) {
            // MY_STRUCT user;
            // marshall struct...
            // buffsize = sizeof(int) + (strlen(user.familiar_name) + strlen(user.surname) + 2);
            // databuff = malloc(buffsize);
            // memset(databuff, 0, buffsize);
            // printf("key is: %s, data is: %d\n", (char *) key.data, number);
    }

    int iterator(DB *database, void (*function)(DBT key, DBT data)) {
            DBT key, data;
            DBC *cursor;
            memset(&key, 0, sizeof(DBT));
            memset(&data, 0, sizeof(DBT));
            database->cursor(database, NULL, &cursor, 0);
            while (cursor->c_get(cursor, &key, &data, DB_NEXT) == 0) {
                    (*function)(key, data);
            }
            cursor->c_close(cursor);
            return 0;
    }

    int main() {
            DB_ENV *myEnv;
            DB *dbp;
            DBT key, data;
            int r, v = 10;
            char *k = "Test";
            db_env_create(&myEnv, 0);
            myEnv->open(myEnv, "./", DB_CREATE | DB_INIT_MPOOL, 0);
            db_create(&dbp, myEnv, 0);
            dbp->open(dbp, NULL, "test.db", NULL, DB_HASH, DB_CREATE, 0664);
            memset(&key, 0, sizeof(key));
            memset(&data, 0, sizeof(data));
            key.data = k;
            key.size = strlen(k) + 1;
            data.data = &v;
            data.size = sizeof(int);
            if ((r = dbp->put(dbp, NULL, &key, &data, 0)) != 0)
                    fprintf(stderr, "%s\n", db_strerror(r));
            iterator(dbp, integer_items);
            iterator(dbp, ptr_struct_items);
            return 0;
    }
    Mike, thanks for your response. From your sample struct, if I normalize my data by using a structure, then I assume I won't need DB_DBT_USERMEM, since all the data would be stored and packed into a single location in memory, i.e. databuff:
           buffsize = sizeof(int) + sizeof(int);
           char *databuff = malloc(buffsize);
            // copy everything to the buffer
            memcpy(databuff, &(user.data_type_indicator), sizeof(int));
            bufflen = sizeof(int);
            memcpy(databuff + bufflen, &(user.data_size), sizeof(int));
            bufflen += sizeof(int);
            data.data = databuff;
            data.size = bufflen;
    Is that what you are referring to?
