Indexing on lakhs of records

Hi,
I will be inserting about 40,000 records into a table daily. During the insert I match on all the columns and insert accordingly. I have read about indexing concepts: setting an index makes searching faster, but indexing every column will slow the inserts down and can cause timeouts. The record count grows day by day, and once it reaches a certain number of lakhs we delete the records. So when the table holds lakhs of records, is indexing all the columns the better option or not?
Awaiting response.
~Selva

Your question is a wide-open one. You talk about inserts, updates and deletes with no information about the table structure (DDL) or the queries (DML), so people can only assume and give you a general answer. Creating an index is easy; making sure the queries effectively utilise it is the real task. Whatever index you create, it SHOULD benefit a query, otherwise there is no point in creating it.
Look at the query, look at the WHERE clause, look at the joins, create the index, then check the execution plan. If the plan shows the index benefits the query, you are good to go. A rough sketch of that workflow is shown below.
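As an illustration only - the table dbo.DailyLoad and its columns are made-up names, not taken from the question - this is roughly what "index the filtered columns, then check the plan" looks like in T-SQL:

    -- Index only the columns the query filters and joins on, not every column.
    CREATE NONCLUSTERED INDEX IX_DailyLoad_CustomerId_LoadDate
        ON dbo.DailyLoad (CustomerId, LoadDate)
        INCLUDE (Amount);   -- covering column so the sample query avoids key lookups

    -- Run the query and check the actual execution plan / IO statistics.
    SET STATISTICS IO ON;
    SELECT CustomerId, LoadDate, Amount
    FROM   dbo.DailyLoad
    WHERE  CustomerId = 42
      AND  LoadDate >= '2015-01-01';
    -- Keep the index only if the plan shows an Index Seek instead of a Table Scan.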
Please also see the video below on indexing strategy:
https://technet.microsoft.com/en-us/sqlserver/gg545006.aspx
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

Similar Messages

  • How to get the index of the error record when uploaded using standard LSMW

    Hi Experts,
    When uploading data using the LSMW standard batch input method, an error session is created if there are any errors. How can we find the index of the error record?
    For example, if the 3rd record in the input file is incorrect, the error session is created with the error and the index shown as 1. But I want to know that it is the 3rd record of my input file that is incorrect.
    Please help me out.

    Srinivas,
    In the global data I have written this:
    TYPES: BEGIN OF TYPE_ERROR,
             INDEX TYPE I,
           END OF TYPE_ERROR.
    DATA: GT_ERROR TYPE TABLE OF TYPE_ERROR,
          GS_ERROR TYPE TYPE_ERROR,
          GV_TOTAL_COUNT TYPE I,
          GV_INDEX TYPE I,
          GV_STRING TYPE STRING,
          GV_FILE TYPE STRING VALUE '/usr/sap/interfaces/LSMW/Asseterror
    In Begin of transaction,
    GV_TOTAL_COUNT = GV_TOTAL_COUNT + 1.
    In End of processing,
    OPEN DATASET GV_FILE FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    IF SY-SUBRC = 0.
      CLEAR GS_ERROR.
      LOOP AT GT_ERROR INTO GS_ERROR.
        CONCATENATE 'INDEX OF THE ERROR RECORD IS:' GS_ERROR INTO GV_STRING.
        TRANSFER GV_STRING TO GV_FILE.
        CLEAR: GV_STRING, GS_ERROR.
      ENDLOOP.
    ENDIF.
    How and where should I assign the error count and append it to the internal table?
    Please help me out.

  • How to upload 15 lakhs of record onto ztable with performance

    I have 15 to 20 lakhs of records in my internal table, and I am uploading them into a Z table (created in SE11).
    Doing so takes a lot of time.
    I am using the following statement:
    MODIFY ZTABLE FROM TABLE ITAB.   "ITAB is the internal table name
    Please let me know if there is any alternative that gives better performance.
    Moderator message - Moved to the correct forum
    Edited by: Rob Burbank on Feb 2, 2010 11:59 AM

    Hi Rob,
    Just trace a
    MODIFY ZTABLE FROM TABLE ITAB
    with ST05 and you will see what the DBI / DB does with this statement.
    If I remember correctly you are on DB6. On current releases it might be possible that a MERGE (which can handle arrays) is used instead of UPDATE / INSERT on a row-by-row basis.
    On most platforms the modify from table is done row by row like this:
    UPDATE dbtab_row.
    IF sy-subrc NE 0.
       INSERT dbtab_row.
       IF sy-subrc NE 0.
          UPDATE dbtab_row.
       ENDIF.
    ENDIF.
    If the rows exist you will only see updates.
    If the rows do not exist you will see an update (no record) followed by an insert (1 record).
    In rare cases you might see an update (no record), an insert (no record, because somebody else inserted the row after the first update), and then an update (1 record).
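    Purely as an illustration of the MERGE mentioned above - the key and value columns of ZTABLE are invented names, and the real statement is generated by the DBI, not written by hand - an array-capable upsert looks roughly like this in SQL:

        MERGE INTO ztable AS t
        USING (VALUES (?, ?)) AS s (key_field, value_field)
        ON t.key_field = s.key_field
        WHEN MATCHED THEN
          UPDATE SET value_field = s.value_field
        WHEN NOT MATCHED THEN
          INSERT (key_field, value_field) VALUES (s.key_field, s.value_field);

    One MERGE executed with an array of bind values replaces the per-row UPDATE/INSERT round trips shown above.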
    Kind regards,
    Hermann

  • Loop is better or while is better for lakhs of records?

    Hi experts,
    Suppose we have lakhs of records: is using LOOP better, or WHILE?
    And instead of using a loop inside a loop, what should we use?
    Thanks in advance

    Moderator message - Your questions are very basic - please search before asking - post locked

  • OBIEE Performance with lakhs of records

    Hi,
    I have 4 lakhs of records in my OBI report. The default number of records displayed in OBI is 25.
    When I try to expand it to all pages, the page gets stuck and does not display all the pages.
    It does not respond and I have to kill the process manually via Task Manager.
    What can be done here? If I enable the cache, will that solve the problem?
    Ideally, how are reports with lakhs of records maintained in OBI?
    If I download the data I am able to see all the records, but the size of the file is 120 MB.
    Can't I see all 4 lakh records in OBI without downloading? If I can, what needs to be done here?
    Please help.

    Hi,
    Increase the ResultRowLimit in the instanceconfig.xml file and check again.
    Refer : http://obiee101.blogspot.com/2010/01/obiee-resultrowlimit.html
    Regards,
    Srikanth

  • Invalid Descriptor Index After First Record

    recordCount = 1;
    if (rs != null) {
        rs.last();
        totalRows = rs.getRow();
        rs.first();
        System.out.println("Total Rows: " + totalRows);
    }
    if (rs != null) {
        while (rs.next()) {
            hasResults = true;
            for (rowCount = 0; rowCount < totalRows; rowCount++) {
                for (columnCount = 0; columnCount < 9; columnCount++) {
                    SearchMenu.fieldArray[rowCount][columnCount].setText(rs.getString(recordCount));
                    recordCount++;
                }
            }
        }
    }
    I am using the above code to read and display data in a 20 row by 9 column array of text fields. The database has 9 columns. The first record of a query displays correctly, but then I get "Invalid Descriptor Index" (and no more records are displayed). Can anyone help? Thanks a bunch.

    > I went ahead and tried resetting the recordCount to 1 after each row like you
    > suggested, but that produced the same error and problem.
    Then you have another problem. But I doubt this is happening.
    > It is my understanding that the first object in the resultset's second row is
    > one past the last object of the first row, so resetting this count doesn't
    > make sense to me anyway.
    I don't know what you mean exactly, but I am pretty sure this is wrong.
    Try this.
    recordCount = 1;
    if (rs != null) {
        rs.last();
        totalRows = rs.getRow();
        rs.first();
        System.out.println("Total Rows: " + totalRows);
    }
    if (rs != null) {
        while (rs.next()) {
            hasResults = true;
            recordCount = 1; // new line here!
            for (rowCount = 0; rowCount < totalRows; rowCount++) {
                for (columnCount = 0; columnCount < 9; columnCount++) {
                    SearchMenu.fieldArray[rowCount][columnCount].setText(rs.getString(recordCount));
                    recordCount++;
                }
            }
        }
    }

  • Database hangs after retrieving some records....

    hi,
    I created a two-level B+-tree index: for the first level I use the logical database name as the key, and for the second-level B+-tree I use another field in my data. I retrieve the records based on the logical database name. I am using C++ to implement my index.
    In the following code I retrieve records from the database. The program behaves differently for different logical database names. For example, my database has around 4 lakhs of records with logical database name 'A' and around 1 lakh of records with logical database name 'B'. The following code displays all the 'B' records, but the program hangs after retrieving some of the 'A' records.
    I am using PAGE_SIZE=8192 and numBuffers=2048, and running on Fedora Core 3.
    DbEnv myEnv(0);
         myEnv.set_cachesize(0, PAGE_SIZE * numBuffers, 0);
         myEnv.set_data_dir("/home/raviov/new/database");
              try {
                   myEnv.open("./", DB_CREATE | DB_INIT_MPOOL, 0);
              catch (DbException &e) {
                   cerr << "Exception occurred: " << e.what() << endl;
                   exit(1);
              db=new Db(&myEnv,0);
              db->set_pagesize(PAGE_SIZE);     
              db->set_bt_compare(compare_int);
         dbname=itoa(p);
         try
         db->open(NULL,
                   "treedb.db",
                   dbname,
                   DB_BTREE,
                   0,
                   0);
         catch(DbException &e)
              cerr << "Failed to open DB object" << endl;
              exit(1);
         db->cursor(NULL,&cursorp,0);
         key=new Dbt();
         data=new Dbt();
         while(ret=(cursorp->get(key,data,DB_NEXT))==0)
              j=*((int*)key->get_data());
              q=(*((struct tuple*)data->get_data())).startPos;
              r=(*((struct tuple*)data->get_data())).endPos;
              s=(*((struct tuple*)data->get_data())).level;
              cout<<"position : "<<j<<"\t";
              cout<<"start : "<<q<<"\t";
              cout<<"end : "<<r<<"\t";
              cout<<"level : "<<s<<"\n";
         if(ret==DB_NOTFOUND)
              cout<<"no records";
              exit(1);
         if(cursorp!=NULL)
              cursorp->close();
         try
              db->close(0);
         catch(DbException &e)
              cerr << "Failed to close DB object" << endl;
              exit(1);
         myEnv.close(0);

    Hi Andrei,
    Thank you for the reply. I am not using any secondary indices, subdatabases or threads.
    The following code builds the index...
    #include <stdio.h>
    #include <db_cxx.h>
    #include <iostream.h>
    #include <db.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    int PAGE_SIZE=8192;
    int numBuffers=2048;
    char *itoa(const int x)
          char buf[100];
          snprintf(buf, sizeof(buf), "%d", x);
           return strdup(buf);
    int compare_int(Db *dbp, const Dbt *a, const Dbt *b)
      int ai;
      int bi;
      memcpy(&ai,a->get_data(),sizeof(int));
      memcpy(&bi,b->get_data(),sizeof(int));
      return(ai-bi);
    struct tuple
           int startPos;
           int endPos;
           int level;
    main()
         FILE *fp;
         int i,j,k,l,m,n,total=0;
         char dbname[500],filename[500],str [500];
         tuple t;
         Db *db;
         Dbt key,data;
         DbEnv myEnv(0);
         myEnv.set_cachesize(0, PAGE_SIZE * numBuffers, 0);
         try {
                myEnv.open("/home/raviov/Desktop/example", DB_CREATE | DB_INIT_MPOOL, 0);
           catch (DbException &e) {
                  cerr << "Exception occurred: " << e.what() << endl;
                  exit(1);
         for(n=0;n<=84;n++)
              db = new Db(&myEnv, 0);     // Instantiate the Db object
                    db->set_bt_compare(compare_int);
               db->set_pagesize(PAGE_SIZE);
              strcpy(filename,"/home/raviov/Desktop/GTCReport/code/sequence/splitter/tree/");
              strcat(filename,itoa(n));
              fp=fopen(filename,"r");
              if(fp==NULL)
                   cout<<"error in opening the file";
                   exit(0);
              while(fgets (str , 500 , fp)!=NULL)
                   sscanf(str,"%d(%d,%d,%d,%d)",&i,&j,&k,&l,&m);
                   key=new Dbt(&i,sizeof(int));
                   if(total==0)
                        strcpy(dbname,itoa(j));
                   t.startPos=k;
                   t.endPos=l;
                   t.level=m;
                   data=new Dbt(&t,sizeof(t));     
                   if(total==0)
                        try
                             db->open(NULL,
                             "tree.db",
                                       dbname,
                              DB_BTREE,
                                DB_CREATE,
                                0);
                        catch(DbException &e)
                               cerr << "Failed to create DB object" << endl;
                             exit(1);
                        total=99;
                   int ret=db->put(NULL,key,data,DB_NOOVERWRITE);
                   if(ret==DB_KEYEXIST)
                        cout<<"key already exist\n";
                        exit(1);
                   delete key;
                   delete data;
              total=0;
              fclose(fp);
              try
                   db->close(0);
              catch(DbException &e)
                     cerr << "Failed to close DB object" << endl;
                   exit(1);
         myEnv.close(0);
    }
    The following code retrieves the records from the database that we built above...
    #include <stdio.h>
    #include <db_cxx.h>
    #include <iostream.h>
    #include <db.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    int PAGE_SIZE=8192;
    int numBuffers=2048;
    char *itoa(const int x)
          char buf[100];
          snprintf(buf, sizeof(buf), "%d", x);
           return strdup(buf);
    int compare_int(Db *dbp, const Dbt *a, const Dbt *b)
      int ai;
      int bi;
      memcpy(&ai,a->get_data(),sizeof(int));
      memcpy(&bi,b->get_data(),sizeof(int));
      return(ai-bi);
    struct tuple
           int startPos;
           int endPos;
           int level;
    main()
         FILE *fp;
         int i,j,k,l,m,n,total=0;
         char *dbname;
         char filename[200],str [100];
         tuple t;
         Db *db;
         Dbt key,data;
         Dbc *cursorp=NULL;
         int ret,x=4;
         char *ravi;
         int p=84763;
         int y=2134872;
         int r,s,q,count=0;
         DbEnv myEnv(0);
         myEnv.set_cachesize(0, PAGE_SIZE * numBuffers, 0);
         myEnv.set_data_dir("/home/raviov/new/database");
              try {
                      myEnv.open("./", DB_CREATE | DB_INIT_MPOOL, 0);
               catch (DbException &e) {
                      cerr << "Exception occurred: " << e.what() << endl;
                      exit(1);
              db=new Db(&myEnv,0);
              db->set_pagesize(PAGE_SIZE);     
              db->set_bt_compare(compare_int);
         dbname=itoa(p);
         try
              db->open(NULL,
                    "tree.db",
                     dbname,
                     DB_BTREE,
                     0,
                     0);
         catch(DbException &e)
                cerr << "Failed to open DB object" << endl;
              exit(1);
         db->cursor(NULL,&cursorp,0);
         key=new Dbt();
         data=new Dbt();
         while(ret=(cursorp->get(key,data,DB_NEXT))==0)
              j=*((int*)key->get_data());
              q=(*((struct tuple*)data->get_data())).startPos;
              r=(*((struct tuple*)data->get_data())).endPos;
              s=(*((struct tuple*)data->get_data())).level;
              cout<<"position   : "<<j<<"\t";
              cout<<"start : "<<q<<"\t";
              cout<<"end   : "<<r<<"\t";
              cout<<"level   : "<<s<<"\n";
              delete key;
              delete data;
         if(ret==DB_NOTFOUND)
              cout<<"no records";
              exit(1);
         if(cursorp!=NULL)
              cursorp->close();
         try
              db->close(0);
         catch(DbException &e)
                cerr << "Failed to close DB object" << endl;
              exit(1);
         myEnv.close(0);     
    }

  • Best way to bulk insert records to sqlserver ??

    Hi, I have encountered an issue regarding inserting lakhs of records (10 lakhs) into a SQL Server table using C#. The approaches I know of are SqlBulkCopy and parallel programming (LINQ).
    I want to know which one is best performance-wise. Is there any other approach, such as SSIS?
    Please suggest.
    a.kesavamallikarjuna

    Either use SqlBulkCopy or SSIS.
    Using a parallel approach with 10 lakhs of rows will create a huge load on your SQL Server, which may lead to issues on the server side. This is not a general "don't do it", because with the appropriate table, index and disk structure it may work.
    SSIS is often the best solution, because it is built for exactly this kind of operation. A server-side T-SQL alternative is sketched below.
    But concrete advice depends on your concrete problem (format, recurrence, scheduling, etc.).
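    As a rough sketch only - the table name, file path and format options are assumptions, not from the original post - a staged flat file can also be loaded server-side with BULK INSERT:

        BULK INSERT dbo.StagingRecords
        FROM 'C:\loads\records.csv'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR   = '\n',
            BATCHSIZE       = 100000,  -- commit in batches to keep the transaction log manageable
            TABLOCK                    -- allows minimal logging under a suitable recovery model
        );

    Whether this beats SqlBulkCopy or SSIS depends on where the data lives and how the load is scheduled.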

  • Space occupied by clustered index Vs non-clustered index

    I am trying to understand indexes. Does a clustered index occupy more space than a non-clustered index because it also carries the information for the rest of the columns? Could you please help me understand this? Thanks in advance.
    svk

    Hi czarvk,
    A clustered index in SQL Server takes up more space than a non-clustered index.
    The clustered index determines how the records of the table are physically stored: it puts them in order by the index key, and its leaf level contains the full data rows, sorted on the key values.
    A non-clustered index is a separate object containing only a subset of columns plus a row locator pointing to the table's rows or to the clustered index key.
    So a clustered index in SQL Server takes up more space than a non-clustered index; a small sketch of the two is shown below.
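    Just to illustrate the difference - the table and column names here are invented, not from the question:

        CREATE TABLE dbo.Orders (
            OrderId    INT           NOT NULL,
            CustomerId INT           NOT NULL,
            OrderDate  DATE          NOT NULL,
            Amount     DECIMAL(10,2) NOT NULL
        );

        -- The clustered index IS the table: every column is stored in its leaf level.
        CREATE CLUSTERED INDEX CIX_Orders_OrderId
            ON dbo.Orders (OrderId);

        -- The non-clustered index stores only CustomerId plus the clustered key (OrderId) as a row locator.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
            ON dbo.Orders (CustomerId);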
    If you have any question, please feel free to let me know.
    Regards,
    Donghui Li

  • Compound index vs separate indexes

    Hi,
    I have a table with 12 million records.
    My query will always have 2 conditions in the WHERE clause:
    SELECT * FROM TABLE WHERE c1 = 'some integer' AND c2 IN (some values)
    In this case which index will be appropriate?
    a) Creating separate indexes on c1 and c2
    b) Creating a compound index on (c1, c2)
    What are the things I need to consider before creating an index in this case?
    Thanks

    >
    In this case which index will be appropriate.?
    a)Creating separate indexes on c1 and c2
    b)creating a compound index on (c1,c2)
    What are the things i need to consider before creating index in this case>
    >
    Well, I must say I disagree with ALL of the answers provided so far, because none of them take into account those 'things you need to consider' before creating an index (or indexes).
    1. What is the selectivity of column C1? How many distinct values? How are the data values clustered?
    2. What is the selectivity of column C2? How many distinct values? How are the data values clustered?
    3. What will the clustering factor of the index be? That is, is the table data in any particular order that would result in an exceptionally good, or poor, clustering factor for the index?
    4. Will any queries only need column C1 for filtering?
    5. Will any queries only need column C2 for filtering?
    Without knowing at least the above information there is no way to tell whether ANY index is appropriate, let alone how many, or the ordering of columns for a composite index.
    Just consider point #1 above. If there are only two values for C1 and 6 million records have each value, what good would an index be?
    But if column C1 is unique, then an index on it would be very selective and you don't need to consider the C2 value at all.
    Likewise, if C1 is very selective (few records share the same value) but C2 is not, the index should be on C1 only, since including C2 would dilute the index and require more blocks to be read to get the ROWIDs of interest. And if C2 has only a few distinct values while C1 is selective, then C2 might be better as the lead column, since an INDEX SKIP SCAN could then be used.
    If you can provide the information asked for above, better recommendations can be provided. A rough sketch of how to check the selectivity is shown below.
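    Purely as an illustration of checking those numbers - the table name my_table stands in for the poster's table, and nothing else is taken from the thread:

        -- How selective are c1 and c2?
        SELECT COUNT(*)           AS total_rows,
               COUNT(DISTINCT c1) AS distinct_c1,
               COUNT(DISTINCT c2) AS distinct_c2
        FROM   my_table;

        -- If both columns are selective and always filtered together, a composite
        -- index with the equality column leading is the usual starting point.
        CREATE INDEX my_table_c1_c2_idx ON my_table (c1, c2);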

  • Fast index creation suggestions wanted

    Hi:
    I've loaded a table with a little over 100,000,000 records. The table has several indexes which I must now create. Need to do this as fast as possible.
    I've read the excellent article by Don Burleson (http://www.dba-oracle.com/oracle_tips_index_speed.htm) but still have a few questions.
    1) If the table is not partitioned, does it still make sense to use "parallel"?
    2) The digit(s) following "compress" indicate the number of consecutive columns at the head of the index that have duplicates. True?
    3) How will the compressed index effect query performance (vs not compressed) down the line?
    4) In the future I will be doing lots and lots of updates on the indexed columns of the records as well as lots of record deletes and inserts into/out-of the table. Will these updates/inserts/deletes run faster or slower given that the indexes are compressed vs if they were not?
    5) In order to speed up the sorting, is it better to add datafiles to the TEMP tablespace or to create more TEMP tablespaces (remember, running "parallel")?
    Thanks in Advance

    There are people who would argue that "excellent" and Mr. Burleson do not belong in the same sentence.
    1) Yes, you can still use parallel (and nologging) to create the index, but don't expect 20 - 30 times faster index creation.
    2) It is the number of columns to compress by; they may not necessarily have duplicates. For a unique index the default is the number of columns - 1, for a non-unique index the default is the number of columns.
    3) If you do a lot of range scans or fast full index scans on that index, then you may see some performance benefit from reading fewer blocks. If the index is mostly used in equality predicates, then the performance benefit will likely be minimal.
    4) It really depends on too many factors to predict. The performance of inserts, updates and deletes will be either
    A) Slower
    B) The same
    C) Faster
    5) If you are on 10g, then I would look at temporary tablespace groups, which can be beneficial for parallel operations. If not, then allocate as much memory as possible to sort_area_size to minimize disk sorts, and add space to your temporary tablespace to avoid "unable to extend" errors. Adding additional temporary tablespaces will not help, because a user can only use one temporary tablespace at a time, and the parallel index creation runs as one user. A sketch of the syntax from points 1) to 3) follows this list.
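    A minimal sketch of those CREATE INDEX options - the table, columns and degree of parallelism are placeholders, not recommendations:

        CREATE INDEX big_table_c1_c2_idx
            ON big_table (c1, c2)
            PARALLEL 8      -- parallel build works on a non-partitioned table too
            NOLOGGING       -- skip most redo generation during the build
            COMPRESS 1;     -- compress on the leading key column

        -- Reset the attributes afterwards if you don't want parallel plans against the index.
        ALTER INDEX big_table_c1_c2_idx NOPARALLEL;
        ALTER INDEX big_table_c1_c2_idx LOGGING;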
    You might want to do some searching at Tom Kyte's site http://asktom.oracle.com for some more responsible answers. Tom and Don have had their disagreements in the past, and in most of them my money would be on Tom to be correct.
    HTH
    John

  • Index creation of 0figl_o02 taking too long.

    Hi,
    we load approximately 0.5 million records daily to the ODS 0FIGL_O02 and recreate the index using the function module RSSM_PROCESS_ODS_CREA_INDEXES.
    The index creation job used to take 3-4 hours six months ago, but now it runs for 6 hours. Is there a way to decrease the job time?
    The number of records in the active table of the ODS is 424 million.

    hi,
    this DSO is based on the DataSource 0FI_GL_4, which is delta enabled.
    Do you mean to say that you are receiving 0.5 million records daily?
    If yes, then there is not much you can do, as the program will try to create the incremental index and will have to work against the existing index over 240 million records. One thing you can do as a regular monthly activity is to completely delete the index of the cube and recreate it (this may take a long time, but it will correct any corrupt indexes).
    The SAP note below might help you:
    Note 1413496 - DB2-z/OS: BW: Building indexes on active DSO data
    If you are not using this DSO for reporting or lookups, then please do not create secondary indexes.
    regards,
    Arvind.
    Edited by: Arvind Tekra on Aug 25, 2011 5:18 PM

  • Problem with indexing a PDF file

    Hi all,
    We can successfully index PDF files most of the time, but sometimes it simply fails. Nothing is wrong with the code; it looks to me like some PDF files are not accepted. Is there some sort of list from Oracle that tells me which PDF versions are accepted, or how the files should be created so that Oracle can index the content?
    Nitai
    Message was edited by:
    Nitai

    Hi,
    Reproduced.
    I indexed 5 pdf's downloaded from the internet and yours. Your one pdf is the only one not indexed. All are version 1.4 (right-click | properties | pdf) which is fully supported (see http://www.verity.com/cms/groups/public/documents/collateral/mk0459.pdf for a list of supported formats for the keyview filter, referred to as auto_filter in 10g).
    The differences between the files are:
    Works:
    ============
    Application PScript5.dll Version 5.2
    PDF Producer: Acrobat Distiller 5.0.5 (Windows)
    Fast Web View: No
    Doesn't
    ============
    Application: Adobe InDesign CS2(4.0)
    PDF Producer: Adobe PDF Library 7.0
    Fast Web View: Yes
    I checked to see that the pdf was not simply a scanned image and it wasn't. No errors in CTX_USER_INDEX_ERRORS and the pending record disappears post index sync. No records in the $I table.
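    For reference, these are roughly the checks described above - the index name my_pdf_index is a placeholder for whatever the real Text index is called:

        -- Any filter/indexing errors recorded for the document?
        SELECT err_timestamp, err_textkey, err_text
        FROM   ctx_user_index_errors;

        -- Push pending DML into the Text index, then see whether tokens arrived in the $I table.
        EXEC CTX_DDL.SYNC_INDEX('my_pdf_index');
        SELECT COUNT(*) FROM dr$my_pdf_index$i;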
    A few possible things to check with support:
    1) If custom fonts are used check with Oracle support on the filter's ability to extract using the custom font.
    2) Are there any known issues with PDF docs generated using InDesign CS2 or with docs created for Fast Web View?
    See bug 3814696 and reference it with your service request. The outcome is not published so I don't know how/if they resolved it. You may want to mention this with your TAR since the scenario is similar, and toss in the differences between the "works" and "doesn't" scenarios that I mention above.
    Hope it helps,
    -Ron

  • Want last record in internal table

    Hi All,
    I have two internal tables. All the data is in the first internal table. Now I want the last record of the first internal table to be stored in the second internal table. Please give me some logic or sample code.
    Thanks
    zeni

    To get the latest record from an internal table:
    If you are using function module RH_READ_INFTY_1001, then <itab> will always contain all the records for the specified period (taken from the selection screen). To retrieve the latest one, sort the records by ENDDA so it ends up on top, then read the table with INDEX 1 to get that record only and store it in a <work_area>. Finally, move that record from the <work_area> back into <itab> itself; that way your <itab> will hold only the latest record.
    (You can store <work_area> in <work_area_2> and then pass the record into <itab>, or you can store <work_area> in a different <itab_2>.)
    call function 'RH_READ_INFTY_1001'
      tables
        i1001 = it_hrp1001.
    sort it_hrp1001 by endda descending.
    read table it_hrp1001 into wa_hrp1001 index 1.
    refresh it_hrp1001.
    append wa_hrp1001 to it_hrp1001.

  • Delete all record in a table, the insert speed is not change.

    I have an empty table, and inserting a record takes 100 ms.
    When this table has 400,000 records, inserting a record takes 1 s. That is OK, because I need to do a comparison based on an index before inserting a record, so more records need more time.
    The problem is that when I delete all the records in this table, the insert time is still 1 s; it does not go back down to 100 ms. Why?

    Hello,
    Read through this portion of the Oracle documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/logical.htm#CNCPT004
    The reason it still takes 1 s is the HWM (the high water mark is the boundary between used and unused space in a segment): when you inserted the 400K records the HWM moved up, and when you deleted all the records the HWM stayed at the same position and was not reset. So when you insert a record, the insert still has to search for free space below the HWM before writing the data. If you truncate your table and try it again, it will be faster, because TRUNCATE resets the HWM. The difference is sketched below.
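    A minimal sketch of the difference (the table name is just a placeholder):

        DELETE FROM big_table;       -- removes the rows but leaves the high water mark where it was
        TRUNCATE TABLE big_table;    -- deallocates the space and resets the high water mark

        -- On 10g and later you can also shrink the segment instead of truncating:
        ALTER TABLE big_table ENABLE ROW MOVEMENT;
        ALTER TABLE big_table SHRINK SPACE;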
    Regards
