Cursor & large number of rows

Hello!
I have an application which queries my Oracle 8i database using OCI. My problem is that the number of rows returned is very large (about 1 million rows in total). How can I have the rows returned in packets (the same way a search tool returns results page by page)?

Use bulk binds (see http://technet.oracle.com/sample_code/tech/pl_sql/htdocs/bulkdemo.txt)
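For reference, a minimal PL/SQL sketch of the batched-fetch idea behind that demo; the table and column names here are placeholders, not from the original post:

DECLARE
  CURSOR c IS
    SELECT id, payload FROM big_table;          -- hypothetical table
  TYPE t_id  IS TABLE OF big_table.id%TYPE;
  TYPE t_pay IS TABLE OF big_table.payload%TYPE;
  l_ids  t_id;
  l_pays t_pay;
BEGIN
  OPEN c;
  LOOP
    -- fetch the result set in packets of 500 rows instead of all at once
    FETCH c BULK COLLECT INTO l_ids, l_pays LIMIT 500;
    EXIT WHEN l_ids.COUNT = 0;
    FOR i IN 1 .. l_ids.COUNT LOOP
      NULL;                                     -- process one packet of rows here
    END LOOP;
  END LOOP;
  CLOSE c;
END;
/

In an OCI program the equivalent technique is array fetching: bind output arrays and fetch N rows per fetch call instead of one row at a time.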

Similar Messages

  • Oracle Error 01034 After attempting to delete a large number of rows

    I sent the command to delete a large number of rows from a table in an oracle database (Oracle 10G / Solaris). The database files are located at /dbo partition. Before the command the disk space utilization was at 84% and now it is at 100%.
    SQL Command I ran:
    delete from oss_cell_main where time < '30 jul 2009'
    If I try to connect to the database now I get the following error:
    ORA-01034: ORACLE not available
    df -h returns the following:
    Filesystem size used avail capacity Mounted on
    /dev/md/dsk/d6 4.9G 5.0M 4.9G 1% /db_arch
    /dev/md/dsk/d7 20G 11G 8.1G 59% /db_dump
    /dev/md/dsk/d8 42G 42G 0K 100% /dbo
    I tried to get the space back by deleting all the data in the table oss_cell_main :
    drop table oss_cell_main purge
    But no change in df output.
    I have tried solving it myself but could not find sufficiently directed information. Even pointing me to the right documentation would be highly appreciated. I have already looked at the following:
    du -h output:
    8K ./lost+found
    1008M ./system/69333
    1008M ./system
    10G ./rollback/69333
    10G ./rollback
    27G ./data/69333
    27G ./data
    1K ./inx/69333
    2K ./inx
    3.8G ./tmp/69333
    3.8G ./tmp
    150M ./redo/69333
    150M ./redo
    42G .
    I think it's the rollback folder that has increased in size immensely.
    SQL> show parameter undo
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 10800
    undo_tablespace string UNDOTBS1
    select * from dba_tablespaces where tablespace_name = 'UNDOTBS1'
    TABLESPACE_NAME BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS
    MAX_EXTENTS PCT_INCREASE MIN_EXTLEN STATUS CONTENTS LOGGING FOR EXTENT_MAN
    ALLOCATIO PLU SEGMEN DEF_TAB_ RETENTION BIG
    UNDOTBS1 8192 65536 1
    2147483645 65536 ONLINE UNDO LOGGING NO LOCAL
    SYSTEM NO MANUAL DISABLED NOGUARANTEE NO
    Note: I can reconnect to the database for short periods of time by restarting it. After some restarts it does connect, but only for a few minutes, which is not long enough to run exp.

    Check the alert log for errors.
    Select file_name, bytes from dba_data_files order by bytes;
    Try to shrink some datafiles to get space back.
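    If a datafile has free space above its high-water mark, it can be shrunk with ALTER DATABASE DATAFILE ... RESIZE. A minimal sketch; the file name and target size below are placeholders, not taken from this system:

    SELECT file_name, bytes FROM dba_data_files ORDER BY bytes;

    -- only succeeds if no extents are allocated above the new size
    ALTER DATABASE DATAFILE '/dbo/oradata/rollback01.dbf' RESIZE 2048M;

    For an undo tablespace that has grown this large, another common approach is to create a new, smaller undo tablespace, point undo_tablespace at it, and drop the old one once its undo segments go offline.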

  • JDev: af:table with a large number of rows

    Hi
    We are developing with JDeveloper 11.1.2.1. We have a VO that returns more than 2,000,000 rows, which we display in an af:table with access mode 'scrollable' (the default) and 'in Batches of' 101. The user can select one row and do CRUD operations on the VO through popups. The application works fine, but I read that scrolling through a very large number of rows is not a good idea because it can cause an OutOfMemory exception if the user uses the scroll bar many times. I have tried access mode 'Range Paging', but then the application behaves strangely. Sometimes when I select a row to edit, if the selected row is number 430, the popup shows row number 512, and when I try to insert a new row it throws this exception:
    oracle.jbo.InvalidOperException: JBO-25053: Cannot navigate with unposted rows in a RangePaging RowSet.
         at oracle.jbo.server.QueryCollection.get(QueryCollection.java:2132)
         at oracle.jbo.server.QueryCollection.fetchRangeAt(QueryCollection.java:5430)
         at oracle.jbo.server.ViewRowSetIteratorImpl.scrollRange(ViewRowSetIteratorImpl.java:1329)
         at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStartWithRefresh(ViewRowSetIteratorImpl.java:2730)
         at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStart(ViewRowSetIteratorImpl.java:2715)
         at oracle.jbo.server.ViewRowSetImpl.setRangeStart(ViewRowSetImpl.java:3015)
         at oracle.jbo.server.ViewObjectImpl.setRangeStart(ViewObjectImpl.java:10678)
         at oracle.adf.model.binding.DCIteratorBinding.setRangeStart(DCIteratorBinding.java:3552)
         at oracle.adfinternal.view.faces.model.binding.RowDataManager._bringInToRange(RowDataManager.java:101)
         at oracle.adfinternal.view.faces.model.binding.RowDataManager.setRowIndex(RowDataManager.java:55)
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlHierBinding$FacesModel.setRowIndex(FacesCtrlHierBinding.java:800)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    <LoopDiagnostic> <dump> [8261] variableIterator variables passivated >>> TrackQueryPerformed def
    <LifecycleImpl> <_handleException> ADF_FACES-60098: The Faces lifecycle receives unhandled exceptions in phase RENDER_RESPONSE 6
    What is the best way to display this amount of data in a af:table and do CRUD operations?
    Thanks
    Edited by: 972255 on 05/12/2012 09:51

    Hi,
    honestly, the best way is to provide users with an option to filter the result set displayed in the table, to reduce its size. No one will scroll through 2,000,000 rows using the table scrollbar.
    So one hint for optimization would be a query form (e.g. af:query)
    To answer your question about "scrollable" vs. "range paging", see
    http://docs.oracle.com/cd/E21043_01/web.1111/b31974/bcadvvo.htm#ADFFD1179
    Pay attention to what is written in the context of "The range paging access mode is typically used for paging through read-only row sets, and often is used with read-only view objects."
    Frank

  • How to Capture a Table with large number of Rows in Web UI Test?

    HI,
    Is there any possibility of capturing a DOM table with a large number of rows (say more than 100) in a Web UI Test?
    Or is there any bug?

    Hi,
    You can try following code to capture the table values.
    To store the table values in CSV :
    web.table( xpath_of_table ).exportToCSVFile("D:\exporttable.csv", true);
    TO store the table values in a string:
    String tblValues = web.table( xpath_of_table ).exportToCSVString();
    info(tblValues);
    Thanks
    -POPS

  • Af:table Scroll bars not displayed in IE11 for large number of rows

    Hi. I'm using JDeveloper 11.1.2.4.0.
    The requirements of our application are to display a table potentially containing a very large number of rows (sometimes in excess of 3 million). While the user does not need to scroll through this many rows, the QBE facility allows drill-down into specific information in the rowset. We moved up to JDeveloper 11.1.2.4.0 primarily so IE11 could be used instead of IE8, to overcome input latency in ADF forms.
    However, it seems that IE11 does not enable the vertical or horizontal scroll bars for the af:table component when the table contains greater than (approx) 650,000 rows. This is not the case when the Chrome browser is used. Nor was this the case on IE8 previously (using JDev 11.1.2.1.0).
    When the table is filtered using the QBE (to a subset < 650,000 rows), the scroll bars are displayed correctly.
    In the code the af:table component is surrounded by an af:panelCollection component which is itself surrounded by an af:panelStretchLayout component.
    Does anyone have any suggestions as to how this behaviour can be corrected? Is it purely a browser problem, or might there be a programmatic workaround in ADF?
    Thanks for your help.

    Thanks for your response. That's no longer an option for us though...
    Some further investigation into the generated HTML has yielded the following information...
    The missing scroll bars appear to be as a consequence of the setting of a style for the horizontal and vertical scroll bars (referenced as vscroller and hscroller in the HTML).  The height of the scrollbar appears to be computed by multiplying the estimated number of rows in the iterator on which the table is based by 16 to give a scrollbar size proportional to the amount of data in the table, although it is not obvious why that should be done for the horizontal scroller.  If this number is greater than or equal to 10737424 pixels then the scroll bars do not display in IE11.
    It would seem better to cap this height at a sensible maximum number of pixels for large row counts.
    Alternatively, is it possible to find where this calculation is taking place and override its behaviour?
    Thanks.

  • Deleting a large number of rows from a table

    Hi,
    Consider tables A, B, C, D, E and F, each with 100,000+ records. Tables B, C and D are dependent on table A (with foreign key constraints). When I delete records from all tables, tables B, C and D take at most 30-40 seconds each, while table A takes 30-40 minutes. All tables have indexes.
    Method I have used:
    1. Created a temp table.
    2. Then deleted from B, C, D, E and F all records matching the temp table, in batches of 500:
    delete from B where exists (select 1 from temp where b.col1=temp.col1);
    3. Please suggest why it is taking so much time to delete records from table A.
    When deleting data from such a master table, does Oracle check all dependent tables even if no dependent data is present? If yes, could you please suggest how to avoid this? I hope it does not evaluate CHECK constraints while deleting data.
    Thanks,
    Avinash
    Edited by: user12952025 on Apr 30, 2013 2:55 AM
    Edited by: user12952025 on Apr 30, 2013 2:56 AM
    Edited by: user12952025 on Apr 30, 2013 2:57 AM

    user12952025 wrote:
    > Consider tables A, B, C, D, E and F, each with 100,000+ records. Tables B, C and D are dependent on table A (with foreign key constraints). When I delete records from all tables, tables B, C and D take at most 30-40 seconds each, while table A takes 30-40 minutes. All tables have indexes.
    What attribute of the foreign key is specified? Is it ON DELETE CASCADE? If yes, then deleting data from the child tables is, in a way, unnecessary; a delete from the parent alone will suffice.
    > Method I have used:
    > 1. Created a temp table.
    > 2. Then deleted from B, C, D, E and F all records matching the temp table, in batches of 500:
    > delete from B where exists (select 1 from temp where b.col1=temp.col1);
    > 3. Please suggest why it is taking so much time to delete records from table A.
    > When deleting data from such a master table, does Oracle check all dependent tables even if no dependent data is present? If yes, could you please suggest how to avoid this? I hope it does not evaluate CHECK constraints while deleting data.
    Another way is to "switch off" the relationship while deleting the data:
    ALTER TABLE table_name
    DISABLE CONSTRAINT constraint_name;
    and then delete the data from each of the tables.
    You did specify the number of rows in each table; it would have been better to also mention the number of rows to be deleted.
    It is not a hard-and-fast rule, but it would generally perform better to copy the data to be retained from the parent table into a temporary table, drop the parent table, and rename the temporary table to the parent table. Something similar can be done for the child tables.
    You may then re-enable the foreign key constraints.
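    A hedged sketch of the disable/delete/re-enable approach described above; the constraint names are hypothetical:

    -- 1. Disable the foreign keys pointing at the parent while deleting
    ALTER TABLE b DISABLE CONSTRAINT fk_b_a;
    ALTER TABLE c DISABLE CONSTRAINT fk_c_a;
    ALTER TABLE d DISABLE CONSTRAINT fk_d_a;

    -- 2. Delete from the child tables, then from the parent
    DELETE FROM b WHERE EXISTS (SELECT 1 FROM temp t WHERE b.col1 = t.col1);
    DELETE FROM c WHERE EXISTS (SELECT 1 FROM temp t WHERE c.col1 = t.col1);
    DELETE FROM d WHERE EXISTS (SELECT 1 FROM temp t WHERE d.col1 = t.col1);
    DELETE FROM a WHERE EXISTS (SELECT 1 FROM temp t WHERE a.col1 = t.col1);
    COMMIT;

    -- 3. Re-enable and re-validate the constraints
    ALTER TABLE b ENABLE VALIDATE CONSTRAINT fk_b_a;
    ALTER TABLE c ENABLE VALIDATE CONSTRAINT fk_c_a;
    ALTER TABLE d ENABLE VALIDATE CONSTRAINT fk_d_a;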

  • Java executed SQL statement to XML - large number of rows

    Hello,
    I have to write some code to generate XML as the result of a query. AFAIK I am using the latest versions of the relevant Java libraries etc.
    I have found that using a max rows setting above 2000 results in 'Exception in thread "main" java.lang.OutOfMemoryError' errors.
    I thought I could overcome this by reading the data in batches using the skip_rows functionality.
    I have included the code I am using below and would appreciate any help.
    Best Regards,
    Mark Robbins
    import java.sql.*;
    import java.math.*;
    import oracle.xml.sql.query.*;
    import oracle.xml.sql.docgen.*;
    import oracle.jdbc.*;
    import oracle.jdbc.driver.*;
    public class XMLRetr
    {
      public static void main(String args[]) throws SQLException
      {
        String tabName = "becc";
        String user = "test/test";
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        // initiate a JDBC connection
        Connection conn =
          DriverManager.getConnection("jdbc:oracle:oci8:" + user + "@test");
        // initialize the OracleXMLQuery
        OracleXMLQuery qry = new OracleXMLQuery(conn,
          "select * from " + tabName + " where TRUNC(SAMPLED) < '14-NOV-99'");
        // structure the generated XML document
        System.out.println("Before setting parameters");
        qry.setMaxRows(8000);        // set the maximum number of rows to be returned
        qry.setRowsetTag("ROOTDOC"); // set the root document tag
        qry.setRowTag("DBROW");      // sets the row separator tag
        qry.setRaiseException(true);
        qry.keepCursorState(true);
        // create the document generation factory. Note: there are methods in OracleXMLQuery
        // which don't require that an OracleXMLDocGen object be passed in; but rather, they
        // create an OracleXMLDocGen object of their own.
        OracleXMLDocGen doc = (OracleXMLDocGen) new OracleXMLDocGenString();
        for (int rowCnt = 1; rowCnt < 8000; rowCnt = rowCnt + 1000)
        {
          // get the xml
          System.out.println("Before skip rowCnt:" + rowCnt);
          qry.setSkipRows(rowCnt); // process next batch
          System.out.println("Before getXML");
          // this is where I get the exception on the second iteration of the loop
          qry.getXML(doc);
          System.out.println("Displaying document");
          // System.out.println(doc.getXMLDocumentString());
          System.out.println("Row number:" + rowCnt);
          System.out.flush();
          System.out.println("End of document");
        }
        qry.close();
      }
    }

    I used qry.getXMLString(), but called it for every row, i.e. set
    qry.setMaxRows(1) and qry.setSkipRows(rowcount - 1).
    The downside is that I had to post-process the String to remove the processing instruction and the document-level tags.
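    If the XML can be produced on the database side instead (Oracle 9i and later), DBMS_XMLGEN offers the same max-rows style of batching in PL/SQL. A minimal sketch; the query reuses the table from the post above, and the batch size of 1000 is only illustrative:

    DECLARE
      l_ctx  DBMS_XMLGEN.ctxHandle;
      l_xml  CLOB;
    BEGIN
      l_ctx := DBMS_XMLGEN.newContext(
                 'SELECT * FROM becc WHERE TRUNC(sampled) < DATE ''1999-11-14''');
      DBMS_XMLGEN.setRowsetTag(l_ctx, 'ROOTDOC');
      DBMS_XMLGEN.setRowTag(l_ctx, 'DBROW');
      DBMS_XMLGEN.setMaxRows(l_ctx, 1000);      -- rows per generated document
      LOOP
        l_xml := DBMS_XMLGEN.getXML(l_ctx);     -- next batch; cursor state is kept
        EXIT WHEN DBMS_XMLGEN.getNumRowsProcessed(l_ctx) = 0;
        NULL;                                   -- write l_xml to a file or table here
      END LOOP;
      DBMS_XMLGEN.closeContext(l_ctx);
    END;
    /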

  • Export to Excel - Large number of Rows

    Hi All,
    I have a report in Answers which returns around 150,000 to 200,000 records. I tried to export this report in Excel as well as Excel 2000 format.
    I got only 65,004 records, but the page navigator shows "Records 1-65000". When I click Update Row Count in the Physical Layer it shows 170,560 records. When I click the 'All Pages' button in the page navigator, the page hangs. How do I fix this issue?
    Also, Excel has a limit of 65,536 rows per sheet. If the number of records exceeds the Excel limit, another sheet needs to be added automatically with the remaining records. How do I configure this setting?

    Excel has a limit of accommodating 65k rows; I don't think we can change the Excel limit. When it comes to the OBIEE report, you can change the instance config file (OracleBI Data/web/config) to accommodate any number of rows, as per your convenience.
    Thanks

  • Large number of rows in ResultSet

    Hi,
    I have a huge number of rows to retrieve from the database using a ResultSet, but getting the records seems slow. While it is still retrieving data from the database, can we already read data from the ResultSet?
    Advance wishes,
    Daiesh

    No, he can't, but who's to say that's what's slowing him down? Could be bad JDBC code, old JDBC drivers, or bad queries.
    > Could be. Honestly, why does it matter to you?
    It matters a great deal. If he's talking about a web page, and LOTS of people who ask this question are bringing that data down to the browser, my answer would be that no user wants to deal with 500K records all at once. A well-placed WHERE clause or filter, or paging through 25 at a time a la Google, would be my recommendation in that case.
    > Did you have an answer all prepared to go so long as he gave you a reason that was up to your rigorous standards?
    Yes, see above.
    > You could have said "make sure you really need all that data, and if you do, then try X, Y, and Z to speed it up". But to just throw that out with no evidence you're prepared to help regardless? Why bother?
    See above. There was a reason for the question.
    > Oh wait... I get it ... did I mistake your submission to "The Most Useless, Unhelpful, and Arrogant Post Contest" for a real post? My bad...
    Nope, that would be yours, dumb @ss.
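    If the "paging through 25 at a time" route is taken, the classic Oracle pattern pushes the paging into the query itself, so only one page crosses the network per request. A hedged sketch with placeholder table/column names and bind variables:

    SELECT *
      FROM (SELECT t.*, ROWNUM rn
              FROM (SELECT id, name
                      FROM big_table      -- hypothetical table
                     ORDER BY id) t
             WHERE ROWNUM <= :hi)         -- e.g. 50 for the second page of 25
     WHERE rn > :lo;                      -- e.g. 25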

  • Issue in updating large number of rows which is taking a long time

    Hi all,
    I'm new to the Oracle forums. First I will explain my problem:
    1) I have a table with 350 columns and two indexes: one on the primary key id,
    and the other a composite index (a combination of two functional ids).
    2) Through my application, the user can calculate some functional conditions and the result
    is updated in the same table.
    3) The table consists of all input, intermediate and output columns.
    4) All calculations are done through update statements. The problem is that, for one
    complete process, the total number of update statements hitting the DB is around 1000.
    5) Of the two indexed columns, one is mandatory in every update WHERE clause,
    so it is always present, but the other is optional.
    6) Updating the table takes a long time if the row count exceeds 100,000 (1 lakh).
    7) The scenario:
    a. Say there are 500,100 records in the table, of which mandatory indexed id 1 has
    100 records and id 2 has 500,000 records.
    b. If I process id 1, it is very fast and executes within 10 seconds. But if I process id 2,
    it takes more than 4 minutes to update.
    Is there any way to increase the speed of the update statements? I am using Oracle 10g.
    Please help me with this, since I am a developer and don't have much knowledge of Oracle.
    Thanks in advance.
    Regards,
    Sethu

    Refer to this link:
    http://hoopercharles.wordpress.com/2010/03/09/vsession_longops-wheres-my-sql-statement/
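    The v$session_longops view described in that link can also be queried directly while the update runs, to see how far along it is. A minimal sketch:

    -- progress of long-running operations (full scans, hash joins, etc.)
    SELECT sid, opname, target, sofar, totalwork,
           ROUND(sofar / totalwork * 100, 1) AS pct_done,
           time_remaining
      FROM v$session_longops
     WHERE totalwork > 0
       AND sofar <> totalwork;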

  • How do I process a large number of rows using ADO?

    I need to iterate through a table with about 12 million records and process each row individually. The project I'm doing cannot be resolved with a simple UPDATE statement.
    My concern is that when I perform the initial query, so much data will be returned that I'll simply crash.
    Ideally I would get to a row, perform my operation, then go to the next row ... and so on and so on.
    I am using ADO / C++

    I suggest you simply use the default fast-forward read-only (firehose) cursor to read the data. This will stream data from SQL Server to your application, and client memory usage will be limited to the internal API buffers without resorting to paging.
    I ran a quick test of this technique using ADO classic and the C# code below, and it ran in under 3 minutes (35 seconds without the file processing) on my Surface Pro against a remote SQL Server. I would expect C++ to be significantly faster since it won't incur the COM interop penalty. The same test with SqlClient ran in under 10 seconds.
    static void test()
    {
        var sw = Stopwatch.StartNew();
        Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff"));
        object recordCount;
        var adoConnection = new ADODB.Connection();
        adoConnection.Open(@"Provider=SQLNCLI11.1;Server=serverName;Database=MarketData;Integrated Security=SSPI");
        var outfile = new StreamWriter(@"C:\temp\MarketData.txt");
        var adoRs = adoConnection.Execute("SELECT TOP(1200000) Symbol, TradeTimestamp, HighPrice, LowPrice, OpenPrice, ClosePrice, Volume FROM dbo.OneMinuteQuote;", out recordCount);
        while (!adoRs.EOF)
        {
            outfile.WriteLine("{0},{1},{2},{3},{4},{5},",
                (string)adoRs.Fields[0].Value.ToString(),
                ((DateTime)adoRs.Fields[1].Value).ToString(),
                ((Decimal)adoRs.Fields[2].Value).ToString(),
                ((Decimal)adoRs.Fields[3].Value).ToString(),
                ((Decimal)adoRs.Fields[4].Value).ToString(),
                ((Decimal)adoRs.Fields[5].Value).ToString());
            adoRs.MoveNext();
        }
        adoRs.Close();
        adoConnection.Close();
        outfile.Close();
        sw.Stop();
        Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff"));
        Console.WriteLine(sw.Elapsed.ToString());
    }
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • HOW TO - Insert large number of rows fast

    I have tables A, B and C, and I select some data from A, B and C using some complex criteria.
    Tables A, B and C have 10 million rows.
    The final number of rows to be inserted into table D is about 3 million.
    Currently the rows are inserted one at a time, so there are 3 million inserts in the PL/SQL.
    What is the best way to create these rows?
    Pseudocode:
    begin
      for loop ..... loop
        -- complex selection criteria
        insert into D ..... ;
      end loop;
    end;
    SS

    > Is there a way to optimize the inserts?
    The inserts take very little time.
    Re: Insert Statement Performance Degradation
    In that example the same number of inserts into the same table takes 0.03 seconds to insert 100,000 rows, and 3.06 seconds when looped. If the entire insert operation were optimized away to take zero time, the loop would still take 3.03 seconds, which represents a performance increase of a little under 1%.
    > As I said, it is not a single query by which I build a row for insert into table D; it is a complex operation which is not necessary to explain here.
    This is the slow part that needs optimizing.
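    For completeness, the two set-based patterns usually suggested for this kind of load; the table and column names are placeholders, and which pattern applies depends on whether the selection logic can be expressed in SQL at all:

    -- 1) If the criteria can be written as one query: a single direct-path insert
    INSERT /*+ APPEND */ INTO d (col1, col2)
    SELECT a.col1, b.col2                -- complex join/filter of A, B and C goes here
      FROM a JOIN b ON b.a_id = a.id
             JOIN c ON c.b_id = b.id
     WHERE a.some_flag = 'Y';
    COMMIT;

    -- 2) If rows must be built procedurally: batch them with BULK COLLECT / FORALL
    DECLARE
      TYPE t_rows IS TABLE OF d%ROWTYPE;
      l_rows t_rows := t_rows();
    BEGIN
      -- ... fill l_rows a few thousand rows at a time from the complex logic ...
      FORALL i IN 1 .. l_rows.COUNT
        INSERT INTO d VALUES l_rows(i);
      COMMIT;
    END;
    /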

  • CLR Procedure and returning large number of rows

    I have a CLR stored procedure coded in C# that retrieves data from a web service, and returns that data using SendResultsStart/SendResultsRow/SendResultsEnd. This all works fine, except when the data from the web service runs to tens of thousands of records.
    The code itself takes about 3 minutes on average to do all it's work with around 50000-60000 records, but the procedure does not return in SSMS for about another 10-15 minutes, during which time the CPU and memory usage go up significantly.
    To rule out any of the CLR code as the culprit, I created a very simple CLR procedure that just loops to return 100000 records with int and nvarchar(256) fields with the current count, and "ABC" followed by the count.  Here is the code:
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void ABC()
    {
        System.Diagnostics.Stopwatch ExecuteTimer = System.Diagnostics.Stopwatch.StartNew();
        SqlMetaData[] ResultMetaData = new SqlMetaData[2];
        ResultMetaData[0] = new SqlMetaData("count", SqlDbType.Int);
        ResultMetaData[1] = new SqlMetaData("text", SqlDbType.NVarChar, 256);
        SqlContext.Pipe.SendResultsStart(new SqlDataRecord(ResultMetaData));
        for (int x = 0; x < 100000; x++)
        {
            SqlDataRecord ResultItem = new SqlDataRecord(ResultMetaData);
            ResultItem.SetValue(0, x);
            ResultItem.SetValue(1, "ABC" + x.ToString());
            SqlContext.Pipe.SendResultsRow(ResultItem);
        }
        SqlContext.Pipe.SendResultsEnd();
        TimeSpan ExecTime = ExecuteTimer.Elapsed;
        SqlContext.Pipe.Send("Elapsed Time: " + ExecTime.Minutes.ToString() + ":" + ExecTime.Seconds.ToString() + "." + ExecTime.Milliseconds.ToString());
    }
    I then executed procedure ABC in SSMS, and it took 21 minutes to return.  All of the data rows were visible after just a couple of seconds, but the query continued to run as the CPU and memory went up again.
    Is this really how long it should take to return 100000 rows, or am I missing something?  Is there a better approach than using SendResultsStart/SendResultsRow/SendResultsEnd?
    I've googled this to death and haven't found anything that helped or even explained why this is.
    I would greatly appreciate any suggestions or alternate methods to achieve this faster.
    Thanks!
    Alex

    When you create a new object, space on the garbage-collected heap is allocated for that object, and the address is stored in a reference. Some time later, there will no longer be any references that hold the address of the allocated object. It doesn't matter whether that happened because the reference was set to null or because the reference was on the stack and is no longer in lexical scope; the end result is the same: the garbage collector will, at some point in time, have to perform the book-keeping operations necessary to identify that the space allocated for that now-unreferenced object can be re-used. When, on the other hand, you only create a single SqlDataRecord object and hold onto the reference, all of the book-keeping operations associated with creating 100,000 objects are eliminated. This is why the documentation for the SqlDataRecord class advises that:
    When writing common language runtime (CLR) applications, you should re-use existing
    SqlDataRecord objects instead of creating new ones every time. Creating many new
    SqlDataRecord objects could severely deplete memory and adversely affect performance.

  • Jtable on JScrollPane get corrupted for large number of rows

    Hi, I have a problem with the vertical scroll bars of a JScrollPane.
    When I move the scroll bar quickly on a JTable (with 2000 rows), the rows get corrupted.
    Please let me know how I can fix this problem.

    Hi,
    I have just recompiled my (previously 1.3.1) application with 1.4.2 and notice the same problem. The problem starts somewhere between 1700 and 2500 rows.
    It's not just the scroll bar for me; the display corrupts wherever I click the mouse on the table area.
    Did you manage to diagnose it?
    Thanks, Dave

  • Update large number of rows

    I have a query as follows:
    UPDATE TABLE_1 A SET COLUMN_1 = (SELECT COLUMN_1 FROM TABLE_2 B WHERE A.COLUMN_2 = B.COLUMN_2)
    Both tables have 400k to 500k rows and the update is taking a long time. How can I improve this update statement? Can I use a parallel query? How about using hints?
    Thanks

    > How can I improve this update statement?
    You can add a WHERE clause to make sure you don't update an existing column to null if no row is found in the subquery:
    UPDATE TABLE_1 A
    SET    a.COLUMN_1 = (SELECT b.COLUMN_1
                         FROM   TABLE_2 B
                         WHERE  A.COLUMN_2 = B.COLUMN_2)
    WHERE  EXISTS       (SELECT b.COLUMN_1
                         FROM   TABLE_2 B
                         WHERE  A.COLUMN_2 = B.COLUMN_2)
    ;
    For the performance... you'll need to look at (and post) an explain plan to see what's going on.
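    On Oracle 10g and later the same correlated update is often written as a MERGE, which reads TABLE_2 only once. A hedged alternative sketch (it assumes COLUMN_2 is unique in TABLE_2, as the original subquery also implies):

    MERGE INTO table_1 a
    USING table_2 b
       ON (a.column_2 = b.column_2)
     WHEN MATCHED THEN
       UPDATE SET a.column_1 = b.column_1;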
