Deleting bulk amount of rows

Hi.
I have to delete more than 1 lakh (100,000) records from different tables on a daily basis, based on certain conditions, so I decided to create a procedure and schedule it using DBMS_JOB.
But my concern is: which exceptions should I consider at the time of bulk deletion, and what is the best method to do this so that the deletion runs as fast as possible?
Please let me know.
Regards
Benn

The scenario is a little vague.
Did you try a regular delete statement?
I would suggest this link for you.
http://tkyte.blogspot.com/2006/10/slow-by-slow.html
In there Tom says:
My mantra, that I'll be sticking with thank you very much, is:
* You should do it in a single SQL statement if at all possible.
* If you cannot do it in a single SQL Statement, then do it in PL/SQL.
* If you cannot do it in PL/SQL, try a Java Stored Procedure.
* If you cannot do it in Java, do it in a C external procedure.
* If you cannot do it in a C external routine, you might want to seriously think about why it is you need to do it…
think in sets...
learn all there is to learn about SQL...
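For the scenario in the original post (roughly a lakh of rows a day, driven by a condition), here is a minimal sketch of the single-statement approach wrapped in a procedure and scheduled with DBMS_JOB. The table name, column, and retention window are made up for illustration:

CREATE OR REPLACE PROCEDURE purge_old_rows IS
BEGIN
  -- one set-based delete per table; no row-by-row looping
  DELETE FROM orders_history
   WHERE created_dt < TRUNC(SYSDATE) - 90;
  COMMIT;
END purge_old_rows;
/

DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- run the purge every night at 02:00
  DBMS_JOB.SUBMIT(l_job,
                  'purge_old_rows;',
                  SYSDATE,
                  'TRUNC(SYSDATE) + 1 + 2/24');
  COMMIT;
END;
/

If a single delete generates too much undo for one transaction, the usual fallback is to delete in chunks inside PL/SQL, committing every N rows, which is still set-based per chunk.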
Rahul.

Similar Messages

  • How to reduce the database size after deleting huge amount of rows?

    Hi,
    I have a large database. I removed almost half of the data/rows. Now I need to reduce the size of the database file, as I need more disk space.
    What should I do, in detail, please?
    Thanks.

    Deleting a large amount of data will ultimately put ghost cleanup into action. You will not be able to free space using a shrink operation until it completes, so wait for it to finish and then start shrinking. Note that shrinking causes index fragmentation, so you will have to rebuild the indexes after the shrink is completed.
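    A hedged T-SQL sketch of that sequence; the database, file, and table names below are placeholders:

    -- shrink the data file once ghost cleanup has reclaimed the space freed by the deletes
    USE MyDatabase;
    DBCC SHRINKFILE (MyDatabase_Data, 10240);   -- target size in MB, adjust as needed
    -- shrinking fragments indexes, so rebuild them afterwards
    ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;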

  • Deleting large amounts of data

    All,
    I have several tables that have about 1 million plus rows of historical data that is no longer needed and I am considering deleting the data. I have heard that deleting the data will actually slow down performance as it will mess up the indexing, is this true? What if I recalculate statistics after deleting the data? In general, I am looking for advice what is best practices for deleting large amounts of data from tables.
    For everyones reference I am running Oracle 9.2.0.1.0 on Solaris 9. Thanks in advance for the advice.
    Thanks in advance!
    Ron

    Another problem with delete is that it generates a vast amount of redo log (and archived log) information. A better way to get rid of the unneeded data would be to use the TRUNCATE command:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_107a.htm#2067573
    The problem with truncate is that it removes all the data from the table. To keep some of the data you can do the following:
    1. create <stage_table> as select * from <main_table> where <data you want to keep clause>
    2. save the indexes, constraints, trigger definitions and grants from the main_table
    3. drop the main table
    4. rename <stage_table> to <main_table>
    5. recreate indexes, constraints and triggers.
    Another method is to use partitioning: partition the data based on a key (you mentioned "historical" - the key could be some date column). Then you can drop the historical partitions when you need to.
    As for your question about recalculating the statistics - that will not release the storage allocated to the indexes. You'll need to execute ALTER INDEX <index_name> REBUILD:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_18a.htm
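    A rough sketch of that keep-and-swap approach (table names and the keep condition are illustrative; indexes, constraints, triggers and grants still have to be scripted out and recreated by hand):

    -- 1. keep only the rows that are still needed
    CREATE TABLE stage_table AS
      SELECT * FROM main_table
       WHERE trans_date >= ADD_MONTHS(SYSDATE, -12);
    -- 3./4. after scripting out the dependent objects, swap the tables
    DROP TABLE main_table;
    RENAME stage_table TO main_table;
    -- 5. recreate the dependent objects, e.g.
    CREATE INDEX main_table_dt_ix ON main_table (trans_date);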
    Mike

  • Delete bulk data from multiple tables

    Hi,
    I have two tables from which data needs to be deleted based on a where clause.
    Can anyone tell me how to delete bulk data (more than 10,000 rows at a time) without hurting performance?
    Regards,
    Dinesh

    LPS wrote:
    This will be a simple delete statement; just make sure the columns referred to in the where clause are indexed.

    Indexing may or may not help. If he is deleting 10,000 rows out of 20,000 it won't help at all. In fact, indexing may make things worse, as it will slow down the delete.
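    For the original question (two tables, roughly the same where clause), a minimal sketch of the plain set-based approach; the table names, column and cutoff are made up, and child rows go first if a foreign key links the tables:

    -- delete child rows first if a foreign key links the two tables
    DELETE FROM order_items
     WHERE order_id IN (SELECT order_id FROM orders
                         WHERE order_date < ADD_MONTHS(SYSDATE, -12));
    DELETE FROM orders
     WHERE order_date < ADD_MONTHS(SYSDATE, -12);
    COMMIT;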

  • What is the best practice of deleting large amount of records?

    hi,
    I need your suggestions on the best practice for regularly deleting a large number of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To keep the database from growing too fast, I need a way to remove all records older than 3 days, every day.
    For on-premise SQL Server I could use a SQL Server Agent job, but since SQL Azure does not support SQL Agent jobs yet, I use a Web job scheduled to run every day to delete the old records.
    To prevent table locking when deleting a very large number of records, my web job code limits the number of deleted records to 5000 per call and the batch delete count to 1000 each time it calls the delete stored procedure:
    1. Get total amount of old records (older then 3 days)
    2. Get the total iterations: iteration = (total count/5000)
    3. Call SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this (the procedure header and the @table declaration are reconstructed from the calling code above):
     CREATE PROCEDURE PurgeRecords @BatchCount INT, @MaxCount INT
     AS
     BEGIN
      -- collect the candidate keys once
      DECLARE @table TABLE (RecordId INT PRIMARY KEY);
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE());

      -- delete the candidates in small batches until none are left
      DECLARE @RowsDeleted INTEGER;
      SET @RowsDeleted = 1;
      WHILE (@RowsDeleted > 0)
      BEGIN
       WAITFOR DELAY '00:00:01';
       DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table);
       SET @RowsDeleted = @@ROWCOUNT;
      END
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long.
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count: 1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count 1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time: 00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time: 00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time: 00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time comes to around 11 hours.
    Any suggestions to improve the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. Peak-time inserts (avgN) are N times the average (avg). For example, if the average per hour is 10,000 and peak time is 5 times that, peak is 50,000. This doesn't have to be precise.
    3. The desirable maximum number of records deleted per batch is 5,000; it doesn't have to be exact.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not, and inserts can peak at 5 times the average, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Write a delete statement inside a loop where iteration I (from 1 to 1,000) deletes the records whose creation time is older than a cutoff that advances by 4.32 minutes per iteration, ending at today minus 3 days (see the sketch below).
    In this way the number of records deleted in each batch is not even and not known in advance, but it should mostly stay within 5,000; you run many more batches, but each batch is very fast.
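    A rough T-SQL sketch of one reading of these steps, using the table and column names from the original post; the slice length, the batch count and the starting point of the sweep are assumptions:

    DECLARE @batches INT = 1000;
    DECLARE @i INT = 1;
    DECLARE @finalCutoff DATETIME = DATEADD(DAY, -3, GETDATE());
    DECLARE @cutoff DATETIME;

    WHILE (@i <= @batches)
    BEGIN
        -- each pass advances the cutoff by roughly 4.32 minutes (259 seconds),
        -- so each DELETE only touches one small time slice of old rows
        SET @cutoff = DATEADD(SECOND, -(@batches - @i) * 259, @finalCutoff);
        DELETE FROM [MyTable] WHERE [CreateTime] < @cutoff;
        SET @i = @i + 1;
    END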
    Frank

  • Rounding Tax amount in row

    Hi Experts,
    One of our customers wants to round the tax amount in each row of marketing documents.
    When searched the help of 'Document Settings:General Tab",  we found a setting field for this called 'Round tax amount in rows'.
    But the same is missing from the Document Settings window, General Tab.
    We are using SAP Business One 2007B, PL13
    Please Help.
    Thanks and Regards
    Ajith G

    Hi Ajith,
    For the tax definition functions, choose Administration -> Setup -> Financials -> Tax -> Tax Engine Config -> Tax Code Determination -> Tax Formula.
    Select the formula which you want to round off, then pick Round from the Operation pull-down and keep your variable.
    Simple example,
    Service_BaseAmt=Total
    Service_TaxAmt=Round(Service_BaseAmt*Service_Rate,0)
    It will round 0.5 up to 1.0 and anything less than 0.5 down to 0.0.
    IF you want to Round off final Tax amount(ex. VAT_TaxAmt).
    see the example Tax Creation,
    Service_BaseAmt=Total
    Service_TaxAmt=Service_BaseAmt*Service_Rate
    HigheCess_ST_BaseAmt=Service_TaxAmt
    HigheCess_ST_TaxAmt=HigheCess_ST_BaseAmt*HigheCess_ST_Rate
    VAT_BaseAmt=Total+(Total*Service_Rate)+((Total*Service_Rate)*Cess_ST_Rate)+((Total*Service_Rate)*HigheCess_ST_Rate)
    VAT_TaxAmt=Round(VAT_BaseAmt*VAT_Rate,0)
    Regards,
    Madhan.

  • How do I delete large amounts of photos from my ipad?

    a

    Are you using iPhoto on your iPad? Then deleting photos will depend on the album they are in and how they have been added to the iPad.
    See this document for IOS 6 and earlier:   iPhoto for iOS (iPad): Delete photos from iPhoto
    If you also have a Mac, you can quickly delete large amounts of photos from your Camera Roll by connecting your iPad to your computer using USB and importing the photos into Image Capture, iPhoto, or Aperture. Then set the option to delete the photos after importing.
    Photos you synced to your iPad can only be deleted by syncing again.
    If you are using IOS 7 you can delete larger amounts of photos from your Camera Roll in the "Moments" view:
    In the Photos app you can select all photos in a "Moment" at once by pressing "Select" to the right of the "Moment" name, then press the Trash icon.
    Regards
    Léonie

  • How do i delete large amount of emails with the iPad mini

    how do i delete large amounts of emails using the ipad mini

    To delete multiple emails touch Edit, select all the emails and then touch Trash.
    There is no way to delete all emails from the Inbox. One can delete all emails from Trash.

  • Delete a range of rows in Oracle Table

    How do I delete a range of values in an Oracle table using SQL*Plus? For example, I want to delete rows with user_id from 12344 to 12399.

    Justin, thanks for the earlier help. Part of the requirement is met: the tester can now delete the range of rows from the 'dropbox' table. I am not quite done with the second part of the requirement, which is to pull the first 100 or 200 (etc.) rows from the parent table (user_auth) and delete them from dropbox; user_auth itself is not affected. I am able to use RANK and DENSE_RANK by create_time against the user_auth table and get the first 100 or 200 rows. However, when I join with dropbox (for example dropbox.user_id = user_auth.user_id), the query hangs. I am not sure whether I need to write a procedure for this or whether it can be achieved in plain SQL. This is a test environment.
    SQL> desc user_auth
    Name Null? Type
    USER_ID NOT NULL NUMBER(10)
    USER_NAME NOT NULL VARCHAR2(32)
    ACCOUNT_STATUS_CODE NOT NULL NUMBER(3)
    PASSWD VARCHAR2(13)
    DEVICE_ID_TYPE VARCHAR2(10)
    DEVICE_ID VARCHAR2(36)
    OLD_MSISDN NUMBER(15)
    MASTER_LOCK_CODE VARCHAR2(3)
    PARTNER_ID NOT NULL NUMBER(3)
    CREATE_TIME NUMBER(10)
    MAIL_PROVISIONED NOT NULL NUMBER(1)
    PASSWD_TYPE_ID NUMBER(3)
    USER_ID is primary key.
    INDEX_NAME INDEX_TYPE UNIQUENES
    PK_USER_AUTH_USER_ID NORMAL UNIQUE
    I_USER_AUTH_USER_NAME_PARTNER NORMAL UNIQUE
    I_USER_AUTH_DEVICE NORMAL UNIQUE
    I_USER_AUTH_MSISDN NORMAL UNIQUE
    DROPBOX
    Name Null? Type
    USER_ID NOT NULL NUMBER(10)
    DROPBOX_ID NUMBER(10)
    UPDATE_TIME NUMBER(10)
    MESSAGE_TYPE VARCHAR2(64)
    MESSAGE BLOB
    SURVIVE_REBOOT NUMBER(1)
    DELETE_TIME NOT NULL NUMBER(10)
    No Primary KEY
    INDEX_NAME INDEX_TYPE UNIQUENES
    I_DROPBOX_USER_ID NORMAL NONUNIQUE
    I_DROPBOX_SURVIVE_REBOOT NORMAL NONUNIQUE
    If I run
    select user_id, create_time, rnk1 FROM
    (select user_id, user_name, to_char(convert_unix_time(create_time)) create_time, DENSE_RANK() OVER (ORDER BY user_name) rnk1 from user_auth where user_name LIKE '%stomp%_%')
    where rnk1 < 100
    I get the result. How do I select these first 100 rows in dropbox and delete them? The relationship is many to many.
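    One possible way to turn that ranking query into the delete against DROPBOX; this is only a sketch (it ranks by user_name, exactly as the query above does), so try it in the test environment first:

    DELETE FROM dropbox d
     WHERE d.user_id IN (
           SELECT user_id
             FROM (SELECT user_id,
                          DENSE_RANK() OVER (ORDER BY user_name) rnk1
                     FROM user_auth
                    WHERE user_name LIKE '%stomp%_%')
            WHERE rnk1 < 100);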

  • Delete from 95 million rows table ...

    Hi folks, I need to delete from a regular table with 95 million rows. What are my best options? I have tried CTAS in parallel, but it failed after 1+ hrs due to a bad query; I am checking whether there is any other way to achieve this.
    Thanks in advance.

    user8604530 wrote:
    Hi folks, I need to delete from a regular table with 95 million rows. What are my best options? I have tried CTAS in parallel, but it failed after 1+ hrs due to a bad query; I am checking whether there is any other way to achieve this. Thanks in advance.

    How many rows are in the table BEFORE the DELETE?
    How many rows will be in the table AFTER the DELETE?
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ
    Handle:     user8604530
    Status Level:     Newbie
    Registered:     Mar 10, 2010
    Total Posts:     64
    Total Questions:     26 (22 unresolved)
    I extend to you my condolences since you rarely get your questions answered.
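    When most of a 95-million-row table is going away, the usual alternative to a huge DELETE is CTAS plus a swap. A rough sketch only - the degree of parallelism, names and keep condition are illustrative, and dependent objects still have to be scripted out and recreated:

    -- copy only the rows to keep, minimizing redo and using parallel query
    CREATE TABLE big_table_keep NOLOGGING PARALLEL 8 AS
      SELECT /*+ PARALLEL(t 8) */ *
        FROM big_table t
       WHERE created_dt >= DATE '2010-01-01';   -- placeholder keep condition
    -- after scripting out indexes, constraints and grants:
    DROP TABLE big_table;
    RENAME big_table_keep TO big_table;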

  • Delete in xsql:dml vs xsql:delete-request and returned rows attribute

    There is a difference in the number of rows returned between using a delete in xsql:dml vs xsql:delete-request. If I issue a delete via xsql:dml and the row I wish to delete is not in the table, then I get a result with rows equal to 0 as expected. However if I issue a delete via xsql:delete-request and the row I wish to delete is again not in the table, then I get a result with rows equal to 1.
    It appears that the value of rows in the response to xsql:delete-request is the number of rows to be processed, ie the number of rows in the posted document, whereas the value of rows in the response to xsql:dml is the number of rows processed in the database.
    I'd expect that the result that we want is the number of rows processed in the database. Thus xsql:delete-request should use the rows attribute in the response to reflect the number of rows processed in the database and thus be consistent with xsql:dml, and possibly use another attribute to reflect the number of rows to be processed.
    The same problem occurs with an update in xsql:dml vs xsql:update-request.
    http://aetius/xsql/demo/dmldelete.xsql?cxn=demo&bind=ename&ename=COMPAQ&sql=delete+from+EMP+where+ENAME+%3d+?
    result is
    <xsql-status action="xsql:dml" rows="0"/>
    where dmldelete.xsql is
    <xsql:dml
    null-indicator="yes"
    connection="{@cxn}"
    bind-params="{@bind}"
    xmlns:xsql="urn:oracle-xsql">
    {@sql}
    </xsql:dml>
    http://aetius/xsql/demo/reqdelete.xsql?cxn=demo&table=EMP&key=ENAME
    where posted document is
    <ROWSET><ROW><ENAME>COMPAQ</ENAME></ROW></ROWSET>
    result
    <xsql-status action="xsql:delete-request" rows="1"/>
    where reqdelete.xsql is
    <xsql:delete-request
    connection="{@cxn}"
    key-columns="{@key}"
    table="{@table}"
    xmlns:xsql="urn:oracle-xsql">
    </xsql:delete-request>
    Steve.

    If you post as an HTML parameter, then you can directly reference the parameter value as either a lexical parameter:
    <xsql:dml>
    insert into foo(xml_column) values ('{@paramname}')
    </xsql:dml>
    or as a bind parameter:
    <xsql:dml bind-params="paramname">
    insert into foo(xml_column) values (?)
    </xsql:dml>
    I'd recommend the latter since it's more robust to the presence of quotes in the XML that's being posted.
    If you post the XML HTTP body, then you'd have to write a custom action handler that called the getPageRequest().getPostedDocument() to get hold of the posted XML Document.

  • DataTemplate Not showing data but showing correct amount of rows

    I just started using data templates to convert our Answers reports over to Publisher. Basically I am using the direct SQL to BI Server method:
    Copy the SQL from the advanced SQL field in BI Answers.
    Place the new SQL in the data template.
    Construct the dataStructure.
    It seems to be getting the proper number of rows, but none of the data is shown. I have checked the data multiple times for spelling errors and other simple issues but I can't find anything. It seems to be making a connection just fine, but it does not show any value in the data elements; it just returns 6 empty rows.
    Example XML Attached:
    <dataTemplate name="FiveDayCo" defaultPackage="" dataSourceRef="OracleRPD">
         <properties>
    <property name="include_parameters" value="true"/>
    <property name="include_null_Element" value="true"/>
    <property name="include_rowsettag" value="false"/>
    <property name="scalable_mode" value="off"/>
    <property name="db_fetch_size" value="300"/>
    </properties>
         <parameters/>
         <lexicals/>
         <dataQuery>
              <sqlStatement name="COOR" ><![CDATA[SELECT "- Correspondence"."% 5 Day Correspondence Response Time" saw_0, '90%' saw_1, "- Correspondence"."% 5 Day Correspondence Response Time"-90 saw_2, "- Correspondence Details"."Correspondence Type" saw_3 FROM Finance WHERE ("- Correspondence Details"."Correspondence Type" IN ('Customer Service', 'Routine', 'Undeliverable', 'Privacy')) AND ("Business Unit"."Business Unit ID" = 6.00) AND ("- Record Details"."Record Type" = 'CORE') AND ("- Correspondence Details"."Correspondence Status" <> 'Closed') AND (("- Primary Expected Completed Date"."Primary Expected Year" = cast (YEAR( date '2010-07-01') as char(4)) ) AND ("- Primary Expected Completed Date"."Primary Expected Calendar Month" = MONTH( date '2010-07-01'))) ORDER BY saw_1, saw_3]]></sqlStatement>
         </dataQuery>
         <dataStructure>
              <group name="Coorespondence" source="COOR" >
                   <element name="FiveDayCoorespondenceTime" source="saw_0" function=""/>
                   <element name="KPIValue" source="saw_1" function=""/>
                   <element name="Variance" source="saw_2" function=""/>
                   <element name="CoorespondenceType" source="saw_3" function=""/>
              </group>
         </dataStructure>
    </dataTemplate>

    A few things to check. Does the SQL by itself return data?
    Also, do you need the following: <dataTemplate name="FiveDayCo" defaultPackage="" dataSourceRef="OracleRPD"> ? If a package is not used/necessary, remove the reference.
    The following tag should be enough.
    <dataTemplate name="FiveDayCo">
    Your dataStructure code should be as follows (for each element use value= instead of source=, and I removed the function attribute since you didn't specify any function like sum or average):
    <dataStructure>
    <group name="Coorespondence" source="COOR" >
    <element name="FiveDayCoorespondenceTime" value="saw_0"/>
    <element name="KPIValue" value="saw_1"/>
    <element name="Variance" value="saw_2"/>
    <element name="CoorespondenceType" value="saw_3"/>
    </group>
    </dataStructure>
    For a good data template reference, check out the following: http://blogs.oracle.com/xmlpublisher/2009/06/data_template_progression.html
    Hope that helps!

  • HT1752 I am working on a chart in paged with 7 columns. Problem has occurred with amount of row. I have completed 999 rows and it won't allow me to continue. As I need to do approx 3000+ rows, can anyone explain to me how to add extra rows. Thanks Jane

    I am working on a chart in pages with 7 columns.
    Problem has occurred with amount of rows. I have completed 999 rows and it won't allow me to continue. As I need to do approx 3000+ rows, can anyone explain to me how to add extra rows. Thanks Jane

    Try posting in the Pages forum
    https://discussions.apple.com/community/iwork/pages

  • How do you delete more than one row on numbers?

    how do you delete more than one row on numbers?

    Hi operatorcarmax,
    Select the Rows (shift click works for me). Then hover the cursor over any selected Row label to see the upside-down triangle. Click on it to see a Pop-up menu.
    The Numbers User Guide is available for download under the Numbers Help Menu. A good read.
    Regards,
    Ian.

  • Return Set Amount Of Rows

    Hello All,
    Quick question I hope... How do I request a set number of rows and have Oracle stop once that number of rows has been reached?
    I am not looking for (example):
    select * from
    (SELECT rownum ROW_NUM, empID, empName FROM employees)
    where empName like '%SM%'
    AND
    ROW_NUM <= 10
    This works in most cases, but Oracle still evaluates all results and only returns rows where ROW_NUM meets the criteria. If I have a million rows with a matching empName and only want to return 10 results, it still evaluates all million rows instead of breaking out of the query early, so this does not work...

    Statement 1 is faster - rownum criteria is used along with the last_name
    Statement 2 is slower - rownum criteria is used after all the last names are fetched.
    --#1
    SQL> select first_name, last_name from emp2 where last_name like 'W%' and rownum < 10 ;
    FIRST_NAME           LAST_NAME
    Alana                Walsh
    Alana                Walsh
    Alana                Walsh
    Alana                Walsh
    Alana                Walsh
    Alana                Walsh
    Alana                Walsh
    Alana                Walsh
    Alana                Walsh
    9 rows selected.
    Execution Plan
    Plan hash value: 3151743630
    | Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |       |    "9"|   234 |    13   (0)| 00:00:01 |
    |*  1 |  COUNT STOPKEY               |       |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| EMP2  |  5145 |   130K|    13   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN          | EMP2X |       |       |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<10)
       3 - access("LAST_NAME" LIKE 'W%')
           filter("LAST_NAME" LIKE 'W%')
    Note
       - dynamic sampling used for this statement
    Statistics
             15  recursive calls
              0  db block gets
            157  consistent gets
              8  physical reads
              0  redo size
            552  bytes sent via SQL*Net to client
            380  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              9  rows processed
    --#2
    SQL> select * from ( select first_name, last_name, rownum rn from emp2 where last_name like 'W%'  ) where  rn< 10 ;
    FIRST_NAME           LAST_NAME                         RN
    Matthew              Weiss                              1
    Alana                Walsh                              2
    Jennifer             Whalen                             3
    Matthew              Weiss                              4
    Alana                Walsh                              5
    Jennifer             Whalen                             6
    Matthew              Weiss                              7
    Alana                Walsh                              8
    Jennifer             Whalen                             9
    9 rows selected.
    Execution Plan
    Plan hash value: 2594185790
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |      | "5145"|   195K|   619   (3)| 00:00:08 |
    |*  1 |  VIEW               |      |  5145 |   195K|   619   (3)| 00:00:08 |
    |   2 |   COUNT             |      |       |       |            |          |
    |*  3 |    TABLE ACCESS FULL| EMP2 |  5145 |   130K|   619   (3)| 00:00:08 |
    Predicate Information (identified by operation id):
       1 - filter("RN"<10)
       3 - filter("LAST_NAME" LIKE 'W%')
    Note
       - dynamic sampling used for this statement
    Statistics
             13  recursive calls
              0  db block gets
           2371  consistent gets
              0  physical reads
              0  redo size
            745  bytes sent via SQL*Net to client
            380  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              9  rows processed
