Need suggestion on deleting from five tables at a time

Hi,
I need some suggestions regarding a deletion; here is the scenario.
tab1 contains 100 items.
For each item, tables tab2 through tab6 contain 4,000 rows, so the loop runs once per item, deletes 20,000 rows, and then commits.
Currently, deleting 500,000 rows takes 1 hour. All the tables and indexes are analyzed.
DECLARE
  CURSOR C_CHECK_DELETE_IND
  IS
    SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y';
  TYPE p_item IS TABLE OF tab1.item%TYPE;
  act_p_item p_item;
BEGIN
  OPEN C_CHECK_DELETE_IND;
  LOOP
    FETCH C_CHECK_DELETE_IND BULK COLLECT INTO act_p_item LIMIT 5000;
    FOR i IN 1 .. act_p_item.COUNT
    LOOP
      DELETE FROM tab2 WHERE item = act_p_item(i);
      DELETE FROM tab3 WHERE item = act_p_item(i);
      DELETE FROM tab4 WHERE item = act_p_item(i);
      DELETE FROM tab5 WHERE item = act_p_item(i);
      DELETE FROM tab6 WHERE item = act_p_item(i);
      COMMIT;
    END LOOP;
    EXIT WHEN C_CHECK_DELETE_IND%NOTFOUND;
  END LOOP;
  CLOSE C_CHECK_DELETE_IND;
END;
I hope I have explained the scenario. Can you please suggest the right approach?
Thanks in advance.

drop the loop:
DELETE FROM tab2 WHERE item IN (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y');
DELETE FROM tab3 WHERE item IN (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y');
DELETE FROM tab4 WHERE item IN (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y');
DELETE FROM tab5 WHERE item IN (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y');
DELETE FROM tab6 WHERE item IN (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y');
If you can accomplish the task without looping, by all means do so!
You could also do a bulk delete:
FORALL j IN INDICES OF act_p_item SAVE EXCEPTIONS
  DELETE FROM tab2
   WHERE item = act_p_item(j);
(and the same for tab3 to tab6)
But unless the query SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y' is horrible, I'd stick to the
DELETE FROM tab2 WHERE item IN (SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y'); path.
Edited by: tanging on Dec 21, 2009 10:19 AM
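For completeness, here is a fuller sketch of the FORALL variant (reusing the cursor and collection from the original post; the batch size and per-batch commit are illustrative, SAVE EXCEPTIONS is omitted here and would need an ORA-24381 handler if partial failures must be tolerated, and tab3 to tab6 get the same treatment as tab2):

DECLARE
  CURSOR c_check_delete_ind IS
    SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y';
  TYPE p_item IS TABLE OF tab1.item%TYPE;
  act_p_item p_item;
BEGIN
  OPEN c_check_delete_ind;
  LOOP
    FETCH c_check_delete_ind BULK COLLECT INTO act_p_item LIMIT 5000;

    -- one array DELETE per child table instead of one DELETE per item
    FORALL j IN INDICES OF act_p_item
      DELETE FROM tab2 WHERE item = act_p_item(j);
    FORALL j IN INDICES OF act_p_item
      DELETE FROM tab3 WHERE item = act_p_item(j);
    FORALL j IN INDICES OF act_p_item
      DELETE FROM tab4 WHERE item = act_p_item(j);
    FORALL j IN INDICES OF act_p_item
      DELETE FROM tab5 WHERE item = act_p_item(j);
    FORALL j IN INDICES OF act_p_item
      DELETE FROM tab6 WHERE item = act_p_item(j);

    COMMIT;  -- one commit per batch of up to 5000 items
    EXIT WHEN c_check_delete_ind%NOTFOUND;
  END LOOP;
  CLOSE c_check_delete_ind;
END;
/

The single set-based DELETE ... WHERE item IN (...) statements above remain the simpler option when they perform acceptably.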

Similar Messages

  • Data gets deleted for the table RSTRFIELDSH

    If data gets deleted for the table RSTRFIELDSH at the BW end, what should I do to get it back?

    Wim,
    We lost the data from production. We ran the archive program to archive the data and the archive file still exists.
    There is no possibility to run the OUTBOUND process again to create the IDocs.
    My only concern is that when we run transaction SARA we will be getting the data for other periods as well.
    But I need to reload the data for three months only, because that archive file also contains data for other periods.
    Can you send me details of how to use the SARA transaction for a particular period?
    Thanks,
    Kalikonda.

  • When do I really need to create indexes for a table?

    Once I was talking to a DBA at a conference.
    He told me that I don't always have to create indexes for a table; it depends on its size.
    He said that Oracle reads records in blocks, and a small table can be read fully in a single operation, so in those cases I don't need indexes and statistics.
    So I would like to know how to calculate this.
    When do I really need to create indexes for a table?
    If someone knows of any document that explains this, or has some tips, I'd appreciate it.
    Thanks.
    P.S.: The version that I'm using is Oracle 9.2.0.4.0.

    Hi Vin
    You mentioned so many mistakes here, I don't know where to begin ...
    vprabhu_2000 wrote:
    "There are different kinds of index. B-tree index is the default. Bitmap index, function-based index, index-organized table."
    "B-tree index if the table is large" - This is incorrect. Small tables, even those consisting of rows within just one block, can benefit from an index. There is no table size so small that an index might not be beneficial. William Robertson in his post references links to my blog where I discuss this.
    "and if you want to retrieve 10% or less of data then a B-tree index is good" - This is all wrong as well. A FTS on a (say) million-row table could very well be more efficient when retrieving (say) just 1% of the data. An index could very well be more efficient when retrieving 100% of the data. There's nothing special about 10% and there is no such magic number ...
    "Bit Map Index - on low cardinality columns like Sex, for example, which could have values Male and Female, create a bitmap index" - Completely and utterly wrong. A bitmap index might be the perfect type of index, better than a B-tree, even if there are (say) 100,000 distinct values in the table. That a bitmap index is only suitable for low-cardinality columns is just not true. And what if it's an OLTP application, with lots of concurrent DML on the underlying table: do you really think a bitmap index would be a good idea?
    "You can also create an index-organized table if there are fewer rows to be stored, so data is stored only once in the index and not in the table" - Not sure what you mean here, but an IOT can potentially be useful if you have very large numbers of rows in the table. The number of rows has nothing to do with whether an IOT is suitable or not.
    "Hope this info helps." - Considering most of it is wrong, I'm not sure it really helps at all :(
    Cheers
    Richard Foote
    http://richardfoote.wordpress.com/
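    To illustrate that there is no magic percentage, here is a minimal sketch (table T, column FLAG and index T_FLAG_IDX are made-up names, not from this thread) that builds a test table, gathers statistics, and lets the optimizer choose between the index and a full scan for a selective versus a non-selective predicate:

    create table t as
      select rownum id, mod(rownum, 100) flag, rpad('x', 200) pad
        from all_objects
       where rownum <= 10000;

    create index t_flag_idx on t(flag);

    exec dbms_stats.gather_table_stats(user, 'T')

    -- predicate hitting roughly 1% of the rows: an index range scan is typically chosen
    explain plan for select * from t where flag = 1;
    select * from table(dbms_xplan.display);

    -- predicate hitting roughly half the rows: a full table scan is typically chosen
    explain plan for select * from t where flag between 0 and 49;
    select * from table(dbms_xplan.display);

    The relative costs in the two plans, not a fixed 10% figure, are what decide which access path the optimizer uses.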

  • Need suggestion on adding Index on table

    Hi,
    There is a table called customer_locations in my database which has more than 5,000 rows. When we write a select query to fetch data from this table, it takes too much time to load the data.
    I need a suggestion on how to add an index to improve performance on this table, and also which type of index to add.
    The table SQL script is mentioned below:
    CREATE TABLE "CUSTOMER_LOCATIONS"
    (     "LOCATION_ID" NUMBER NOT NULL ENABLE,
         "COMPANY_NAME" VARCHAR2(512),
         "ADDRESS_LINE_1" VARCHAR2(512),
         "ADDRESS_LINE_2" VARCHAR2(512),
         "PHONE_NUMBER" VARCHAR2(255),
         "FAX_NUMBER" VARCHAR2(255),
         "CITY" VARCHAR2(512),
         "STATE" VARCHAR2(512),
         "ZIP" VARCHAR2(100),
         "COUNTRY" VARCHAR2(255),
         "CREATED_BY" VARCHAR2(512),
         "CREATED_DATE" TIMESTAMP (6),
         "MODIFIED_BY" VARCHAR2(512),
         "MODIFIED_DATE" TIMESTAMP (6),
         "DOMAIN_ID" NUMBER,
         "LOCATION_TYPE" VARCHAR2(255),
         "STATUS" VARCHAR2(50),
         "IB_STATUS" VARCHAR2(100),
         "OLD_LOCATION_ID" VARCHAR2(50),
         CONSTRAINT "SS_CUSTOMER_LOCATIONS_PK" PRIMARY KEY ("LOCATION_ID") ENABLE
    Please suggest
    Thanks
    Sudhir

    Hi Sudhir,
    Since you have no predicates, a FULL TABLE SCAN is unavoidable. But let me tell you that FULL TABLE SCANS are not bad.
    Just to help you, the code below illustrates the use of indexes:
    drop table test_table;
    create table test_Table as select * from all_objects where rownum < 5001;
    select * from user_ind_columns where table_name = 'TEST_TABLE';
    --No rows fetched.
    explain plan for
    select object_name || ' is a ' || object_type as OBJ_DESC, object_id
      from test_table;
    --5000 Rows fetched
    select operation, options, object_name, object_alias, object_instance, object_type, optimizer, depth, position, cost, cardinality, cpu_cost, io_cost
    from plan_table;
    OPERATION               OPTIONS     OBJECT_NAME     OBJECT_ALIAS          OBJECT_INSTANCE     OBJECT_TYPE     OPTIMIZER     DEPTH     POSITION     COST     CARDINALITY     CPU_COST     IO_COST
    SELECT STATEMENT     (NULL)     (NULL)          (NULL)                    (NULL)               (NULL)          ALL_ROWS     0          19               19          5000          1698651          19     
    TABLE ACCESS          FULL     TEST_TABLE     TEST_TABLE@SEL$1     1                    TABLE          (NULL)          1          1               19          5000          1698651          19
    alter table test_Table add constraint pk_object_id PRIMARY KEY (object_id);
    explain plan set statement_id = 'WITH_PK' for
    select object_name || ' is a ' || object_type as OBJ_DESC, object_id
      from test_table
    where object_id = 26;
    select operation, options, object_name, object_alias, object_instance, object_type, optimizer, depth, position, cost, cardinality, cpu_cost, io_cost
    from plan_table
    where statement_id = 'WITH_PK';
    OPERATION               OPTIONS               OBJECT_NAME          OBJECT_ALIAS          OBJECT_INSTANCE     OBJECT_TYPE          OPTIMIZER     DEPTH     POSITION     COST     CARDINALITY     CPU_COST     IO_COST
    SELECT STATEMENT     (NULL)               (NULL)               (NULL)                    (NULL)               (NULL)               ALL_ROWS     0          2               2          1               15543          2
    TABLE ACCESS          BY INDEX ROWID     TEST_TABLE          TEST_TABLE@SEL$1     1                    TABLE                              1          1               2          1               15543          2
    INDEX                    UNIQUE SCAN          PK_OBJECT_ID     TEST_TABLE@SEL$1     (NULL)               INDEX (UNIQUE)     ANALYZED     2          1               1          1               8171          1
    Let me know if this helps or if you still have any concerns.
    Regards,
    P
    Edited by: PurveshK on May 29, 2012 12:40 PM
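    Tying this back to the original question: an index only helps once the query has a selective predicate. As a sketch only (the predicate columns are an assumption, and the index name is illustrative), if queries on customer_locations typically filter on, say, DOMAIN_ID and STATUS, something like the following could be considered:

    CREATE INDEX customer_locations_n1
        ON customer_locations (domain_id, status);

    A 5,000-row table read in full is often cheap, so verify with EXPLAIN PLAN that the index is actually used and actually faster before keeping it.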

  • Need the API Name for OPM tables

    Hi,
    I need the public / private API to populate data into the following tables.
    Table names:
    QC_SPEC_MST
    QC_SMPL_MST
    QC_RSLT_MST
    I didn't find any reference on Metalink.
    Can anyone help us identify the API, or suggest any workaround to solve this?
    Thanks & Regards,
    Anik Datta

    Hi Anik;
    All APIs are listed in Oracle Integration Repository
    http://irep.oracle.com/index.html
    API User Notes - HTML Format [ID 236937.1]
    R12.0.[3-4] : Oracle Install Base Api / Open Interface Setup Test [ID 427566.1]
    Oracle Trading Community Architecture API User Notes, June 2003 [ID 241320.1]
    Technical Uses of Customer Interface and TCA-API [ID 269121.1]
    Please also check below:
    Api's in EBS
    Re: Api's in EBS
    http://sairamgoudmalla.blogspot.com/2009/05/script-to-find-oracle-apis-for-any.html
    API
    Fixed Asset API
    List of API
    Re: List of APIs
    Oracle Common Application Components API Reference Guide
    download.oracle.com/docs/cd/B25284_01/current/acrobat/jta115api.pdf
    List of APIs and open interface R12
    Re: List of APIs and open interface R12
    Regards,
    Helios

  • Need suggestion in BENEFITS for a STD Plan

    My client has an STD plan with standard 60% and 100% coverage, but the number of weeks the employee gets 100% coverage depends on the employee's years of service. The cost is 1.45 per $1,000 of coverage ... max 12,000.
    Ex:
    Years of Service    Weeks paid at 100%
    under 1             0
    1 to 2              2
    2 to 3              4
    etc ....
    How can we accommodate these conditions of 100% coverage for different weeks?
    Thanks in advance for your help and time..
    regards


  • When need to update statistics for a table

    In Sybase we use "delete statistics", "update all statistics" and "sp_recompile". Does Oracle need to do such things in any case? Thank you!

    Modern versions have a default job to do this, which may or may not be appropriate in its settings.  Volumes have been written about it.  Version and app dependent.  Highly recommend studying Jonathan Lewis' books.
    Stored procedures automatically recompile on use, although there can be issues dependent on bad design or configuration.
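    For reference, a minimal sketch of the manual equivalents (MY_TABLE is a placeholder; the automatic statistics job usually makes this unnecessary):

    -- rough equivalent of Sybase "update statistics" for one table
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'MY_TABLE', cascade => TRUE);

    -- gather for the whole schema
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER);

    -- rough equivalent of Sybase "delete statistics"
    EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => USER, tabname => 'MY_TABLE');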

  • Trying to get row counts for all tables at a time

    Hi,
    I am trying to get row counts for all tables in the database at once with the query below, but I am getting an ORA-19202 error. Please advise.
    select
          table_name,
          to_number(
            extractvalue(
              xmltype(dbms_xmlgen.getxml('select count(*) c from '||table_name))
              ,'/ROWSET/ROW/C')) "COUNT"
        from all_tables;

    ALL_TABLES returns tables/views the current user has access to. These tables/views are owned not just by the current user. However, your code
    dbms_xmlgen.getxml('select count(*) c from '||table_name)
    does not specify who the owner is. You need to change it to:
    dbms_xmlgen.getxml('select count(*) c from '||owner || '.' || table_name)
    However, it still will not work. Why? As I said, ALL_TABLES returns tables/views the current user has access to. Any type of access, not just SELECT. So if the current user is, for example, granted nothing but UPDATE on some table, the above select will fail. You would have to filter ALL_TABLES through ALL_SYS_PRIVS, ALL_ROLE_PRIVS, ALL_TAB_PRIVS and PUBLIC to get the list of tables the current user can select from. For example, the code below does it for tables/views owned by the current user and tables/views the current user is explicitly granted SELECT on:
    select
          t.owner,
          t.table_name,
          to_number(
            extractvalue(
              xmltype(dbms_xmlgen.getxml('select count(*) c from '||t.owner || '.' || t.table_name))
              ,'/ROWSET/ROW/C')) "COUNT"
        from all_tables t,user_tab_privs p
        where t.owner = p.owner
          and t.table_name = p.table_name
          and privilege = 'SELECT'
    union all
    select
          user,
          t.table_name,
          to_number(
            extractvalue(
              xmltype(dbms_xmlgen.getxml('select count(*) c from '||t.table_name))
              ,'/ROWSET/ROW/C')) "COUNT"
        from user_tables t
    /
    OWNER                          TABLE_NAME                                                          COUNT
    SCOTT                          DEPT                                                                    4
    U1                             QAQA                                                                    0
    U1                             TBL                                                                     0
    U1                             EMP                                                                     1
    SY.

  • Need to insert rows into 100 tables at a time

    hi there,
    below is our script for creation of 100 tables...
    we need a PL/SQL script to insert rows into the 100 tables at a single time...
    please help us... very urgent...
    DECLARE
    counter NUMBER;
    sql_string VARCHAR2(2000);
    BEGIN FOR counter IN 1..100 LOOP sql_string := 'CREATE TABLE emp_table'||counter||'
    (id integer primary key, col_a VARCHAR2(42),col_b date,col_c number,col_d varchar2(20),col_e varchar2(20),
    col_f varchar2(20),col_g varchar2(20),col_h date,col_i varchar2(20),col_j varchar2(20),col_k date)';
    EXECUTE IMMEDIATE sql_string;
    END LOOP;
    END;
    /

    hi,
    below is our procedure and the error we are getting...
    Name Null? Type
    ID VARCHAR2(10)
    COL_A VARCHAR2(10)
    COL_B VARCHAR2(10)
    COL_C VARCHAR2(10)
    COL_D VARCHAR2(10)
    COL_E VARCHAR2(10)
    COL_F VARCHAR2(10)
    COL_G VARCHAR2(10)
    COL_H VARCHAR2(10)
    COL_J DATE
    DECLARE
    counter NUMBER;
    sql_string VARCHAR2(4000);
    BEGIN FOR counter IN 1..100 LOOP sql_string := 'CREATE TABLE emp_a'||counter||'
    (id varchar2(10), col_a varchar2(10), col_b varchar2(10), col_c varchar2(10), col_d varchar2(10), col_e varchar2(10),
    col_f varchar2(10), col_g varchar2(10), col_h varchar2(10), col_j date)';
    EXECUTE IMMEDIATE sql_string;
    END LOOP;
    END;
    DECLARE
    counter NUMBER;
    sql_string VARCHAR2 (2000);
    BEGIN
    FOR OuterCounter IN 1 .. 100 LOOP --- table prefix in which it is to be inserted
    FOR InnerCounter IN 1 .. 100 LOOP --- records to be inserted
    sql_string := 'INSERT INTO emp_a' || Outercounter || ' (id, col_a, col_b, col_c, col_d, col_e, col_f, col_g, col_h, col_j)
    VALUES ('
    || InnerCounter || ', to_char( ''col_a''' || innercounter || '),'
    || InnerCounter || ', to_char( ''col_d''' || innercounter || '),'
    || ', to_char( ''col_e''' || innercounter || '),'
    || ', to_char( ''col_f''' || innercounter || '),'
    || ', to_char( ''col_g''' || innercounter || '),'
    || ', to_char( ''col_h''' || innercounter || '),'
    || ', to_char( ''col_j''' || innercounter || '), SYSDATE)';
    EXECUTE IMMEDIATE sql_string;
    END LOOP;
    END LOOP;
    END;
    DECLARE
    ERROR at line 1:
    ORA-00907: missing right parenthesis
    ORA-06512: at line 17
    please check the procedure and write the correct one...
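    A hedged sketch of one way to rewrite the generated INSERT so that the column list and the VALUES list line up (using bind variables instead of concatenated literals, and assuming the emp_a1..emp_a100 layout created above, where col_j is the only DATE column):

    DECLARE
      sql_string VARCHAR2(2000);
    BEGIN
      FOR outer_counter IN 1 .. 100 LOOP      -- one pass per table emp_a1 .. emp_a100
        sql_string := 'INSERT INTO emp_a' || outer_counter
                   || ' (id, col_a, col_b, col_c, col_d, col_e, col_f, col_g, col_h, col_j)'
                   || ' VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, SYSDATE)';
        FOR inner_counter IN 1 .. 100 LOOP    -- 100 rows per table
          EXECUTE IMMEDIATE sql_string
            USING 'id' || inner_counter,
                  'a'  || inner_counter,
                  'b'  || inner_counter,
                  'c'  || inner_counter,
                  'd'  || inner_counter,
                  'e'  || inner_counter,
                  'f'  || inner_counter,
                  'g'  || inner_counter,
                  'h'  || inner_counter;
        END LOOP;
      END LOOP;
      COMMIT;
    END;
    /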

  • DELETE FROM database table takes more time...

    Hi Friends,
    The below statement takes more time.
    LOOP AT i_final.
      DELETE FROM zcisconec WHERE werks = i_final-werks
                              AND aufnr = i_final-aufnr
                              AND vornr = i_final-vornr.
    ENDLOOP.
    Internal table I_FINAL will have more than 80,000 records.
    DB table zcisconec has 4 primary key fields out of 10 fields.
    Below 4 fields are primary key fields
    WERKS
    AUFNR
    VORNR
    MATNR
    Please guide me on how to optimize it.
    Regards,
    Viji

    Hi,
    Check this:
    Put a breakpoint on that DELETE statement and add another line of code after it, like CHECK SY-SUBRC = 0. Now go into debug and stop at the DELETE statement, check the number of records in your DB table, then hit F5 to step to the next statement, go back to SE16 and refresh. Do you see the number change? It should, if you are selecting the data correctly; make sure that you are getting data into the internal table.
    Instead of the LOOP, you can use an array delete:
    DELETE zcisconec FROM TABLE i_final.
    CALL FUNCTION 'DB_COMMIT'.
    Regards,
    Ansari.
    Edited by: Ansari Samsudeen on Sep 15, 2009 8:14 AM

  • No 'order by' selection for cluster tables generates run-time

    Hi all,
    Basically I want to use a "select ... order by" statement with table mhnd.
    Since mhnd is a cluster table, you cannot use "... order by".
    So I use a select ... into table. I then sort this table and finally read it (index = 1).
    Unfortunately this approach results in a huge run time.
    What can be done to minimize the run time?
    Thanks for any help

    Hi Gerd,
    Which cluster table are you using and what data do you want? Maybe there is a function module to extract the data from it.
    regards,
    Archer

  • Creating SQL-Loader script for more than one table at a time

    Hi,
    I am using OMWB 2.0.2.0.0 with Oracle 8.1.7 and Sybase 11.9.
    It looks like I can create SQL-Loader scripts for all the tables
    or for one table at a time. If I want to create SQL-Loader
    scripts for 5-6 tables, I have to either create scripts for all
    the tables and then delete the unwanted ones, or create the
    scripts for one table at a time and then merge them.
    Is there a simple way to create migration scripts for more than
    one but not all tables at a time?
    Thanks,
    Prashant Rane

    No, there is no multi-select for creating SQL-Loader scripts.
    You can either create them separately, or create them all and
    then discard the ones you do not need.

  • Sample coding for multiple table join

    Hi,
    I need sample code for a multiple-table join. Can anyone help me out?
    regards
    Gokul

    SELECT A~VBELN A~FKDAT A~VTWEG A~SPART A~WAERK A~KURRF A~KUNAG A~KNUMV
             B~POSNR B~FKIMG B~NETWR B~MATNR
             D~BEGRU E~LABOR E~MATKL
             INTO CORRESPONDING FIELDS OF TABLE ITAB
             FROM VBRK AS A INNER JOIN VBRP AS B
                  ON A~VBELN EQ B~VBELN
             INNER JOIN J_1IEXCHDR AS C
                  ON A~VBELN EQ C~RDOC
             INNER JOIN KNVV AS D
                  ON D~KUNNR EQ A~KUNAG  AND
                     D~VKORG EQ A~VKORG  AND
                     D~SPART EQ A~SPART  AND
                     D~BEGRU NE SPACE
             INNER JOIN MARA AS E
                  ON E~MATNR EQ B~MATNR
             WHERE A~FKDAT IN S_FKDAT AND
                   A~FKART EQ 'F2'    AND
                   A~VTWEG IN S_VTWEG AND
                   A~SPART IN S_SPART AND
                   A~KUNAG IN S_KUNAG AND
                   A~FKSTO NE 'X'     AND
                   B~WERKS IN S_WERKS AND
                   C~TRNTYP = 'DLFC'  AND
                   E~LABOR IN S_LABOR AND
                C~SRGRP IN ('01','02','03','31','32','33','41','42','43',
                            '81','82','83','95','55','45', '48') AND
                   B~MATNR IN S_MATNR AND
                   D~BEGRU IN S_BEGRU AND
                   E~MATKL IN S_MATKL.
    But my suggestion is not to use more than 2 tables in an inner join, as it will affect performance; use FOR ALL ENTRIES instead of a join.
    regards
    shiba dutta

  • How to create xsd's for each table in repository instead of entire repos

    Hi Gurus,
    Is there any way I could create an XSD for each table in the repository separately, instead of using the "Export repository schema" option, which creates a single XML file for the entire repository? I need to create an XSD for each table in the repository...
    Any Help greatly appreciated
    Thanks
    Aravind

    Open the lookup table you want the XSD for in Data Manager.
    Export it to Access (you can select all the fields you want to export to Access and then check the option "open Access after export").
    Now in Access, right-click the table again and export it to XML.
    Provided you have the .NET Framework installed on the machine where you are doing this export, you can do the following:
    Use XSD.exe from the command prompt to get the XSD.
    Use the following link as a reference for XSD stuff.
    http://msdn.microsoft.com/en-us/library/x6c1kb0s(VS.71).aspx
    (OR)
    Get the whole XML of the repository and distill the whole structure for Lookups and create XSD using any standard XML editor.

  • Single Trigger for muliple tables

    Hi All,
    I have one requirement to create a trigger to populate a table (X) based on the any insert or update on few tables (A,B,C,D,E,F).
    Do I need to create trigger for each table ??. any idea of using all tables in one trigger.

    Hi,
    This is not possible; you cannot create one trigger on several tables. But you can use multiple triggers on a single table.
    If you can use a view, then you can use an INSTEAD OF trigger on the view.
    For example:
    CREATE OR REPLACE VIEW manager_info AS
        SELECT e.ename, e.empno, d.dept_type, d.deptno, p.prj_level,
               p.projno
            FROM   Emp_tab e, Dept_tab d, Project_tab p
            WHERE  e.empno =  d.mgr_no
            AND    d.deptno = p.resp_dept;
    CREATE OR REPLACE TRIGGER manager_info_insert
    INSTEAD OF INSERT ON manager_info
    REFERENCING NEW AS n                 -- new manager information
    FOR EACH ROW
    DECLARE
       rowcnt number;
    BEGIN
       SELECT COUNT(*) INTO rowcnt FROM Emp_tab WHERE empno = :n.empno;
       IF rowcnt = 0  THEN
           INSERT INTO Emp_tab (empno,ename) VALUES (:n.empno, :n.ename);
       ELSE
          UPDATE Emp_tab SET Emp_tab.ename = :n.ename
             WHERE Emp_tab.empno = :n.empno;
       END IF;
       SELECT COUNT(*) INTO rowcnt FROM Dept_tab WHERE deptno = :n.deptno;
       IF rowcnt = 0 THEN
          INSERT INTO Dept_tab (deptno, dept_type)
             VALUES(:n.deptno, :n.dept_type);
       ELSE
          UPDATE Dept_tab SET Dept_tab.dept_type = :n.dept_type
             WHERE Dept_tab.deptno = :n.deptno;
       END IF;
       SELECT COUNT(*) INTO rowcnt FROM Project_tab
          WHERE Project_tab.projno = :n.projno;
       IF rowcnt = 0 THEN
          INSERT INTO Project_tab (projno, prj_level)
             VALUES(:n.projno, :n.prj_level);
       ELSE
          UPDATE Project_tab SET Project_tab.prj_level = :n.prj_level
             WHERE Project_tab.projno = :n.projno;
       END IF;
    END;
    See more: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_triggers.htm
    Edited by: Mahir M. Quluzade on Oct 12, 2011 9:23 AM
