Transferring old data from a table to a history table for record-keeping purposes

Hi,
I need to write a program that automatically writes old records to another table before they are updated or deleted.
Can this be automated to run on the 28th of every month?
See my tables below:
CREATE TABLE PAY(
PAYMENTID NUMBER(8) NOT NULL,
EMPID NUMBER NOT NULL,
NAME VARCHAR(30) NOT NULL,
MONTHLY_NORMALHRS NUMBER(2),
OTHRS_FOR_MONTH NUMBER(2),
MONTHLY_WAGES NUMBER(7,2),
OTAMOUNT_FOR_MONTH NUMBER(7,2),
--ALLOWANCES
--DEDUCTIONS
GROSS_MONTHLY NUMBER (7,2),
MONTHLY_TAX NUMBER(7,2),
MONTHLY_NET NUMBER(7,2),
PAYMENTDATE DATE NOT NULL,
CONSTRAINT PK_PAY PRIMARY KEY (PAYMENTID));
INSERT INTO PAY VALUES(100,200,'JOHN BULL',140,20,NULL,NULL,NULL,NULL,NULL,TO_DATE('28-03-2006','DD-MM-YYYY'));
UPDATE PAY
   SET MONTHLY_WAGES      = (35000.00/52)*4,
       OTAMOUNT_FOR_MONTH = 20*1.5,
       GROSS_MONTHLY      = (35000.00/12) + (20*1.5),
       MONTHLY_TAX        = ((35000.00/52)*4) + ((20*1.5)*.20),
       MONTHLY_NET        = ((35000.00/12) + (20*1.5)) - (((35000.00/52)*4) + ((20*1.5)*.20))
 WHERE PAYMENTID = 100;
CREATE TABLE PAYMENT_HISTORY(
PAYMENTID NUMBER(8),
EMPID NUMBER NOT NULL,
NAME VARCHAR(30) NOT NULL,
MONTHLY_NORMALHRS NUMBER(2),
OTHRS_FOR_MONTH NUMBER(2),--
MONTHLY_WAGES NUMBER(7,2),-- ((CLASSIFICATION.YEARLY_PAY/52)*4) OR CLASSIFICATION.YEARLY_PAY/12
OTAMOUNT_FOR_MONTH NUMBER(7,2),
--ALLOWANCES
--DEDUCTIONS
GROSS_MONTHLY NUMBER (7,2),--(MONTHLY_WAGES +(MONTHLY_OTHRS*1.5))
MONTHLY_TAX NUMBER(7,2),
MONTHLY_NET NUMBER(7,2),
PAYMENTDATE DATE NOT NULL,
CONSTRAINT UQ_PAY_PAYM_HIST UNIQUE (PAYMENTID,PAYMENTDATE));
Now I need to be able to transfer the updated records from table PAY to table PAYMENT_HISTORY on a regular date, say the 28th of every month.
How do I do this: a trigger, a procedure, or both?
SOS
Cube60

Example:
create table test67(no number(2) primary key, name varchar2(10));
create table test67h(no number(2), name varchar2(10),flag char);
create or replace trigger t67
before insert or update or delete on test67
for each row
begin
  if inserting then
    insert into test67h values(:new.no, :new.name, 'N');  -- N = newly inserted row
  end if;
  if updating then
    insert into test67h values(:new.no, :new.name, 'U');  -- U = updated values
  end if;
  if deleting then
    insert into test67h values(:old.no, :old.name, 'D');  -- D = deleted row
  end if;
end;
/
SQL> insert into test67 values(1,'Lijesh');
1 row created.
SQL> update test67 set name='Vishnu' where no=1;
1 row updated.
SQL> delete from test67;
1 row deleted.
SQL> select * from test67h;
NO NAME       F
-- ---------- -
 1 Lijesh     N
 1 Vishnu     U
 1 Vishnu     D
SQL> select * from test67;
no rows selected
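A trigger alone cannot satisfy the "28th of every month" requirement, because triggers fire only on DML; a scheduled job handles the calendar part. Below is a minimal sketch using DBMS_SCHEDULER (Oracle 10g and later; DBMS_JOB works similarly on older releases). The procedure name, the cutoff rule (archive everything before the current month) and the run time are assumptions for illustration, not part of the original thread.

-- Assumed archiving procedure: copies rows older than the current month
-- from PAY into PAYMENT_HISTORY, then removes them from PAY.
-- PAY and PAYMENT_HISTORY declare identical column lists, so SELECT * lines up.
CREATE OR REPLACE PROCEDURE archive_pay IS
BEGIN
  INSERT INTO payment_history
    SELECT * FROM pay WHERE paymentdate < TRUNC(SYSDATE, 'MM');
  DELETE FROM pay WHERE paymentdate < TRUNC(SYSDATE, 'MM');
END archive_pay;
/

-- Run the procedure at 06:00 on the 28th of every month.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'ARCHIVE_PAY_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'ARCHIVE_PAY',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=28;BYHOUR=6',
    enabled         => TRUE);
END;
/

The trigger approach above keeps the history table current row by row as changes happen; the scheduled job covers the monthly transfer. The two techniques answer different halves of the question and can be combined.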

Similar Messages

  • GUI_DOWNLOAD for transferring data from an internal table to an Excel sheet

    Hi all,
    I am using GUI_DOWNLOAD for transferring the data from an internal table to an Excel sheet.
    I have an internal table with 3 columns (col1, col2, col3) and I am getting the file at the specified path, but my problem is that in the Excel sheet all 3 column values are printed in one column. Please help me.
    Thanks in advance.

    Hi Venkata,
    please use FM 'SAP_CONVERT_TO_XLS_FORMAT':
      call function 'SAP_CONVERT_TO_XLS_FORMAT'
        exporting
    *   I_FIELD_SEPERATOR          =
    *   I_LINE_HEADER              =
          i_filename                 = p_file
    *   I_APPL_KEEP                = ' '
        tables
          i_tab_sap_data             = t_mbew
    * CHANGING
    *   I_TAB_CONVERTED_DATA       =
    * EXCEPTIONS
    *   CONVERSION_FAILED          = 1
    *   OTHERS                     = 2
            .
      if sy-subrc <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      endif.
    Hope this helps,
    Erwan

  • Data from 2 tables for a JDBC sender adapter

    How do I pick up data from 2 tables at a time when using a JDBC sender adapter?

    select <fields> from table1 where <condition>
    union
    select <fields> from table2 where <condition>
    You can also combine this with joins, as pointed out.
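    A concrete sketch with invented table and column names: both branches must project the same number of columns with compatible types, and UNION ALL keeps rows that appear in both tables, while UNION removes duplicates.

    SELECT order_id, customer_id, amount, 'CURRENT' AS source_tab
      FROM orders_current
     WHERE status = 'NEW'
    UNION ALL
    SELECT order_id, customer_id, amount, 'ARCHIVE' AS source_tab
      FROM orders_archive
     WHERE status = 'NEW'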

  • Delete Variance data from COSB table for version 1

    Hi,
    We are trying to change the valuation of version 1 from Legal to Profit Center in OKEQ.
    While doing so, we are trying to remove the variance indicator due to error message KT337.
    When trying to uncheck the variance indicator for this version, there is error message KV853.
    As per SAP note 337183, if there are any dependent entries in the COSB table (CO Object: Total Variances/Results Analyses), we get this message.
    Now the question is how I can delete the variance data with version 1 from the table COSB.
    Message no. KT337 :
    Valuation variance from version "000"
    Diagnosis
    You are maintaining version 1 in controlling area 1000. The actual indicator is not active and at least one of the two indicators WIP and Variance are active. In this case, the valuation in version 1 must match the valuation of version '000'. This is not the case: 2 <> 0.
    Procedure
    Activating the indicators WIP and Variance with the actual indicator being inactive at the same time requires valuation 0.
    Message no. KV853
    Variance calculation cannot be reset - there are dependent entries
    Regards
    Raghu

    You may need to contact the SAP SLO (System Landscape Optimization) team.

  • Procedure to Purge data from WF tables for Item Type POERROR

    Hi,
    I have executed the concurrent program "Purge Obsolete Workflow Runtime Data" to purge old data for item type "REQAPPRV". After execution of the concurrent program I re-queried the WF_ITEMS table to get the count of the records for "REQAPPRV", but not all records got purged, because "REQAPPRV" is present as PARENT_ITEM_KEY for item type "POERROR" in the WF_ITEMS table.
    Now, to purge records of "REQAPPRV", I need to first purge the records of "POERROR". The problem is that the END_DATE of those POERROR records is NULL in the WF_ITEMS table, which is why the concurrent program "Purge Obsolete Workflow Runtime Data" is not able to purge them. How can I resolve this issue? Please guide.
    Regards,
    Priyanka

    Hi,
    I have already gone through the MOS doc, cross-checked the queries provided in the document, and found that the status of the PARENT_ITEM_TYPE [REQAPPRV] is "COMPLETE" and the status of the child ITEM_TYPE [POERROR] is "ACTIVE". I want to close or purge the child ITEM_TYPE. Can you guide me on how we can achieve that?
    Regards,
    Priyanka
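    One commonly suggested approach (worth verifying against the MOS note for your release before running it in production) is to abort the still-ACTIVE POERROR items with the standard WF_ENGINE API so that they receive an END_DATE, then re-run the purge program. A minimal sketch:

    -- Abort each open POERROR item so it gets an END_DATE;
    -- the purge concurrent program can then pick it up.
    BEGIN
      FOR r IN (SELECT item_type, item_key
                  FROM wf_items
                 WHERE item_type = 'POERROR'
                   AND end_date IS NULL) LOOP
        WF_ENGINE.ABORTPROCESS(itemtype => r.item_type,
                               itemkey  => r.item_key);
      END LOOP;
      COMMIT;
    END;
    /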

  • How to extract data from monthly tables for annual balance chart?

    Hello Again!
    I am trying to decide whether to enter all of my transactions into one table for the whole year or separate tables for each month. If I were to use one table for the whole year it would probably have in the region of 1000 entries, and as I understand it, Numbers doesn't work so well with such large tables. Also, as I will be mostly using Numbers on my iPad, where it runs more slowly, this may well slow things down considerably. Another reason for using monthly tables is that they would be easier to search.
    Having said this, I am using the following method to construct balance charts for my accounts, which seems to depend on having just one table for all transactions for the year. Here is a sample of the transactions table [screenshot missing], which updates the balance for each account after each new transaction is entered.
    Then I use the following table for creating the balance charts [screenshot missing].
    It uses the following formula to return the balance at the end of each week for each account (this is the formula for Account 1), which is then displayed in the chart:
    =LOOKUP(A2,$Week Number,All Transactions :: Account 1)
    If I were to use the monthly-tables method, it is not obvious to me how I would create the weekly balance table, given that I would need to draw data from 12 different tables, and that some of these tables would contain the same week number (for weeks that overlap months).
    I would be grateful for any advice on achieving the result I am looking for by the most efficient method.
    Thanks
    Nick
    P.S. Hopefully this is the last question for a while!

    I wasn't urging you to keep an annual table but to split it on an easier-to-rule basis.
    Split it into chunks of exactly 35 days starting from the first one of the 'year' in operation.
    Doing that, everything will be easier.
    The index of a day will be calculated by:
    =DATEDIF(B,D,"D")+1
    The index of the 'custom_week' will be calculated by:
    =1+INT(DATEDIF(B,D,"D")/7)
    So, it would be easy to gather data from a given 'custom_week'.
    Yvan KOENIG (VALLAURIS, France), Thursday 5 August 2010 10:22:25

  • Changed my laptop - transferred old data from Time Capsule but cannot access it

    So I was backing up onto the TC with my old laptop as normal. I then changed laptops and restored from the TC to get all the data onto the new laptop. No problem there.
    I then started backing up my new laptop onto the TC; however, the old backups and the new one are not linked. When I go into Time Machine I can see that there are old backups there (from the old laptop), but they are greyed out and I can't go back to pull any data out.
    How can I get the old and new linked?

    jiggy44 wrote:
    I then started backing up my new laptop onto the TC however the old backups and new one are not linked.
    Welcome to Apple's discussion groups.
    You've posted your message in the Time Capsule message section, but where it really should have gone is in the Time Machine section of the OS X 10.6 Snow Leopard area:
    http://discussions.apple.com/forum.jspa?forumID=1342
    A Time Capsule is an Apple AirPort Wi-Fi base station with an integrated disk drive. Time Machine is backup software that has been included with OS X since version 10.5.
    That's the way Time Machine works. It identifies a backup with a particular computer. Since you have a different computer, Time Machine created a new backup. This message thread discusses your situation:
    http://discussions.apple.com/message.jspa?messageID=10726382
    When I go into Time Machine I can see that there are old backups there (from the old laptop) but they are greyed out and I can't get back to pull any data out.
    How can I get the old and new linked?
    I don't believe you can combine the two sets of backups. Your options now are (a) abandon the new backup and reconfigure your Mac to continue with the old backup, or (b) continue with the new backup and use Time Machine's option to "Browse Other Time Machine Disks" (available with a click-and-hold on the Time Machine icon in the Dock, among other techniques) to access the contents of the old backup.

  • Archiving old data from a partitioned table

    Hi,
    While sifting through all the options for archiving the old data from a table which is also indexed, I came across a few methods which could be used:
    1. Use a CTAS to create new tables in a different tablespace from the partitions on the existing table. Then swap these new tables with the data from the partitions, drop the partitions on the existing table (or truncate them, although I am not sure which one is the recommended method), take this tablespace offline and keep it archived. In case you require it in the future, again swap these partitions with the data from the archived tables. I am not sure if I got the method correctly.
    2. Keep an export of all the partitions which need to be archived and keep that .dmp file on a storage medium. Once they are exported, truncate the corresponding partitions in the original table. If required in the future, import these partitions.
    But I have one constraint on my DB: I cannot create a new archive tablespace for holding the tables containing the partitioned data, as I have only 1 tablespace allocated to my application on that DB, since there are multiple apps residing on it together. Kindly suggest which option is best suited for me now. Should I go with option 2?
    Thanks in advance.

    Hi,
    Thanks a bunch for all your replies. Yeah, I am planning to go ahead with option 2. Below is the method I have decided upon. Kindly verify whether my line of understanding is correct:
    1. export the partition using the clause:
    exp ID/passwd file=abc.dmp log=abc.log tables=schemaname.tablename:partition_name rows=yes indexes=yes
    2. Then drop this partition on the original table using the statement:
    ALTER TABLE tablename drop PARTITION partition_name UPDATE GLOBAL INDEXES;
    If I now want to import this dump file into my original table again, I will first have to create this partition on my table using the statement:
    3. ALTER TABLE tablename ADD PARTITION partition_name VALUES LESS THAN ( '<<>>' ) ;
    4. Then import the data into that partition using:
    imp ID/passwd FILE=abc.dmp log=xyz.log TABLES=schemaname.tablename:partition_name IGNORE=y
    Now my query here is that this partitioned table has a global index associated with it. So once I create this partition and import the data into it, do I need to drop and recreate the global indexes on this table, or is there any other method to update the indexes? Kindly suggest.
    Thanks again!! :)
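    On the global-index question: because the DROP PARTITION in step 2 already uses UPDATE GLOBAL INDEXES, the index should remain usable, and the import maintains it as rows are inserted. It is usually enough to check the index status afterwards and rebuild only if it shows UNUSABLE. A sketch (the index name is a placeholder):

    -- Check the state of the global index after the partition operations.
    SELECT index_name, status
      FROM user_indexes
     WHERE table_name = 'TABLENAME';

    -- Only needed if STATUS = 'UNUSABLE':
    ALTER INDEX my_global_index REBUILD;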

  • Deleting Data from 85 tables

    Hi All,
    There is 13-year-old data in the DB, which holds data from 2001. The task assigned to me is to delete the old data from the main table and its associated tables.
    87 tables have been identified from which data needs to be removed.
    1) We don't have an option to bring down the DB, the DB should be online.
    2) We can't disable triggers and constraints.
    3) We can't disable LOGGING
    4) The total Size of the tables that we are going to do the deleting is around 290 GB.
    5) DB is Oracle 11g.
    What I did is,
    I wrote a stored procedure that gets MONTH and YEAR as input parameters. This procedure is called 12 times to delete one year.
    I have collected all the IDs and stored them in a separate table. Based on the ID, I am deleting the data from each table and committing. (I was doing a bulk commit after deleting some group of tables, but those commits took a lot of time.)
    The total time to delete 150 million rows from all the tables is 4 days for 7 years of data.
    Is there any way I can make the process faster, other than adding an INDEX to avoid full table scans?
    Is there a better approach to handle this?
    Thanks in advance for your response.
    -Jac
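    For reference, a minimal sketch of the month-by-month procedure described above, with invented table, column and procedure names (the pre-collected IDs are assumed to live in a DELETE_IDS table):

    CREATE OR REPLACE PROCEDURE purge_month(p_month IN NUMBER,
                                            p_year  IN NUMBER) IS
    BEGIN
      -- One DELETE per dependent table, driven by the pre-collected IDs;
      -- children first, the main table last, committing per table.
      DELETE FROM child_table_1
       WHERE id IN (SELECT id FROM delete_ids
                     WHERE mon = p_month AND yr = p_year);
      COMMIT;
      DELETE FROM main_table
       WHERE id IN (SELECT id FROM delete_ids
                     WHERE mon = p_month AND yr = p_year);
      COMMIT;
    END purge_month;
    /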

    988197 wrote:
    Hi All,
    There is 13-year-old data in the DB, which holds data from 2001. The task assigned to me is to delete the old data from the main table and its associated tables.
    87 tables have been identified from which data needs to be removed.
    1) We don't have an option to bring down the DB, the DB should be online.
    2) We can't disable triggers and constraints.
    3) We can't disable LOGGING
    4) The total Size of the tables that we are going to do the deleting is around 290 GB.
    5) DB is Oracle 11g.
    What I did is,
    I wrote a stored procedure that gets MONTH and YEAR as input parameters. This procedure is called 12 times to delete one year.
    I have collected all the IDs and stored them in a separate table. Based on the ID, I am deleting the data from each table and committing. (I was doing a bulk commit after deleting some group of tables, but those commits took a lot of time.)
    The total time to delete 150 million rows from all the tables is 4 days for 7 years of data.
    Is there any way I can make the process faster, other than adding an INDEX to avoid full table scans?
    If you added another index, that would likely make the operation SLOWER because every time a row is deleted, the index would also have to be changed to reflect that fact. Indexes introduce more work to be done on DML operations.
    RP0428 has it right. This is a one-time operation and you are already well down the road. Just keep going.
    Is there a better approach to handle this?
    Thanks in advance for your response.
    -Jac

  • Performance Issue - Fetching latest date from a507 table

    Hi All,
    I am fetching data from the A507 table for a material and batch combination. I want to fetch the latest record based on the value of field DATBI. I have written the code as follows, but the select query is taking too much time. I don't want to write any condition in the WHERE clause for the DATBI field because I have already tried that option.
    SELECT kschl
               matnr
               charg
               datbi
               knumh
        FROM a507
        INTO TABLE it_a507
        FOR ALL ENTRIES IN lit_mch1
        WHERE kschl = 'ZMRP'
        AND   matnr = lit_mch1-matnr
        AND   charg = lit_mch1-charg.
    SORT it_a507 BY kschl matnr charg datbi DESCENDING.
      DELETE ADJACENT DUPLICATES FROM it_a507 COMPARING kschl matnr charg.

    Hi,
    These kinds of tables store large volumes of data, so when selecting from them it is important to use as many primary key fields as possible in the WHERE condition. Here you can try specifying KAPPL, since it is specific to a requirement. If it is for purchasing, use 'M' and try.
    if not lit_mch1[] is initial.
      SELECT kschl
             matnr
             charg
             datbi
             knumh
        FROM a507
        INTO TABLE it_a507
        FOR ALL ENTRIES IN lit_mch1
        WHERE kappl = 'M'
        AND   kschl = 'ZMRP'
        AND   matnr = lit_mch1-matnr
        AND   charg = lit_mch1-charg.
    endif.
    SORT it_a507 BY kschl matnr charg datbi DESCENDING.
    DELETE ADJACENT DUPLICATES FROM it_a507 COMPARING kschl matnr charg.
    This should considerably increase the performance.
    Regards,
    Vik

  • How to retrieve data from a PL/SQL table in a BI Publisher data template

    Hi All,
    I have created a data template for an XML Publisher report. In the data template I am getting data from a PL/SQL table; for that I have created one package with a pipelined function. I am able to run that SQL from SQL Developer, but if I run the concurrent program then I get an error like "java.sql.SQLSyntaxErrorException: ORA-00904: "XXXXX": invalid identifier".
    I have used the same parameters in the data template and the concurrent program.
    Please clarify what needs to be done.
    Thanks in advance.
    Regards,
    Doss

    Hi Alex ,
    I am using a pipelined function to get the data from a cursor and load it into a PL/SQL table (nested table), and I use the below in my data template to fetch the data:
    <sqlStatement name="Q1">
    <![CDATA[select * from  table(PO_SPEND_RPT_PKG.generate_report(P_ORG_ID,P_SOB_ID,P_ORG_NAME,P_PERIOD_NAME,P_CLOSE_STATUS,P_E_PCARD_NEED,P_REPORT_TYPE))]]>
    </sqlStatement>
    If I run the above in SQL Developer I can get the result. From Apps, if I run it I get the error "java.sql.SQLSyntaxErrorException: ORA-00904: "P_ORG_ID": invalid identifier".
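    In a BI Publisher data template, parameters declared in the template are referenced in the sqlStatement with a leading colon; a bare P_ORG_ID is parsed as a column name, which produces exactly this ORA-00904. A sketch of the corrected query, assuming all seven parameters are declared in the template's parameter section:

    select *
      from table(PO_SPEND_RPT_PKG.generate_report(
             :P_ORG_ID, :P_SOB_ID, :P_ORG_NAME, :P_PERIOD_NAME,
             :P_CLOSE_STATUS, :P_E_PCARD_NEED, :P_REPORT_TYPE))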

  • Help needed in extracting data from PCD tables

    Hi Friends
    I have a requirement for creating a custom portal activity report; even though we have a standard report, the extracted data will be used to create BW reports later.
    My part is to find a way to extract the data from the PCD tables for creating custom portal activity reports.
    I have selected the following tables for the data extraction:
    WCR_USERSTAT,WCR_WEBCONTENTSTAT,WCR_USERFIRSTLOGON,
    WCR_USERPAGEUSAGE.
    My questions are:
    1. Did I select the exact PCD tables?
    2. Can I use the UME API for accessing the data from those tables?
    3. Can I use the data extracted from the PCD tables in JSPDynPage or Web Dynpro apps?
    4. Can I query the PCD tables from JSPDynPage or Web Dynpro?
    Please help me in finding a solution for this.
    Thanks
    Ashok Battula

    Hi Daniel,
    Can you tell me whether I can develop the following custom reports from those WCR tables:
         Report Type
    1     Logins
          - Unique Count
          - Total Count
          - Most Active Users (by Partner Name)
          - Most Active Users (by Contact Name)
          - Entry Point (by page name)
          - Session Time
          - Hourly Traffic Analysis
    2     Login Failures
          - Total Count
          - Count by error message
          - Credentials Entered (by user name and password)
    3     Content Views (by File Name)
          - Unique Count
          - Total Count
          - Most requested Files
          - Most requested Pages
          - File Not Found
    4     Downloads (by File Name)
          - Unique Count
          - Total Count
          - Most requested Files
          - File Not Found
    5     Portal Administration
          - Site Content (by file name)
          - Site Content (by page name)
          - Latest Content (by file name)
          - Expired Content (by file name)
          - Subscriptions Count (by file name)
    6     Login History (by Partner, Contact Name)
          - No Login
          - First Login
          - Duration between registration and first login
          - Most Recent Login
          - Average Number of Logins
    Please help me in finding a way.
    thanks
    ashok

  • Retrieve data from HR tables

    Hi Experts,
    How do I retrieve data from HR tables for a custom screen and store it in a custom table? I need the employee number, employee name, department, their location and mail ID. When the employee number is given, all other details must be fetched. Can you give a solution for this problem?
    Thanks,
    Kavitha

    Hi Kavitha,
    in HR you can get data from different tables based on infotypes. Let's say some data needs to be read from infotype 0001 (Org. Assignment): add "PA" before the infotype number, so it would be PA0001.
    You can get the employee number, name, department and location (I guess personnel area or personnel subarea) from the PA0001 table, and the email ID from infotype 0105 (table PA0105).
    Regards
    Raju

  • To extract Data from Pool Table Data Sources

    Hi,
    I want to extract data from a pool table; for that I want to create an InfoSet on that pool table. Can anyone please let me know the procedure to create an InfoSet on pool tables?
    Regards
    Atul

    Hi Atul
    You have a couple of options here:
    1) Create an InfoSet (SQ02) on those tables and use RSO2 to create a generic DataSource
    2) Create a function module and create a generic DataSource using the FM
    Replicate the DataSource to BW, build the objects, map them in transformations, and create the DTP and InfoPackage.
    Refer to the thread "How to extract data from a pool table?" for more details.
    Regards
    Harsh

  • Insert old missing data from one table to another (database trigger)

    Hello,
    I want to do the following:
    1) I want to insert old missing data from one table into another through a database trigger, but it can't be executed that way; I don't know what I should do in the case of replacing old data from table_1 into table_2.
    2) What should I use instead: :NEW. or :OLD.?
    3) What should I do if I have records existing between the two dates? I want to suppress the existing records.
    The following code is what I have, but it has no effect.
    CREATE OR REPLACE TRIGGER ATTENDANCEE_FOLLOWS
    AFTER INSERT ON ACCESSLOG
    REFERENCING NEW AS NEW OLD AS OLD
    FOR EACH ROW
    DECLARE
    V_COUNT       NUMBER(2);
    V_TIME_OUT    DATE;
    V_DATE_IN     DATE;
    V_DATE_OUT    DATE;
    V_TIME_IN     DATE;
    V_ATT_FLAG    VARCHAR2(3);
    V_EMP_ID      NUMBER(11);
    CURSOR EMP_FOLLOWS IS
    SELECT   EMPLOYEEID , LOGDATE , LOGTIME , INOUT
    FROM     ACCESSLOG
    WHERE    LOGDATE
    BETWEEN  TO_DATE('18/12/2008','dd/mm/rrrr') 
    AND      TO_DATE('19/12/2008','dd/mm/rrrr');
    BEGIN
    FOR EMP IN EMP_FOLLOWS LOOP
    SELECT COUNT(*)
    INTO  V_COUNT
    FROM  EMP_ATTENDANCEE
    WHERE EMP_ID    =  EMP.EMPLOYEEID
    AND    DATE_IN   =  EMP.LOGDATE
    AND    ATT_FLAG = 'I';
    IF V_COUNT = 0  THEN
    INSERT INTO EMP_ATTENDANCEE (EMP_ID, DATE_IN ,DATE_OUT
                                ,TIME_IN ,TIME_OUT,ATT_FLAG)
         VALUES (TO_NUMBER(TO_CHAR(:NEW.employeeid,99999)),
                 TO_DATE(:NEW.LOGDATE,'dd/mm/rrrr'),       -- DATE_IN
                 NULL,
                 TO_DATE(:NEW.LOGTIME,'HH24:MI:SS'),      -- TIME_IN
                 NULL ,'I');
    ELSIF   V_COUNT > 0 THEN
    UPDATE  EMP_ATTENDANCEE
        SET DATE_OUT       =  TO_DATE(:NEW.LOGDATE,'dd/mm/rrrr'), -- DATE_OUT,
            TIME_OUT       =   TO_DATE(:NEW.LOGTIME,'HH24:MI:SS'), -- TIME_OUT
            ATT_FLAG       =   'O'
            WHERE EMP_ID   =   TO_NUMBER(TO_CHAR(:NEW.employeeid,99999))
            AND   DATE_IN <=  (SELECT MAX (DATE_IN )
                               FROM EMP_ATTENDANCEE
                               WHERE EMP_ID = TO_NUMBER(TO_CHAR(:NEW.employeeid,99999))
                               AND   DATE_OUT IS NULL
                               AND   TIME_OUT IS NULL )
    AND   DATE_OUT  IS NULL
    AND   TIME_OUT IS NULL  ;
    END IF;
    END LOOP;
    EXCEPTION
    WHEN OTHERS THEN RAISE;
    END ATTENDANCEE_FOLLOWS ;
                            Regards,
    Abdetu..

    INSERT INTO SALES_MASTER
       ( NO
       , Name
       , PINCODE )
       SELECT SALESMANNO
            , SALESMANNAME
            , PINCODE
         FROM SALESMAN_MASTER;
    Regards,
    Christian Balz
