Managing wwv_flow_files table

Let's say I have a file upload feature on one of my pages.
When the page is submitted, the file is uploaded to the wwv_flow_files table.
I have an after-submit process that parses the file, loads it into a collection, and processes the data. When the process ends successfully, it does a DELETE from wwv_flow_files.
I have a validation that allows only certain types of files based on the extension. If the validation fails, the file stays around in the wwv_flow_files table.
If my after-submit routine that parses/validates the file fails, the file stays around in the wwv_flow_files table.
If any other error in my after-submit process causes it to exit via raise_application_error(), the file stays around in the wwv_flow_files table.
In this situation, what is the recommended way to properly purge the wwv_flow_files table?
I haven't put COMMITs anywhere in any of my HTML DB apps because the engine implicitly commits at the end of every page view, but this file upload seems to be its own little "autonomous transaction".
Thanks

Vikas - If you can do a delete after your process runs successfully, why can't you do a delete when you encounter one of the other situations (before raising an unhandled exception)?

Let me explain all my "use cases".
1. An on-submit validation that checks the "extension" of the file. This is a simple PL/SQL expression that checks lower(:P1_FILENAME) like '%.csv'.
If this validation fails, the file hangs around in the wwv_flow_file_objects$ table. How can I prevent this?
2. If the validation succeeds, I proceed to my one and only after-submit process, which does something like:
parse_file(:P1_FILENAME);
if something then
  raise_application_error(-20000, 'something failed');
end if;
if otherthing then
  raise_application_error(-20000, 'otherthing failed');
end if;
The parse_file procedure parses the file, stores it in a collection and, if everything succeeds, does a delete from wwv_flow_files where filename = :P1_FILENAME;
How can I make sure that the file doesn't stick around in wwv_flow_files no matter what?
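One approach that would cover case 2 (a sketch only, reusing the parse_file call and :P1_FILENAME item from above) is to wrap the process body in an exception handler that removes the uploaded row before re-raising:

begin
  parse_file(:P1_FILENAME);  -- raises on any parse/processing failure
exception
  when others then
    -- clean up the orphaned upload, then let the error reach the error page
    delete from wwv_flow_files where filename = :P1_FILENAME;
    raise;
end;

A failed validation (case 1) never reaches the after-submit processes at all, which is where a scheduled cleanup like the one discussed below comes in.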
For a catch-all technique that recognizes fringe cases, consider a dbms job that runs from your workspace.

What would this job look like?
Even a rollback in your own page process would be too late to prevent it. A rollback in your process would prevent the file from appearing in the wwv_flow_files view but would not prevent it from remaining in the underlying wwv_flow_file_objects$ table.

Not sure I understand this. wwv_flow_files is a simple wrapper view:
   FROM wwv_flow_file_objects$
   WHERE security_group_id = wwv_flow.get_sgid;
So why would something be in the table but not in the view?
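Coming back to the dbms job idea, a minimal sketch of what such a job could look like (assuming wwv_flow_files exposes a created_on column, as it does in the versions I have used, and that the job is submitted from the workspace, e.g. via SQL Workshop, so the view's security-group filter resolves):

declare
  l_job binary_integer;
begin
  dbms_job.submit(
    job       => l_job,
    what      => 'begin delete from wwv_flow_files where created_on < sysdate - 1; commit; end;',
    next_date => sysdate,
    interval  => 'sysdate + 1');  -- purge day-old leftovers once a day
  commit;
end;

Anything a page process already deleted is gone, so the job only sweeps up the orphans left behind by failed validations and raised errors.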

Similar Messages

  • How to effectively manage large table which is rapidly growing

    All,
    My environment is single node database with regular file system.
    Oracle - 10.2.0.4.0
    IBM - AIX
    A tablespace in this database is growing rapidly. In particular, a single table in that tablespace with a LONG RAW column has grown from 4 GB to 900 GB in 6 months.
    We had a discussion with the application team and they mentioned that, due to acquisitions, data volume has increased, and we expect it to grow to 4 TB over the next 2 years.
    In order to manage the table effectively and avoid performance issues, we are looking at the options below.
    1) The table has a date column, so we thought of converting it to a partitioned table using range partitioning. I have never converted a 900 GB table to a partitioned table. Is this the best method?
         a) How can I move the data from the regular table to the partitioned table? I searched Google but could not find a good method for converting a regular table to a partitioned one. Can you help me out / share best practices?
    2) In one article I read that BLOB is better than the LONG RAW datatype. How easy is it to convert from LONG RAW? Will BLOB yield better performance and use disk space more effectively?
    3) The application team has a purging activity based on application logic. We thought of shrinking the table with row movement enabled ("alter table <table name> shrink space cascade"), but it returns an error that the table contains a LONG datatype. Any suggestions?
    Any other methods / suggestions to handle this situation effectively?
    Note: By end of 2010, we have plans of moving to RAC with ASM.
    Thanks

    To answer your question 2:
    "2) In one article I read that BLOB is better than the LONG RAW datatype. How easy is it to convert from LONG RAW? Will BLOB yield better performance and use disk space more effectively?"
    Yes, LOBs (CLOBs or BLOBs) are supposed to be better than RAWs (or LONG RAWs). In addition, I believe Oracle has desupported, or will shortly desupport, LONG RAW in favor of CLOBs or BLOBs (as appropriate).
    There is a function called TO_LOB that you have to use for the conversion. It's a pain because you have to create a second table and then insert into it from the first table using the TO_LOB function.
    from my notes...
    =================================================
    Manually recreate the original table...
    Next, recreate (based on describe of the table), except using CLOB instead of LONG:
    SQL> create table SPACER_STATEMENTS
    2 (OWNER_NAME VARCHAR2(30) NOT NULL,
    3 FOLDER VARCHAR2(30) NOT NULL,
    4 SCRIPT_ID VARCHAR2(30) NOT NULL,
    5 STATEMENT_ID NUMBER(8) NOT NULL,
    6 STATEMENT_DESC VARCHAR2(25),
    7 STATEMENT_TYPE VARCHAR2(10),
    8 SCRIPT_STATEMENT CLOB,
    9 ERROR VARCHAR2(1000),
    10 NUMBER_OF_ROWS NUMBER,
    11 END_DATE DATE
    12 )
    13 TABLESPACE SYSTEM
    14 ;
    Table created.
    Try to insert the data using select from original table...
    SQL> insert into SPACER_STATEMENTS select * from SPACER_STATEMENTS_ORIG;
    insert into SPACER_STATEMENTS select * from SPACER_STATEMENTS_ORIG
    ERROR at line 1:
    ORA-00997: illegal use of LONG datatype
    That didn't work...
    Now, let's use TO_LOB
    SQL> insert into SPACER_STATEMENTS
    2 (OWNER_NAME, FOLDER, SCRIPT_ID, STATEMENT_ID, STATEMENT_DESC, STATEMENT_TYPE, SCRIPT_STATEMENT, ERROR, NUMBER_OF_ROWS, END_DATE)
    3 select OWNER_NAME, FOLDER, SCRIPT_ID, STATEMENT_ID, STATEMENT_DESC, STATEMENT_TYPE, TO_LOB(SCRIPT_STATEMENT), ERROR, NUMBER_OF_ROWS, END_DATE
    4 from SPACER_STATEMENTS_ORIG;
    10 rows created.
    works well...
    ===============================================================
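    For question 1, a rough sketch of how the same TO_LOB technique can feed a range-partitioned target table (illustrative names only - I'm assuming a date column CREATED_DT and a LONG RAW column PAYLOAD on a table called BIG_TABLE):
    create table BIG_TABLE_NEW
    ( id          number not null,
      created_dt  date   not null,
      payload     blob
    )
    partition by range (created_dt)
    ( partition p2009 values less than (to_date('01-01-2010','DD-MM-YYYY')),
      partition p2010 values less than (to_date('01-01-2011','DD-MM-YYYY')),
      partition pmax  values less than (maxvalue)
    );
    insert into BIG_TABLE_NEW (id, created_dt, payload)
    select id, created_dt, to_lob(payload)
    from   BIG_TABLE;
    commit;
    For 900 GB you would probably load it in date-range chunks (roughly one partition's worth per insert) rather than in one statement, and swap the tables with a rename at the end.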

  • How to manage the tables after deploying an SDA for Oracle

    How can I manage the tables after deploying an SDA on Oracle? There is a tool for MaxDB, but how can I connect to the Oracle database?
    Thanks

    In the J2EE administrator console.

  • Profile management meta tables list

    Hi ,
    Does anyone have the complete list of the Profile Management meta tables?
    Thanks in advance.
    Dennis

    Hi,
    Apply this "patch" on your Demo environment and you will have all ERD's based on component/pages including Profile Management.
    PeopleSoft Enterprise Human Capital Management 9.1 Entity Relationship Diagrams [ID 968850.1]
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=REFERENCE&id=968850.1
    Regards,
    Hakan

  • I want to manage a cluster table view. I control this table

    Hi ABAPers,
    I want to manage a cluster table view and track the changes to this table (which cells are changed, deleted, or added). I use the TOTAL internal table (created by the system), but the TOTAL table has a line in which some data is incorrect.
    For example, the data in the database is:
                                term      student_name          lesson_id          lesson_name
                                2005          ahmet                       1                  Matematik
                                2005          yasin                        2                      Tarih
    but the data I see in the view is:
                                term      student_name          lesson_id          lesson_name
                                2001          ahmet                       1                  Matematik
                                2001          yasin                        2                      Tarih
    (the term values are wrong)

    Hi Turgut,
    Check this out; I haven't tried it myself, but it may be useful for you:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/a1/e4521aa2f511d1a5630000e82deaaa/content.htm

  • Layout Management in Table Control

    Hi Dialog Programming Experts,
    I have a new requirement - adding Layout Management options in Table Control. This is dialog/module programming, not ALV Report. Is there a function module for this?
    Thanks so much in advance for your help.
    Regards,
    Joyreen

    Hi
    For the filter, use the following function modules:
    l_rightx-id = l_index.
      l_rightx-fieldname = 'DESCR'.
      l_rightx-text = text-054.
      APPEND l_rightx TO i_rightx.
      l_index = l_index + 1.
      CLEAR l_rightx.
      l_rightx-id = l_index.
      l_rightx-fieldname = 'DEL_CM'.
      l_rightx-text = text-055.
      APPEND l_rightx TO i_rightx.
    CALL FUNCTION 'CEP_DOUBLE_ALV'
           EXPORTING
                i_title_left  = text-034
                i_title_right = text-035
                i_popup_title = text-036
           IMPORTING
                e_cancelled   = l_cancelled
           TABLES
                t_leftx       = i_leftx[]
                t_rightx      = i_rightx[].
    First, populate the right-hand table with all the fields you want available in the filter condition. The left-hand table is populated once the user selects some fields and transfers them to the left portion of the dialog.
    Then use the following FM like this.
    DATA: i_group TYPE lvc_t_sgrp.
          CALL FUNCTION 'LVC_FILTER_DIALOG'
               EXPORTING
                    it_fieldcat   = i_fldcat1
                    it_groups     = i_group
               TABLES
                    it_data       = i_ziteminvoice[]
               CHANGING
                    ct_filter_lvc = i_filter
               EXCEPTIONS
                    no_change     = 1
                    OTHERS        = 2.
    Here the filter table should contain the fields from the left-hand table above.
    Once you get the filter data, populate a range table for each field and then delete rows from your internal table using these ranges.
    CASE l_filter-fieldname.
                  WHEN 'ITMNO'.
                    l_itmno-sign = l_filter-sign.
                    l_itmno-option = l_filter-option.
                    l_itmno-low = l_filter-low.
                    l_itmno-high = l_filter-high.
                    APPEND l_itmno TO li_itmno.
                    CLEAR l_itmno.
    DELETE i_ziteminvoice WHERE NOT  itmno IN li_itmno OR
                                          NOT  aedat IN li_aedat OR...
    First check whether this works; if not, let me know.
    Thanks
    Sourav.

  • Import Manager Join Tables not working

    Hi Experts,
    For combining tables in the Import Manager, I tried to use the join and lookup functions.
    My easy example is:
    Table1
    Product ID
    Organisation ID
    Table2
    Organisation ID
    Plant
    Now I'm joining on the field Organisation ID, which contains the same items in both tables, and doing a lookup on the field Plant. In the Source Preview I can see the looked-up field, but it has no values and a grey background.
    Any suggestions?
    Thanks
    Andy

    Hi,
    I tried this at my end and it looks like a bug. I am using MDM 7.1 SP3.
    I am facing the same issue: a grey (non-editable) background in the Source Preview when looking up the field Plant from Table1 to Table2.
    The field values appear mapped in the Map Fields/Values tab, but when I import the data, only Product ID and Organization ID get populated; the Plant field is not populated in MDM Data Manager.
    So I am fairly sure it is a bug, and I would suggest you raise an OSS note for it.
    As a temporary workaround, you can import the data as follows:
    When you select Table1 as the source table, import Product ID and Organization ID.
    After importing Table1, select Table2 as the source table and map both Organization ID and Plant.
    Import the data with Organization ID as the matching field and the Default Import Action set to Update (All Mapped Fields).
    This way you get your complete data into MDM, but because of the bug what should be a single step now takes two steps.
    Note: I did this for a client on MDM 5.5 SP6, and at that time it worked perfectly.
    Thanks and Regards,
    Mandeep Saini

  • Query the Manage Sessions Table in OBIEE

    Hi All,
    First I should ask: is it possible to get the information you see under Administration -> Manage Sessions using a query against the database?
    If not, I would appreciate ideas on finding the best time to restart opmnctl on a system that runs almost 24 hours a day; there is no fixed window when users are not working on the system. The idea is to avoid restarting opmnctl while there are active sessions.
    Creative Ideas welcome
    Thanks.

    Hopefully you are on an Oracle DB. If yes, write a simple query on the database views (V$SESSION and V$SESSION_LONGOPS) based on the module filter 'nqserver', which gives all active sessions:
    SELECT SID, Serial#, UserName, Status, SchemaName, Logon_Time
    FROM V$Session
    WHERE
    Status='ACTIVE' AND MODULE LIKE '%nqserver%' AND
    UserName='BI_USER';
    For more information refer : Oracle Business Intelligence
    Thanks,
    Saichand
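    As a rough illustration (my own sketch, not from the quoted reply; it assumes you can run PL/SQL with SELECT access to V$SESSION), you could poll for active BI Server sessions before deciding whether to restart:
    declare
      l_active number;
    begin
      select count(*)
        into l_active
        from v$session
       where status = 'ACTIVE'
         and module like '%nqserver%';
      if l_active = 0 then
        dbms_output.put_line('No active BI sessions - safe to restart opmnctl now.');
      else
        dbms_output.put_line(l_active || ' active BI session(s) - postpone the restart.');
      end if;
    end;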

  • Manage max table storage space in case of excess data (size in GB)

    My scenario: I am using SQL Server 2008 R2. I have created a database named testDB with a lot of tables, including some log tables, and some of those log tables contain a very large number of records.
    My goal is to cap the size of those log tables and move records into a table in another database stored in another location, so that my database stays healthy.
    Please tell me, is there any way to achieve the steps above for my database?
    Is there any built-in functionality in SQL Server for this?
    Maybe this question has been asked before, but I still have no solution for my issue.
    Feel free to ask for any clarification.
    Thanks

    Well, there is no direct option to restrict a table's size. One way you can do it is to put that table in a separate filegroup with its own files and restrict the growth of those files. BUT this will not give you an accurate limit on the number of rows, and it is not good practice - in fact you should never do it. And if you have more tables, each one would need its own filegroup/files, which is a bad idea.
    The more common solution is to archive the information into another table in a different database.
    A simple script such as this would work; it copies all log data older than 30 days into the archive database:
    use ArchiveDatabase
    GO
    insert into archivetable
    select * from testdb..Oldtablelog where logdate < dateadd(day, -30, getdate())
    Is there any particular reason you want to archive the data? If this is for database manageability - for backups/maintenance - you can partition and mark the old filegroups as read-only and the new data as read-write.
    Hope it Helps!!

  • Oracle11gR2 Workspace Manager and table consistency after merge

    Hi folks,
    I'm working with Oracle Workspace Manager in order to get data inserted and validated in workspaces before it becomes available in the LIVE workspace.
    While doing some tests I found a data consistency problem after merging data from a child workspace into the parent workspace.
    To explain and reproduce the problem, I created a simple test case:
    --Create table TB_LINK
    create table TB_LINK
    ( CD_LINK NUMBER not null,
      DS_LINK VARCHAR2(30)
    );
    --Create primary key
    alter table TB_LINK add primary key (CD_LINK);
    --Create table TB_GUD
    create table TB_GUD
    ( CD_GUD  NUMBER not null,
      DS_GUD  VARCHAR2(30),
      CD_LINK NUMBER
    );
    -- Create primary key
    alter table TB_GUD add primary key (CD_GUD);
    -- Create foreign key
    alter table TB_GUD
      add constraint FK_TB_LINK foreign key (CD_LINK)
      references TB_LINK (CD_LINK);
    -- Create sequences
    create sequence SEQ_TB_GUD
    minvalue 1
    maxvalue 9999999999999999999999999999
    start with 1
    increment by 1
    nocache;
    create sequence SEQ_TB_LINK
    minvalue 1
    maxvalue 9999999999999999999999999999
    start with 1
    increment by 1
    nocache; 
    --Create Triggers
    create or replace trigger "INS_TB_GUD" before insert on TB_GUD for each row
    Begin
    select SEQ_TB_GUD.nextval into :new.CD_GUD from dual;
    end;
    create or replace trigger "INS_TB_LINK" before insert on TB_LINK for each row
    Begin
    select SEQ_TB_LINK.nextval into :new.CD_LINK from dual;
    end;
    --Enable version TB_LINK and TB_GUD
    EXECUTE DBMS_WM.EnableVersioning('TB_GUD','VIEW_WO_OVERWRITE',FALSE,FALSE,'UNLIMITED');
    EXECUTE DBMS_WM.EnableVersioning('TB_LINK','VIEW_WO_OVERWRITE',FALSE,FALSE,'UNLIMITED');
    --Create a workspace
    EXECUTE DBMS_WM.CreateWorkspace ('TEST_WKS');
    --Goto workspace TEST_WKS
    EXECUTE dbms_wm.gotoworkspace('TEST_WKS');
    --Insert data into TB_LINK and TB_GUD
    INSERT INTO TB_LINK(DS_LINK) VALUES ('DS1');
    INSERT INTO TB_LINK(DS_LINK) VALUES ('DS2');
    INSERT INTO TB_LINK(DS_LINK) VALUES ('DS3');
    INSERT INTO TB_LINK(DS_LINK) VALUES ('DS4');
    COMMIT;
    INSERT INTO TB_GUD(DS_GUD,CD_LINK) VALUES ('GUD1',1);
    INSERT INTO TB_GUD(DS_GUD,CD_LINK) VALUES ('GUD2',2);
    INSERT INTO TB_GUD(DS_GUD,CD_LINK) VALUES ('GUD3',3);
    INSERT INTO TB_GUD(DS_GUD,CD_LINK) VALUES ('GUD4',4);
    COMMIT;
    --Checking keys
    select * from tb_link;
       CD_LINK      DS_LINK
             1           DS1
             2           DS2
             3           DS3
             4           DS4
    select * from tb_gud;
       CD_GUD      DS_GUD     CD_LINK
             1           GUD1              1
             2           GUD2              2
             3           GUD3              3
             4           GUD4              4
    --Merge Workspace
    EXECUTE DBMS_WM.MergeWorkspace ('TEST_WKS');
    --Checking keys
    EXECUTE dbms_wm.gotoworkspace('LIVE');
    select * from tb_link;
       CD_LINK      DS_LINK
             5           DS4
             6           DS3
             7           DS1
             8           DS2
    We can see that the CD_LINK got new values after merge and that was not expected.
    select * from tb_gud;
       CD_GUD      DS_GUD                  CD_LINK
             6           GUD3                       3
             7           GUD1                       1
             8           GUD2                       2
             5           GUD4                       4
    We can see that the CD_GUD got new values after merge and that was not expected.
    Now the CD_LINK values in TB_GUD no longer have corresponding records in the TB_LINK table, so the foreign key relationship no longer holds.
    Could you please help me understand what is going on?
    Thanks,
    Luis

    Hi Luis,
    The reason for the difference is that the trigger is being run during the MergeWorkspace operation. The inserts into the child workspace (TEST_WKS) translate into inserts into the LIVE workspace during the merge, as the rows do not yet exist there. As a result, the trigger fires and the sequence is evaluated again. Ideally, the PK should not be modified by a sequence in this case.
    You have 2 options:
    (1) Check for :new.CD_GUD being null prior to using the sequence.  Any dml coming from a merge/refresh operation will have a non-null value.
    (2) Turn off the trigger during dbms_wm procedures.  This can be done using dbms_wm.SetTriggerEvents.  I would assume you would only want this trigger being run for DML events.
    Let me know if you have any questions.
    Regards,
    Ben
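    For what it's worth, a minimal sketch of option (1) applied to the trigger from the test case above (assign the sequence only when no key was supplied):
    create or replace trigger "INS_TB_GUD" before insert on TB_GUD for each row
    begin
      -- Rows arriving via MergeWorkspace/RefreshWorkspace already carry CD_GUD,
      -- so only ordinary inserts get a fresh sequence value.
      if :new.CD_GUD is null then
        select SEQ_TB_GUD.nextval into :new.CD_GUD from dual;
      end if;
    end;
    /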

  • Can Designer 9i generate Workspace Manager versioned tables?

    All,
    I am exploring the use of the Oracle 9i feature Workspace Manager.
    Can Designer 9i generate versioned tables for a Workspace Manager-enabled instance? I know the tables could be generated to scripts and the scripts then edited to version-enable the tables, but manually editing each script is not a long-term solution. I have read through the application help, newsgroups, and this discussion group and did not find any information.
    I welcome any input you have to offer,
    Doug

    Hi Steve,
    I'm the Product Manager. Feel free to contact me directly at [email protected] to discuss your auditing requirements.
    In general, yes, Workspace Manager can maintain a history of changes to a table.
    It can make a timestamped copy of a row every time a change to it is committed. The GotoDate command allows the user to set session context to a particular point in time to see the database (including the changed rows) as it was at that time.
    DML doesn't need to change (unless hints are needed to optimize performance). All historical copies of the rows are kept in the same table as the original row.
    Best Regards,
    Bill
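    As a small illustration of the history behaviour described above (my own sketch, not from the reply; the table name is made up and I'm assuming GotoDate's default 'mm-dd-yyyy' format mask):
    -- Version-enable a table and keep a copy of every committed change
    EXECUTE DBMS_WM.EnableVersioning('MY_TABLE', 'VIEW_WO_OVERWRITE');
    -- ... application DML is committed over time ...
    -- View the table as it was on a given date
    EXECUTE DBMS_WM.GotoDate('06-30-2004', 'mm-dd-yyyy');
    SELECT * FROM MY_TABLE;
    -- Return to the latest version of the LIVE workspace
    EXECUTE DBMS_WM.GotoWorkspace('LIVE');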

  • Managing Hierarchical tables

    Hi all,
    I need a query that returns the EMP hierarchy as follows (based on the EMP table):
    KING --> JONES --> SCOTT
    KING --> JONES --> FORD
    and so on...
    In other words, every path from the president down to the last level of the organization, one per line. For example:
    King is the manager of Jones, and Jones is the chief of Scott and Ford, so I want to display both paths.
    Moreover, I don't want to filter anything; I want to show all the possible paths for all the employees.
    I would appreciate your help.
    Thanks.

    In Oracle 8i, you can do this using the following packaged function by Solomon Yakobson:
    CREATE OR REPLACE
    PACKAGE Hierarchy
      IS
            TYPE BranchTableType IS TABLE OF VARCHAR2(4000)
              INDEX BY BINARY_INTEGER;
            BranchTable BranchTableType;
            FUNCTION Branch(vLevel          IN NUMBER,
                            vValue          IN VARCHAR2,
                            vDelimiter      IN VARCHAR2 DEFAULT CHR(0))
                            RETURN VARCHAR2;
            PRAGMA RESTRICT_REFERENCES(Branch,WNDS);
    END Hierarchy;
    Package created.
    CREATE OR REPLACE
    PACKAGE BODY Hierarchy
      IS
            ReturnValue VARCHAR2(4000);
      FUNCTION Branch(vLevel        IN NUMBER,
                      vValue        IN VARCHAR2,
                      vDelimiter    IN VARCHAR2 DEFAULT CHR(0))
                      RETURN VARCHAR2
       IS
       BEGIN
            BranchTable(vLevel) := vValue;
            ReturnValue := vValue;
            FOR I IN REVERSE 1..vLevel - 1 LOOP
              ReturnValue := BranchTable(I)|| vDelimiter || ReturnValue;
            END LOOP;
            RETURN ReturnValue;
      END Branch;
    END Hierarchy;
    Package body created.
    COLUMN   name FORMAT A10
    COLUMN   val  FORMAT A35
    SELECT   name, empno, mgr, val
    FROM     (SELECT     LPAD (' ', ( level - 1 )) || ename name, empno, mgr,
                         hierarchy.branch (LEVEL, ename, ' --> ') AS val
              FROM       emp
              START WITH mgr IS NULL
              CONNECT BY PRIOR empno = mgr)
    ORDER BY 4
    NAME            EMPNO        MGR VAL                                                               
    KING             7839            KING                                                              
    BLAKE           7698       7839 KING --> BLAKE                                                    
      ALLEN          7499       7698 KING --> BLAKE --> ALLEN                                          
      JAMES          7900       7698 KING --> BLAKE --> JAMES                                          
      MARTIN         7654       7698 KING --> BLAKE --> MARTIN                                         
      TURNER         7844       7698 KING --> BLAKE --> TURNER                                         
      WARD           7521       7698 KING --> BLAKE --> WARD                                           
    CLARK           7782       7839 KING --> CLARK                                                    
      MILLER         7934       7782 KING --> CLARK --> MILLER                                         
    JONES           7566       7839 KING --> JONES                                                    
      FORD           7902       7566 KING --> JONES --> FORD                                           
       SMITH         7369       7902 KING --> JONES --> FORD --> SMITH                                 
      SCOTT          7788       7566 KING --> JONES --> SCOTT                                          
       ADAMS         7876       7788 KING --> JONES --> SCOTT --> ADAMS                                
    14 rows selected.
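    On 9i and later you don't need the helper package at all: SYS_CONNECT_BY_PATH produces the same branch strings directly (a quick sketch against the standard EMP table):
    SELECT   LPAD(' ', level - 1) || ename AS name,
             empno,
             mgr,
             LTRIM(SYS_CONNECT_BY_PATH(ename, ' --> '), ' ->') AS val
    FROM     emp
    START WITH mgr IS NULL
    CONNECT BY PRIOR empno = mgr
    ORDER BY val;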

  • RAR Management view tables

    Hi Experts,
    We need to delete the old logs for all sync jobs, batch risk analysis, and management view data from RAR. Initially we ran all jobs for all roles, but the management requirement is that it should show only customer roles. Although we can run a new risk analysis at any time, the management view will still show the old data as well, and the comparison etc. will be confusing to some users.
    Please suggest whether there is any workaround to remove the old data. If we need to delete at the table level, which tables' data needs to be cleared to make sure no old data remains?
    Thanks in advance.
    Sabita

    Hi Chinmaya,
    Thank you so much for your reply.
    Are you going to upgrade to SAP GRC AC SP13? (I am a bit confused.)
    Thanks in advance.
    Regards
    Poojagopal
    Edited by: POOJAGOPAL on Jun 3, 2011 8:20 PM

  • Pipeline Performance Management - Opportunities Table does not show up

    Hi,
    we're using Pipeline Performance Management.
    The colorful bar chart already shows up when we open it.
    However, when we click on the bar chart, we don't find the opportunities list/table below the chart.
    We have all PPM services activated; the org model is user-maintained.
    Any lead is really appreciated.
    thanks
    JD

    Solved by removing the "on demand" flag.

  • Managed Item Tables in 8.8

    Hi,
    We started our add-on upgrade to 8.8. The tables for batches and serial numbers are all new. Does anyone know where I can find documentation for these tables? It should include the purpose of each table and how it relates to the other tables. What is described in the SDK help file applies only to the 2007 tables.
    Thanks,
    Mike

    Hi Mike,
    We struggled with the same kind of problem as you, though in our case only for serial number management. Although the DI API for serial number management has remained the same, we also used some queries on the serial number table and we were worried about how those queries would be affected. For serial numbers, it seems the new tables OSRN and OSRQ have replaced the old table OSRI, but the old OSRI is still available as a view. This means that any queries you might have on OSRI would still work, and luckily this was true for our code. Have you checked this for your code?
    However, in order to avoid future compatibility issues, we chose to use the new tables anyway. It seems that OSRN is a kind of header table for serial number records and OSRQ is a kind of 'serial numbers per warehouse' table, whereas the old OSRI contains both serial numbers and warehouse availability.
    We concluded this by looking at how the new view OSRI is defined in SBO 8.8. In this definition, we also saw that the old OSRI.Status value is now derived from the value in OSRQ.Quantity.
    So for instance, suppose you want to check whether or not a (system) serial number is available for a given item (item01 with internal serial number 123) in a certain warehouse (01). Your old query (prior to version 8.8) would be
    SELECT SysSerial FROM OSRI
    WHERE ItemCode   = 'item01'
      AND IntrSerial = '123'
      AND WhsCode    = '01'
      AND Status     = 0
    but now (version 8.8 or later) it would be
    SELECT OSRN.SysNumber FROM OSRN
    LEFT JOIN OSRQ OSRQ ON OSRN.ItemCode = OSRQ.ItemCode AND OSRN.SysNumber = OSRQ.SysNumber
    WHERE OSRN.ItemCode   = 'item01'
      AND OSRN.DistNumber = '123'
      AND OSRQ.WhsCode    = '01'
      AND OSRQ.Quantity   = 1
    Does this help?
    Regards,
    Marnix Kammer
    Edited by: Marnix Kammer on Oct 20, 2009 2:17 PM
