Best approach to implement a "rate table"

My application needs to initialize several "rate tables" (think tax rate tables, etc.) from property and/or XML files and then perform LOTS of lookups into these tables.
None of the collections seems to provide quite the right functionality. A Map is close, but I also need to be able to look up by a key (e.g. income) and return the rate for the correct "bracket". That is, I want to return the rate associated with the maximum key less than or equal to my lookup value.
I'm thinking the best approach is to wrap or extend a Map and provide an additional lookup method.
I should also point out that these tables tend to be relatively small (from 5 to 20 or 30 "rows"), which may indicate I'd be better off with a brute-force array implementation.
Any suggestions and/or comments as to how best to proceed would be greatly appreciated.
R.Parr
Temporal Arts

You might try wrapping a SortedMap. That would make it easy to return <= or >= results.
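For example, TreeMap implements SortedMap (and, from Java 6, NavigableMap), and its floorEntry method returns the entry with the greatest key less than or equal to the argument, which is exactly the bracket lookup described. A minimal sketch, assuming the rates are loaded from a Properties file whose keys are bracket lower bounds (class and property names here are just placeholders):

import java.math.BigDecimal;
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;

/** Minimal bracket-lookup table: maps the lower bound of each bracket to its rate. */
public class RateTable {
    // Keys are bracket lower bounds, values are the rates for those brackets.
    private final TreeMap<BigDecimal, BigDecimal> brackets = new TreeMap<BigDecimal, BigDecimal>();

    /** Loads entries such as "0=0.10", "30000=0.15", "75000=0.25" from a Properties object. */
    public static RateTable fromProperties(Properties props) {
        RateTable table = new RateTable();
        for (String lowerBound : props.stringPropertyNames()) {
            table.brackets.put(new BigDecimal(lowerBound), new BigDecimal(props.getProperty(lowerBound)));
        }
        return table;
    }

    /** Returns the rate for the greatest bracket lower bound <= amount, or null if amount is below every bracket. */
    public BigDecimal rateFor(BigDecimal amount) {
        Map.Entry<BigDecimal, BigDecimal> entry = brackets.floorEntry(amount);
        return entry == null ? null : entry.getValue();
    }
}

With 5 to 30 rows the O(log n) lookup is more than fast enough; a sorted array with Arrays.binarySearch would also work, but the map version reads more clearly and is just as easy to populate from a file.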

Similar Messages

  • Best approach to publish new table or new column on existing table on MDW?

    Hi,
    I'm referring to Olite R3 without any patches. I don't use the Java API, I use MDW.
    If I have a new table or a new column on an existing table, what's the best approach to publish it?
    I'm asking this because I've tried lots of approaches and the only solution was, step by step:
    1) On MDW, drop the publication item
    2) Add again the publication item
    3) Associate the publication item to the publication
    4) Save everything
    5) File / Deploy (if I don't do it, it does not work)
    6) Tools/Package... (that's where there's a problem: if I don't remove the app and create it again, it does not work!)
    7) on the client side, I perform a msync with "force refresh"
    That's the only way I found to publish new items for sure. Any other action does not push the new table or new column to the client's embedded DB.
    Any comments?
    Regards,
    Maurício Américo Vernaschi.

    I do not use MDW, rather a mix of Java and the final publish step you use, but:
    Adding new PIs should be easy: just add them and re-publish (no need to drop anything).
    For changes, if you just have new columns and the SQL statement is 'select * from', then you should just need to make the changes in the base schema objects and run the publish with no changes, and the updates should be picked up. If you are selecting specific columns, then update and re-publish.
    When using MDW, at the end you can save the application as a jar file and then use this jar file to publish in the Mobile Manager - this is the best way to publish.
    Have a look at this jar file in WinZip and you will find it contains a web.xml file. This is the XML definition of the publication items, and for simple changes it is possible to just edit this file and republish via the Mobile Manager.

  • RFC Lookup - Best Approach To Parse Returned Tables

    Hi Everyone,
    We are doing some RFC lookups at a header node that return tables for all of the items (for performance reasons). I am trying to figure out the best way to extract the values from the table, which most of the time has more than one key column. At the moment I am doing this through DOM, but I have also heard about using arrays, and have even seen an example of using a hashtable with all of the values concatenated together to later parse out using substrings. I'm looking for the best approach to:
    1) Store this data as some kind of global object to look up during header processing
    2) Search and parse from the global object during line-item processing.
    As an example, I have the following lines in my table:
    Key1,Key2,Value1,Value2,Value3
    A,A,1,2,3
    A,B,1,2,4
    A,C,3,4,2
    B,A,2,4,6
    And during line item processing I may want to find the value for Key1=A, Key2=C.
    Thanks
    Peter

    Hi Peter,
    Please take a look at these...
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/xi-code-samples/xi%20mapping%20lookups%20rfc%20api.pdf
    /people/siva.maranani/blog/2005/08/23/lookup146s-in-xi-made-simpler
    /people/sravya.talanki2/blog/2005/12/21/use-this-crazy-piece-for-any-rfc-mapping-lookups
    /people/alessandro.guarneri/blog/2006/03/27/sap-xi-lookup-api-the-killer
    cheers,
    Prashanth
    P.S Please mark helpful answers
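    As a side note on the storage question: instead of concatenating all of the values into one string and parsing them back out with substrings, the returned rows can be cached in a map keyed by the key columns, so each line-item lookup is a single get. A rough Java sketch of that idea, using the Key1/Key2/Value1-3 columns from the example above (class and method names are illustrative only):

    import java.util.HashMap;
    import java.util.Map;

    /** Caches the rows returned by the header-level RFC lookup for reuse during line-item processing. */
    public class RfcLookupCache {
        /** One row of the returned table: the value columns only. */
        public static class Row {
            public final String value1, value2, value3;
            public Row(String value1, String value2, String value3) {
                this.value1 = value1; this.value2 = value2; this.value3 = value3;
            }
        }

        private final Map<String, Row> rows = new HashMap<String, Row>();

        /** Composite key; the separator must be a character that cannot appear in the key values. */
        private static String key(String key1, String key2) {
            return key1 + "\u0001" + key2;
        }

        public void put(String key1, String key2, Row row) {
            rows.put(key(key1, key2), row);
        }

        /** e.g. get("A", "C") returns the row holding 3, 4, 2 from the example table. */
        public Row get(String key1, String key2) {
            return rows.get(key(key1, key2));
        }
    }

    Filling the cache once at the header node and reading it per line item avoids both the repeated DOM walks and the substring parsing.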

  • What is the best approach to handle multiple FKs with a single table?

    If two tables are joined with each other with more than one ways, for example
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If the above two objects are imported with FKs into an EUL and Discoverer Plus is used to create the above report, then on first inclusion of the person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join it will never allow you to pick the person modifier name.
    One solution is to create a custom folder with a query like
    select col1, col2,...coln,
    pc.name, pc.address,.... pc.phone,
    pm.name, pm.address,.... pm.phone
    from main m,
    person pc,
    person pm
    where m.person_id_creator = pc.person_id
    and m.person_id_modifier = pm.person_id
    The second solution is to import the PERSON folder twice into the EUL (optionally naming one person_creator and the other person_modifier) and manually define one join per table, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with the PERSON_MODIFIER table using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
    The question is, which approach is better, or is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all joins (or deleting unwanted joins), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the person table is further linked to other tables and users want to see that information too (for instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator address and the modifier address, you will now have to create multiple LOCATION folders).
    A third solution could be to register a function in Discoverer that returns the person name when person_id is passed. This will work perfectly for the above requirement, but a downside is that the report will run slower if they need filters on person names (the function will then be used in the WHERE clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (let's say the person table contains 50 attributes; then it's not a good idea to register 50 functions).
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion, the best approach (although by all means not the only approach - see below) would be to have the object loaded as two folders, with one join going to the first folder and the second join to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or to create a complex folder. Either will work; in both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael

  • Best approach to create an RTF template having more than 50 tables.

    Hi All,
    Need your help. I am new to BI Publisher. Currently we are using BIP 11g.
    I want to develop an .rtf template having lots of layouts and images.
    Data is coming from different tables (for example, pulling from around 40 tables). When I tried to pull data from 5 tables by joining them, it took a long time using a data model in BI Publisher 11g, saved as XML and used in the Word doc.
    Could you please suggest the best approach: whether I should develop the .rtf template via a data model or via a query to generate the report.
    Also please suggest / guide me.
    Regards & Thanks in advance.

    These are very specific requirements; first of all it relates to the logic behind the report.
    For example: are the 50 tables related? Or 50 independent tables? Or maybe 5 related and the others independent?
    Based on the relation of the tables you create the SQL statement(s).
    How many SQL statement(s) you'll have leads to identifying the way to get the data, for example by package or trigger etc.
    Keep in mind the size of the resulting select statement(s): if the size is, say, 1 MB it should be fast to get the report, but for 1000 MB it can consume a lot of time.
    Also keep in mind that the time is spent not only selecting the data but also merging the data with the template.
    It looks like experimenting, and knowing the full logic of the report, is the only way to get the needed output in terms of data and time.

  • Best approach to do Range partitioning on Huge tables.

    Hi All,
    I am working on an 11gR2 Oracle 3-node RAC database; below are the DB details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    In my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment with the below steps.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partition to move data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take far more than 8 hrs. But the problem is we get only 2 to 3 hrs of downtime in production to implement these changes for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    Thanks,
    Hari

    Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
    ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
    ISSUE#2 - You likely cannot move that much data in the 2 to 3 hours window that you have available for down time even if all you had to do was copy the existing datafiles.
    ISSUE#3 - Even if you can avoid issue #2 you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
    ISSUE#4 - Unless you have conducted full volume performance testing in another environment prior to doing this in production you are taking on a tremendous amount of risk.
    ISSUE#5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system you will have great difficulty overcoming issue #4 since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you the equivalent, or better, performance.
    ISSUE#6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
    1. use DBMS_REDEFINITION to do the partitioning on-line. See the Oracle docs and this example from oracle-base for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
    You should review all of the tables to see if you can remove older data from the current system. If you can you could use online redefinition that ignores older data. Then afterwards you can extract this old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.

  • Best approach to get the source tables into Target

    Hi
    I am new to GoldenGate and I would like to know the best approach to get the source tables replicated into the target (Oracle to Oracle) before performing the initial load, without using exp/expdp. Is there any native GoldenGate utility which I can use during, or before, the initial load that will create the tables on the target before loading the data?
    Thanks

    I don't think so; for the initial load replication your structure should be available on the target machine. Once your machines are in sync you can use the GoldenGate DDL setup to automatically replicate tables with their data.
    A better approach for you is to create the structure on the target machine using export/import. In the export use content=metadata_only to copy the structure only, like:
    expdp <<user>>/<<password>>@connection_string schemas=abc directory=TEST_DIR dumpfile=gg.dmp content=metadata_only logfile=gg.log

  • Best approach to pass string to table

    What is the best approach or way to pass a colon delimited string back to a table from a procedure?
    I know when using a multiselect item you can pass a string back and forth using
    HTMLDB_UTIL.STRING_TO_TABLE or
    HTMLDB_UTIL.TABLE_TO_STRING
    but I'm unsure how to pass a colon-separated string from a procedure to insert into a table.
    Unfortunately using the approach below yields me an error
    PLS-00306: wrong number or types of arguments in call to 'TABLE_TO_STRING'
    create or replace PROCEDURE "123_ABC" (
        g_abc IN number,
        g_123 IN varchar2)
    AS
        r_one varchar2(15);
        r_two varchar2(17);
        t_selected HTMLDB_APPLICATION_GLOBAL.VC_ARR2;
        CURSOR 123_cur IS
            select ID, NAME
            from table
            where abc=g_abc
            and 123=g_123;
    BEGIN
        delete from TABLE_STAGE;
        COMMIT;
        t_selected := HTMLDB_UTIL.TABLE_TO_STRING(r_one);
        FOR a_rec IN 123_cur
        LOOP
            INSERT INTO table_stage
            VALUES (r_one, r_two);
            COMMIT;
        END LOOP;
    END 123_ABC;

    In my opinion, unless you embark on O/R mapping frameworks, your JDBC implementation is probably the most efficient but not necessarily the most elegant approach. I described an alternative approach, which ships serialized Java objects by value across tiers, in my book http://www.amazon.com/gp/product/1555583296/
    (page 803, chapter 12).
    A second option is to publish the SQL statements (Insert, Delete, Update) as Web services (see Database Web Services in the book).
    Another option, also described in the book, is to use the RowSet API (WebRowSet, JoinRowSet).
    Kuassi

  • Best approach for roll over in BPC?

    Hi All,
    We are currently looking for the best approach in SAP BPC for the roll forward of closing balances into opening balances from the previous year to the current period.
    Our current approach takes the closing balance account lines from the previous year and copies them into specific opening-year members (f_init and ob_init) using business transformation rules. Then every month there are business transformation rules which take these values in local and base currency to calculate the FX gain/loss and also copy the closing balance at the historic rate into the opening balance of the current period. This approach takes both input data and journal data into account.
    We also now need to take into account the fact that we need to pull through any journals which were posted to adjustment companies and some (but not all) legal entities for traditional lines which do not have typical opening balance accounts (e.g. cash, stock, accruals etc.). The approach above can work but we need to add the relevant opening balance accounts.
    Please could you advise whether there is a better approach than this?
    Kind Regards,
    Fiona

    I normally prefer saving images in LocalFolder and save file name in database table. I prefer this because saving just file name will keep size of SQLite database small so will load faster.
    Gaurav Khanna | Microsoft .NET MVP | Microsoft Community Contributor

  • What's the best approach for handling about 1300 connections in Oracle.

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users (we can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but we then have to maintain all data in another schema as well and would need to update two schemas in a given session; because we maintain a separate schema for one user and another schema for all data, there may be updating problems)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables and there may be many more records per table than in the previous case.
    Which is the best option?
    Please give your valuable ideas.

    It is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • Best approach for IDOC - JDBC scenario

    Hi,
    In my scenario I am creating a sales order (ORDERS04) in the R/3 system, which needs to be replicated to a SQL Server system. I am sending the order to XI as an IDoc and want to use JDBC for sending the data to SQL Server. I need to insert data into two tables (header & details). Is it possible without BPM? Or what is the best approach for this?
    Thanks,
    Sri.

    Yes, this is possible without BPM.
    Just create the corresponding data type for the insertion.
    If the records to be inserted are different, then there will be 2 different data types (one for the header and one for the detail).
    Do a multimapping, where your source is mapped into the header and detail data types, and then send using the JDBC receiver adapter.
    For the structure of your data type for insertion, just check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    To access any database from XI, you will have to install the corresponding driver on your XI server.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3867a582-0401-0010-6cbf-9644e49f1a10
    Regards,
    Bhavesh

  • Best approach to return large data (> 4K) from a stored proc

    We have a stored proc (Oracle 8i) that:
    1) receives some parameters.
    2) performs computations which create a large block of data
    3) returns this data to the caller,
    compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested"
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples of returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with callers, etc.? All references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon
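    On the CLOB question: a CLOB OUT parameter usually avoids the temp-table detour entirely, and the caller can read it as a stream. A rough JDBC sketch of the caller side, assuming a hypothetical procedure signature get_big_block(p_in IN NUMBER, p_out OUT CLOB) and placeholder connection details (the ASP/MSDAORA and ODBC callers would each need their own testing):

    import java.io.Reader;
    import java.sql.CallableStatement;
    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class ClobCaller {
        public static void main(String[] args) throws Exception {
            // Connection details are placeholders.
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
            // Hypothetical procedure: PROCEDURE get_big_block(p_in IN NUMBER, p_out OUT CLOB)
            CallableStatement cs = conn.prepareCall("{ call get_big_block(?, ?) }");
            cs.setInt(1, 42);
            cs.registerOutParameter(2, Types.CLOB);
            cs.execute();

            Clob clob = cs.getClob(2);
            // Stream the CLOB rather than pulling it into one String if it can be very large.
            Reader reader = clob.getCharacterStream();
            char[] buf = new char[8192];
            StringBuilder out = new StringBuilder();
            int n;
            while ((n = reader.read(buf)) != -1) {
                out.append(buf, 0, n);
            }
            reader.close();
            cs.close();
            conn.close();
            System.out.println("Read " + out.length() + " characters");
        }
    }

    The part to verify is whether each of your drivers handles CLOBs cleanly on 8i; that is exactly where the trivial examples tend to fall short.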


  • Best approach to "migrate" from BEX reports to Webi reports ?

    Hello,
    I have read lots of documents regarding best practices on how to build WebI reports and universes on top of BW.
    But I can't find any document about the best approach, not from a performance point of view but regarding the best way of using the reports.
    I mean: when end users are coming from BEx reports (where they can drill down through hierarchies and use free filters) to WebI reports (where the layout is quite beautiful and the user can change it easily), this is not the same way of consuming reports.
    I come from the BO world and am new to reporting on top of BW.
    For me WebI is good for quite static layout reporting where data is clear and available; of course you can have prompts for interactivity and more accurate reporting. Drill-down is just a functionality but is not the real purpose of the reporting tool.
    So, according to me, there is a gap between the two tools (BEx and WebI) but the end users are the same.
    So I'm wondering if you have any feedback on the best approach to build WebI reports where end users are coming from BEx reporting.
    And how to choose between prompts, drill-down (with available filters on top of the window), fold/unfold and input controls, or just having different levels of hierarchies in the table / sections / breaks but without drill-down (because if you drill down, the report gets weird with different levels)?
    So, if you have any feedback or advice...
    thanks in advance,
    Rgds,

    Hi,
    WebI doesn't replace BEx reports; it is for a different audience. In fact, BEx is for OLAP reports and analysis.
    You can find some answers on this page:
    [FAQ: The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI & Business Objects Roadmap|FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap [original link is broken]|FAQ]
    Specifically, for "What is the future of the BEx Query Designer?" you can read here
    [FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap [original link is broken]#section11] and here [FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap [original link is broken]#section3]
    The idea is to use the right tool for the right job.
      You can find more information here [http://www.sdn.sap.com/irj/sdn/edw], [http://www.sap.com/solutions/sapbusinessobjects/index.epx], [http://www.sap.com/solutions/sapbusinessobjects/newsevents/index.epx], [http://www.sap.com/community/flash/BusinessIntelligenceAGuideforMidsizeCompanies.pdf]
    I hope this helps you.
    Best regards.

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10g (10.2.0.3.0). OWB is installed on Linux.
    We have a number of mappings. We built process flows for the mappings as well.
    I would like to know the best approaches to incorporate re-start options in our process, i.e. for a failure of a mapping in a process flow.
    How do we re-cycle failed rows?
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a re-start process?
    If not, do we need to maintain our own (custom) tables to maintain such data?
    How did other forum members handle the above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow? Several hundred (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails? Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the option Repeat.
    On re-start, will it run m1 again and then m2 and so on, or will it re-start at row 1 of m2? You can specify the restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in set-based mode, so in case of an error all table updates will roll back, but there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in a post-mapping - you must carefully analyze the results of the error).
    What will happen if m3 fails? The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you achieve re-cycling of failed rows down to zero (0). These settings guarantee only two possible results of a mapping - SUCCESS or ERROR.
    What is the impact if we have a large volume of data? In my opinion, for large volumes set-based mode is the preferred data processing mode.
    With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • Best approach... TiledView or regular ViewBean

    I'm wondering what approach would be best with JATO and the display
    I'm trying to build on a JSP page. My page layout will look something
    like this...
    Checked Out Items: (none)
    ToDo Items: 3 InfoDocs
    2 Sun Alerts
    Draft Items: (none)
    My Docs: 345 InfoDocs
    28 Sun Alerts
    18 SRDB's
    Now, this might get confusing so please ask questions if anything I
    state is not clear. The values after the colons (:) (i.e. 3 InfoDocs,
    2 Sun Alerts, etc) are dynamic. These values will be generated off of
    queries on a database. There could be more listings or there could be
    the value (none). As far as setting this up through the JATO
    framework, I'm trying to determine the best way to do this. Right now
    I have a VoyagerHome.jsp that will represent the layout displayed
    above. I also have a VoyagerHomeViewBean.java. At first I was thinking
    of registering a Tiled View in this ViewBean but then I felt it didn't
    make sense and wouldn't work out the way I wanted it to. So now I'm
    thinking of registering the labels (Checked Out Items:, ToDo Items:,
    Draft Items:, and My Docs:) as StaticTextFields within the
    VoyagerHomeViewBean.
    Then I think I would also need to make the values (3 InfoDocs, 2 Sun
    Alerts, etc) children and make them HREF jato types. But I'm not sure
    how I can do this when the numbers are going to be dynamic. Does this
    make sense? One question I have is should I or do I have to register
    each document type (InfoDoc,
    Sun Alert, SRDB, etc) for each label grouping (Checked Out Items:,
    ToDo Items:, etc). So for example, in the end, I will need place
    holders for the following:
    Checked Out Items: xxx InfoDocs
    Checked Out Items: xxx Sun Alerts
    Checked Out Items: xxx SRDBs
    Checked Out Items: xxx Cobalt Assets
    Checked Out Items: xxx iPlanet Assets
    ToDo Items: xxx InfoDocs
    ToDo Items: xxx Sun Alerts
    ToDo Items: xxx SRDBs
    ToDo Items: xxx Cobalt Assets
    ToDo Items: xxx iPlanet Assets
    same for Draft Items and My Docs....
    This may seem confusing so please ask questions. In
    other words, is there a way to re-use the HREF tags and run them
    through a tiled view or something even though these values for the HREF
    tags are going to be run off of different database tables?
    What I'm trying to do is kinda difficult to explain but I hope you get
    some idea of what I'm trying to accomplish.
    Thanks
    - Billy -

    The best approach I can think of is a TiledView with HREFs for InfoDocs, Sun Alerts etc. You can use a StaticTextField for the numbers in the TiledView. For ToDo Items, Draft Items etc. you can just label them in your JSP itself, no need to register them as children.
    Senthil
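    Framework wiring aside, the dynamic part of the page boils down to a list of (section, document type, count) rows built from the per-table count queries, which is the shape a tiled (repeating) view iterates over. A rough sketch of assembling that list with plain JDBC - the doc_counts table, its columns and the section/type names are made up for illustration; the real per-type queries would replace the single statement:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    public class CountRows {
        /** One row for the repeating display: e.g. ("ToDo Items", "InfoDocs", 3). */
        public static class Row {
            public final String section, docType;
            public final int count;
            public Row(String section, String docType, int count) {
                this.section = section; this.docType = docType; this.count = count;
            }
        }

        /** Runs one COUNT(*) per (section, docType) and keeps only the non-zero rows. */
        public static List<Row> loadCounts(Connection conn, String userId) throws Exception {
            String[] sections = {"Checked Out Items", "ToDo Items", "Draft Items", "My Docs"};
            String[] docTypes = {"InfoDocs", "Sun Alerts", "SRDBs", "Cobalt Assets", "iPlanet Assets"};
            List<Row> rows = new ArrayList<Row>();
            for (String section : sections) {
                for (String docType : docTypes) {
                    // doc_counts is a hypothetical table/view; substitute the real per-type queries.
                    PreparedStatement ps = conn.prepareStatement(
                            "SELECT COUNT(*) FROM doc_counts WHERE owner = ? AND section = ? AND doc_type = ?");
                    ps.setString(1, userId);
                    ps.setString(2, section);
                    ps.setString(3, docType);
                    ResultSet rs = ps.executeQuery();
                    rs.next();
                    int count = rs.getInt(1);
                    rs.close();
                    ps.close();
                    if (count > 0) {
                        rows.add(new Row(section, docType, count));
                    }
                }
            }
            return rows;
        }
    }

    Sections that end up with no rows can then be rendered as "(none)" on the JSP side, while the non-zero rows feed the repeated HREF/StaticTextField pairs.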
