Best approach: several infotypes requested to be extracted/modeled in SAP BI
Hello Gurus!
The business wants to pull the data from about 150 HR infotypes (including the 9-series) into SAP BI. The requirement has been blessed and it is a go (the total number might increase beyond 150, but it will not decrease). The main premise behind the requirement is to be able to create ad-hoc reports from BI, relatively quickly, on these tables.
Now, has anyone of you heard of such a requirement? If so, what is 'best practice' here? Do we just blindly keep modeling/creating 150 DSOs in BI, or is there a better way to do this? Can we at least reduce the workload of creating models for all these infotypes somehow (maybe by creating views, etc.)?
Any kind of response is appreciated. Thank you in advance. Points await sensible responses.
Regards,
Pranav.
Personally, I'd say the best approach here would be not to extract the infotypes at all, and instead use Crystal Reports to run ad-hoc queries directly against the infotypes in your R/3/ECC environment. That would eliminate the need to extract and maintain all of that data in BW. Unless you have SAP BusinessObjects Enterprise installed on top of SAP BW, "relatively quick" ad-hoc queries are not what I would call a common theme in BW (BEx Analyzer, which I assume is your reporting UI for this, isn't exactly user-friendly).
If you must bring all of these infotypes into SAP BW, creating views of infotype data may be your best bet. It would definitely reduce the number of data targets in your BW environment, thereby reducing your maintenance time and costs.
Similar Messages
-
Best approach to implement feature "I found a mistake" in SAP EP
Hi All,
We are planning to implement functionality where any end user can log a mistake he or she finds while using the Portal, i.e. "I found a mistake".
This is very critical to the business, since:
1) Every day we post a lot of content
2) There will always be some improvement areas that we do not notice
Regards,
Ganga
Hi,
Thanks for your reply. But I need to know the following details:
1) "Send a message to your local SAP Support" - where do we need to configure this?
2) Can we change the configuration so that an email is sent once the user logs the error?
3) Component - can we take this out, and since we are reporting for a specific iView, can we make it point to the name of the iView?
4) "Attach any relevant files (optional)" - where are attachments stored?
5) What are the database tables involved in this?
Regards,
Ganga
Edited by: Hiremath Gangadharayya on Jul 25, 2008 7:39 PM -
Delimit several infotypes for Leaving Action
Hi,
I would like to know how to 'send' or call a backend program to delimit infotypes during the leaving action of an employee. This means calling several infotypes, sometimes configured in such a way that the call depends on the status of the EE or on whether a payroll record exists.
Thanks!
Hi Binoj,
Thanks, I tried this:
0000 MASSN 06 0 * * TERMINATION - DELIMIT RECORDS*
0000 MASSN 06 10 P T001P-MOLGA='34'
0000 MASSN 06 20 P P0000-MASSN='10'
0000 MASSN 06 30 I LIS9,9021,,,,,(P0000-BEGDA)
0000 MASSN 06 40 I LIS9,9022,,,,,(P0000-BEGDA)
0000 MASSN 06 50 I LIS9,9023,,,,,(P0000-BEGDA)
0000 MASSN 06 60 I LIS9,9024,,,,,(P0000-BEGDA)
The delimit process has been done, but we still have to input the delimit date.
By the way, what is '/D' for?
Looking forward to hearing from you.
best regards,
dhenny muliawaty (pei pei) -
R/3 4.7 to ECC 6.0 Upgrade - Best Approach?
Hi,
We have to upgrade R/3 4.7 to ECC 6.0
We have to do the DB, Unicode, and R/3 upgrades. I want to know which approaches are available and what risks are associated with each approach.
We have been considering the following approaches (but need to understand the risk for each approach).
1) DB and Unicode in 1st and then R/3 upgrade after 2-3 months
I want to understand: if we have about 700 include programs changing as part of the Unicode conversion, how much functional testing is required for this?
2) DB in 1st step and then Unicode and R/3 together after 2-3 months
Does it make sense to combine Unicode and R/3, as both require similar testing? Is it possible to do it in one weekend with minimum downtime? We have about 2 terabytes of data and will be using two systems for export and import during the Unicode conversion.
3) DB and R/3 in 1st step and then Unicode much later
We had a discussion with SAP, and they say there is a disclaimer about not doing Unicode. But I also understand that this disclaimer does not apply if we are on a single code page. Can someone please confirm whether this is correct, and whether doing Unicode later will pose any key challenges apart from certain language characters not being available?
We are on single code page 1100, and the database size is about 2 terabytes.
Thanks in advance
Regards
Rahul
Hi Rahul,
regarding your 'Unicode doubt', some ideas:
1) The Upgrade Master Guide SAP ERP 6.0 and the Master Guide SAP ERP 6.0 include introductory information. Among other things, these guides reference the SAP Service Marketplace location http://service.sap.com/unicode@sap.
2) At Unicode@SAP you can find several substantial FAQs.
Conclusion from the FAQs: first of all, your strategy needs to follow your business model (which we cannot see from here).
Example. The "Upgrade to mySAP ERP 2005"-FAQ includes interesting remarks in section "DO CUSTOMERS NEED TO CONVERT TO A UNICODE-COMPLIANT ENVIRONMENT?"
"...The Unicode conversion depends on the customer situation....
... - If your organization runs a single code page system prior to the upgrade to mySAP ERP 2005, then the use of Unicode is not mandatory. ..... However, using Unicode is recommended if the system is deployed globally to facilitate interfaces and connections.
- If your organization uses Multiple Display Multiple Processing (MDMP) .... the use of Unicode is mandatory for the mySAP ERP 2005 upgrade....."
In the Technical Unicode FAQ you can read under "What are the advantages of Unicode ..." that "Proper usage of Java is only possible with Unicode systems (for example, ESS/MSS or interfaces to Enterprise Portal)".
=> Depending on whether your systems support global processes, and on your use of Java applications, your strategy might need to look different.
3) In particular, in view of your 3rd option, I recommend you take a look at these FAQs, if you have not already done so.
Remark: mySAP ERP 2005 is the former name of the application now called SAP ERP 6.0.
regards, and HTH, Andreas R -
Best approach to return large data (> 4K) from a stored proc
We have a stored proc (Oracle 8i) that:
1) receives some parameters.
2) performs computations which create a large block of data
3) return this data to caller.
It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
I have written this procedure with an OUT parameter that is a REF CURSOR over a record containing a LONG. To make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) in a temp table and then open the cursor as a SELECT from the temp table.
I have tried to open the cursor as a SELECT of the working buffer (from dual), but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
I suspect this is taking too much time. Any tips on the best approach here? Is there a resource with real examples of returning large data?
If I switch to CLOB, will it speed up the process and remain compatible with the callers? All the CLOB references I have seen use trivial examples.
Thanks for any help,
Yoram Ayalon -
What are the best approaches for mapping re-start in OWB?
What are the best approaches for mapping re-start in OWB?
We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
We have number of mappings. We built process flows for mappings as well.
I would like to know the best approaches to incorporate re-start options in our process, i.e. handling the failure of a mapping in a process flow.
How do we re-cycle failed rows?
Are there any builtin features/best approaches in OWB to implement the above?
Does runtime audit tables help us to build re-start process?
If not, do we need to maintain our own tables (custom) to maintain such data?
How did our forum members handled above situations?
Any idea ?
Thanks in advance.
RI
Hi RI,
How many mappings (range) do you have in a process flow?
Several hundred (100-300 mappings).
If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?
Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow monitor: open the diagram with the process flow, select mapping m2, click the Expedite button, and choose the option Repeat.
On re-start, will it run m1 again, then m2 and so on, or will it re-start at row 1 of m2?
You can specify a restart point. As for "at row 1 of m2": all mappings run in set-based mode, so in case of an error all table updates roll back. There are several exceptions (for example, multiple target tables in a mapping without correlated commit, or an error in a post-mapping); in those cases you must carefully analyze the results of the error.
What will happen if m3 fails?
The process is suspended and you can restart execution from m3.
By configuring no failover and max. number of errors = 0, you reduce re-cycling of failed rows to zero. These settings guarantee only two possible results for a mapping: SUCCESS or ERROR.
What is the impact if we have a large volume of data?
In my opinion, for large volumes set-based mode is the preferred processing mode. With this mode you have the full range of Oracle database enterprise features: parallel query, parallel DML, nologging, etc.
Oleg -
Best approach to do Range partitioning on Huge tables.
Hi All,
I am working on 11gR2 oracle 3node RAC database. below are the db details.
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
In my environment we have 10 big transaction tables (10 billion rows), and they are growing bigger and bigger. Now management is planning to do a range partition based on the created_dt partition key column.
We tested this partitioning strategy with a few million records in another environment, using the steps below.
1. CREATE TABLE TRANSACTION_N
PARTITION BY RANGE ("CREATED_DT")
( PARTITION DATA1 VALUES LESS THAN (TO_DATE('2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART1,
PARTITION DATA2 VALUES LESS THAN (TO_DATE('2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART2,
PARTITION DATA3 VALUES LESS THAN (TO_DATE('2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART3 )
AS (SELECT * FROM TRANSACTION WHERE 1=2);
2. Exchange partitions to move the data into the new partitioned table from the old one.
ALTER TABLE TRANSACTION_N
EXCHANGE PARTITION DATA1
WITH TABLE TRANSACTION
WITHOUT VALIDATION;
3. create required indexes (took almost 3.5 hrs with parallel 16).
4. Rename the table names and drop the old tables.
This took around 8 hrs for one table with 70 million records, so for billions of records it will take far longer. But the problem is that we get only 2 to 3 hrs of downtime in production to implement these changes for all tables.
Can you please suggest the best approach for copying that much data from the existing tables to the newly created partitioned tables and creating the required indexes?
Thanks,
Hari
Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause, you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
See Exchanging Partitions in the VLDB and Partitioning doc
http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
>
When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
>
Comments below are limited to working with ONE table only.
ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious, since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the crux of partitioning is splitting the existing data into multiple segments, almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
ISSUE #2 - You likely cannot move that much data in the 2-to-3-hour window that you have available for downtime, even if all you had to do was copy the existing datafiles.
ISSUE #3 - Even if you can avoid issue #2, you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
ISSUE #4 - Unless you have conducted full-volume performance testing in another environment prior to doing this in production, you are taking on a tremendous amount of risk.
ISSUE #5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system, you will have great difficulty overcoming issue #4, since you won't have the requisite plan baseline to know whether the new partitioning and indexing strategies give you equivalent, or better, performance.
ISSUE #6 - Things can, and will, go wrong and cause delays no matter which approach you take.
So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
1. use DBMS_REDEFINITION to do the partitioning on-line. See the Oracle docs and this example from oracle-base for more info.
Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
Partitioning an Existing Table using DBMS_REDEFINITION
http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
You should review all of the tables to see if you can remove older data from the current system. If you can, you could use online redefinition that ignores the older data; afterwards you can extract the old data from the old table for archiving.
If the amount of old data is substantial, you can extract the new data into a new partitioned table in parallel and not deal with the old data at all. -
Best approach for handling a Repair business
Hi All,
A company provides Repair Services for various branded Products. The manufacturers send their Products for repair along with replacement Parts. The repaired Products are despatched back to the Original Manufacturer with details of Repair (nature of repair, Parts consumed etc.,). The Products are serialized and hence repair details has to be provided for each Unit of the Product.
After repair, labour and/or material charges (billing) are claimed from the manufacturer (the company may also use some of its own parts for the repair). The actual cost incurred for the repair is also to be determined.
The following metrics are to be tracked:
1) Technician performance, in terms of number of units repaired per day
2) % yield per technician (e.g. out of 100 units, the recovery is 80)
3) Cost of repair: actual cost incurred for repair / number of units repaired
What would be the best approach to handle the above scenario in SAP R/3 v 4.7 ?
Request feedback.
Thanks.
Raj
Hi Raj,
the steps for the repairs process are as follows:
1. Repair order processing
2. Technical check and repairs processing
3. Outbound delivery and billing
Repair order process
1. Create a sales order with order type RAS (repair/service), include the service product and the serviceable material, and save the order.
2. Open the sales order in change mode, go to the menu Sales Document > Deliver, and change the delivery type to LR, i.e. return delivery.
3. Assign serial numbers for the serviceable materials: the path is Extras > Serial Numbers; assign and save.
4. Go to VL02N and post the goods receipt; that means we are taking the goods into our stores. Then check IQ03, through which we can display the serial number master record and verify.
Technical check and reparis processing
1. Open the sales order in change mode; here we can recognize how the returns were posted. Go to the item view > Repair tab; in the Received field you can see the received items.
2. Below that is the business decision: start repair.
You will get the repair procedure; these things are set in customizing. The path is Sales and Distribution > Sales > Sales Documents > Customer Services > Returns and Repairs Procedure > Define Repair Procedure.
We have to assign the repair procedure to the item category, so that whenever you enter the material, the item category brings in our own repair procedure.
Then, in the business decision, we can select how many units to repair, how many to send outbound, and how many to scrap; for scrap items you have to issue a credit memo. After this, save the sales order.
Open the sales order in change mode, go to Edit > Display Criteria > All Items, and double-click the item so that you can see the service order (which is generated automatically and which we assign in the requirement class).
Then go to the service order, confirm the operations, and TECO the order.
The last step is outbound delivery and billing: we have to send the repaired device back.
Go to VA02; as soon as we TECO the order, an outbound delivery is created in the Repair tab. Perform the outbound delivery, i.e. Sales Document > Deliver (you need to remove the default delivery type), and save.
Create the transfer order; the goods issue is posted.
Then go to DP90 and process the billing request individually: give your sales document number and press Save Billing Request.
Then go to Sales Document > Billing; the invoice will be generated. Save it.
Settle the service order to the sales order through KO88, and settle the repair order through VA88.
If you still have any doubts, mail me, and reward me with points if this is useful to you.
Regards
Satish -
Urgent question on styling forms and the best approach to building them in BC
Hi, I need some guidance on forms and on adding styling to them.
This page I created in Reflow:
http://www.beadmanso…LS_LP/assets/spstudioslf1.html
Here is the same page where I have added a BC form:
http://www.beadmansolutions.co.uk/SPStudios_2LS_LP/assets/spstudioslf.html
I need some advice on whether this can be styled and on the best approach to go about it.
Many thanks, Chris
hughanagle wrote:
But I'm intrigued... Given that you're completely au fait with PHP/MySQL solutions, what prompted you to integrate a Wordpress blog on your own site when you could have built one for yourself quickly and easily?
Several reasons:
I originally installed it in 2006, because I wanted to find out what all the fuss about WordPress was about.
I wanted to learn how to integrate a WordPress theme with the CSS for the rest of my site, so they would have an integrated look.
I don't blog very often (only six times so far this year).
The commenting and moderation system is very well organized. I didn't see much point in reinventing the wheel.
It's only a small section of my site. Most other pages aren't database-driven, but they do use a lot of PHP includes and conditional logic.
Why would any of us mortals bother building PHP/MySQL blogs if guys as adept as you are using WordPress?
If you want a blog, WordPress presents you with a ready-made solution that's very easy to set up. What's not so easy is modifying and styling it. That's where a good knowledge of PHP and CSS is essential. Also, I'd say that WordPress is not suitable for a lot of sites. That's why I don't use it for the other parts of my site. -
What is the best approach to return Large data from Stored Procedure ?
no answers to my original post, maybe better luck this time, thanks!
We have a stored proc (Oracle 8i) that:
1) receives some parameters.
2) performs computations which create a large block of data
3) return this data to caller.
It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
I have written this procedure with an OUT parameter that is a REF CURSOR over a record containing a LONG. To make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) in a temp table and then open the cursor as a SELECT from the temp table.
I have tried to open the cursor as a SELECT of the working buffer (from dual), but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
I suspect this is taking too much time. Any tips on the best approach here? Is there a resource with real examples of returning large data?
If I switch to CLOB, will it speed up the process and remain compatible with the callers? All the CLOB references I have seen use trivial examples.
Thanks for any help,
Yoram Ayalon -
Best approach for building dialogs based on Java Beans
I have a large number of Java beans, each with several properties. These represent all the "data" in our system. We will now build a new GUI for the system, and I intend to reuse the beans as far as possible. My idea is to automatically generate the configuration dialogs for each bean using the java.beans package.
What is the best approach for achieving this? Should I use PropertyEditors, should I build my own dialog generator using the Introspector class, or are there other suitable solutions?
All suggestions and tips are very welcome.
Thanks!
Erik
Definitely, it is better for you to use JTable. Why not try it?
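As a starting point for the Introspector route mentioned in the question, here is a minimal sketch of how a dialog generator could discover a bean's editable properties. The Customer bean is invented for illustration; it stands in for one of the system's real data beans.

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class BeanDialogSketch {
    // Hypothetical bean standing in for one of the system's data beans.
    public static class Customer {
        private String name;
        private int age;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }

    public static void main(String[] args) throws Exception {
        // Introspect the bean; a dialog generator would create one input
        // widget per writable property. Passing Object.class as the stop
        // class excludes inherited Object properties.
        BeanInfo info = Introspector.getBeanInfo(Customer.class, Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getWriteMethod() != null) {
                System.out.println(pd.getName() + " : "
                        + pd.getPropertyType().getSimpleName());
            }
        }
    }
}
```

Each PropertyDescriptor also exposes the read and write methods, so a generated dialog can populate its fields from the bean and write edits back; registering a PropertyEditor per property type would be the next refinement.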
-
Best approach to dealing with someone else's spaghetti code with no documentation
Hello LabVIEW gurus,
I have just been given a few software tools to extend and rewrite. Each is a big spaghetti mess, and each tool has 100+ VIs, all spaghetti. These tools control a very complex machine, talking via serial, parallel, Ethernet, RS-485, etc., and there is barely any documentation of the logic or the implementation of the source code / what the subVIs do.
What would be the best approach to understanding this mess and recreating it in a structured way faster? It has a lot of old sequence structures and just plain bad programming style.
any help is highly appreciated
Thanks all
And do not forget about using the VI Analyzer Toolkit! It can reveal several obvious places to clarify code that "stinks". A lot of skull sweat went into that framework, and it has significant value!
Norbert_B wrote:
If your task is only to ADD things, you might be interested in Steve's recommendation here.
Norbert
(Inside joke ahead)
Ah, That explains the TDMS File Viewer!
Spoiler (Highlight to read)
You really should run that through the VIA... :smileymad
Spoiler (Highlight to read)
It can be done fairly quickly.
Spoiler (Highlight to read)
How do you unspoiler? Ah well, I'll hope a moderator can leave only the first comment "spoiled".
Spoiler (Highlight to read)
Note the quote from the link: "The code we inherited might have been 'richly obfuscated.'" "Richly obfuscated" was a code-review term used for code written by your boss... The VIA would call it something else.
Jeff -
Best approach to transfer and transform data between two Oracle Instances
Let me first preface this post with the fact that I am an Oracle newb.
The current problem I am trying to solve is how to quickly transfer data from a customer's legacy database into a new, normalized destination that is modeled differently from the legacy data. So the data not only has to be transferred but also transformed (into normalized tables).
We have discovered, the hard way, that reading the data with a C++ application and performing inserts (even with indexing and constraints disabled) is just way too slow for what we need. We are dealing with around 20 million records. I need to determine the best approach for extracting this data from the source and inserting it into the destination. Any comments or tips are greatly appreciated.
Note: I have read about SQL*Loader and mentioned it to management, but they seem resistant to this approach. It's not totally out of the question, though.
Oracle has a lot of technologies that fall under the general heading of "replication" to choose from. Going from a database to a flat file and back into another database is generally not the most efficient solution; it's usually much easier to go straight from one database to another.
Oracle provides materialized views, Streams, and Change Data Capture (CDC)-- any of these would be able to give your new system incremental feeds of data from the old system's data and could be used by whatever routines you write to transform the data between data models.
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Best approach to implement a "rate table"
My application needs to initialize several "rate tables" (think tax-rate tables, etc.) from property and/or XML files and then perform LOTS of lookups into these tables.
None of the collections seems to provide quite the right functionality. A Map is close, but I also need to be able to look up by key (e.g. income) and return the rate for the correct "bracket". That is, I want to return the rate associated with the maximum key less than or equal to my lookup value.
I'm thinking the best approach is to wrap or extend a Map and provide an additional lookup method.
I should also point out that these tables tend to be relatively small (from 5 to 20 or 30 "rows"), which may mean I'd be better off with a brute-force array implementation.
Any suggestions and/or comments as to how best to proceed would be greatly appreciated.
R.Parr
Temporal Arts
You might try wrapping a SortedMap. That would make it easy to return <= or >= results.
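To make the SortedMap suggestion concrete: TreeMap's floorEntry method gives exactly the "greatest key less than or equal to the lookup value" semantics the question asks for. The bracket boundaries and rates below are invented for illustration.

```java
import java.util.TreeMap;

public class RateTableSketch {
    public static void main(String[] args) {
        // Hypothetical tax brackets: key = lower bound of the bracket,
        // value = rate for that bracket.
        TreeMap<Integer, Double> rates = new TreeMap<>();
        rates.put(0, 0.10);
        rates.put(10000, 0.15);
        rates.put(40000, 0.25);

        // floorEntry returns the entry with the greatest key <= the lookup
        // value, i.e. the bracket the income falls into.
        System.out.println(rates.floorEntry(25000).getValue()); // 0.15
        System.out.println(rates.floorEntry(40000).getValue()); // 0.25
    }
}
```

For the 5-to-30-row tables described, a sorted array with a binary (or even linear) scan would perform comparably; the TreeMap version mainly wins on readability.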
-
RFC Lookup - Best Approach To Parse Returned Tables
Hi Everyone,
We are doing some RFC lookups at a header node that return tables for all of the items (for performance reasons). I am trying to figure out the best way to extract the values from a table which, most of the time, has more than one key column. At the moment I am doing this through DOM, but I have also heard about using arrays, and I have even seen an example using a hashtable with all of the values concatenated together, to be parsed out later using substrings. I'm looking for the best approach to:
1) Store this data as some kind of global object to look up during header processing
2) Search and parse the global object during line-item processing
As an example, I have the following lines in my table:
Key1,Key2,Value1,Value2,Value3
A,A,1,2,3
A,B,1,2,4
A,C,3,4,2
B,A,2,4,6
And during line item processing I may want to find the value for Key1=A, Key2=C.
Thanks
Peter
Hi Peter,
Please take a look at these...
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/xi-code-samples/xi%20mapping%20lookups%20rfc%20api.pdf
/people/siva.maranani/blog/2005/08/23/lookup146s-in-xi-made-simpler
/people/sravya.talanki2/blog/2005/12/21/use-this-crazy-piece-for-any-rfc-mapping-lookups
/people/alessandro.guarneri/blog/2006/03/27/sap-xi-lookup-api-the-killer
cheers,
Prashanth
P.S. Please mark helpful answers.
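As a sketch of the concatenated-key hashtable idea mentioned in the question (the separator choice and column layout are assumptions, not part of any RFC API): index the returned table once during header processing, and line-item processing then becomes a constant-time map lookup instead of a DOM scan.

```java
import java.util.HashMap;
import java.util.Map;

public class RfcLookupSketch {
    public static void main(String[] args) {
        // Rows from the sample table in the post: Key1,Key2,Value1,Value2,Value3
        String[] rows = { "A,A,1,2,3", "A,B,1,2,4", "A,C,3,4,2", "B,A,2,4,6" };

        // Build the index once, keyed on the concatenated key columns;
        // '\u0001' is an arbitrary separator assumed not to occur in the data.
        Map<String, String[]> index = new HashMap<>();
        for (String row : rows) {
            String[] cols = row.split(",");
            index.put(cols[0] + "\u0001" + cols[1],
                      new String[] { cols[2], cols[3], cols[4] });
        }

        // Line-item processing: O(1) lookup for Key1=A, Key2=C.
        String[] values = index.get("A" + "\u0001" + "C");
        System.out.println(values[0]); // Value1 -> "3"
    }
}
```

In an XI mapping, this index would live in a global container object populated during header processing and read during line-item processing.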