Best practice for extracting data to feed external DW
We are having a healthy debate with our EDW team about extracting data from SAP. They want to go directly against ECC tables using Informatica and my SAP team is saying this is not a best practice and could potentially be a performance drain. We are recommending going against BW at the ODS level. Does anyone have any recommendations or thoughts on this?
Hi,
Since you asked for best practice, here it is for the SAP landscape.
1. Full or delta load of data from SAP ECC to SAP BI (BW): SAP BI understands the data element structures of SAP ECC, and the delta mechanism provides a continuous process of data loads from SAP ECC (the transactional system) to BI (the analytic system).
2. You can store transaction data in DSOs (at a granular level) and in InfoCubes (at a summarized level) within SAP BI. Master data from SAP ECC comes into SAP BI separately.
3. Within SAP BI, you SHOULD use the Open Hub service to provide SAP BI data to external systems. Do not connect an external extractor directly to DSOs or InfoCubes to fetch data into a target system; the Open Hub service is the tool that facilitates data feeds to external systems. Informatica can then take data from the Open Hub destinations of SAP BI.
Hope I have explained this to your satisfaction.
Thanks,
S
Similar Messages
-
Best practice for integrating oracle atg with external web service
Hi All
What is the best practice for integrating Oracle ATG with an external web service? Is it using the integration repository, or calling the web service directly from a Java class using a WS client?
With Thanks & Regards
Abhishek
Using the Integration Repository might cause performance overhead depending on the operation you are doing. I have never used the Integration Repository for 3rd-party integration, so I am not able to comment on it.
Calling the web service directly from a Java client is an easy approach, and you can use the ATG component framework to support it by making the endpoint, security credentials, etc. configurable properties.
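A minimal sketch of that idea, with hypothetical names throughout (LoginServiceClient, endpointUrl and friends are illustrations, not ATG APIs): the endpoint and credentials are plain bean properties, so an ATG Nucleus .properties file could inject different values per environment, and the call itself is reduced to a bare HTTP POST.

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical ATG-style component: endpoint and credentials are bean
// properties so they can be set from a .properties file per environment.
public class LoginServiceClient {
    private String endpointUrl;
    private String username;
    private String password;

    public void setEndpointUrl(String endpointUrl) { this.endpointUrl = endpointUrl; }
    public void setUsername(String username) { this.username = username; }
    public void setPassword(String password) { this.password = password; }

    // Posts a request body to the configured endpoint and returns the HTTP status.
    public int callService(String requestBody) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpointUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("Authorization", "Basic " + java.util.Base64.getEncoder()
            .encodeToString((username + ":" + password).getBytes(StandardCharsets.UTF_8)));
        try (OutputStream out = conn.getOutputStream()) {
            out.write(requestBody.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }
}

Keeping the endpoint and credentials in configuration rather than code is what makes this approach maintainable across dev/test/prod.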
Cheers
R
-
Best practice for heirachical data
First off, I have to say that JMX in Java 6 is terrific stuff. Bundling jconsole in with Java has made JMX adoption so much easier for us.
Now, to my question. We have read-only hierarchical data (think a DOM tree) that we would like to publish via JMX. What is the best practice? We see two possibilities:
1. Publish each node of the tree with its own object name and type. This will allow jconsole to display the information in the tree control.
2. Publish just the root of the tree with an object name and type and then use CompositeType to describe the nodes of the tree. This means you look at the tree in the "Attribute Value" panel of jconsole.
Are there any best practices for such data? We have implemented #2 and it works, but we are wondering if long term this might lead to unforeseen consequences.
Thanks in advance.
--Marty
I did go with #1 and it worked out great. Every node in our tree has its own ObjectName. Works very well for us.
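For illustration, a minimal self-contained sketch of approach #1 in plain JMX (TreePublisher, TreeNodeMBean and the com.example.tree domain are hypothetical names): each node gets its own ObjectName, with a key property carrying the node's path so related nodes group together in jconsole.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class TreePublisher {

    // Standard MBean contract: the interface name must be the
    // implementation class name plus "MBean".
    public interface TreeNodeMBean {
        String getValue();
    }

    public static class TreeNode implements TreeNodeMBean {
        private final String value;
        public TreeNode(String value) { this.value = value; }
        public String getValue() { return value; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // One MBean per node; the "name" key encodes the node's path.
        String[] paths = { "root", "root.child1", "root.child2" };
        for (String path : paths) {
            ObjectName name = new ObjectName("com.example.tree:type=Node,name=" + path);
            server.registerMBean(new TreeNode("value of " + path), name);
        }
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so jconsole can attach
    }
}

Run it, attach jconsole, and the nodes appear under the com.example.tree domain, one MBean per node.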
--Marty -
Best practice for sharing data with modal window
Hi team,
what would be the best practice for sharing data with a modal
window? I use a modal window to display record details from a
record list, but I am not quite sure how to access the data from
the components in the main application in the modal window.
Any hints would be welcome
Best
Frank
Pass a reference to the parent into the modal popup. Then you
can reference anything in the parent scope.
I haven't done this in 2.0 yet so I can't give you code. I'll
post if I do.
Oh, also, you can reference the parent using parentDocument.
So in the popup you could do:
parentDocument.myPublicVariable = "whatever";
Tracy -
Where to find best practices for tuning data warehouse ETL queries?
Hi Everybody,
Where can I find some good educational material on tuning ETL procedures for a data warehouse environment? Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems. (For example, most of our ETL
queries don't use a WHERE clause, so the vast majority of searches are table scans and index scans, whereas most index tuning sites are striving for index seeks.)
I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds and use a cached plan SOMETIMES);
partition tables that are larger than 50 GB;
use minimal logging to load data precisely where you want it as fast as possible;
often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first; see the sketch after this list);
rebuild statistics after every load of a table.
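To make the disable/load/rebuild item concrete, here is a minimal JDBC sketch; the connection string, table (dbo.FactSales) and index (IX_FactSales_Date) are hypothetical stand-ins, and the load itself is reduced to a single INSERT...SELECT.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BulkLoadWithIndexRebuild {
    public static void main(String[] args) throws Exception {
        // Hypothetical SQL Server connection; adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=DW;user=etl;password=secret");
             Statement stmt = conn.createStatement()) {

            // 1. Disable the non-clustered index so the insert doesn't
            //    have to maintain it row by row.
            stmt.execute("ALTER INDEX IX_FactSales_Date ON dbo.FactSales DISABLE");

            // 2. The large load (stand-in for the real ETL step).
            stmt.execute("INSERT INTO dbo.FactSales SELECT * FROM staging.FactSales");

            // 3. Rebuild once, in bulk; the rebuild also refreshes that
            //    index's statistics with a full scan.
            stmt.execute("ALTER INDEX IX_FactSales_Date ON dbo.FactSales REBUILD");
        }
    }
}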
But I still feel like I'm missing some very crucial concepts for performant ETL development.
BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices. Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute
SQL" tasks.
Thanks, and any best practices you could include here would be greatly appreciated.
-Eric
Online ETL solutions are really among the most challenging, and to do them efficiently you can read my blogs on online DWH solutions, which explain how to configure an online DWH solution for ETL using the MERGE command of SQL Server 2008, plus some important concepts for any DWH solution such as indexing, de-normalization, etc.
http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
Kindly let me know if any further help is needed
Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities -
Best Practice for Extracting a Single Value from Oracle Table
I'm using Oracle Database 11g Release 11.2.0.3.0.
I'd like to know the best practice for doing something like this in a PL/SQL block:
DECLARE
v_student_id student.student_id%TYPE;
BEGIN
SELECT student_id
INTO v_student_id
FROM student
WHERE last_name = 'Smith'
AND ROWNUM = 1;
END;
Of course, the problem here is that when there is no hit, the NO_DATA_FOUND exception is raised, which halts execution. So what if I want to continue in spite of the exception?
Yes, I could create a nested block with EXCEPTION section, etc., but that seems clunky for what seems to be a very simple task.
I've also seen this handled like this:
DECLARE
v_student_id student.student_id%TYPE;
CURSOR c_student_id IS
SELECT student_id
FROM student
WHERE last_name = 'Smith'
AND ROWNUM = 1;
BEGIN
OPEN c_student_id;
FETCH c_student_id INTO v_student_id;
IF c_student_id%NOTFOUND THEN
DBMS_OUTPUT.PUT_LINE('not found');
ELSE
DBMS_OUTPUT.PUT_LINE('found: ' || v_student_id); -- do stuff with the row
END IF;
CLOSE c_student_id;
END;
But this still seems like killing an ant with a sledge hammer.
What's the best way?
Thanks for any help you can give.
Wayne
Do not design in order to avoid exceptions. Do not code in order to avoid exceptions.
Exceptions are good. Damn good. They allow you to catch an unexpected process branch, where execution did not go as planned and coded.
Trying to avoid exceptions is just plain bloody stupid.
As for your specific problem: when the SQL fails to find a row and a value to return, what then? This is unexpected - if you did not want a value, you would not have coded the SQL to find a value. So the SQL not finding a value is an exception to what you intend with your code. And you need to decide what to do with that exception.
How to implement it. The #1 rule in software engineering - modularisation.
E.g.
create or replace function FindSomething( name varchar2 ) return foo.col1%type is
id foo.col1%type;
begin
select col1 into id from foo where col2 = upper(name);
return( id );
exception when NO_DATA_FOUND then -- the predefined exception is NO_DATA_FOUND, not NOT_FOUND
return( null );
end;
And that is your problem. Modularisation. You are not considering it.
And not the only problem mind you. Seems like your keyboard has a stuck capslock key. Writing code in all uppercase is just as bloody silly as trying to avoid exceptions. -
Best practices for initial data loads to MDM
Hi,
We need to load more than 300,000 vendors from SAP into the MDM production repository. The import server might take days to load that much if no errors occur.
Are there any best practices for initial loads to MDM available? What considerations must be made while doing the initial loads.
Harsha
Hello Harsha,
With SP05 patch 1 there is a file aggregation functionality in the import port. It is supposed to optimize the import performance.
BTW, give me your mail address and I will send you an IDoc packaging paper for MDM.
Regards,
Goekhan -
Best practice to extract data from Hyperion Enterprise 5.5
We are looking into extracting high-level data from our Hyperion Enterprise 5.5 and are in the process of researching the best practices to do that. I am reading the docs for the APIs that I can call from VB6. I am also interested in whether there are Java APIs available out there. Thanks in advance and Happy Holidays to everyone! Angelito [email protected]
The easiest is using HAL (Hyperion Application Link). I have used HAL to extract data, organizations, account, subs, entities, etc.
-
Best practice for migrating data tables- please comment.
I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
They also require extensive documentation where every step is recorded in a document and use that for the deployment.
I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
Please comment on your view of this practice. Thanks!
>
Please comment on your view of this practice. Thanks!
>
Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
>
I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
>
The process you describe is what I would expect, and require, in any well-run environment.
>
I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
>
Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all cost. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as data in a warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
>
They also require extensive documentation where every step is recorded in a document and use that for the deployment.
I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
>
Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
But despite what you say, it simply cannot be that easy, for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding. -
Obiee 11g : Best practice for filtering data allowed to user
Hi gurus,
I have a table of the allowed areas for each user.
I want to show only the data facts associated with these allowed areas.
For instance my user scott can see France and Italy data.
I created a session variable and put it in a filter.
It works OK, but only one value (the first one, I think) is taken into account (for instance, with my solution Scott will see only France data).
I need all the possible values.
I tried the row-wise initialization option of the session variable, but it doesn't work (OBIEE error).
I've read things on the internet about using STRAGG or VALUELISTOF, but neither worked.
What would be the best practice to achieve this goal of filtering data with per-user conditions stored in the database?
Thanks in advance, Emmanuel
Check this link:
http://oraclebizint.wordpress.com/2008/06/30/oracle-bi-ee-1013332-row-level-security-and-row-wise-intialized-session-variables/ -
Best Practice for Master Data Reporting
Dear SAP-Experts,
We face a challenge at the moment and we are still trying to find the right approach to it:
Business requirement is to analyze SAP Material-related Master Data with the BEx Analyzer (Master Data Reporting)
Questions they want to answer here are for example:
- How many active Materials/SKUs do we have?
- Which country/Sales Org has adopted certain Materials?
- How many Series do we have?
- How many SKUs belong to a specific season
- How many SKUs are in a certain product lifecycle
- etc.
The challenge is, that the Master Data is stored in tables with different keys in the R/3.
The keys in these tables are on various levels (a selection below):
- Material
- Material / Sales Org / Distribution Channel
- Material / Grid Value
- Material / Grid Value / Sales Org / Distribution Channel
- Material / Grid Value / Sales Org / Distribution Channel / Season
- Material / Plant
- Material / Plant / Category
- Material / Sales Org / Category
etc.
So even though the information is available at different levels of detail, the business requirement is to have one query/report that combines all the information. We are currently struggling a bit to decide what the best approach for this requirement would be. Has anyone faced such a requirement before - and what would be the best practice? We already tried to find information online, but it seems master data reporting is not very well documented. Thanks a lot for your valuable contribution to this discussion.
Best regards
Lukas -
Best practice for saving data in SQL server
Hi all
Hoping for a little help on this question.
If I have a list of fields, e.g. (name, address, postal, phone etc.), then I create a webform/task
to gather some of these fields (name, postal), then I make another webform/task to gather some other fields (address, phone).
What is best practice in SQL Server for storing the returned values?
Is it:
1. Make a table with all the fields in the list + task id. These fields could be in the
correct format (number, date etc.), and all answers to all tasks are inserted into this table.
2. Make a value table for each field with the correct type + task id. So all name values
are stored in the "name value table" with the task id.
How would I select values for a certain task from this kind of setup?
3. ??
Best regards
Bo
Hi Atul,
Thanks for your reply. Can you elaborate a bit further on this, since I am still a little confused?
Let me try to explain my scenario a bit more:
Say instead that there are 50 fields, each with its own unique ID; maybe an answer table
would look like this:
taskid | field_1 | field_2 | field_3 | field_4 | field_n
So no matter which fields the user fills out, they can be stored in one table.
Question is, is this a good way to do it? And how do I select from this table using a join?
As far as I know you can't name columns in a table with just numbers, which would have been
great, as the column names could then be the field IDs.
OR
Would you have 50 tables, each with a field_id and a value (of the correct type)?
And could you give me an example of how to bind and select from this kind of structure?
Also, inserting into 50 tables on a save... is that the right way to go? :)
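For concreteness, a rough JDBC sketch of what selecting one task's answers from the single-table design might look like - all names here (Answers, Tasks, field_1, field_2) are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SelectTaskAnswers {
    public static void main(String[] args) throws Exception {
        // Hypothetical schema: Answers(taskid, field_1..field_n) joined to Tasks(taskid, name).
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=Forms;user=app;password=secret");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT t.name, a.field_1, a.field_2 "
               + "FROM Answers a JOIN Tasks t ON t.taskid = a.taskid "
               + "WHERE a.taskid = ?")) {
            ps.setInt(1, 42); // the task whose answers we want
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + ": "
                        + rs.getString("field_1") + ", " + rs.getString("field_2"));
                }
            }
        }
    }
}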
Best regards
Bo -
Best Practices for Remote Data Communication?
Hello all
I am developing a full-fledged website in Flex 3.4 and Zend Framework, PHP. I am using the Zend_AMF class in Zend Framework for communicating data with the remote server.
I will be communicating with the database in the following ways...
get data from server
send form data to server
send requests to server to get data in response
Right now I have created just a simple login form which sends two fields, username and password, to a method of the service class on the remote server.
Here is a little peek into how I did that...
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
<mx:RemoteObject id="loginService" fault="faultHandler(event)" source="LoginService" destination="dest">
<mx:method name="doLogin" result="resultHandler(event)" />
</mx:RemoteObject>
<mx:Script>
<![CDATA[
import mx.rpc.events.ResultEvent;
import mx.rpc.events.FaultEvent;
import mx.controls.Alert;
private function resultHandler(event:ResultEvent):void
{
    Alert.show("Welcome " + txtUsername.text + "!!!");
}
// The RemoteObject above wires fault="faultHandler(event)", so define it too:
private function faultHandler(event:FaultEvent):void
{
    Alert.show(event.fault.faultString, "Login failed");
}
]]>
</mx:Script>
<!-- Login Panel -->
<mx:VBox>
<mx:Box>
<mx:Label text="LOGIN"/>
</mx:Box>
<mx:Form>
<mx:FormItem>
<mx:Label text="Username"/>
<mx:TextInput id="txtUsername"/>
</mx:FormItem>
<mx:FormItem>
<mx:Label text="Password"/>
<mx:TextInput id="txtPassword" displayAsPassword="true" width="100%"/>
</mx:FormItem>
<mx:FormItem>
<mx:Button label="Login" id="loginButton" click="loginService.doLogin(txtUsername.text, txtPassword.text)"/>
</mx:FormItem>
</mx:Form>
</mx:VBox>
</mx:Application>
This works fine. But if I create a complicated form which has many fields, then it would be almost unbearable to send each field as an argument of a function.
Another method is using HTTPService, which supports XML-style requests and responses.
I want to ask: what are the best practices in Flex for remote data communication on a large scale? Maybe using some classes or objects which store data? Can somebody guide me on how to approach data storage?
Thanks and Regards
Vikram
Oh yes, I have studied Cairngorm, though I haven't really applied it. I understand that it helps in separating the data models, presentation and business logic into various layers.
Although what I am looking for is something about data models, maybe?
Thanks and Regards
Vikram -
Best Practices for Loading Data in 0SD_C03
Hi gurus, I want to know the best practice for getting information about sales, billing and delivery. I know there are these DataSources:
Sales Order Item Data - 2LIS_11_VAITM
Billing Document Data: Items - 2LIS_13_VDITM
Billing Document Header Data - 2LIS_13_VDHDR
Sales-Shipping: Allocation Item Data - 2LIS_11_V_ITM
Delivery Header Data - 2LIS_12_VCHDR
Delivery Item Data - 2LIS_12_VCITM
Sales Order Header Data - 2LIS_11_VAHDR
Do I have to load all these DataSources into InfoCube 0SD_C03, or do I have to create copies of 0SD_C03 to match each DataSource?
Hi.
If you just want to report the amounts or quantities of the sales process, I suggest you create 3 cubes and then use a MultiProvider to integrate those 3 cubes. For example:
2LIS_11_VAITM -> ZSD_C01
2LIS_12_VCITM -> ZSD_C02
2LIS_13_VDITM -> ZSD_C03
In this scenario, you can enhance 2LIS_12_VCITM and 2LIS_13_VDITM with sales order data, such as requested delivery date etc., and then create a MultiProvider such as ZSD_M01.
Best Regards
Martin Xie -
Best practices for accessing data in subviews
I've got two iPhone projects which share most of their code base. I'm trying to figure out the best way to load data from some plist files and store them in a common container UIView and provide access to the data for subviews and subviews of the subviews, etc. Right now I've got the data being passed from the container view to the subviews it creates and then the subview themselves pass it further, basically a bucket brigade to get the data to where it needs to go which could be 3 or 4 views down in the hierarchy.
Is there a better approach? I've looked at delegates & protocols but I'm having a hard time understanding how they work and whether they are appropriate in this situation. Originally I had the app delegate holding the data and any class anywhere could invoke the app delegate and get the data. However this approach fails with 2 projects because the app delegates have different names and the classes that need to access it are common to both projects. Can the app delegate be renamed without significant impact? Or is there a way a UIView can be set up as a delegate in much the same way?
Thanks for any advice!
Greg
Hi Greg - You can rename the app delegate class to whatever you want, just remember to change it in IB, and make sure you catch any place it already appears in your code. I guess there's no need to change the app delegate class file names unless you want to.
However there are lots of other solutions to your problem. A case could be made for declaring a global pointer to this data, for example. Or, you could encapsulate the data wherever you want and make an extern (globally visible) C function to access it.
Another solution would be to put the data in a shared object which would be accessed just like the shared app object, e.g.
#include "MyObject.h"
NSDictionary *myPlistData = [MyObject sharedObject].plistData;
I just got done looking at "how to make a shared object" in the Cocoa docs and can't seem to find it atm. Anyway it's in there somewhere, either in an Obj-C doc or one of the top level guides. The job just wants a class method that returns the pointer stored in a static C var; if the var is nil, the object is first created and its addy is stored in the var.
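The pattern Ray describes is plain lazy initialization: a class method returns the instance held in a static variable, creating it on first use. A minimal sketch of that shape, written in Java for brevity (SharedObject and its plist stand-in are hypothetical, not a Cocoa API):

public class SharedObject {
    private static SharedObject instance; // the "static var" holding the pointer

    private final java.util.Map<String, String> plistData = new java.util.HashMap<>();

    private SharedObject() {
        plistData.put("greeting", "hello"); // stand-in for loading the plist
    }

    // Class method: create on first call, then keep returning the same instance.
    public static SharedObject sharedObject() {
        if (instance == null) {
            instance = new SharedObject();
        }
        return instance;
    }

    public java.util.Map<String, String> getPlistData() { return plistData; }
}

Callers then write SharedObject.sharedObject().getPlistData(), exactly analogous to [MyObject sharedObject].plistData above.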
Hope that helps!
- Ray