Data Mismatch (requirement approach)

Hi to all,
     As per the requirement we want the pending sales order quantity (material-wise). We are using the standard cube Sales Overview (0SD_C03) and the standard query Orders, Deliveries and Sales Quantities.
In this query we get Incoming Orders Quantity, Delivered Quantity and Billed Quantity. But when comparing the records between R/3 and BW, they do not match exactly. What might be the possible causes of the error?
Thnx & Rgds
Chaitanya
[email protected]

You might want to check the base tables of the extractor in R/3 and compare them with the BW cube data.
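One hedged way to do that comparison, assuming both sides can be exported to flat files (here hypothetical files rsa3_extract.csv from an extractor test in RSA3 and bw_cube_export.csv from the cube, each with MATERIAL and ORDER_QTY columns), is to total the order quantity per material on each side and list the materials that differ:

```python
# Sketch only: compare per-material order quantity between an R/3 extractor
# test extract and a BW cube export. File and column names are assumptions.
import csv
from collections import defaultdict

def sum_by_material(path, material_col="MATERIAL", qty_col="ORDER_QTY"):
    totals = defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row[material_col]] += float(row[qty_col] or 0)
    return totals

r3 = sum_by_material("rsa3_extract.csv")     # assumed export from RSA3
bw = sum_by_material("bw_cube_export.csv")   # assumed export from the cube/query

for material in sorted(set(r3) | set(bw)):
    if abs(r3.get(material, 0.0) - bw.get(material, 0.0)) > 0.001:
        print(material, "R/3:", r3.get(material, 0.0), "BW:", bw.get(material, 0.0))
```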

Similar Messages

  • PI 7.11 mapping lookup - data enrichment - appropriate approach?

    Hi guys,
    we just upgraded from PI 7.0 to PI 7.11.
    Now I'm facing a new scenario where an incoming order has to be processed.
    (HTTP to RFC)
    Furthermore, each item of the order has to be enriched with data looked up in an SAP ERP 6.0 system.
    The lookup functionality can be accessed via RFC or ABAP proxy.
    With the new PI release we have several possibilities to implement this scenario, which are ...
    (1) graphical RFC Lookup in message mapping
    (2) ccBPM
    (3) using of the lookup API in java mapping
    (4) message mapping RFC Lookup in a UDF
    For performance reasons I would prefer to make use of the Advanced Adapter Engine, if possible.
    Furthermore, there should be only one lookup request for all items of the order instead of one per item (sketched below).
    I tried to implement possibility (1), but it seems hard to fill the request table structure of the RFC function module. All the examples on SDN use only simple (single) input parameters instead of tables. Parsing the result table of the RFC seems tricky as well.
    Afterwards I tried to implement approach (3) using a SOAP adapter as proxy with the XI 3.0 protocol.
    (new functionality in PI 7.11)
    But this ends in a strange error message, so it seems that the SOAP adapter cannot be used as a proxy adapter in this case.
    ccBPM also seems to be a good and transparent approach, because there is no need for complex Java code or the lookup API.
    So the choice is not so easy.
    What's the best approach for this scenario?
    Are my notes on the approaches correct, or am I using/interpreting them wrongly?
    Any help, ideas appreciated
    Kind regards
    Jochen
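    A side note on the "one lookup request for all items" point (sketched below): whatever lookup variant is chosen, the shape of the solution is to collect the item keys first and make a single call. This is only an illustrative Python sketch of that batching idea; enrich_lookup() is a hypothetical stand-in for the real RFC or proxy call, not SAP PI API code.

```python
# Batching sketch: one lookup round trip for all order items instead of one
# call per item. enrich_lookup() is a placeholder for the real backend call.
def enrich_lookup(material_numbers):
    # pretend this is the single RFC/proxy request for the whole list
    return {m: {"description": f"description for {m}"} for m in material_numbers}

def enrich_order(order):
    keys = sorted({item["material"] for item in order["items"]})
    lookup = enrich_lookup(keys)               # single request for all items
    for item in order["items"]:
        item.update(lookup[item["material"]])  # enrich each item from the map
    return order

order = {"items": [{"material": "M-01"}, {"material": "M-02"}, {"material": "M-01"}]}
print(enrich_order(order))
```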

    Hi,
    the error when trying to use the SOAP channel for proxy communication is:
    com.sap.aii.mapping.lookup.LookupException: Exception during processing the payload. Error when calling an adapter by using the communication channel SOAP_RCV_QMD_100_Proxy (Party: , Service: SAP_QMD_MDT100_BS, Object ID: 579b14b4c36c3ca281f634e20b4dcf78) XI AF API call failed. Module exception: 'com.sap.engine.interfaces.messaging.api.exception.MessagingException: java.io.IOException: Unexpected length of element <sap:Error><sap:Code> = XIProxy; HTTP 200 OK'. Cause Exception: 'java.io.IOException: Unexpected length of element <sap:Error><sap:Code> = XIProxy; HTTP 200 OK'.
    So it seems this feature does not work for SOAP lookups, does it?
    Kind regards
    Jochen

  • Data Migration Validation Approach

    Dear all,
    I am rather new to the SAP world, so some patience and clear explanations might be required. Currently I have to find some approaches to validate migrated data (for instance, take a random employee and check whether the data was moved correctly), but in our company we have no common practices for that kind of thing. Can I ask you to help me out and give me some tips (or maybe describe some practices from your experience) on different approaches to validating migrated data?
    Thank you all in advance!

    Hi,
    There are a couple of resources you can use to audit employee master data. The quickest action for you is probably to use SAP standard reports, rather than using SAP Query / Ad Hoc query. You certainly could use either of the reporting tools but if you're really new to SAP and are working on a deadline to audit your master data, I think the learning curve might be too high.
    Some helpful standard reports are:
    PC00_M02_LINF0 - Robust infotype overview per employee, but requires some drill-down.
    S_AHR_61016360 - You might need to play with the selection screen a bit but this can be useful. Canned Ad Hoc Query report.
    You may have to resort to looking up your employees at the infotype table level using transaction SE16. This takes longer, but it's also a good exercise for someone who is just starting out in SAP HR.
    Honestly, if you're looking to perform an audit of multiple employees at once, then SAP Query or Ad Hoc Query are much more robust tools. If you're going to be working with master data in SAP beyond the go-live, then it would benefit you greatly to take the time to learn them both. Ad Hoc Query is a simpler but less powerful reporting tool that is useful if you want to slap together a data overview very quickly. Unfortunately I don't know of a good resource for Ad Hoc query off the top of my head. [Here|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/10eab7af-0e54-2c10-28a5-87b47adbe1a5?quicklink=index&overridelayout=true] is a fairly useful tutorial to SAP Query, which is a more robust - and more complicated - reporting tool.
    I hope this helps!
    -Matthew
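    Picking up the "take a random employee and check" idea from the question: a minimal sketch of such a sample-based comparison, assuming the legacy data and an SAP-side extract (pulled via SE16 or a query, for example) have already been loaded into two structures keyed by personnel number. The field names and the sample size are illustrative assumptions.

```python
# Sketch only: compare a random sample of employees field by field between a
# legacy extract and a target (SAP) extract, both keyed by personnel number.
import random

def sample_and_compare(legacy, target, fields, sample_size=25, seed=42):
    random.seed(seed)                          # reproducible audit sample
    ids = random.sample(sorted(legacy), min(sample_size, len(legacy)))
    issues = []
    for pernr in ids:
        if pernr not in target:
            issues.append((pernr, "missing in target"))
            continue
        for f in fields:
            if legacy[pernr].get(f) != target[pernr].get(f):
                issues.append((pernr, f"{f}: {legacy[pernr].get(f)!r} != {target[pernr].get(f)!r}"))
    return issues

legacy = {"00000001": {"last_name": "Smith", "org_unit": "1000"}}
target = {"00000001": {"last_name": "Smith", "org_unit": "2000"}}
print(sample_and_compare(legacy, target, ["last_name", "org_unit"], sample_size=1))
```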

  • Agewise stock report data mismatch in SAP BI?

    Dear  Experts,
    While in R/3 for material No 49214 the total stock quantity is 29,0000,
    after executing the query Agewise stock for the same material No 49214 the total stock quantity is showing 27,0000.
    How can I match the two total stock quantities?
    Regards,
    Asit

    Hi,
    But after executing the query Agewise stock for the same material No 49214 the total stock quantity is showing 27,0000.
    Do you mean a BEx report? If yes, then you need to first look at the InfoProvider level --> PSA level, and on the R/3 side at RSA3.
    Check if you have any Conditions or Restrictions in the Query.
    Regards,
    Suman

  • Create two or more flash files from one (Excel) data source

    Hi experts,
    I have the following requirement about Xcelsius.
    Our data source is a relational database. Via the ODBC driver we manage to create several queries and execute them into Microsoft Excel. Those Excel sheets are the basis for the Xcelsius reports.
    Now we want to build some highly visual reports on top of that, but the crucial point is: we don't want to have all the reports in one Flash file, we need several Flash files depending on the area of the queries.
    Hence our requirement would be one of the following points:
    - create multiple flash files from one .xlf
    - create multiple .xlf files from one data source
    Another requirement is the automatic execution of the process. We don't want to have a person in between who has to open all the .xlf files to create the Flash reports step by step. What we need is an automatic process.
    Can this requirement be fulfilled in some way?
    Maybe by using the Xcelsius SDK?
    Thanks for any helps and comments!
    Sebastian

    Sebastian,
    Firstly, talking about the important requirement, i.e. automating the process:
    In your case you can achieve this by using XML maps. These will pick up the data automatically whenever the report is refreshed.
    Secondly, both approaches are correct; however, I would go with the first one.
    1. Create multiple flash files from one .xlf
         You just need to create one dashboard and have a filter on areas (invisible) and then export to Flash (once for every area).
    2. Create multiple .xlf from data source
         This approach is also fine; however, you need to create multiple dashboards and do the same thing, i.e. filter data based on area.
    P.S. Did you get a chance to explore options to integrate Xcelsius directly with your relational database? That would be much more effective.
    -Anil

  • Adaptive tag for date value - output in milliseconds?

    I'm trying to add code to my presentation template so that when a page has multiple content items, I grab the most recent of all the 'last modified' dates for the content items on the page, which would essentially display the page's last modified date. The approach I'm trying involves outputting the modified date for each content item in a format I can sort in a JavaScript function, then retrieving the most recent value to display at the bottom of the page's content. In this case, I figured the easiest way to sort the items would be to output each date in milliseconds, sort, and finally convert the most recent modified date into mmmm dd, yyyy hh:mm AM/PM format.
    I see there is a method in JavaScript to obtain this date in milliseconds (since January 1, 1970); however, is there a way to do this using the adaptive tag <pcs:value expr="modified" format="____________">?
    If not, is there a better way to obtain dates for sorting in a JavaScript function, so that I can get the entire date, hours, and minutes? I did find one method that grabs the month, day, and year, but not the hours and minutes.
    Thanks!
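    The sorting idea itself is language-neutral. Purely as an illustration of the approach (this is not the pcs:value tag or the portal API), converting each modified date to epoch milliseconds, taking the maximum and then formatting it looks like this; the sample date strings and the input format are assumptions.

```python
# Sketch: epoch milliseconds make "most recent modified date" a simple max().
from datetime import datetime

modified = ["2011-12-05 13:35", "2011-11-30 09:12", "2011-12-01 17:48"]  # sample values

def to_millis(s):
    return int(datetime.strptime(s, "%Y-%m-%d %H:%M").timestamp() * 1000)

latest_ms = max(to_millis(s) for s in modified)
latest = datetime.fromtimestamp(latest_ms / 1000)
print(latest.strftime("%B %d, %Y %I:%M %p"))  # e.g. December 05, 2011 01:35 PM
```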

    Hi Suman/Jeedesh,
    A PCo notification will trigger whenever any of the tag values in the agent instance's subscription items changes.
    For the above issue, my suggestion:
    1. Create a DB table named TAGLIST with the 200 tags as rows, with columns (TagName, TagValue).
    2. When the notification triggers, run a transaction that updates the values for the corresponding tag names in the above table.
    3. The next time the notification triggers with a fresh value for any of the tags, cross-check it against the existing value stored for that tag name and update the DB table.
    4. In the meantime, send the details of the changed tags via a mail trigger or as per the requirement.
    Instead of creating 200 notifications, the above is just an alternative way to achieve dynamic tag-value-change notification (a small sketch follows below).
    Hope it solves your problem.
    Regards,
    Praveen Reddy
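    Here is the promised sketch of that compare-and-update logic, with an in-memory dict standing in for the TAGLIST table and a plain print standing in for the mail trigger. It only illustrates the idea; it is not PCo or MII API code.

```python
# Sketch: remember the last value per tag, and on each notification report
# only the tags whose values actually changed.
last_values = {}   # stands in for the TAGLIST table (TagName -> TagValue)

def on_notification(incoming):
    """incoming: dict of TagName -> TagValue delivered by one notification."""
    changed = {}
    for tag, value in incoming.items():
        if last_values.get(tag) != value:
            changed[tag] = (last_values.get(tag), value)
            last_values[tag] = value           # "update the DB table"
    if changed:
        notify(changed)                        # e.g. the mail trigger

def notify(changed):
    for tag, (old, new) in changed.items():
        print(f"{tag}: {old} -> {new}")

on_notification({"TAG_001": 10, "TAG_002": 20})   # first run: everything is "new"
on_notification({"TAG_001": 10, "TAG_002": 25})   # only TAG_002 is reported
```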

  • AQ or Streams to replicate data from a database table and put it on a queue

    If I was tasked with the above and was using AQ, I'd think about publishing to the queue via an insert trigger on the table that is the destination end of the replication.
    Given that there's replication involved, it sounds like if you want to replicate data from A to B and also push it to a queue, you should be thinking of Streams with a transformation and not just AQ.
    However, how do you get Streams to replicate from A to B and, at the same time and in the same transaction, transform a message and put it on a queue?
    Also, if you are to build the message in XML and use an XML payload, what do you use to construct the message? PL/SQL by hand? Are there APIs in PL/SQL to build a DOM and then stringify the DOM you've built, such that everything is entity-encoded correctly and the payload can be published to a queue as an XML payload?
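    On the DOM question: Oracle does ship PL/SQL XML facilities (XMLType and the DBMS_XMLDOM package are the usual starting points; check the documentation for your version). Purely as a language-neutral illustration of the build-a-DOM-then-serialize idea, where special characters are entity-encoded for you on serialization, a sketch could look like this; the element names are made up.

```python
# Sketch: build the message as a tree, then serialize it; "&" and "<" in the
# text are escaped automatically, so the payload is well-formed XML.
import xml.etree.ElementTree as ET

def build_payload(order_id, note):
    root = ET.Element("orderMessage")
    ET.SubElement(root, "orderId").text = str(order_id)
    ET.SubElement(root, "note").text = note    # gets entity-encoded on output
    return ET.tostring(root, encoding="unicode")

print(build_payload(42, "rush order & <priority> customer"))
# <orderMessage><orderId>42</orderId><note>rush order &amp; &lt;priority&gt; customer</note></orderMessage>
```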

    Oh yes, sorry, I lost track of which thread this was. I was thinking of Streams but forgot the original suggestion was to use triggers.
    We have many use cases today where data arrives via inserts into some table and a process polls this table and processes the data. This approach does not scale very well. Currently these polling processes are polling the same database into which data is constantly being inserted, as it arrives, by multiple processes.
    I am trying to move away from this approach by introducing queuing. In addition to the above we also have an approach where, as the data arrives, it is placed on a queue and the process consumes these messages asynchronously from the queue. Currently this queue-based implementation does not use Oracle. I am trying to move to a solution where, if the data must arrive via inserts into a database, the processing of the data doesn't necessarily have to be done by fetching those records out of the table through polling. The general idea is that as the messages are inserted they are subsequently enqueued and processed off a queue. There is no desire to have a process which simply polls the database and enqueues what it finds in the table. The goal is to remove constant polling of the database.

  • TCP data server or better solution?

    Hi all,
    We upgraded our software system from 32-bit to 64-bit, so LabVIEW was also upgraded from LabVIEW 2010 x86 to LabVIEW 2010 x64. However, one of our devices does not have a 64-bit driver, so in order to make all the devices work seamlessly on one computer as before, I used a TCP data server and client. The server and the main VI are set up in LabVIEW x64, whereas the client, which drives the one device with a 32-bit driver, runs in a LabVIEW x86 process on the same computer. I used the Simple Data Server and Simple Data Client from the LabVIEW examples.
    Would you please advise me or give me some suggestions for the following questions:
    1. With this configuration I can only transfer data one way. If I want to transfer data in both directions, what should I do? Could you please give me some example code? (See the sketch after this message.)
    2. I do not clearly understand the synchronization of the server and the client in this case. For example, when I change the waiting time of the server and the client, sometimes it freezes the program and sometimes it does not work. Could you advise me on the best way to set up the synchronization (as fast as possible)?
    3. Is there any solution that would be better than this data server/client approach for the 32-bit driver problem described above?
    Thank you very much, I appreciate your help.
    Trung
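    For question 1, the underlying pattern is independent of LabVIEW: both directions can share one TCP connection as long as both sides agree on a framing rule, here one newline-terminated message per request and one per reply, so neither side blocks waiting for data that never comes. This is only a language-neutral sketch with an assumed local port, not LabVIEW code.

```python
# Sketch of a bidirectional exchange over a single TCP connection:
# the client sends a line, the server answers a line on the same socket.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007               # assumed address and port
srv = socket.create_server((HOST, PORT))      # listening before the client connects

def serve_one_client():
    conn, _ = srv.accept()
    with conn, conn.makefile("rw") as stream:
        line = stream.readline()
        while line:                            # one request per line...
            stream.write(f"ack:{line.strip()}\n")
            stream.flush()                     # ...one reply per line
            line = stream.readline()

threading.Thread(target=serve_one_client, daemon=True).start()

with socket.create_connection((HOST, PORT)) as sock, sock.makefile("rw") as stream:
    for value in (1.23, 4.56):
        stream.write(f"{value}\n")             # data in one direction
        stream.flush()
        print(stream.readline().strip())       # reply in the other direction
srv.close()
```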

    Could you guys please give me some answers/suggestions?
    I appreciate it.
    Regards,
    Trung

  • Sorting View output on date - Generic extraction !!!

    Hi All,
    I created a generic data source based on a view. My problem is that for the same selection conditions (same material in the same site) I may get two records, in which case I need to pick the latest record based on the created date.
    My approach was to pass the data from the generic data source to a DSO and enable overwrite functionality, keeping material and site as key fields, thinking the latest record would overwrite the old record. Interestingly, the old record is replacing the latest record.
    In the PSA I saw that the record number for the latest record is 555, but the record number for the old record is 1000.
    So my question is: what determines the sequence of records entering the BW system? Is it the view, the sequence of fields I defined in the view, or some setting I ignored when defining the view? How can I change the sequence of records entering the BI system? I want to change it in such a way that the record number of the latest record is greater than the record number of the old record.
    Thanks,

    Before you assign the final values to the FM output, sort the internal table by created date descending and delete adjacent duplicates. That way you will have only the latest record in the output (see the sketch below).
    You can also do this in the start routine of the transformation from PSA to DSO.
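    The logic of that suggestion, sketched outside ABAP purely as an illustration (the field names are assumptions): sort descending by created date, then keep only the first record per material/site key, which is exactly what the delete-adjacent-duplicates step achieves.

```python
# Sketch: the newest record per (material, site) key survives.
from datetime import date

records = [
    {"material": "49214", "site": "S01", "created": date(2011, 1, 5), "qty": 100},
    {"material": "49214", "site": "S01", "created": date(2011, 3, 2), "qty": 120},
]

records.sort(key=lambda r: r["created"], reverse=True)   # newest first

latest = {}
for r in records:
    latest.setdefault((r["material"], r["site"]), r)     # keep the first = latest

print(list(latest.values()))   # only the 2011-03-02 record remains
```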

  • Best approach to get the source tables into Target

    Hi
    I am new to GoldenGate and I would like to know the best approach to get the source tables created on the target (Oracle to Oracle) before performing the initial load, without using exp/expdp. Is there any native GoldenGate utility I can use during the initial load, or beforehand, that will create the tables on the target before loading the data?
    Thanks

    I don't think so; for the initial load the structure should already be available on the target machine. Once your machines are in sync you can use the GoldenGate DDL setup to automatically replicate tables along with their data.
    A better approach for you is to create the structure on the target machine using export/import. In the export, use CONTENT=metadata_only to copy the structure only,
    like:
    expdp <<user>>/<<password>>@connection_string schemas=abc directory=TEST_DIR dumpfile=gg.dmp content=metadata_only logfile=gg.log

  • InfoView date prompts problem

    Hi everyone,
    I have a problem with the prompt summary object in InfoView. In the universe I have date prompts, e.g.:
    @Prompt('Period from [yyyy-mm-dd]','D',,MONO,FREE)
    but in the prompt summary InfoView shows a number, e.g.:
    Period from [yyyy-mm-dd] 1315180800000
    I have no idea what causes this situation.
    Edited by: Pietras on Dec 5, 2011 1:35 PM

    792011 wrote:
    Hi Gurus,
    I am trying to get date prompts from one date column. This is the approach I followed from a previous post:
    Re: how to get date prompts from one date column
    My approach:
    1) In the report I set a filter on the date column with a presentation variable startdate (with some default date), and in the fx window I applied
    CASE WHEN 1=0 THEN table.Date ELSE table.Date END
    2) I repeated the same thing with the same date column and a presentation variable enddate (with some default date), and in the fx I applied
    CASE WHEN 1=1 THEN table.Date ELSE table.Date END
    3) In the dashboard prompt I added the same date column twice, applying the same formulas in fx, with Default to - Server Variable - sysdate, and Set Variable - Presentation Variable - startdate (and the same for enddate).
    The report is working, but it is not pulling all the records. I mean to say I have data from 12:04:36 AM but the report is pulling from 12:37:53 AM, so I am missing some records. I don't know where I am making a mistake.
    Could someone please help me out.
    Thank You
    Edited by: 792011 on Sep 14, 2011 11:00 AM
    Based on what you said you did, it shouldn't work. The CASE statements you have in steps 1) and 2), which you put in the fx window of the two columns, are in effect the same thing as just having two instances of your table.Date column.
    Follow the steps in this link and you should be good to go:
    http://oraclebizint.wordpress.com/2008/02/26/oracle-bi-ee-101332-between-prompts-for-date-columns-using-presentation-variables/

  • Data Migration_LSMW

    hi all,
    I need information on data migration and the possible methods of LSMW.
    Thanks
    Swapna

    hi
    You can do a search on the topics "Data Migration" and "LSMW" in this forum for valuable information.
    Please do not post your mail ID; let's all follow the forum rules.
    Data Migration Life Cycle
    Overview : Data Migration Life Cycle
    Data Migration
    This document aims to outline the typical processes involved in a data migration.
    Data migration is the moving and transforming of data from legacy to target database systems. This includes one to one and one to many mapping and movement of static and transactional data. Migration also relates to the physical extraction and transmission of data between legacy and target hardware platforms.
    ISO 9001 / TickIT accredited
    The fundamental aims of certification are quality achievement and improvement and the delivery of customer satisfaction.
    The ISO and TickIT Standards are adhered to throughout all stages of the migration process.
    •     Customer Requirements
    •     Dependencies
    •     Analysis
    •     Iterations
    •     Data Cleanse
    •     Post Implementation
    •     Proposal
    •     Project Management
    •     Development
    •     Quality Assurance
    •     Implementation
    Customer Requirements
    The first stage is the contact from the customer asking us to tender for a data migration project. The invitation to tender will typically include the Scope / Requirements and Business Rules:
    •     Legacy and Target - Databases / Hardware / Software
    •     Timeframes - Start and Finish
    •     Milestones
    •     Location
    •     Data Volumes
    Dependencies
    Environmental Dependencies
    •     Connectivity - remote or on-site
    •     Development and Testing Infrastructure - hardware, software, databases, applications and desktop configuration
    Support Dependencies
    •     Training (legacy & target applications) - particularly for an in-house test team
    •     Business Analysts - provide expert knowledge on both legacy and target systems
    •     Operations - Hardware / Software / Database Analysts - facilitate system housekeeping when necessary
    •     Business Contacts
    •     User Acceptance Testers - chosen by the business
    •     Business Support for data cleanse
    Data Dependencies
    •     Translation Tables - translates legacy parameters to target parameters
    •     Static Data / Parameters / Seed Data (target parameters)
    •     Business Rules - migration selection criteria (e.g. number of months history)
    •     Entity Relationship Diagrams / Transfer Dataset / Schemas (legacy & target)
    •     Sign Off / User Acceptance criteria - within agreed tolerance limits
    •     Data Dictionary
    Analysis
    Gap Analysis
    Identifying where differences in functionality between the target and legacy systems mean that data may be left behind, or alternatively where default data must be generated for the new system because nothing comparable exists on legacy.
    Liaison with the business is vital in this phase, as mission-critical data cannot be allowed to be left behind; it is usual to consult the relevant business process leader or Subject Matter Expert (SME). Often this process ends up as a compromise between:
    •     Pulling the necessary data out of the legacy system to meet the new system's functionality
    •     Pushing certain data into the new system from legacy to enable certain ad hoc or custom in-house processes to continue.
    Data mapping
    This is the process of mapping data from the legacy to target database schemas taking into account any reformatting needed. This would normally include the derivation of translation tables used to transform parametric data. It may be the case at this point that the seed data, or static data, for the new system needs generating and here again tight integration and consultation with the business is a must.
    Translation Tables
    Mapping Legacy Parameters to Target Parameters
    Specifications
    These designs are produced to enable the developer to create the Extract, Transform and Load (ETL) modules. The output from the gap analysis and data mapping are used to drive the design process. Any constraints imposed by platforms, operating systems, programming languages, timescales etc should be referenced at this stage, as should any dependencies that this module will have on other such modules in the migration as a whole; failure to do this may result in the specifications being flawed.
    There are generally two forms of migration specification: Functional (e.g. a premise migration strategy) and Detailed Design (e.g. a premise data mapping document).
    Built into the migration process at the specification level are steps to reconcile the migrated data at predetermined points during the migration. These checks verify that no data has been lost or gained during each step of an iteration and enable any anomalies to be spotted early and their cause ascertained with minimal loss of time.
    Usually written independently from the migration, the specifications for the reconciliation programs used to validate the end-to-end migration process are designed once the target data has been mapped and is more or less static. These routines count like-for-like entities on the legacy system and target system and ensure that the correct volumes of data from legacy have migrated successfully to the target and thus build business confidence.
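    As an illustration of such a reconciliation count (not any particular tool), the like-for-like comparison across legacy, interim and target could be sketched as follows; the segment field and the sample rows are assumptions.

```python
# Sketch: count records per business segment on each platform and flag any
# segment where legacy, interim and target do not agree.
from collections import Counter

def counts_by_segment(rows, segment_field="region"):
    return Counter(r[segment_field] for r in rows)

legacy  = [{"region": "North"}, {"region": "North"}, {"region": "South"}]
interim = [{"region": "North"}, {"region": "North"}, {"region": "South"}]
target  = [{"region": "North"}, {"region": "South"}]   # one North record lost

l, i, t = (counts_by_segment(rows) for rows in (legacy, interim, target))
for segment in sorted(set(l) | set(i) | set(t)):
    if not (l[segment] == i[segment] == t[segment]):
        print(f"{segment}: legacy={l[segment]} interim={i[segment]} target={t[segment]}")
```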
    Iterations
    These are the execution of the migration process, which may or may not include new cuts of legacy data.
    These facilitate:
    •     Collation of migration process timings (extraction, transmission, transformation and load).
    •     The refinement of the migration code, i.e. increasing data volume and decreasing exceptions, through:
    •     Continual identification of data cleanse issues
    •     Confirmation of parameter settings and parameter translations
    •     Identification of any migration merge issues
    •     Reconciliation
    From our experience the majority of the data will conform to the migration rules and as such take a minimal effort to migrate ("80/20 rule"). The remaining data, however, is often highly complex with many anomalies and deviations and so will take up the majority of the development time.
    Data Cuts
    •     Extracts of data taken from the legacy and target systems. This can be a complex task where the migration is from multiple legacy systems, and it is important that the data is synchronised across all systems at the time the cuts are taken (e.g. end-of-day processes complete).
    •     Subsets / selective cuts - Depending upon business rules and migration strategy the extracted data may need to be split before transfer.
    Freeze
    Prior to any iteration, Parameters, Translation Tables and Code should be frozen to provide a stable platform for the iteration.
    Data Cleanse
    This activity is required to ensure that legacy system data conforms to the rules of data migration. The activities include manual or automatic updates to legacy data. This is an ongoing activity, as while the legacy systems are active there is the potential to reintroduce data cleanse issues.
    Identified by
    •     Data Mapping
    •     Eyeballing
    •     Reconciliation
    •     File Integrities
    Common Areas
    •     Address Formats
    •     Titles (e.g. mrs, Mrs, MRS, first name)
    •     Invalid characters
    •     Duplicate Data
    •     Free Format to parameter field
    Cleansing Strategy
    •     Legacy - Pre Migration
    •     During migration (not advised as this makes reconciliation very difficult)
    •     Target - Post Migration (either manual or via data fix)
    •     Ad Hoc Reporting - Ongoing
    Post Implementation
    Support
    For an agreed period after implementation certain key members of the migration team will be available to the business to support them in the first stages of using the new system. Typically this will involve analysis of any irregularities that may have arisen through dirty data or otherwise and where necessary writing data fixes for them.
    Post Implementation fixes
    Post Implementation Data Fixes are programs that are executed post migration to fix data that was either migrated in an 'unclean' state or migrated with known errors. These will typically take the form of SQL scripts.
    Proposal
    This is a response to the invitation to tender, which comprises the following:
    Migration Strategy
    •     Migration development models are based on an iterative approach.
    •     Multiple Legacy / Targets - any migration may transform data from one or more legacy databases to one or more targets
    •     Scope - Redwood definition / understanding of customer requirements, inclusions and exclusions
    The data may be migrated in several ways, depending on data volumes and timescales:
    •     All at once (big bang)
    •     In logical blocks (chunking, e.g. by franchise)
    •     Pilot - A pre-test or trial run for the purpose of proving the migration process, live applications and business processes before implementing on a larger scale.
    •     Catch Up - To minimise downtime only business critical data is migrated, leaving historical data to be migrated at a later stage.
    •     Post Migration / Parallel Runs - Both pre and post migration systems remain active and are compared after a period of time to ensure the new systems are working as expected.
    Milestones can include:
    •     Completion of specifications / mappings
    •     Successful 1st iteration
    •     Completion of an agreed number of iterations
    •     Delivery to User Acceptance Testing team
    •     Successful Dress Rehearsal
    •     Go Live
    Roles and Responsibilities
    Data Migration Project Manager/Team Lead is responsible for:
    •     Redwood Systems Limited project management
    •     Change Control
    •     Solution Design
    •     Quality
    •     Reporting
    •     Issues Management
    Data Migration Analyst is responsible for:
    •     Gap Analysis
    •     Data Analysis & Mapping
    •     Data migration program specifications
    •     Extraction software design
    •     Exception reporting software design
    Data Migration Developers are responsible for:
    •     Migration
    •     Integrity
    •     Reconciliation (note these are independently developed)
    •     Migration Execution and Control
    Testers/Quality Assurance team is responsible for:
    •     Test approach
    •     Test scripts
    •     Test cases
    •     Integrity software design
    •     Reconciliation software design
    Other Roles:
    •     Operational and Database Administration support for source/target systems.
    •     Parameter Definition and Parameter Translation team
    •     Legacy system Business Analysts
    •     Target system Business Analysts
    •     Data Cleansing Team
    •     Testing Team
    Project Management
    Project Plan
    •     Milestones and Timescales
    •     Resources
    •     Individual Roles and Responsibilities
    •     Contingency
    Communication
    It is important to have good communication channels with the project manager and business analysts. Important considerations include the need to agree the location, method and format of regular meetings/contact to discuss progress and resources and to communicate any problems or incidents which may impact the ability of others to perform their duties. These could take the form of weekly conference calls, progress reports or attending on-site project meetings.
    Change Control
    •     Scope Change Requests - a stringent change control mechanism needs to be in place to handle any deviations and creeping scope from the original project requirements.
    •     Version Control - all documents and code shall be version controlled.
    Issue Management
    •     Internal issue management - as a result of Gap Analysis, Data Mapping, Iterations Output (i.e. reconciliation and file integrity, or as a result of eyeballing)
    •     External issue management - Load to Target problems and as a result of User Acceptance Testing
    •     Mechanism - examples:
    •     Test Director
    •     Bugzilla
    •     Excel
    •     Access
    •     TracNotes
    Development
    Extracts / Loads
    •     Depending on the migration strategy, extract routines shall be written to derive the legacy data required
    •     Transfer data from Legacy and/or Target to interim migration environment via FTP, Tape, CSV, D/B object copy, ODBC, API
    •     Transfer data from interim migration environment to target
    Migration (transform)
    There are a number of potential approaches to a Data Migration:
    •     Use a middleware tool (e.g. ETI, Powermart). This extracts data from the legacy system, manipulates it and pushes it to the target system. These "4th Generation" approaches are less flexible and often less efficient than bespoke coding, resulting in longer migrations and less control over the data migrated.
    •     The Data Migration processes are individually coded to be run on a source, an interim or target platform. The data is extracted from the legacy platform to the interim / target platform, where the code is used to manipulate the legacy data into the target system format. The great advantage of this approach is that it can encompass any migration manipulation that may be required in the most efficient, effective way and retain the utmost control. Where there is critical / sensitive data migrated this approach is desirable.
    •     Use a target system 'File Load Utility', if one exists. This usually requires the use of one of the above processes to populate a pre-defined Target Database. A load and validate facility will then push valid data to the target system.
    •     Use an application's data conversion/upgrade facility, where available.
    Reconciliation
    Independent end-to-end comparisons of data content to create the necessary level of business confidence.
    •     Bespoke code is written to extract the required total figures for each of the areas from the legacy, interim and target databases. These figures will be totalled and broken down into business areas and segments that are of relevant interest, so that they can be compared to each other. Where differences do occur, investigation will tell us whether the migration code needs to be altered or whether there are reasonable mitigating factors.
    •     Spreadsheets are created to report figures to all levels of management to verify that the process is working and build confidence in the process.
    Referential File Integrities
    Depending on the constraints of the interim/target database, data may be checked to ascertain and validate its quality. There may be certain categories of dirty data that should be disallowed, e.g. duplicate data, null values, data that does not match a parameter table, or an incompatible combination of data in separate fields, as proscribed by the analyst. Scripts are written that run automatically after each iteration of the migration, and a report is then generated to itemise the non-compatible data.
    Quality Assurance
    Reconciliation
    •     Horizontal reconciliation (number on legacy = number on interim = number on target) and Vertical reconciliation (categorisation counts (i.e. Address counts by region = total addresses) and across systems).
    •     Figures at all stages (legacy, interim, target) to provide checkpoints.
    File Integrities
    Scripts that identify and report the following for each table:
    •     Referential Integrity - check values against target master and parameter files.
    •     Data Constraints
    •     Duplicate Data
    Translation Table Validation
    Run after each new cut of data or new version of the translation tables; two stages (sketched below):
    •     Verifies that all legacy data is accounted for in the "From" translation
    •     Verifies that all "To" translations exist in the target parameter data
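    A minimal sketch of that two-stage check, using made-up values:

```python
# Stage 1: every legacy value must have a "From" entry in the translation table.
# Stage 2: every "To" value must exist in the target parameter data.
translation = {"L01": "T01", "L02": "T02"}          # From -> To
legacy_values = {"L01", "L02", "L03"}
target_parameters = {"T01", "T09"}

missing_from = legacy_values - translation.keys()             # stage 1: L03
unknown_to = set(translation.values()) - target_parameters    # stage 2: T02

print("Legacy values without a 'From' translation:", sorted(missing_from))
print("'To' values missing from the target parameters:", sorted(unknown_to))
```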
    Eyeballing
    Comparison of legacy and target applications:
    •     Scenario Testing - legacy to target system verification that data has been migrated correctly for certain customers chosen by the business whose circumstances fall into categories (e.g. inclusion and exclusion Business Rule categories, data volumes etc.)
    •     Regression Testing - testing known problem areas
    •     Spot Testing - a random spot check on migrated data
    •     Independent Team - the eyeballing is generally carried out by a dedicated testing team rather than the migration team
    UAT
    This is the customer-based User Acceptance Test of the migrated data, which will form part of the customer sign-off.
    Implementation
    Freeze
    A code and parameter freeze occurs in the run-up to the dress rehearsal. Any problems post-freeze are handled as post-freeze fixes.
    Dress Rehearsal
    Dress rehearsals are intended to mobilise the resources that will be required to support a cutover in the production environment. The primary aim of a dress rehearsal is to identify the risks and issues associated with the implementation plan. It executes all the steps necessary for a successful 'go live' migration.
    Through the execution of a dress rehearsal all the go live checkpoints will be properly managed and executed and if required, the appropriate escalation routes taken.
    Go Live window (typical migration)
    •     Legacy system 'end of business day' closedown
    •     Legacy system data extractions
    •     Legacy system data transmissions
    •     Readiness checks
    •     Migration Execution
    •     Reconciliation
    •     Integrity checking
    •     Transfer load to Target
    •     User Acceptance testing
    •     Reconciliation
    •     Acceptance and Go Live
    ===================
    LSMW: Refer to the links below for useful info (screen shots for the various methods of LSMW):
    Step-By-Step Guide for LSMW using ALE/IDOC Method (Screen Shots)
    http://www.****************/Tutorials/LSMW/IDocMethod/IDocMethod1.htm
    Using Bapi in LSMW (Screen Shots)
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    Uploading Material Master data using BAPI method in LSMW (Screen Shots)
    http://www.****************/Tutorials/LSMW/MMBAPI/Page1.htm
    Step-by-Step Guide for using LSMW to Update Customer Master Records(Screen Shots)
    http://www.****************/Tutorials/LSMW/Recording/Recording.htm
    Uploading Material master data using recording method of LSMW(Screen Shots)
    http://www.****************/Tutorials/LSMW/MMRecording/Page1.htm
    Step-by-Step Guide for using LSMW to Update Customer Master Records(Screen Shots) Batch Input method
    Uploading Material master data using Direct input method
    http://www.****************/Tutorials/LSMW/MMDIM/page1.htm
    Steps to copy LSMW from one client to another
    http://www.****************/Tutorials/LSMW/CopyLSMW/CL.htm
    Modifying BAPI to fit custom requirements in LSMW
    http://www.****************/Tutorials/LSMW/BAPIModify/Main.htm
    Using Routines and exception handling in LSMW
    http://www.****************/Tutorials/LSMW/Routines/Page1.htm
    Reward if useful.
    Thanx & regards
    Naren

  • BIA Dummy Cube to load master data

    Hi everyone,
    We've initiated a project to implement BIA and the racks will arrive in the next few weeks. Something I've heard discussed, but not found documented, is that some companies built a "dummy cube" consisting of all the master data involved in the cubes to be loaded to BIA. Apparently, this is to avoid the potential for locking the master data indexes as multiple cubes are indexed in parallel. Having the master data indexed in advance of indexing several cubes in parallel is apparently much faster, too.
    See "Competing Processes During Indexing"
    [Activating and Filling SAP NetWeaver BI Accelerator Indexes|http://help.sap.com/saphelp_nw2004s/helpdata/en/43/5391420f87a970e10000000a155106/content.htm]
    My questions are: Is this master data "dummy cube" approach documented somewhere? Is this only for the initial build, or is this used for ongoing index rebuilds such that new master data objects are consistently added to the dummy cube? Is this the right approach to avoid master data index locking job delays/restarts, or is there a better/standard approach to index all master data prior to indexing the cubes?
    Thanks for any insight!
    Doug Maltby

    Hi Doug - I'm not aware of this approach being documented anywhere. Personally, I'm not sure a "dummy" cube buys you much. The reason I say that is that this "dummy" cube would only be used for the initial indexing. The amount of time to construct this cube, process chain(s), etc. would be close to the time it takes to do the indexing itself. The amount of time it takes to do the initial build of the indexes depends on data volumes; from what I've seen in the field this varies on average from 4-8 hours.
    Locking is a possibility; however, I don't believe it is very prevalent. One of the most important pieces of scheduling the initial builds is timing: you don't want to be loading data to cubes or executing change runs when this takes place. In the event locking does occur, that index build can simply be restarted. The fact that a lock takes place does not mean all of your indexes will fail; the lock may cause a single index build to fail. Reviewing the logs in SM37 or the status of the InfoCube index in RSDDV will also show the current status. Simply restart any that have failed.
    Hope this helps.
    Josh

  • Can I build two LV executables which share data using a type 2 global?

    I have two LV applications which share data residing in a common VI configured as a type 2 global. Is it possible to configure my build settings so that both applications can continue to share data?
    One approach I tried was to build the common VI into a dll which can then be shared. This works fine with the original applications but falls apart once I build them into executables.

    Ha! Accessing VIs inside an EXE is the best trick I've learnt for a while. Thanks
    As for the rest of my issues...
    I was originally using a LV2 global to store an array of variables (variants). The elements in the array are indexed by storing the variable name as an attribute. This allowed me to easily pass data between modules.
    What I wanted to do was build the modules (one by one) and then continue to use my loader, but once each module is built it seems to run in a completely separate memory space. I can't share the storage VI, even if I dynamically call it using VI server or build it into a DLL.
    I have had a play with the DataSocket approach and it works perfectly - both in development mode and with built apps. At the moment I am just passing the entire storage array around (it was the easiest mod to my existing code), but it may be better to pass individual elements around.
    So now I have a loader-based application that can pass data efficiently between modules, with the flexibility to build new modules at any time.
    Thanks for the help!

  • Email form data from within Flash

    Hi all,
    I have searched the Internet and written to several forums asking this same question, but no one either seems to know the answer or just does not want to advise me. I have had a few responses, but none of them paid off.
    My question:
    I am in the process of creating a Flash website and I am on the last little function I wish to provide. I want to create a Flash contact-us form, as some people might not have an email client set up and therefore I don't want to just put a mailto URL in.
    I have built the form and named the input text boxes, and I believe I have most of the ActionScript complete. I need to know how to use ColdFusion to validate the Flash form and then send the email with the form data in it. As far as I understand, in Flash I create a LoadVars object which holds all the form data, and when the user clicks the submit button Flash should send the LoadVars to a CFC or similar page. This ColdFusion (server-side) page handles the processing, i.e. validation, reporting back if need be or sending the information on to a predefined email address, e.g. [email protected]
    Can someone please help, either by pointing me in the direction of a tutorial or by providing an example of what I am trying to do? I am currently running Flash 8 and ColdFusion 8; I am not sure if this has any bearing on what I am trying to do. I am not sure whether I require Flash Remoting or not; however, I do remember reading (not sure if this is correct) that ColdFusion has Flash Remoting integrated.
    I appreciate any help, support or advice anyone can offer on this issue. I am keen to get this moving as my project end date is fast approaching.
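    For the server-side half described above (receive the posted form variables, validate them, send the mail), here is a rough, language-neutral sketch of the flow. It is not ColdFusion; the field names, addresses and the assumption of a local mail relay are all illustrative. In ColdFusion the same role would typically be played by a CFC method that validates the fields and then sends the mail (the cfmail tag covers the sending part).

```python
# Sketch of the server-side handler: validate the posted fields, then send the
# mail. Addresses and the "localhost" relay are assumptions for illustration.
from email.message import EmailMessage
import smtplib

def handle_contact_form(fields):
    required = ("name", "email", "message")
    missing = [f for f in required if not fields.get(f, "").strip()]
    if missing:
        return {"ok": False, "error": "missing: " + ", ".join(missing)}

    msg = EmailMessage()
    msg["Subject"] = "Website contact form"
    msg["From"] = "webform@example.com"        # assumed sender address
    msg["To"] = "contact@example.com"          # assumed recipient address
    msg.set_content(f"From: {fields['name']} <{fields['email']}>\n\n{fields['message']}")

    with smtplib.SMTP("localhost") as smtp:    # assumed local mail relay
        smtp.send_message(msg)
    return {"ok": True}

# Validation path only (no mail is sent because a required field is empty):
print(handle_contact_form({"name": "", "email": "jo@example.com", "message": "Hi"}))
```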

    Hi,
    For sending email there is no need to use remoting...
    please see this link...
    =>
    http://www.sephiroth.it/tutorials/flashPHP/email/
    And if you are not able to do it, then I'll do it for you... for a nominal charge of $20.
    Thanks,
    Ankur Patel.
