Best Practice to Handle Data Refresh & Hierarchy

Hi,
During a recent discussion with one of our BI user groups, questions were raised about the best practices for handling the following two issues.
Issue 1:
If entries are posted to prior periods in SAP R/3 (outside the daily auto-refresh range), the current process is for the user group to ask us to run a manual refresh in BI for the affected prior periods.
Question: Is it possible to set up a trigger in the system, so that BI knows which periods have changed and automatically refreshes the data for those periods?
Issue 2:
If a hierarchy used in the reports is modified, there can be an adverse impact on the financial data the user group reports.  The current process we have in place is to run a group of BI reports for both the current year and the prior year to make sure nothing is affected, but this process has a limitation: what if there is no impact on the current or prior year, but on years before that?
Question: What do other global companies do to minimize such reporting impact, especially when they have hundreds of complex reports?
If anyone has information on this, please share it.
Thanks all for your support.
Regards,
Murali

Hi,
1) I would recommend considering another delta init without data transfer for all years. (Of course, you first need quiet time in the source system, and then need to pull the existing data from the delta queue.)
If you can't have delta loads for some reason, find the table(s) for the old periods in the source system and check with your DBAs whether they can create a database trigger. With that, whenever the table is changed, the trigger can send the new/changed record to a copy table that you can create an extract against (see the sketch after this reply).
2) When a major report needs a change, or when we need to change an InfoObject/hierarchy that will impact many reports, we ask our business analysts/super users/reporting groups to test the key reports. This is, of course, after we run some queries in our QA environment to compare the results before and after the change.
I like to get consensus from all the regions before touching global reports, and have them send me their agreement. If everybody is happy, we make the change, ask users to test the query and give us a sign-off in QA, then transport the query/InfoObject/hierarchy to production and regenerate the queries in production if necessary. If you are not comfortable making the change, at least try to avoid key dates/weeks; for example, send the change to production on a Friday afternoon so that if it doesn't work you can fix it over the weekend.
Cheers
Tansu
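
A minimal sketch of the trigger idea in point 1, as a Java/JDBC job (all table and column names here are hypothetical, not taken from any real source system): a DBA-created database trigger copies changed records into a log table, and this job polls that table to find which fiscal periods were touched, so BI can run a selective refresh for exactly those periods.

    import java.sql.*;
    import java.util.*;

    public class ChangedPeriodPoller {
        // Returns the distinct fiscal periods changed since the last run, read from a
        // hypothetical log table (zfi_change_log) that a database trigger fills
        // whenever the tracked source table is modified.
        public static Set<String> changedPeriods(Connection con, Timestamp lastRun) throws SQLException {
            Set<String> periods = new TreeSet<>();
            String sql = "SELECT DISTINCT fiscal_period FROM zfi_change_log WHERE changed_at > ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setTimestamp(1, lastRun);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        periods.add(rs.getString(1));
                    }
                }
            }
            return periods; // use these periods as selections for the BI refresh
        }
    }

The returned period list could then drive the selective refresh Murali describes, instead of the user group having to request it manually.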

Similar Messages

  • Best practice for handling time dependency for flat file loads

    Hi folks,
    This is a fairly common situation: handling time dependency for flat file loads. Can anyone share their experience with this? One common approach is to handle the time-validity changes within the flat file, where they are easily changeable by the user, but that is prone to user input errors. Another would be to handle this via a DSO. Possibly, the data could also be entered directly in BI using IP planning layouts; there is an IP planning function that allows loading flat file data, but it only works without the time-dependency factor.
    It would be great to hear your thoughts, or if anyone can point to a best practice document for such a scenario.
    Thanks.

    Bump!
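
    Since the main risk named above is user input error in the flat file, here is a minimal Java sketch that validates the validity intervals before a load. The key, date_from, date_to layout is an assumption for illustration, not the actual file format.

        import java.time.LocalDate;
        import java.util.*;

        public class ValidityCheck {
            public static void main(String[] args) {
                // Hypothetical rows of the form key, date_from, date_to.
                List<String[]> rows = List.of(
                    new String[]{"COSTCTR1", "2009-01-01", "2009-06-30"},
                    new String[]{"COSTCTR1", "2009-06-01", "2009-12-31"}  // overlaps the previous row
                );
                Map<String, List<LocalDate[]>> byKey = new HashMap<>();
                for (String[] r : rows) {
                    LocalDate from = LocalDate.parse(r[1]);
                    LocalDate to = LocalDate.parse(r[2]);
                    if (from.isAfter(to))
                        System.out.println("date_from after date_to: " + Arrays.toString(r));
                    // Flag any interval that overlaps one already seen for the same key.
                    for (LocalDate[] seen : byKey.computeIfAbsent(r[0], k -> new ArrayList<>()))
                        if (!from.isAfter(seen[1]) && !to.isBefore(seen[0]))
                            System.out.println("Overlapping validity for " + r[0] + ": " + Arrays.toString(r));
                    byKey.get(r[0]).add(new LocalDate[]{from, to});
                }
            }
        }

    A check like this in the staging step keeps the user-maintained file convenient without letting bad intervals through.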

  • Best practices for Subversion and Data Modeler

    Hello, I'm looking for some best practices regarding Subversion and Data Modeler.
    A team of 10 analysts creates several releases of our product over time.
    Within one release you'll find several change requests.
    The application itself contains about 700 tables, so performance is important.
    I want to establish a lean working method where analysts can focus on their job: design.
    So far my plan is to create one trunk containing the DB model; let's call it v17.00.
    Analysts would create their designs in separate projects grouped by change request, e.g. CR1234.
    When development starts, I would compare the trunk model with their change request to generate the ALTER script.
    Afterwards, I would import their design CR1234 into the trunk.
    Note: it's possible that a change request gets cancelled; that's why I opt for a design per change request.
    This way of working seems much leaner than setting up branches and merging.
    My opinion, as a novice Subversion user, is that setting up branches and merging is "more complex" and might cause frustration for designers.
    Does anyone have a similar setup or advice?
    kr
    chris

    Hi Sam,
    Let me add my two cents here: when speaking about MAN deployments, the name of the game is MPLS, so I guess you are using the same on your Cat 6500s and connecting your customers on 3550s using VLANs.
    Regarding your questions:
    a) Upgrading Ethernet to L3 for traffic shaping: this is basically done at the 3550, so I suppose that's what you intend to do. Plus, you will be letting spokes talk only to the hub site, so inter-VLAN routing, at least between the hub and each spoke, will be required. The other way is to configure P2P circuits between the hub site, with VLAN mapping (per spoke), and the spoke sites, with port mapping; in this scenario inter-VLAN routing is not a necessity.
    b) Security: this depends on what exact architecture you have deployed. In my case, I have simply installed a gateway router with BGP peering with the PEs; a separate VRF along with redistribution does the trick.
    Hope I addressed the query correctly; let me know if that helped.
    Cheers
    ~sultan

  • Best Practice for Data Refresh & Hierarchy

    Hi Srini,
    1. SAP suggests implementing a data archiving strategy as early as possible to manage database growth.
    However, people usually think of archiving only when they run into problems like large data volumes, slow system response times, and other performance issues.
    2. There is a proper way to implement data archiving. The database has to be analyzed first to identify the top DB tables and archiving objects.
    3. Based on the DB analysis, the data archiving plan has to be implemented according to the data management guide.
    4. There is a minimum period, known as the residence time, that has to pass before any data can be archived. Once a document is business-complete and has served its minimum required period in the database, it can be archived.
    5. Before going for data archiving there are many steps to be followed, like analysis and configuration, which you can see in detail at the link below:
    http://help.sap.com/saphelp_47x200/helpdata/en/2e/9396345788c131e10000009b38f83b/frameset.htm
    Let me know if this helps.
    -Supriya

  • Best practice for handling data for a large number of indicators

    I'm looking for suggestions or recommendations on how best to handle a UI with a "large" number of indicators. By large, I mean enough to make the block diagram quite large and ugly after the data processing for each indicator is added. The data must be "unpacked" and then decoded, e.g., booleans, offset-binary bit fields, etc. The indicators are updated once per second. I am leaning towards a method that worked well for me previously: binding network shared variables to each indicator, then using several subVIs to process each particular piece of data and write to the appropriate variables.
    I was curious what others have done in similar circumstances.
    Bill
    “A child of five could understand this. Send someone to fetch a child of five.”
    ― Groucho Marx

    I can certainly feel your pain.
    Note that's really what is going on in that PNG. You can see the Action Engine responsible for updating the display at the far right.
    In my own defence: the FP concept was presented to the client's customer before they had identified anyone familiar with LabVIEW, so I worked it this way through no choice of my own. I knew it would get ugly before I walked in the door and chose to meet the challenge head-on anyway. Defer Panel Updates was my very good friend. The sensors these objects represent were constrained to pass info via a single ZigBee network, so I had the benefit of fairly low data rates as well, but even changing the view (yes, there is a display mode that swaps what information is displayed for each sensor) updated fast enough that the user still got a responsive GUI.
    (The GUI did scale poorly, though! That is a lot of wires! I was grateful to Jack for the idea of making align and distribute work on wires.)
    Jeff

  • Best practice for returning data from EJBs

    I have an EJB that runs a query on a backend database, and I want to return the data to my Java GUI. Ideally I would like to pass a ResultSet back, but I don't think they are serializable, so this isn't an option.
    What is considered the best way to pass database results back from EJBs to a front-end Java application?
    Thanks for any ideas you guys have

    If you want type safety, define a VO (value object) that maps to your result set, extract the data from the result set into VOs, and return an array of them. Yes, it's extra work on the back end, but that's what the back end is for. Just make sure your client.jar has the VO in it, as well as the Home and Remote interfaces.
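
    A minimal sketch of that value-object approach (the customer columns are hypothetical): the bean implements Serializable so it can cross the remote EJB boundary, and the ResultSet is drained into VOs on the server before it is closed.

        import java.io.Serializable;
        import java.sql.*;
        import java.util.*;

        public class CustomerVO implements Serializable {
            private final int id;
            private final String name;

            public CustomerVO(int id, String name) { this.id = id; this.name = name; }
            public int getId() { return id; }
            public String getName() { return name; }

            // Back-end helper: copy the ResultSet into serializable VOs before closing it.
            public static List<CustomerVO> fromResultSet(ResultSet rs) throws SQLException {
                List<CustomerVO> result = new ArrayList<>();
                while (rs.next()) {
                    result.add(new CustomerVO(rs.getInt("id"), rs.getString("name")));
                }
                return result;
            }
        }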

  • Best practice for loading data into BW: CSV vs. XML?

    Hi Everyone,
    I would like to get your thoughts on which file format would be the best or most efficient for pushing data into BW: CSV or XML?
    Also, what are the advantages/disadvantages?
    Appreciate your thoughts.

    XML is used only for small data volumes; it is more that it is easier to do it via XML than to build an application for the same, provided the volume is small.
    Flat files are used for HUGE (non-SAP) data loads, and for those the data format of choice is definitely flat files.
    Also, XML files are transformed into a flat-file-type format, with each tag referring to a field, so the size of an XML file grows large depending on the number of fields.
    Arun
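
    To make the size difference concrete, here is a tiny Java comparison of the same hypothetical record in both formats; the per-field tags are what make an XML file grow with the number of fields:

        public class XmlVsCsvSize {
            public static void main(String[] args) {
                // The same hypothetical record, once with a tag per field and once as CSV.
                String xml = "<record><material>100-200</material><plant>1000</plant><qty>25</qty></record>";
                String csv = "100-200,1000,25";
                System.out.printf("XML: %d bytes, CSV: %d bytes%n", xml.length(), csv.length());
            }
        }

    Multiplied over millions of records, that per-field overhead is why flat files are the usual choice for huge loads.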

  • Best practice to move data to new data warehouse partitions

    Our DW is about 500 GB and expected to double in the next three years.
    The largest table has 290m rows, but some fact tables have as few as 1k rows.
    We are also migrating to SQL Server 2012 (from 2008 R2) by building new servers, and we will split SSRS and the DBMS.
    I was thinking I would only partition the larger fact tables, but my dilemma is moving the data to the new servers.
    I am trying to avoid having to load each table manually, so would moving the tables to be partitioned to a different database on the current server be a viable option? And what about all the current subscriptions and SSRS reports?
    Then at some point (we keep only 5 years of data) we will need to start archiving, so I wanted this physical design to assist the archiving process. It seems crazy to partition every fact table, or is that the better strategy?

    Hi,
    I will move this thread to the SQL Server Data Warehousing forum for further discussion.
    Here is a link to an article on partitioning in Microsoft SQL Server:
    Strategies for Partitioning Relational Data Warehouses in Microsoft SQL Server
    http://technet.microsoft.com/en-us/library/cc966457.aspx
    Thanks.
    Tracy Cai
    TechNet Community Support

  • Best way of handling large amounts of data movement

    Hi
    I'd like to know the best way to handle data in the following scenario:
    1. We have to create medical and Rx claims tables with 36 months of data, about 150 million records each (months 1, 2, 3, ..., 34, 35, 36).
    2. In month two we have to add that month's DELTA to the 36-month baseline. But the application requirement is ONLY 36 months, even though the current size is then 37 months.
    3. Similarly, in the 3rd month we will have 38 months, and in the 4th month 39 months.
    4. At the end of the 4th month, how can I delete the first three months of data from the claims tables without affecting performance? This is a 24x7 online system.
    5. Is there a way to create partitions of 3 months each that can be deleted (e.g., drop partition number 1)? If this is possible, what kind of maintenance activity needs to be done after dropping a partition?
    6. Is there a better way of doing the above? What other options do I have?
    7. My goal is to eliminate the initial months' data from the system, as the requirement is ONLY 36 months of data.
    Thanks in advance for your suggestion
    sekhar

    Hi,
    You should use table partitioning to keep your data in monthly partitions. Search on table partitioning for detailed examples.
    Regards
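
    As a minimal sketch of the sliding-window idea (assuming Oracle-style range partitioning; the table and partition names are hypothetical), dropping the oldest monthly partition removes a month of claims in one DDL statement, and the UPDATE GLOBAL INDEXES clause keeps global indexes usable so the 24x7 application is not left with unusable indexes:

        import java.sql.*;

        public class PartitionRoller {
            // Drops one monthly partition of a hypothetical CLAIMS table; global indexes
            // are maintained in place instead of being marked UNUSABLE.
            public static void dropOldestMonth(Connection con, String partitionName) throws SQLException {
                String ddl = "ALTER TABLE claims DROP PARTITION " + partitionName
                           + " UPDATE GLOBAL INDEXES";
                try (Statement st = con.createStatement()) {
                    st.execute(ddl);
                }
                // Typical follow-up maintenance: refresh optimizer statistics on the table.
            }
        }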

  • How to handle data in a request object

    I want to know the best way to handle data received from a SQL query when displaying it on a JSP page with page navigation. If I store the data in a request object, the data is lost after the first page is displayed. When the user hits the Next link to view more data, I have to run another query against the database, which I want to avoid. I display only 20 rows at a time, so if my query returns 100 rows I display 5 pages. I need to retain the data when the user hits Next to view the other pages. What is the best way to handle this?
    - Raj

    By caching your ResultSet. There are caching custom tags that perform this function; check out http://www.servletsuite.com/jsp.htm.
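
    Alternatively, a minimal hand-rolled sketch (the attribute name and row type are hypothetical): run the query once, keep the rows in the HttpSession, and slice 20 rows per page so the Next link never goes back to the database.

        import java.util.List;
        import javax.servlet.http.HttpServletRequest;

        public class PageHelper {
            private static final int PAGE_SIZE = 20;

            // Assumes the servlet stored the query results once via
            // session.setAttribute("cachedRows", rows) after running the query.
            @SuppressWarnings("unchecked")
            public static List<String[]> getPage(HttpServletRequest request, int page) {
                List<String[]> all = (List<String[]>) request.getSession().getAttribute("cachedRows");
                int from = Math.min(page * PAGE_SIZE, all.size());
                int to = Math.min(from + PAGE_SIZE, all.size());
                return all.subList(from, to); // rows for the requested page
            }
        }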

  • Best practices with ACS replication & external databases

    I am looking for a best practice for the following scenario:
    2 ACS servers in 2 separate locations, each providing mutual backup for the other; i.e., all devices/users in Site X point to the local ACS Server X first and the remote ACS Server Y second. In Site Y, the devices/users point to the local ACS Server Y first and the remote ACS Server X second. This works fine; currently Server X replicates the database to Server Y.
    In the future we will be implementing a remote LDAP database and will forward unknown users to it for authentication. As I understand it, if an unknown user exists in the LDAP database, the ACS server will create a local account (depending on the mapping policy, etc.) and point the password at the remote LDAP server. If we replicate from Server X to Server Y, but Server Y has created an account for an unknown user, will this account get deleted on replication? Is there a best practice for handling this scenario?
    Andy

    I could not find a best practices document as such, but a lot of ground is covered in the document 'CiscoSecure Database Replication' at http://www.cisco.com/univercd/cc/td/doc/product/access/acs_soft/csacs4nt/acs33/user/sad.htm#wp755988.

  • Best practice in SAP BW master data management and transport

    Hi SAP BW gurus,
    I'd like to know the best practice for SAP BW master data transport. For example, if I update my attributes in development, which 'required only' BW objects should I transport?
    Appreciate your advice.
    Thank you,
    Eric

    Hi Vishnu,
    Thanks for the reply, but that answer would be more suitable if I were implementing a new BW system. What I'm looking for is more about daily operational maintenance and transport (for a BW system that has been live for a while).
    Regards,
    Eric

  • Best object to handle ResultSet data

    Hi,
    I would just like some advice and, if possible, an example. What kind of object do you think is best for handling data from a ResultSet, with rows and columns? An ArrayList perhaps?
    All help appreciated.
    - Karl XII

    CachedRowSet:
    http://www.javaworld.com/javaworld/jw-02-2001/jw-0202-cachedrow.html
    See if your JDBC driver supports it. - MOD
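
    A minimal usage sketch with the standard javax.sql.rowset API (the RowSetProvider factory ships with current JDKs): populate the CachedRowSet from a live ResultSet, and it stays usable after the connection is closed.

        import java.sql.*;
        import javax.sql.rowset.*;

        public class CachedRowSetDemo {
            public static CachedRowSet fetch(Connection con, String sql) throws SQLException {
                CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    crs.populate(rs); // copy all rows into the in-memory rowset
                }
                return crs; // disconnected: rows and columns survive the connection
            }
        }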

  • Data Modeling Best Practice

    Hi Friends!
    When designing a system, what are the best practices for data modeling? Please share a few tips. Thanks.
    With Regards
    Rekha

    Hi,
    The link below can be useful:
    BI Data Modeling and Frontend Design
    Also, you can get the best practice (config guide) from service.sap.com.
    Best practices:
    http://help.sap.com/bestpractices
    http://help.sap.com/bp_bblibrary/600/html/
    Regards,
    Satya

  • SDO_PC, multiple SRIDs - best practice for data model?

    Hi,
    I'm using UTM and am getting data covering two zones.
    All my existing data is from zone A.
    Tables:
    pointcloud
    pointcloud_blk
    Now I'm getting data with very few points from zone A and most points from zone B. It was agreed that the data delivery will be in the SRID for zone B.
    So I tested whether this would work. I had two point clouds, one with SRID A and another with SRID B. As soon as I loaded the SRID B point cloud, I could NO LONGER QUERY the point cloud with SRID A.
    So it seems to be necessary to use at least another pointcloud_blk table, e.g. pointcloud_blk_[srid].
    Question: does another pointcloud_blk per SRID suffice, or do I also need a pointcloud table per SRID? The pointcloud table seems interesting only for its EXTENT column, but on the other hand that could be queried by function, since there are only 10 or so records (point clouds) in it.
    Please share your best practices: what works and what doesn't?

    It is necessary to have one pointcloud_blk table per SRID, since there is a spatial index on that table.
    As for the pointcloud table itself, it is up to you: you can have point clouds with different SRIDs in that table.
    But if you want to create a spatial index on it, you have to use a function-based index so that the index sees one SRID for the table.
    Since this table usually does not have many rows, it should work fine with one table for different SRIDs.
    siva
