Sliding window for historical data purge in multiple related tables

All,
It is a well-known problem: how to efficiently back up and purge historical data based on a sliding window.
I have a group of tables that all have to be backed up and purged based on a sliding time window. These tables have FKs referencing each other, and these FKs are not necessarily the timestamp column. I am considering partitioning all of these tables on the timestamp column, so I can export the out-of-date partitions and then drop them. The price I pay for this design is that the timestamp column is duplicated many times across the parent, child, and grand-child tables even though the value is the same, because I have to partition every table on this column.
It is very much like the statspack tables: one stats$snapshot table and many child tables storing the actual statistic data. I am wondering how statspack.purge does this, since using DELETE statements is inefficient and time consuming. In the statspack tables, snap_time is stored only in stats$snapshot, not in every child table, and the tables are not partitioned, so I guess the procedure uses DELETE statements.
Any thoughts on other good design options? Or how would you optimize the backup and purge of historical statspack data? Thanks!
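To make the discussion concrete, here is a minimal sketch of the partition-per-period approach described above, using hypothetical ORDERS / ORDER_ITEMS tables (the names, intervals, and partition bounds are illustrative only; this is not how statspack does it):

-- Parent and child are both range-partitioned on the same (duplicated) timestamp,
-- so a whole period can be aged out with partition DDL instead of row-by-row DELETEs.
CREATE TABLE orders (
  order_id  NUMBER  PRIMARY KEY,
  order_ts  DATE    NOT NULL
)
PARTITION BY RANGE (order_ts) (
  PARTITION p_2024_01 VALUES LESS THAN (DATE '2024-02-01'),
  PARTITION p_2024_02 VALUES LESS THAN (DATE '2024-03-01')
);

CREATE TABLE order_items (
  item_id   NUMBER  PRIMARY KEY,
  order_id  NUMBER  NOT NULL REFERENCES orders (order_id),
  order_ts  DATE    NOT NULL   -- duplicated from the parent purely to allow partitioning
)
PARTITION BY RANGE (order_ts) (
  PARTITION p_2024_01 VALUES LESS THAN (DATE '2024-02-01'),
  PARTITION p_2024_02 VALUES LESS THAN (DATE '2024-03-01')
);

-- Sliding the window: back up the oldest partition (e.g. export it, or EXCHANGE it
-- into a standalone table that is then exported), then drop the child partitions
-- before the parent's. With an enabled FK the parent-side drop is usually rejected,
-- so the constraint typically has to be disabled around the purge; on 11g and later,
-- reference partitioning (PARTITION BY REFERENCE) avoids duplicating the timestamp
-- column in the child tables altogether.
ALTER TABLE order_items DROP PARTITION p_2024_01 UPDATE GLOBAL INDEXES;
ALTER TABLE orders      DROP PARTITION p_2024_01 UPDATE GLOBAL INDEXES;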

Hey Oracle gurus, any thoughts?

Similar Messages

  • How to Flip the Sign using Calculation script for historical data

    I am currently using Essbase 9.3.1. I am required to flip the sign for a specific set of accounts, and for that I am currently using UDAs.
    But now I need to flip the sign for the historical data too. Is there a possible way to flip the sign for historical data using calculation scripts? Kindly let me know your suggestions.
    Many thanks in advance...

    Of course there is. This is the kind of calc script that only gets run once, so make sure you test it well before running it on production, and make a backup of production before doing it.
    It would be something like
    Fix(time frame, accounts to be flipped, level zero of other dimensions)
      Actual = Actual * -1;
    EndFix
    Calc Dim(other dimensions);
    Note: I chose Actual, but you could do it against any dimension that has a single member or only a couple of members. Whatever dimension you choose to do the calculation on, it can't be included in the FIX statement.

  • Loading data from flat file to relational table, I am getting a SQLLDR error

    Hi,
    While loading data from a flat file to a relational table, I am getting a SQLLDR error and am unable to proceed further.
    The source is a flat file and the target is an Oracle database. I used "LKM File to Oracle (SQLLDR)" and "IKM SQL Control Append"
    and ran the interface. When I checked the session in the Operator window, after generating the CTL file successfully
    the session failed at the "Call sqlldr" step and could not proceed further.
    Environment details:
    ODI 11g
    Database: Oracle 11g
    Operating system: Windows Server 2008
    The error message displayed in the "Call sqlldr" session task was:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 31, in ?
    File "C:\oracle\product\11.1.1\Oracle_ODI_1\oracledi\client\odi\bin\..\..\jdev\extensions\oracle.odi.navigator\scripting\Lib\javaos.py", line 198, in system
    File "C:\oracle\product\11.1.1\Oracle_ODI_1\oracledi\client\odi\bin\..\..\jdev\extensions\oracle.odi.navigator\scripting\Lib\javaos.py", line 224, in execute
    OSError: (0, 'Failed to execute command ([\'sh\', \'-c\', \'sqlldr DEVELOPER/pass_123@CPRDEV control="F:\\\\flatfile/CROSS_CURR.ctl" log="F:\\\\flatfile/CROSS_CURR.log" > "F:\\\\flatfile/CROSS_CURR.out" \']): java.io.IOException: Cannot run program "sh": CreateProcess error=2, The system cannot find the file specified')
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.execInBSFEngine(SnpScriptingInterpretor.java:345)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.exec(SnpScriptingInterpretor.java:169)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java:2374)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java:1615)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java:1580)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java:2755)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2515)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:534)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:449)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1954)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:322)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:224)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:246)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:237)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:794)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:114)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
         at java.lang.Thread.run(Thread.java:619)
    Could you give me a solution to sort out this error ASAP?
    thanks,
    keshav.

    This is the code that was generated:
    import java.lang.String
    import java.lang.Runtime as Runtime
    from jarray import array
    import java.io.File
    import os
    import re
    import javaos
    def reportnbrows():          
         f = open(r"F:\flatfile/TEST.log", 'r')
         try:
              for line in f.readlines():
                   if line.find("MAXIMUM ERROR COUNT EXCEEDED")>=0 :
                        raise line
         finally:
              f.close()
    ctlfile = r"""F:\flatfile/TEST.ctl"""
    logfile = r"""F:\flatfile/TEST.log"""
    outfile = r"""F:\flatfile/TEST.out"""
    oracle_sid=''
    if len('CPRDEV')>0: oracle_sid = '@'+'CPRDEV'
    loadcmd = r"""sqlldr DEVELOPER/<@=snpRef.getInfo("DEST_PASS") @>%s control="%s" log="%s" > "%s" """ % (oracle_sid,ctlfile, logfile, outfile)
    rc = os.system(loadcmd)
    if rc <> 0 and rc <> 2:
        raise "Load Error", "See %s for details" % logfile
    if rc==2:
        reportnbrows()

  • Best design for historical data

    I'm searching for a way to design some historical data.
    Here is my case:
    I have millions of rows defining a population, spread over 20-30 different tables.
    Those rows are related to one master subject area, let's say it is a person.
    The detail tables define a status over a period of time (marital status, address, gender (yes, that may change in some cases!), program, phase, status, support, healthcare, etc.). They all have two attributes that define the time period (dateFrom, dateTo) and one or many attributes that define the status of the person (the measure).
    I know we need a weekly situation for this population from 1998 until now.
    The problems are these:
    1) the population we will analyze will involve some 20 different criteria (measures); maybe we will divide those into different datamarts to avoid too much complexity
    2) we will drill down from year to week, and will perform comparisons like year to year, year to previous year, year to date, etc. at each level of the time hierarchy (year, quarter, month, week).
    The questions are:
    1) do we need to transform our data to a week level for each person to determine the status at that level (the data may be updated, which would cause this transformation to be refreshed)?
    2) do we need to aggregate at each level, because mixed situations exist due to the fact that more than one status may exist for the same person in a given time period (more than one status may exist in March, for example; how do we interpret it?), which will require some logic to be applied, won't it?
    We would be glad to hear some recommendations/ideas about this!
    Thank You

    I would try to get some exact user requirements; with the dataset you have described there are millions of combinations of answers that can be defined, and it will be difficult for you to fulfill them all in one place, especially if there are slowly changing dimensions involved.
    The user requirements will determine what levels you transform the data at.
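    As a hedged sketch of the week-level transformation raised in question 1 above, assume a hypothetical MARITAL_STATUS_HIST detail table with dateFrom/dateTo periods and a WEEK_CALENDAR table holding one row per week-ending date (both names are illustrative):
    -- Expand period-based status rows into one row per person per week,
    -- taking the status in effect on the week-ending date.
    SELECT w.week_end_date,
           s.person_id,
           s.marital_status
    FROM   week_calendar       w
    JOIN   marital_status_hist s
           ON  s.date_from <= w.week_end_date
           AND (s.date_to IS NULL OR s.date_to >= w.week_end_date)
    WHERE  w.week_end_date BETWEEN DATE '1998-01-01' AND SYSDATE;
    If more than one status row qualifies for the same week (the "mixed situation" in question 2), a deterministic rule, such as taking the latest dateFrom per person and week, still has to be applied on top of this.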

  • Storing XML data in CLOB and relational tables

    I would like to ask whether there is a possibility to store XML data using normal relational tables and CLOBs at the same time. For example, I have some XML data (structured data) which I would like to update very often, and some which is only a kind of description. I found something about it in http://technet.oracle.com/tech/xml/infoocs/otnwp/about_oracle_xml_products.htm . But I do not know how to use Oracle8i views and the functionality of the XML SQL Utility to retrieve the XML data as one file.

    Hi Maciek,
    There are some good examples with the XSQL Servlet. From what I understand, you have one XML file and you need to save a portion of the document in relational tables and another portion in a CLOB.
    Yes, you can do that.
    You can do it many ways. I can suggest two:
    1. Use views.
    2. Call your Java procedure that will do the XML processing, break it down, and insert the relevant fragments into different tables/columns.
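    A minimal sketch of the hybrid storage idea, assuming a hypothetical PRODUCT_DOC table whose frequently updated data sits in relational columns while the descriptive XML fragment sits in a CLOB (all names are illustrative):
    -- Frequently updated, structured data lives in ordinary relational columns;
    -- the rarely changing descriptive portion is kept as an XML fragment in a CLOB.
    CREATE TABLE product_doc (
      product_id   NUMBER PRIMARY KEY,
      name         VARCHAR2(100),
      price        NUMBER(10,2),
      description  CLOB         -- e.g. '<description>...free text...</description>'
    );
    -- The structured part can be updated cheaply without touching the CLOB fragment.
    UPDATE product_doc
    SET    price = 19.99
    WHERE  product_id = 1;
    -- A view over both parts gives the XML SQL Utility / XSQL Servlet a single
    -- query target from which to serialize one XML document.
    CREATE OR REPLACE VIEW product_doc_v AS
      SELECT product_id, name, price, description
      FROM   product_doc;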

  • Create XML (using DTD) from multiple relation tables

    Hello and thank you in advance.
    I'm trying to create an XML document, based on a specific DTD, by selecting information from multiple tables in the database.
    Is there a tool (in XDK maybe?) that will allow me to map my relational tables to a specific DTD?
    I could build the XML manually, but I was hoping that Oracle has already solved this problem with an automated tool.
    Thanks again,
    Sean Cloutier

    Is that the same thing as me writing an XSQL document which contains all of my queries (or views)?
    In other words, do I have to map everything by hand, or is there a tool to do that for me?
    thanks again,
    Sean

  • Help with universe with multiple related tables/loops

    Post Author: bradwist
    CA Forum: Semantic Layer and Data Connectivity
    Hi. I am relatively new to Business Objects and have been putting together some universes for our use. Most things I can accomplish well enough; however, I've run into an issue that I think fits within this topic. I've got several related tables and I'm looking for the best way to construct the universe to support our queries. The best way to describe them is to say that the database has a table of Person records and a table of Relationships. One Person may be related to another Person in a number of ways (sibling, spouse, parent, etc.). This can be pictured as:
    Person.ID ---< Relationship.PersonID
    Person.ID ---< Relationship.RelatedPersonID
    or Person1 -< Relationship >- Person2, where Person1 and Person2 really both point to the same physical table. I believe these tend to lend themselves to aliases. However, each Person table really has a number of subordinate records in other tables (addresses, phone numbers, etc.):
    Address1 >--- Person1 -< Relationship >- Person2 ---< Address2
    While this can be done (and I've done so), it's difficult to explain to some analysts why this construct is needed in the universe in order to answer a question like: give me all of Bob's siblings, their ages, and their addresses. This is a simplification, as our structures actually include other types of entities (Person, Business, Government, Organization), all of which can be related to one another. Is there a way that BO universes will support this structure? Thanks.
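    For reference, a hedged sketch of the SQL that the alias design implies for a question like "give me all of Bob's siblings and their addresses", assuming hypothetical PERSON, RELATIONSHIP, and ADDRESS tables (in the universe, PERSON and ADDRESS would each appear twice, as aliases):
    SELECT p2.name       AS sibling_name,
           p2.birth_date AS sibling_birth_date,      -- age can be derived from this
           a2.street,
           a2.city
    FROM   person        p1                          -- alias 1: the person we start from
    JOIN   relationship  r  ON r.person_id = p1.id
    JOIN   person        p2 ON p2.id = r.related_person_id   -- alias 2: the related person
    LEFT   JOIN address  a2 ON a2.person_id = p2.id          -- subordinate table of alias 2
    WHERE  p1.name = 'Bob'
    AND    r.relationship_type = 'SIBLING';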

    1. exp system/manager file=exp.dmp log=exp.log full=y at the source database.
    2. imp system/manager file=exp.dmp log=imp_show.log full=y show=y - creates the log file without importing the data.
    3. Edit imp_show.log and extract the statements that are needed to re-create the users, roles, alter user statements and grants.
    4. Pre-create the tablespaces using SQL*Plus in the target database.
    5. Execute the modified imp_show.log (all DDLs).
    6. imp system/manager file=exp.dmp log=imp.log fromuser=A touser=B
    7. Re-compile all the invalid objects.
    HTH
    -Anantha

  • Logic for sending data files to multiple instances of Central

    We are using Central Pro Output Server 5.6 with a single Central instance, as per the default installation, on Windows Server 2003. Data for the transaction files comes from our iSeries system via a printer queue (\\.\pipe\jetform\queuename).
    Now we want Central to produce more documents much faster and are therefore setting up multiple instances of Central. The problem is then where to put the logic for choosing an instance.
    The simplest way to do this would be to have the iSeries alternate the data files between different pipes (printer queues) for Central. But as we don't want to change our iSeries configuration for this, is there a way to solve this problem in Central?
    Any help with this is much appreciated.

    Central provides no mechanism that I'm aware of that would do anything resembling the job distribution you want. Central is written to monitor its input folders and process the files it finds there. Each instance is essentially separate from the others.
    Your source system will need to select the appropriate instance to be used. Either that, or you will need to have something between the source system and Central that does the distribution. For example, you could have the source system write to a folder that is not being monitored by Central and write a program that runs as a service that does monitor that folder. This program would then distribute the files. The likely drawback of having an intermediary program is that you are unlikely to end up getting the documents printed any faster than with a single instance.
    Another possible way, if the source system can create files with different file name extensions, would be to have them all written to the same folder and have each instance checking that particular folder but looking for files with different extensions. This might be problematic, though, because it might also end up with each instance watching the same "control" folder so that doing things like pausing Central would end up with no control of which instance was going to be paused.
    The default setting for Central has it pausing for 5 seconds if it completes a job and there are no more files waiting for it. If your jobs are not coming in faster than Central is processing them then you would be getting some of this delay for your jobs. You could reduce this time or even set Central to process a job as soon as it shows up. I don't know if version 5.6 still has the problem, but an earlier version would not "see" a file if it happened to show up exactly when it was looking for more (it was probably showing up milliseconds after Central looked). This caused that job to just sit there until the next job showed up.
    A major factor in getting the documents produced faster is going to be the speed and number of printers that they are going to - plus the number of pages in each document. For us, the single instance of Central that we are currently using can produce print much faster than our HP9050 printers (50 ppm) can actually print it.
    You must be doing a lot of forms or each job is doing a lot of processing. Our typical print job does 4 tasks (depending on the job this can include things like: passing the file through the transformation agent, updating a mainframe database, FTPing the file to another server for archiving, and producing the print). A typical job with 11 output pages takes only 2-3 seconds. We have a task that runs every morning that retrieves mainframe generated jobs. I just checked one of our servers and it processed 208 jobs in exactly 8 minutes (26 jobs per minute at an average of 2.3 seconds per job). We also thought we'd need multiple instances due to speed but that just isn't the case for us. We have other reasons to move to multiple instances but speed is not a major factor any more.

  • CCMS configuration for historical data of a new application server

    Our CI & app servers were configured in CCMS before I joined the team. Some months back a new application server was added, but it seems that CCMS is not capturing historical data for it.
    In ST06N, I get all app server names including the new one. When I click on the new app server and click on Snapshot, detailed information for CPU, memory, etc. is displayed, but when I click on resources under "Previous hours" or "History" to get the CPU, memory, etc. details, an error pop-up is displayed, i.e. "No Performance MTEs availbale for category CPU" or "No Performance MTEs availbale for category MEM" [availbale <- SAP typo].
    How can I make sure that we get both previous-hours and historical CPU and memory usage information?

    Hi, the data you are trying to see is collected by saposcol.
    1) Log on to the specific application server and verify that everything is working correctly with saposcol:
    ST06->Operating System Collector->Status
    ST06->Operating System Collector->Available days
    ST06->Operating System Collector->Log file.
    2) Installing an additional App server should have configured the statistical data setup automatically:
    ST03N - check if the app server is listed.
    ST03N->Collector->Performance Monitor collector->Log
    3) Check in transaction RZ03 that there are no conflicts in the Operation Mode.
    4) Run the program RSCOLL00 manually on that app server.
    This should give you a starting point .....

  • Database design question about historical data in a group of tables

    Hi Folks,
    I have a group of tables with relationships among them. In order to keep the change history, we cannot update the data; instead, we add new data to the table(s) and mark the older data with some non-current status. These tables all have timestamps.
    For example, if table Parent changes, we add a new record Parent(new) and keep the older record(s). But the table Child has not changed, so how can we link the parent and child table(s)?
    One solution is to use a unique sequential number to identify a snapshot of this whole group of tables, so the FK contains this sequential number to keep all tables in sync from point in time t1 to t2 and so on.
    But the problem is that if only one table changes, we have to insert new records into ALL tables with the new sequential number to indicate a new snapshot of the whole group; this obviously creates a lot of redundancy when the change occurs in only one place.
    However, if we only add new records to a changed table, let's say Parent, how can we distinguish the current records in the Parent table and its child tables so as to reflect a consistent snapshot of all tables? Because the records Parent(t2) and Parent(t1) both associate to Child(t1), since at time t2 the child table has not changed, only the Parent table.
    Your opinions are appreciated!
    Thanks a lot.

    There are books on the subject of dealing with time series data. You may need to read one or two, as this is a very complex topic. Though not all applications of time series data are complex, it is difficult to tell based on what information can go in a post.
    What has to be reflected on the parent when a child row is changed? Do both the old child row and the new child row belong to the same parent?
    What activity at the parent level would affect the child rows? That is, is there any activity on the parent that requires new child rows to be populated?
    One way of tying the child rows to a specific set of parents would be to carry the parent key and timestamp to the child as a begin_parent_timestamp, and then also potentially have an end timestamp if a change to a parent ends the relationship. If changes to the parent never end the relationship then the end timestamp would not be necessary. In this case, if you want to join a parent to only the most recent version of a child, you can perform a select parent key, child key, max(parent_timestamp) from child group by parent key, child key, as sketched below. One child row would match several parents.
    Without more specifics it is hard to make suggestions that might prove usable but your table relationships might be too complex to deal with in this kind of forum.
    There is a newsgroup on database theory that may be a good place to seek ideas on this type of problem.
    HTH -- Mark D Powell --
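    A minimal sketch of the parent-timestamp-on-child approach Mark describes, assuming hypothetical PARENT_HIST and CHILD_HIST tables (names, columns, and types are illustrative):
    -- Each new version of a parent gets a new row keyed by (parent_id, parent_ts).
    CREATE TABLE parent_hist (
      parent_id  NUMBER    NOT NULL,
      parent_ts  TIMESTAMP NOT NULL,
      status     VARCHAR2(20),
      CONSTRAINT parent_hist_pk PRIMARY KEY (parent_id, parent_ts)
    );
    -- Child rows carry the key and timestamp of the parent version they were created under.
    CREATE TABLE child_hist (
      child_id   NUMBER    NOT NULL,
      parent_id  NUMBER    NOT NULL,
      parent_ts  TIMESTAMP NOT NULL,   -- the begin_parent_timestamp from the reply
      detail     VARCHAR2(100),
      CONSTRAINT child_hist_pk PRIMARY KEY (child_id, parent_id, parent_ts),
      CONSTRAINT child_hist_fk FOREIGN KEY (parent_id, parent_ts)
                 REFERENCES parent_hist (parent_id, parent_ts)
    );
    -- Tie each (parent, child) pair to the most recent parent version it references,
    -- per the "group by ... max(parent_timestamp)" idea in the reply.
    SELECT c.child_id, c.parent_id, c.parent_ts, c.detail
    FROM   child_hist c
    JOIN  (SELECT parent_id, child_id, MAX(parent_ts) AS max_parent_ts
           FROM   child_hist
           GROUP  BY parent_id, child_id) latest
           ON  latest.child_id      = c.child_id
           AND latest.parent_id     = c.parent_id
           AND latest.max_parent_ts = c.parent_ts;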

  • Select query for picking data in a dynamic internal table

    Dear All,
    Please help.
    The code is:
    p_table1 = itab_final-tabname1.
            p_field1 = itab_final-fieldname1.         
            SELECT (p_field1) FROM (p_table1) INTO CORRESPONDING FIELDS OF TABLE <dyntable1>.      
    It works fine when the domain is CHAR.
    The piece of code does not work where the domain is DATS, CURR, DEC, etc.
    What shall I do so that it works for the other domains as well? Please, it's urgent...
    The ERROR that came:
    An exception occurred. This exception will be dealt with in more detail below. The exception, assigned to the class 'CX_SY_OPEN_SQL_DB', was not caught, which led to a runtime error. The reason for this exception is:
    The data read during a SELECT access could not be inserted into the target field. Either conversion is not supported for the target field's type, or the target field is too short to accept the value, or the data are not in a form that the target field can accept.

    Check the code below:
    REPORT zpwtest .
    *** Tables
    DATA: lt_data TYPE REF TO data.
    DATA: lt_fieldcatalog TYPE lvc_t_fcat.
    data : p_field type string ,
           p_table type string .
    *** Structure
    DATA: ls_fieldcatalog TYPE lvc_s_fcat.
    *** Data References
    DATA: new_line TYPE REF TO data.
    *** Field Symbols
    FIELD-SYMBOLS: <fs_data> TYPE REF TO data,
                   <fs_1> TYPE ANY TABLE,
                   <fs_2>,
                   <fs_3>.
    ls_fieldcatalog-fieldname = 'MANDT'.
    ls_fieldcatalog-inttype = 'C'.
    APPEND ls_fieldcatalog TO lt_fieldcatalog.
    ls_fieldcatalog-fieldname = 'CARRID'. "Fieldname
    ls_fieldcatalog-inttype = 'C'. "Internal Type C-> Character
    APPEND ls_fieldcatalog TO lt_fieldcatalog.
    ls_fieldcatalog-fieldname = 'CONNID'.
    ls_fieldcatalog-inttype = 'N'.
    APPEND ls_fieldcatalog TO lt_fieldcatalog.
    ls_fieldcatalog-fieldname = 'FLDATE'.
    ls_fieldcatalog-inttype = 'D'.
    APPEND ls_fieldcatalog TO lt_fieldcatalog.
    ls_fieldcatalog-fieldname = 'PRICE'.
    ls_fieldcatalog-inttype = 'P'.
    APPEND ls_fieldcatalog TO lt_fieldcatalog.
    ls_fieldcatalog-fieldname = 'CURRENCY'.
    ls_fieldcatalog-inttype = 'C'.
    APPEND ls_fieldcatalog TO lt_fieldcatalog.
    ASSIGN lt_data TO <fs_data>.
    CALL METHOD cl_alv_table_create=>create_dynamic_table
         EXPORTING
           it_fieldcatalog = lt_fieldcatalog
         IMPORTING
           ep_table = <fs_data>
         EXCEPTIONS
           generate_subpool_dir_full = 1
            OTHERS = 2.
    IF sy-subrc <> 0.
    ENDIF.
    *** So <FS_1> now points to our dynamic internal table.
    ASSIGN <fs_data>->* TO <fs_1>.
    *** Next step is to create a work area for our dynamic internal table.
    CREATE DATA new_line LIKE LINE OF <fs_1>.
    *** A field-symbol to access that work area
    ASSIGN new_line->*  TO <fs_2>.
    p_field = 'mandt carrid connid fldate price currency' .
    p_table = 'sflight' .
    *** And to put the data in the internal table
    SELECT (p_field)
      FROM (p_table)
      INTO CORRESPONDING FIELDS OF TABLE <fs_1>.
    *** Access contents of internal table
    LOOP AT <fs_1> ASSIGNING <fs_2>.
      ASSIGN COMPONENT 5 OF STRUCTURE <fs_2> TO <fs_3>.
      WRITE: / <fs_3>.
    ENDLOOP.

  • Mode for check data from R/3 (with tables name)

    I have the technical names of some tables and need to check whether the data loaded into them is being forwarded to BW.
    Can anyone tell me the procedure to find out whether data from a standard R/3 table is being loaded into BW?
    Regards.

    In this instance - it's via datasource 0MATERIAL_TEXT
    To find this out - I checked the description of the table - then based on experience knew it to be that datasource
    Difficult to give you hard and fast rules for EVERY table!
    This one was simple - it was a view on MAKT - others could be inside thousands of lines of function module reads, inside changepointer BADIs - anywhere!

  • Creating a Price in 1 table, determined by data contained in multiple other tables.

    Hi
    I'm trying to construct a quote tool for my resellers and partners. My software product has a different selling price determined by the licence type, licence size and some vertical market positioning.
    In my example, if my partner ticks the boxes for "Annual Subscription" and "1 Year", I'd like "Licence Cost" in the table below to present the 1 Year Licence Price from the table on the top right.
    And if they choose "Lifetime Licence" then it should lookup the licence cost under the LifeTime table on the bottom right.
    I've tried using IF and LookUp but got tied up in all sorts of mess when trying to nest multiple functions referencing multiple tables, any help you can offer is gratefully received.
    Thank you.

    Hi Jerry
    Thanks for your explanation.
    So the logic / workflow is as follows:
    Partner should tick the boxes that pertain to the licence the customer wants to buy (Box 1 below).
    Box 1 should look up the price for the required licence in Box 2, in this case a 2 year subscription (I've removed lifetime licences for now). So the selection from Box 1 should find the following in Box 2 below:
    The combination of criteria selected from Box 1, and the prices in Box 2 should then return the price to quote to the customer in box 3, as below:
    I've simplified the spreadsheet in this example to hopefully help you understand my logic. The reality is we have hundreds of prices as we have multiple licence variants, as well as sector and vertical pricing. Therefore if I can figure out how to get this initial step working, I should be able to take it from there.
    I don't really understand your comment about redundancy. I initially started with a pop-up but couldn't figure out how to make the other cells respond, so I broke the pop-up options into tick boxes. If you have a simplified method of achieving just this step above, I'm completely happy to change my (potentially flawed) methodology.
    Thanks once again!
    Drew

  • Data loading for Logistics Data Extraction 2LIS_06_INV through setup tables

    Hi Guyz,
    I have LO Datasource 2LIS_06_INV which I am using for extracting Invoice Verification Data.
    I had initialized and loaded invoice data into the BI system and was using chains to load daily.
    Now I want to load the data again from scratch from the source system.
    As far as I know, we need to delete and refill the setup tables in the source system while keeping the users locked, so that no new records are created at that moment.
    But what about the delta queue and extraction queues?
    Do I need to do something there? As I understand it there will be records in both of them, so before filling the setup tables I need to clear both queues so that duplicate records are avoided.
    Please let me know exactly what I should do, in detail.
    Appreciated in advance.
    Regards,
    raps.

    hi,
    this entirely depends upon the DS. If the DS delta type supports overwrite mode then you need not lock the users; you can start the setup job without deleting the delta queues as well, because even if a record is extracted twice the values won't mess up, as the flow will be in overwrite mode. If the delta type of the DS is additive in table ROOSOURCE then you need to lock the users or run the job over a weekend when no users are logged in.
    Regards,
    Arvind,

  • Tools for Web data entry to Essbase & Relational

    I am looking for tools that will allow writing data back into Essbase as well as into a relational database.

    The offerings from Hyperion, Hyperion Analyzer and Hyperion Reports, both offer reporting capabilities for relational sources and can write back to Essbase; however, they can't write back to relational. My best guess would be to look at a third-party tool. A first start may be Alphablox (www.alphablox.com). This offers relational and OLAP functionality, although I am unaware of its relational write-back - beware, this product is also not 'out of the box'. A second product to check out might be Arcplan (www.arcplan.com). Again, I do not know about its ability to write back to relational, but it is worth checking out. Hope this helps. Paul Armitage, Analitica Ltd., www.analitica.co.uk
