Issue with Data Loading

Hi Experts,
Need your kind help in overcoming the following issue.
We are doing a daily full load to an ODS from a flat file system. Due to a change request, the update rule was changed at some point, and we later found out that the change was incorrect. The update rule has since been corrected and is now working fine. However, some requests remain where the data was updated in the ODS with the faulty update rule. All the data fields in the update rule for the ODS use overwrite functionality. The problem is that we cannot be sure whether the faulty records in the ODS were subsequently overwritten by later requests, since we do not know the exact nature of the incoming data.
From this ODS, one InfoCube and one further ODS are being updated. The key figure calculations in the cube use 'maximum', 'minimum' and 'addition'.
The load from the ODS to the cube is a delta load.
Kindly let me know what we can do to close this issue.
I was planning to delete the affected requests from the ODS and reconstruct them, but that way I can only correct the data in the ODS. I am not sure whether the changes made to the ODS data will then be relayed to the cube correctly. Please let me know if this approach is correct.
All the flat file data is still residing in the PSA. Also, kindly let me know if there is any method to check the PSA data for the requests spanning the whole period, apart from opening each request and checking it one by one.
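For reference, one way to check a whole period at once rather than request by request is to read the generated PSA table directly with a small ABAP report. This is only a minimal sketch: the table name /BIC/B0000123000 is a placeholder for the generated PSA table of the DataSource (look up the real name in the PSA tree or via SE11), the report name is made up, and the field names should be verified in your system.

* Minimal sketch: read all PSA records for a range of requests in one pass.
* /BIC/B0000123000 is a placeholder for the generated PSA table.
REPORT zpsa_period_check.

TABLES /bic/b0000123000.

* Requests covering the period in which the faulty update rule was active
SELECT-OPTIONS s_req FOR /bic/b0000123000-request.

DATA: lt_psa TYPE STANDARD TABLE OF /bic/b0000123000,
      ls_psa TYPE /bic/b0000123000.

START-OF-SELECTION.
  SELECT * FROM /bic/b0000123000
    INTO TABLE lt_psa
    WHERE request IN s_req.

  LOOP AT lt_psa INTO ls_psa.
*   Add consistency checks on the data fields here, e.g. compare them
*   against the values the corrected update rule would produce.
  ENDLOOP.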
Thanks in advance.

Hi Experts,
My concern is this: say I delete the requests from the day the data was first loaded with the wrong update rule. The data is still sitting in the PSA, so I can reconstruct the requests, and I understand that this corrects the data in the ODS.
But in the cube, some key figures use 'addition' as the update method. So if I delete the content of the cube and reload it from the ODS all over again, won't the key figures get corrupted yet again?
Also, as I was thinking: say we delete the respective requests from the cube as well and reconstruct the data in the ODS. Can I then fire the delta request again to capture the changed records for the cube? To be exact, will all the data from the date of the corruption onwards (since I am going to delete the requests from the cube from that day as well) be loaded to the cube?
Or would it be a better idea to reconstruct one request in the ODS, run the delta package for the cube, and continue like that, one request at a time?
Kindly suggest.
Thanks in advance !!!

Similar Messages

  • Issue with Data Load Table

    Hi All,
    I am facing an issue with APEX 4.2.4, using the Data Load Table concept. In the lookup I used the Where Clause option, but the where clause does not seem to be working. Please help me with this.

    Hi all,
    It looks like the where clause does not filter out the 'N' data. Please help me solve this.

  • Issue with Data Load to InfoCube with Nav Attributes Turned On

    Hi,
    I am having an issue with loading data to an InfoCube. When I turn on the navigational attributes in the cube, the data load fails and just says "PROCESSED WITH ERRORS". When I turn them off, the data load runs fine. I have run an RSRV test on both the InfoObject and the cube, and it shows no errors. What could be the issue and how do I solve it?
    Thanks
    Rashmi.

    Hi,
    To activate a navigational attribute in the cube, the data need not be dropped from the cube; you can always activate a navigational attribute while there is data in the cube.
    I think you may have tried to activate it in the master data as well and then in the cube, or something like that?
    Follow the correct procedure and try again.
    Thanks
    Ajeet

  • Issue with Data Load

    Hi All,
    I am trying to load data from a CRM system to an ODS and am coming across the warning "Processing (data packet): No data" (yellow status). Everything else is fine: Requests (messages): Everything OK; Extraction (messages): Everything OK; Transfer (IDocs and TRFC): Everything OK.
    I checked RSA3 in the CRM box and I can see around 30 records for this extractor. I also checked the IDocs in BD87 and everything looks fine. Can someone help with this issue?
    Thanks,
    Abraham

    In RSMO I don't see any data records being pulled. I tried loading data to the PSA, but it gives back the same warning messages in the Details tab:
    Requests (messages): Everything OK
    Extraction (messages): Everything OK
    Transfer (IDocs and TRFC): Everything OK
    Processing (data packet): No data (this one is yellow)
    But it says that the data load is successful even though it pulls 0 records.

  • Latest PowerQuery issues with data load to data models built with older version + issue when query is changed

    We have a tool built in Excel + Power Query version 2.18.3874.242 - 32 bit (no PowerPivot), using data load to the data model (not to the workbook). There are data filters linked to Excel cells, inserted into the OData query before data is pulled.
    The Excel tool uses organisational credentials to authenticate.
    System config: Win 8.1, Office 2013 (32 bit)
    The tool runs for all users as long as they do not upgrade to PowerQuery_2.20.3945.242 (32-bit).
    Once upgraded, users can no longer get the data to load to the model. Data still loads to the workbook, but the model breaks down. Resetting the load to the data model erases all measures.
    Here are the exact errors users get:
    1. [DataSource.Error] Cannot parse OData response result. Error: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
    2. The Data Model table could not be refreshed: There isn't enough memory to complete this action. Try using less data or closing other applications. To increase memory available, consider ......

    Hi Nitin,
    Is this still an issue? If so, can you kindly provide the details that Hadeel has asked for?
    Regards,
    Michael Amadi
    Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to vote it as helpful :)
    Website: http://www.nimblelearn.com, Twitter: @nimblelearn

  • Issue with Data Loading between 2 Cubes

    Hi All
    I have a Cube A which holds a huge amount of data, around 7 years' worth. This cube is on BWA. In order to free up space in this cube, we have created a new Cube B.
    We have now started loading data from Cube A to Cube B based on the "created on" date, but we are facing a lot of memory issues; in fact, we are unable to load even one week of data at a time. As of now we are loading one date at a time, which is not practical, as it will take a long time to load 4 years of data.
    Can you propose an alternative way to make this data transfer between the 2 cubes faster? I thought of loading Cube B from the DSO underneath Cube A, but that is not possible, as the DSO does not hold data that old.
    Please share your thoughts.
    Thanks
    Prateek

    Hi SUV / All,
    I have tried running with parallel processes, as there are 4 on my system. There are no routines between the cubes, and there is already a MultiProvider on this cube. I just want to shift 4 years of data from this cube into another.
    1) Data packet size 10,000 - 8 out of some 488 packets failed
    2) Data packet size 20,000 - 4 out of some 244 packets failed
    3) Data packet size 50,000 - waited for some 50 min with no extraction, so I killed the job.
    Error : Dump: Internal session terminated with runtime error DBIF_RSQL_SQL_ERROR (see ST22)
    In ST22:
    Database error text........: "ORA-00060: deadlock detected while waiting for resource"
    Can you help resolve this issue, or give some pointers?
    Thanks
    Prateek

  • Issue with Data Load from R/3

    Hi,
    I have created a generic DataSource in R/3 based on the option "Extraction from SAP Query". The SAP Query (InfoSet), created in transaction SQ02, is based on the tables BKPF and BSEG. I generated a DataSource on this and successfully replicated it into BW, but when I trigger the load using an InfoPackage it fails immediately with the following messages:
    "If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request."
    "Job terminated in source system --> Request set to red"
    Can you point out where the issue is? I have tested the source system connection and it is fine, as other extractors are working and data can be pulled.
    Thanks
    Rashmi.

    Hi Rashmi,
    Try the following:
    - RSA3 in source system -> test extraction OK?
    - Shortdump (ST22) found in BW or R/3?
    - Locate the extraction job in R/3 (SM37) and look at the job log...
    Hope this leads you to the cause...
    Grtx
    Marco

  • Insert OR Update with Data Loader?

    Hello,
    Can I insert or update at the same time with Data Loader?
    How can i do this?
    Thanks.

    The GUI loader wizard does allow for this, including automatically adding values to the PICKLIST fields.
    However, if you mean the command line bulk loader, the answer is no. And to compound the problem, the command line version will actually create duplicates for some of the objects. It appears that the "External Unique Id" is not really "unique" (as in constrained via unique index) for some of the objects. So be very careful when you prototype something with the GUI loader and then reuse the map on the command line version.
    You will find that some objects can work well with the command line loader (some objects will not).
    Works well (just a few examples):
    Account (assuming your NAME and LOCATION fields are unique).
    Financial Product
    Financial Account
    Financial Transaction
    Will definitely create duplicates via command line bulk loader:
    Contact
    Asset
    Also be aware that you might hear during a go-live that Oracle will temporarily remove the 30k record limit on bulk loads. I have not had any luck with Oracle Support making that change (2 clients specifically in the last 12 months).

  • Issue with Data Provider name in variable screen for BEx Analyzer

    Hello all,
    We have an issue with the DataProvider name in the variable screen in BEx Analyzer.
    We want to change the DataProvider name shown there to the report's description instead of its technical name.
    Any inputs are appreciated.
    Thanks
    Kumar

    You have to create a workbook to do this.
    Refresh your query/report. In BEx Analyzer there is a toolbar named "BEx design toolbox"; if you cannot see it, right-click on the toolbar area of BEx Analyzer and select it. There, go to design mode by clicking the symbol that looks like an 'A'. Then place the cursor where you want the query description to appear and click "Insert Text" (T) in the BEx toolbox. Click on it and check "Query description" in the Constant tab. In the General tab you need to assign a DataProvider; for that, assign your query name in the workbook settings (in the BEx design toolbox). Also check "Display caption" in the General tab.
    Pravender

  • Issue with data dictionary - table maintenance generator

    Hi all,
    I have an issue with the data dictionary table maintenance generator. I entered some records in a custom table (ZBCSECROLETOGRP) and changed the delivery class from C to A. When I create the table maintenance generator, I encounter the following errors:
    1)Field ZBCSECROLETOGRP-PORTALGROUP shortened (new visible length: 000032)
    2)0012 could not be generated
    3)In TCTRL_ZBCSECROLETOGRP field LENGTH has the invalid value 01
    My main goal is to create the table maintenance generator and transport it to the downstream systems.
    Please help.
    Thanks in advance,
    Vishal..

    HI,
    Regenerate the table maintenance by selecting the "Modified field structure" checkbox => new entry, and then save.
    Also ensure that the new changes do not affect old data because of the data type changes. If they do, delete the old records, regenerate the table maintenance, and re-enter the records you deleted.
    Thanks,
    Best regards,
    Prashant

  • Issue with Date Format

    Hi All,
    I'm facing an issue with the date format in the prompt. I have used date presentation variables in my column formula, as shown below:
    FILTER("SKU Order Details"."Fulfilled Quantity" USING Time."Calendar Date" <= DATE '@{todate}{2900-01-01}')
    The report returns data when I don't select any date range for the start and end date prompts on the page. But when I select start and end date values in the prompt, I get the following error:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 46047] Datetime value 10/22/2009 12:00:00 AM from 10/22/2009 12:00:00 AM does not match the specified format. (HY000)
    I included the following formulas for start & end date prompts:
    Start Date prompt: case when 1=2 then License."Ips Creation Date" else cast ('1.1.1900' as date) end
    End Date prompt: case when 1=2 then License."Ips Creation Date" else cast ('1.1.2900' as date) end
    Can you please help me resolve the issue.
    Thanks,
    Kartik

    Hi Nico,
    I tried the format that you mentioned, and I'm getting an error message.
    My prompts have the following formula :
    Start: case when 1=2 then License."Creation Date" else cast('1.1.1900' as DATE) end
    End: case when 1=2 then Time."Calendar Date" else cast('1.1.2900' as DATE) end
    My column formula has the following syntax:
    FILTER("SKU Order Details"."Fulfilled Quantity" USING Time."Calendar Date" between DATE '@{start}{1900-01-01}' AND DATE '@{end}{2999-01-01}')
    Error Message:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 46047] Datetime value 11/17/2009 from 11/17/2009 does not match the specified format. (HY000)
    Can you please let me know if something needs to be changed.
    Thanks,
    Kartik

  • Issue with site load time when exported as HTML. Addressed?

    Has the issue with website load time been addressed?
    I believe the site attempts to load the entire image content on the initial visit, meaning any galleries with numerous pictures are all loaded when first reaching the site. This is causing a HUGE increase in load time. When I last brought this up, I was told it was a known issue that needed to be addressed.
    Any type of resolution coming?
    Site for reference: www.dkrecollection.com

    Major updates of Muse are targeted to release roughly every quarter. The 1.0 release was in mid-May. The 2.0 release was in mid-August. A fundamental change to image loading would only appear as part of a major update due to the engineering and testing efforts required.
    As provided in your previous thread http://forums.adobe.com/message/4659347#4659347 the only workaround until then is to reduce the number of images in the slideshow.

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse ETL load process. I have run ANALYZE and DBMS_STATS and checked the database environment. What else can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    You should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes: make sure your sequences cache values (alter sequence s cache 10000), drop all unneeded indexes while loading, and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data, enlarging this buffer may be an option.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already use direct loads?
    Dim

  • Performance issues with class loader on Windows server

    We are observing some performance issues in our application. We are using WebLogic 11g with Java 6 on a Windows 2003 server.
    The thread dumps indicate many threads are waiting in a queue for the native file methods:
    "[ACTIVE] ExecuteThread: '106' for queue: 'weblogic.kernel.Default (self-tuning)'" RUNNABLE
         java.io.WinNTFileSystem.getBooleanAttributes(Native Method)
         java.io.File.exists(Unknown Source)
         weblogic.utils.classloaders.ClasspathClassFinder.getFileSource(ClasspathClassFinder.java:398)
         weblogic.utils.classloaders.ClasspathClassFinder.getSourcesInternal(ClasspathClassFinder.java:347)
         weblogic.utils.classloaders.ClasspathClassFinder.getSource(ClasspathClassFinder.java:316)
         weblogic.application.io.ManifestFinder.getSource(ManifestFinder.java:75)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.application.utils.CompositeWebAppFinder.getSource(CompositeWebAppFinder.java:71)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.CodeGenClassFinder.getSource(CodeGenClassFinder.java:33)
         weblogic.utils.classloaders.GenericClassLoader.findResource(GenericClassLoader.java:210)
         weblogic.utils.classloaders.GenericClassLoader.getResourceInternal(GenericClassLoader.java:160)
         weblogic.utils.classloaders.GenericClassLoader.getResource(GenericClassLoader.java:182)
         java.lang.ClassLoader.getResourceAsStream(Unknown Source)
         javax.xml.parsers.SecuritySupport$4.run(Unknown Source)
         java.security.AccessController.doPrivileged(Native Method)
         javax.xml.parsers.SecuritySupport.getResourceAsStream(Unknown Source)
         javax.xml.parsers.FactoryFinder.findJarServiceProvider(Unknown Source)
         javax.xml.parsers.FactoryFinder.find(Unknown Source)
         javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)
         org.ajax4jsf.context.ResponseWriterContentHandler.<init>(ResponseWriterContentHandler.java:48)
         org.ajax4jsf.context.ViewResources$HeadResponseWriter.<init>(ViewResources.java:259)
         org.ajax4jsf.context.ViewResources.processHeadResources(ViewResources.java:445)
         org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:193)
         org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:41)
         org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:140)
    From googling, this seems to be an issue with Java file handling on Windows servers, and I couldn't find a solution yet. Any recommendation or pointer is appreciated.

    Hi shubhu,
    I just analyzed your partial thread dump data. The problem is that the ajax4jsf ResponseWriterContentHandler internally creates a new instance of DocumentBuilderFactory every time, triggering heavy IO contention because of class loader / JAR file search operations.
    Too many of these IO operations under heavy load will create excessive contention and severe performance degradation, regardless of the OS your JVM runs on.
    Please review the link below and see if it is related to your problem; this is a known issue in the JBoss JIRA when using RichFaces / Ajax4jsf.
    https://issues.jboss.org/browse/JBPAPP-6166
    Regards,
    P-H
    http://javaeesupportpatterns.blogspot.com/

  • Issue with a load from R3 to BW 3.5

    Hi Guys,
    We are having an issue with a load from R/3 to BW 3.5, into an ODS and a transactional InfoCube.
    On a daily basis we run loads to BW from R/3 InfoSets, and all of them but one load fine. The one that has problems is actually the one that loads the most data, which is why its InfoPackage is divided into two InfoPackages.
    In the update rule of the first ODS we run a start routine in which we perform a selective deletion (RSDRD_SEL_DELETION) of the data about to be loaded, on both the ODS and the InfoCube, and it is here that we get the dump.
    Our first assumption was that a yellow request in the transactional InfoCube might be blocking the selective deletion, but we have ruled this out.
    Since the only load failing is the one split into two InfoPackages, we think the problem might be that while the first InfoPackage is trying to delete, the second one is loading data into the ODS, and that is where we get the dump.
    The question is: could this be the problem? And if so, how could we work around it?
    Thanks for any suggestion.
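    For reference, a minimal sketch of the kind of selective deletion call described above, as it might appear in the start routine. This is only an illustration: the cube name ZSALES_CUBE and the 0CALDAY restriction are made-up placeholders, and the RSDRD_SEL_DELETION parameter names are from memory and should be verified in SE37 before use.

    " Sketch only: selectively delete one day's data from a cube.
    DATA: l_thx_sel TYPE rsdrd_thx_sel,
          l_sx_sel  TYPE rsdrd_sx_sel,
          l_s_range TYPE rsdrd_s_range,
          l_t_msg   TYPE rs_t_msg.

    " Restrict the deletion to the records about to be reloaded
    l_s_range-sign   = 'I'.
    l_s_range-option = 'EQ'.
    l_s_range-low    = '20090101'.
    APPEND l_s_range TO l_sx_sel-t_sel.
    l_sx_sel-iobjnm = '0CALDAY'.
    INSERT l_sx_sel INTO TABLE l_thx_sel.

    CALL FUNCTION 'RSDRD_SEL_DELETION'
      EXPORTING
        i_datatarget      = 'ZSALES_CUBE'
        i_thx_sel         = l_thx_sel
        i_authority_check = ' '
        i_mode            = 'C'
      CHANGING
        c_t_msg           = l_t_msg.

    If two InfoPackages can reach this routine at the same time, the deletion and the parallel load can indeed collide, which would be consistent with the lock/deadlock behaviour described in the reply below.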

    Hi,
    Be careful with ODS activation: if you are using 2 InfoPackages and/or deleting data from an ODS that is locked for activation, you will get a dump.
    Check SM12 / ST22 / SM21 and the job logs in SM37.
    If the problem still occurs for a load, debug it in the start routine.
    Regards,
