Full load from R/3 failed due to bad data and no PSA option selected

SDN Experts,
A full load from R/3 into an ODS failed due to bad data, and the PSA option was not selected in the InfoPackage. How can I fix it? Can I rerun the InfoPackage to load from R/3 again? Maybe it will just fail again? Is this a design defect?
I will assign points. Thank you.
Les

Hi,
There is absolutely no problem in re-executing the package, but before that, check
that your ODS and InfoSource are active, and that the update rules are also active, if any.
Also check that the target InfoProvider is selected properly, along with
the PSA option, and check whether you have any subsequent processes, such as
updating data from the ODS to a cube and so on.
You can also check data availability in R/3 for the respective DataSource by executing
transaction RSA3.
Hope this helps.
Assign points if useful.
Cheers,
Pattan.

Similar Messages

  • Full load from a DSO to a cube processes fewer records than available in the DSO

    We have a scenario where every Sunday I have to make a full load from a DSO with on-hand stock information to a cube, in which I register a counter at material and store level if stock is available.
    The DTP has no filters at all and has a semantic group on 0MATERIAL and 0PLANT.
    The key in the DSO is:
    0MATERIAL
    0PLANT
    0STOCKTYPE
    0STOR_LOC
    0BOM
    of which only 0MATERIAL, 0PLANT and 0STOR_LOC are later used in the transformation.
    As we had a growing number of records, we decided to delete, in the start routine, all records where the inventory is not greater than zero, thus eliminating zero and negative inventory records.
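    In a BW 7.x transformation, such a start routine often boils down to a single DELETE on the source package. A minimal ABAP sketch, assuming a hypothetical key-figure field /BIC/ZSTOCKQTY for the on-hand quantity:
    " Drop every record whose inventory is not greater than zero.
    " /BIC/ZSTOCKQTY is an assumed field name; use the DSO's actual quantity field.
    DELETE SOURCE_PACKAGE WHERE /bic/zstockqty LE 0.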
    Now comes the funny part of the story:
    Prior to these changes I would [in a test system, just copied from PROD] read some 33 million records and write out the same number of records. Of course, after the change we expected to write out fewer. To my total surprise, however, I was now reading 45 million records with the same unchanged DTP, and writing out the expected smaller number.
    When checking the number of records in the DSO I found the 45 million, but I cannot explain why the earlier loads retrieved only some 33 million from the same unchanged set of records.
    When checking in PROD - same result: we have some 45 million records in the DSO, but when we do the full load from the DSO to the cube, the DTP processes only some 33 million.
    What am I missing - is there a compression going on? Why would the number of records in a DSO differ from the number of records processed in the data packages when I am making a FULL load without any filter restrictions and with only a semantic grouping in place for part of the DSO key?
    Any idea or thought is appreciated.

    Thanks, Gaurav.
    I did check whether any loads were done in between - there were none in the test system. As I mentioned, it was a fresh copy from PROD to TEST; I compared the number of entries in the DSO, and that matches between TEST and PROD - OK, there are a few more in PROD, but they can be accounted for. In TEST I loaded the day before the changes were imported, to have a comparison, and between that load and the one after the changes were imported, nothing in the DSO changed.
    Both DTPs, in TEST and PW2, load from the active DSO data [without archive]. The DTPs have not been changed in quite a while - so I ruled that out. Same with activation of data in the DSO - this DSO gets loaded and activated in PROD daily via a process chain, and we load daily deltas into the cube in question. Only on Sundays, for the beginning of the new week/fiscal period, do we need to make a full load to capture all materials per site with inventory. The deltas loaded during the week are less than 1 million records, but the difference between the number of records in the DSO and the number processed in the data packages is more than 10 million per full load, even in PROD.
    I really appreciate the knowledgeable answer; I just wish it had pointed out something that I missed.

  • Load from InfoSource 0EMPLOYEE failed

    Hi all,
    My source data contains some special characters in the field TRFKZ. I maintained the special characters using RSKC.
    Now, when I load the data as far as the PSA, it gets uploaded, but when I try to load it up to the InfoObject level, it fails.
    The monitor message says "Load from InfoSource 0EMPLOYEE failed".
    Please, BI experts, help me understand the necessary steps to take on my side.
    Thanks,
    Srijith

    Click on the long text or the question mark box next to the error log. It will give you the particular record and character for which the load is failing.
    Once you have the record, go to the PSA and check the data there. Sometimes the PSA does not show all the special characters; in that case, go to SE11, enter the PSA table name, go to your record, and change the display settings to "SE16 Standard List" via Display --> User Settings.
    I have experienced a few characters that are not resolved by RSKC; for those you have to write a routine to fix them (see the sketch below) or ask the ECC team to fix the data.
    Monika
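    A cleanup routine of the kind described above might look like the following ABAP sketch. The permitted-character list and the field name lv_value are assumptions; the list must match what you maintain in RSKC, and the replacement character '#' has to be part of it, or the loop would never end:
    " Replace every character that is not in the permitted set with '#'.
    " lv_value stands for the incoming field (e.g. TRFKZ).
    CONSTANTS lc_allowed(40) TYPE c
      VALUE ' ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789#'.
    DATA lv_off TYPE i.
    WHILE lv_value CN lc_allowed.
      lv_off = sy-fdpos.         " offset of the first offending character
      lv_value+lv_off(1) = '#'.
    ENDWHILE.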

  • Delta and Full load from a single Delta ODS

    I have the delivered ODS 0GT_DS01, which is being loaded from R/3 as a delta update. This ODS is then loaded to the delivered InfoCube 0GT_C01 as a delta.
    My question is this: can I create update rules and an InfoPackage to do a full load from the 0GT_DS01 ODS to my custom ODS ZGT_DS01?
    I seem to remember from my BW310 class that in release 3.5, if I have an ODS-to-InfoCube delta update, all other updates from that ODS must be deltas as well.
    Regards,
    Mike...

    Siva,
    I created my update rule without a problem.
    I created the InfoPackage with my ZGT_DS01 ODS as the target and the update mode set to FULL, and I saved the InfoPackage. After saving, I went to the menu Scheduler ==> Initialization For This DataSource, and there is a delta present.
    The InfoPackage works OK when I run it from RSA1, but when I put it in a process chain I get the error "Delta upload is only possible after successful initialization". What I see in the Manage screen for my ZGT_DS01 ODS is a request for a full load followed by a request for a delta.
    Any ideas why the InfoPackage works in RSA1 but not in a process chain?
    Regards,
    Mike...

  • AD 11g Trusted Recon is failing due to invalid Date format for Start Date

    We are using OIM 11g with the AD 11g connector.
    We have mapped the "whenCreated" attribute of AD to "Start Date" in OIM. When we ran the trusted recon, it failed due to an invalid date format.
    We got the following error:
    Caused By: oracle.iam.reconciliation.exception.InvalidDataFormatException: Invalid data - 10/19/2012 10:33:30 AM against Date format yyyy/MM/dd HH:mm:ss z for key Start Date
    at oracle.iam.reconciliation.impl.ReconOperationsServiceImpl.convertReconFieldsToOIMFields(ReconOperationsServiceImpl.java:1610)
    at oracle.iam.reconciliation.impl.ReconOperationsServiceImpl.ignoreEvent(ReconOperationsServiceImpl.java:548)
    at oracle.iam.reconciliation.impl.ReconOperationsServiceImpl.ignoreEvent(ReconOperationsServiceImpl.java:535)
    at sun.reflect.GeneratedMethodAccessor9326.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at oracle.iam.platform.utils.DMSMethodInterceptor.invoke(DMSMethodInterceptor.java:25)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    Thanks.

    Caused By: oracle.iam.reconciliation.exception.InvalidDataFormatException: Invalid data - *10/19/2012 10:33:30 AM* against Date format yyyy/MM/dd HH:mm:ss z for key Start Date
    The error occurs because of the invalid date format: the incoming value 10/19/2012 10:33:30 AM does not match the expected pattern yyyy/MM/dd HH:mm:ss z.
    You need to bring the data into the required format. As I remember, you can configure this in one of the AD configuration lookups.

  • "failed due to bad configuration"

    I have an .air program running on nearly 400 PCs in the field. I have an upgrade I would like to release silently. I can get the new .air file to the machines automatically. I would like to run a batch file that will install it and hit the "Replace" button when it notices that an older version is already out there.
    So, I included the Adobe AIR Application Installer.exe in the download and created a batch file that does this:
    "Adobe AIR Application Installer.exe" -silent -eulaAccepted -desktopShortcut "C:\Program Files (x86)\MascoVSC\vscassets\program\VSC.air" 
    This runs perfectly when the old version is not on the machine, but if it is, I get the error "failed due to bad configuration".
    Any help would be greatly appreciated.

    Hi Jeff,
    Could you give the ARH tool a try, and in the case of an update, do an uninstall first, then install?
    Binaries:
    Mac: http://airdownload.adobe.com/air/distribution/latest/mac/arh
    Win: http://airdownload.adobe.com/air/distribution/latest/win/arh.exe
    Linux: http://airdownload.adobe.com/air/distribution/latest/lin/arh
    Documentation:
    http://help.adobe.com/en_US/air/redist/WS485a42d56cd19641-70d979a8124ef20a34b-8000.html#WSfffb011ac560372f117f43a112b21f7d88d-8000
    Thanks,
    Chris

  • Full restore from Time Machine fails

    I am trying to replace a failing hard drive. The failing drive is in a box; a new one is in the computer.
    When I try to run a full restore from Time Machine, it comes up with an error saying the restore failed. No useful information, like codes or reasons, is given.
    I'd prefer to do a full restore, but I fear the cause is too much system file corruption on the TM drive. I think the only alternative is to recreate the four user accounts and somehow copy the files back to the accounts.
    Before I swapped the drives, I set all permissions on the files in each account to admin=read/write, user=read/write, other=read only. I then created a couple of incremental backups to Time Machine.
    Currently I have 10.5.0 installed. I have to install 10.4, then upgrade, which is a very lengthy process.
    The old drive is still (barely) functional. While using it, though, if I try to run any program, the system hangs for minutes at a time (BBOD).
    How should I proceed? Is there a way of repairing the backup? Do I try copying all the files? I have no way of using the Migration feature, since I only have the one Mac. I do have a PC.
    Thank you in advance.

    Melnibonean wrote:
    When I try to run a full restore from Time Machine it comes up with an error saying the restore failed. No useful information like codes or reasons are given.
    It's possible that as the HD was failing, it corrupted something in your installation of OSX, and that was backed-up, so when you restore it, you're restoring damaged items.
    Try again, but pick an earlier backup, from before the problems started.
    When you do, once it starts, select Window, then +Show Log+ and +Show All logs+ from the menubar. Watch the messages; if it fails again, note what it says. That will tell us, roughly, where it was, so we may be able to avoid it.
    Currently I have 10.5.0 installed. I have to install 10.4, then upgrade, which is a very lengthy process.
    Huh? Why would you install Tiger?
    How should I proceed? Is there a way of repairing the backup?
    Possibly, depending on what's wrong.
    Instead of selecting +Restore System from Backup+ from the Utilities menu, select +Disk Utility+ and use it to do a +Repair Disk+ on the backups (via your Leopard Install disc). Then quit DU and try +Restore System from Backups+.
    I have no way of using the Migration feature since I only have the one Mac.
    Yes, you do, if the backups are ok. You can use +Setup Assistant+ or +Migration Assistant+ from your backups (or a clone), not just another Mac. But they both only use the most recent backup, which may be damaged, so try the full restore first.

  • Random Bingads WSDL errors: Parsing WSDL: Couldn't load from...failed to load external entity, Could not connect to host, Parsing Schema: can't import schema from

    Hi,
    I am developing a web tool that accesses clients' Bing Ads accounts via OAuth2 grants. I download data from the clients daily.
    Generally it worked, but for two days I have been experiencing a weird issue: at random times (and more and more often) I receive random error messages from the Bing server. I am pasting a few examples from my logs below (with timestamp and request/response).
    I must note that I launch these requests from the server where we host the webapp, but I also launch them locally; the result is the same.
    The timestamp is in France time (GMT+1).
    Thanks in advance!
    Best regards,
    Steve
    2015-01-14 15:08:05: Service\BingAds::initService => SOAP-ERROR: Parsing Schema: can't import schema from 'https://reporting.api.bingads.microsoft.com/Api/Advertiser/Reporting/V9/ReportingService.svc?xsd=xsd0'
    ---------Soap Request:--------------------------------------------
    <?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="https://bingads.microsoft.com/CampaignManagement/v9"><SOAP-ENV:Header><ns1:CustomerAccountId>XXX</ns1:CustomerAccountId><ns1:CustomerId/><ns1:DeveloperToken>XXX</ns1:DeveloperToken><ns1:UserName/><ns1:Password/><ns1:AuthenticationToken>XXX</ns1:AuthenticationToken></SOAP-ENV:Header><SOAP-ENV:Body><ns1:GetDetailedBulkDownloadStatusRequest><ns1:RequestId>108277125</ns1:RequestId></ns1:GetDetailedBulkDownloadStatusRequest></SOAP-ENV:Body></SOAP-ENV:Envelope>
    ---------Response:------------------------------------------------
    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><s:Header><h:TrackingId xmlns:h="https://bingads.microsoft.com/CampaignManagement/v9">XXXXX</h:TrackingId></s:Header><s:Body><GetDetailedBulkDownloadStatusResponse
    xmlns="https://bingads.microsoft.com/CampaignManagement/v9"><Errors i:nil="true" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"/><ForwardCompatibilityMap xmlns:a="http://schemas.datacontract.org/2004/07/System.Collections.Generic"
    xmlns:i="http://www.w3.org/2001/XMLSchema-instance"/><PercentComplete>100</PercentComplete><RequestStatus>Completed</RequestStatus><ResultFileUrl>https://download.api.bingads.microsoft.com/ReportDownload/Download.aspx?q=XXX</ResultFileUrl></GetDetailedBulkDownloadStatusResponse></s:Body></s:Envelope>
    2015-01-15 05:41:39: Service\BingAds::getCampaigns => SoapFault: Could not connect to host
    ---------Soap Request:--------------------------------------------
    <?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="https://bingads.microsoft.com/CampaignManagement/v9"><SOAP-ENV:Header><ns1:CustomerAccountId>XXX</ns1:CustomerAccountId><ns1:CustomerId/><ns1:DeveloperToken>XXX</ns1:DeveloperToken><ns1:UserName/><ns1:Password/><ns1:AuthenticationToken>XXX</ns1:AuthenticationToken></SOAP-ENV:Header><SOAP-ENV:Body><ns1:GetCampaignsByAccountIdRequest><ns1:AccountId>XXX</ns1:AccountId></ns1:GetCampaignsByAccountIdRequest></SOAP-ENV:Body></SOAP-ENV:Envelope>
    ---------Response:------------------------------------------------
    (empty response logged)
    2015-01-15 05:45:00: Service\BingAds::initService => SOAP-ERROR: Parsing WSDL: Couldn't load from 'https://api.bingads.microsoft.com/Api/Advertiser/CampaignManagement/v9/CampaignManagementService.svc?singleWsdl' : failed to load external entity "https://api.bingads.microsoft.com/Api/Advertiser/CampaignManagement/v9/CampaignManagementService.svc?singleWsdl"
    2015-01-15 11:58:46: Service\BingAds::getCampaigns =>
    ---------Soap Fault:--------------------------------------------
    SoapFault catched:
    Could not connect to host
    ---------Soap Request:--------------------------------------------
    <?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="https://bingads.microsoft.com/CampaignManagement/v9"><SOAP-ENV:Header><ns1:CustomerAccountId>XXXX</ns1:CustomerAccountId><ns1:CustomerId/><ns1:DeveloperToken>XXXX</ns1:DeveloperToken><ns1:UserName/><ns1:Password/><ns1:AuthenticationToken>XXX</ns1:AuthenticationToken></SOAP-ENV:Header><SOAP-ENV:Body><ns1:GetCampaignsByAccountIdRequest><ns1:AccountId>XXXX</ns1:AccountId></ns1:GetCampaignsByAccountIdRequest></SOAP-ENV:Body></SOAP-ENV:Envelope>
    ---------Response:------------------------------------------------
    (empty response logged)
    2015-01-15 11:59:50: Service\BingAds::initService =>
    ---------Soap Fault:--------------------------------------------
    SoapFault catched:
    SOAP-ERROR: Parsing WSDL: Couldn't load from 'https://api.bingads.microsoft.com/Api/Advertiser/CampaignManagement/v9/CampaignManagementService.svc?singleWsdl' : failed to load external entity "https://api.bingads.microsoft.com/Api/Advertiser/CampaignManagement/v9/CampaignManagementService.svc?singleWsdl"
    ---------Soap Request:--------------------------------------------
    <?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="https://bingads.microsoft.com/CampaignManagement/v9"><SOAP-ENV:Header><ns1:CustomerAccountId>XXX</ns1:CustomerAccountId><ns1:CustomerId/><ns1:DeveloperToken>XXXXX</ns1:DeveloperToken><ns1:UserName/><ns1:Password/><ns1:AuthenticationToken>XXXX</ns1:AuthenticationToken></SOAP-ENV:Header><SOAP-ENV:Body><ns1:GetCampaignsByAccountIdRequest><ns1:AccountId>XXX</ns1:AccountId></ns1:GetCampaignsByAccountIdRequest></SOAP-ENV:Body></SOAP-ENV:Envelope>
    ---------Response:------------------------------------------------
    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><s:Header><h:TrackingId xmlns:h="https://bingads.microsoft.com/CampaignManagement/v9">XXXXX</h:TrackingId></s:Header><s:Body><GetCampaignsByAccountIdResponse
    xmlns="https://bingads.microsoft.com/CampaignManagement/v9"><Campaigns xmlns:i="http://www.w3.org/2001/XMLSchema-instance"></Campaign>........</Campaigns></GetCampaignsByAccountIdResponse></s:Body></s:Envelope>
    2015-01-15 12:05:55: Service\BingAds::getCampaigns =>
    ---------Soap Fault:--------------------------------------------
    SoapFault catched:
    Could not connect to host
    ---------Soap Request:--------------------------------------------
    <?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="https://bingads.microsoft.com/CampaignManagement/v9"><SOAP-ENV:Header><ns1:CustomerAccountId>XXXXX</ns1:CustomerAccountId><ns1:CustomerId/><ns1:DeveloperToken>XXXXX</ns1:DeveloperToken><ns1:UserName/><ns1:Password/><ns1:AuthenticationToken>XXXXXX</ns1:AuthenticationToken></SOAP-ENV:Header><SOAP-ENV:Body><ns1:GetCampaignsByAccountIdRequest><ns1:AccountId>XXXXX</ns1:AccountId></ns1:GetCampaignsByAccountIdRequest></SOAP-ENV:Body></SOAP-ENV:Envelope>
    ---------Response:------------------------------------------------
    (empty response logged)

    Hi,
    1. I am using the older version of the PHP library provided by Bing (updated on 1/20/2014), so that is what does the WSDL loading. I initialize the proxy by calling OpticoBingAdsClientProxy, providing what it needs, and then make the requests.
    2. I have a cron job that reads data from clients' accounts. I make several calls, like get search query report, get keyword performance report, and get keyword bulk data. As the script progressed, the first two worked and the third gave an error; in other cases the first request failed. The calls have quite some time between them, since I process data in the meantime (sometimes even 160 seconds).
    3. I did not change the code in terms of requests, since as I said I use the PHP library (same credentials, ...).
    As of today (2015-01-16 10:30 GMT+2), running my script, I still have the same issues.
    Thank you!
    Steve

  • Full restore from Time Machine fails - what now!

    I just upgraded to 10.8 and all was fine, except one crucial piece of software that was supposed to work in 10.8 but didn't.
    I tried to revert to 10.6 via a full restore, but it failed. Then I tried to restore from the TM backup of the 10.8 restore point.
    It fails too. All my HDs check out fine. What went wrong, and how can I get back to a working system?

  • Full load works, but delta fails - "Error in the Extractor"

    Good morning,
    We are using DataSource 3FI_SL_ZZ_SI (Special Ledger line items) to load a cube, and we are having trouble with the delta loads. If I run a full load, everything runs fine. If I run a delta load, it initially fails with an error that simply states "Error in the Extractor" (no long text). If I repeat the delta load, it completes successfully with 0 records returned. If I then rerun the delta, I get the error again.
    I've run extractions using RSA3, and they work fine - as I would expect, since the full loads work. Unfortunately, I have not been able to find out why the deltas aren't working. After searching the forums, I've tried replicating the DataSource, checked the job log in R/3 (nothing), and run the program RS_TRANSTRU_ACTIVATE_ALL, all to no avail.
    Any ideas?
    Thanks
    We're running BW 3.5, R/3 4.71

    And it's just that easy....
    Yes, it appears this is what the problem was. I'd been running the delta init without data transfer, and it was failing during the first true delta run. Once I changed the delta init so that it transferred data, the deltas worked fine. This was in our development system. I took a look at our production system, where deltas have been running for quite some time, and it turns out the delta initialization there was done with data transfer.
    Thank you very much!

  • Perform INIT, DELTA, and FULL Load from R/3 to BI 7.0

    Hi,
    Could anyone please give me information about how to perform the following to load data from the R/3 system into the BI 7.0 system:
    1. INIT Load
    2. Delta Load
    3. Full Load
    4. Real-Time Data Acquisition
    Where can I find the settings for all of the above in the DTP?
    For real-time data acquisition, don't I have to load the data from the DataSource into the PSA?
    Does the data go directly from the DataSource to the data target through the daemon process? I read the SAP Help docs but am still a little confused; could anyone please explain it clearly?
    Thanks, I will assign the points.

    The first three are similar to 3.5.
    For RDA, check out this blog by KJ:
    Real-Time Data Acquisition -BI@2004s

  • Can I do parallel full loads from ODS to cube?

    Hi,
    Usually I do a single full-update load from the ODS to the cube. To speed up the process, can I do parallel full-update loads from the ODS to the cube?
    Please advise.
    Thanks, Vijay

    Assuming that the only connection we are talking about is between a single ODS and a single cube:
    I think the only time you could speed anything up is in a full drop-and-reload scenario. You could create multiple InfoPackages based on selections and execute them simultaneously.
    If the update is a delta, there is really no way to do it.
    How many records are we talking about? Is there logic in the update rule?

  • MARS - Load From Seed File fails

    Hi, everybody.
    I have a problem importing a larger number of devices into MARS using the Load From Seed File functionality. I connect to the FTP server successfully, but I receive the following error:
    Status: Errors occured while retrieving csv file from ftp server. sed: can't read /tmp/mars.csv: No such file or directory while executing "exec sed -i "s/\15//g" /tmp/$fileName" (file "./ftpconfig.exp" line 243)
    Any ideas on how to resolve it?
    Thanks in advance
    Marko

    Hello,
    It seems to me that if you have any of the following characters in an SNMP string, "/\.*[]^$", it could cause problems for sed, which parses the seed file. Try changing the SNMP strings to avoid those characters.
    BR,
    Marko

  • Data load from InfoProvider: status is green, but no data is loaded

    Hi,
    I have created a new application by copying the planning application (which was itself a copy from the ApShell application set). I modified the dimensions, after which I wanted to load data from a BW InfoProvider using the Data Manager.
    The first thing I noticed in the Data Manager was that the packages were not the same as in the original planning application, even though I had chosen to copy all objects to the new application. For example, the package 'Load data from infoprovider' that was contained in the data management folder of the planning application is no longer visible in my new application. Instead, there is a package 'Loadinfoprovider' located in the system administrative folder. Although these packages are exactly the same, the package 'Loadinfoprovider' does not seem to work properly: when you run it, it says that it has submitted records to your application, but when you run a report on it, or look at the BPC InfoCube in BW, no data is visible.
    Does anyone know what may cause this problem?

    Hi Wouter,
    Did you check your transformation file?
    I had the same problem once; my mistake was that I didn't add a mapping line for my key figure amount (AMOUNT=ZAMOUNT). So the status was green, but no data was transferred, because there was no key figure to transform.
    I hope this helps.
    Mathieu
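    For reference, such a mapping line belongs in the *MAPPING section of the BPC transformation file. A minimal sketch, where only the AMOUNT=ZAMOUNT line comes from this thread and the *OPTIONS values are illustrative:
    *OPTIONS
    FORMAT = DELIMITED
    *MAPPING
    AMOUNT=ZAMOUNT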

  • OLEDB select from Analysis Services fails with error codes 0x80040E21 and 0xC0202009

    Hi,
    In IS 2008 SP1, I have an OLEDB source component that uses an Analysis Services OLEDB connection to run an MDX statement (set in the "SQL command text" property). This already worked for other cubes with the same MDX statement (apart from different hierarchy and cube names) in several other IS packages.
    But in this package, I keep getting the following errors when executing:
    Error: 2011-12-08 14:12:42.70
       Code: 0xC0202009
       Source: myDataflow myOLEDBsource [190]
       Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred.
     Error code: 0x80040E21.
    End Error
    Error: 2011-12-08 14:12:42.70
       Code: 0xC004701A
       Source: myDataflow SSIS.Pipeline
       Description: component "myOLEDBsource" (190) failed the pre-execute phase and returned error code 0xC0202009.
    End Error
    As usual with AS OLEDB, you get warnings for all columns that the data type is unknown and hence set to DT_WSTR(255). This is the case for the working packages and for the non-working one.
    And when I click on the "Preview" button in the OLEDB source, I see the three-column, two-row result set as expected.
    As I found some other posts with similar error messages that could be resolved by changing the 32-bit/64-bit setting, I tried BIDS 2008 (32-bit) on Vista 32-bit, dtexec 32-bit on the same computer, dtexec 64-bit, and dtexec 64-bit on a Win2008 server, all with the same results.
    SQL Server Profiler shows that the MDX statement is executed without errors when running the package. I do not see any relevant difference between the working packages/cubes and those that are not working. I re-created the OLEDB source component several times, and even copied it over from a working package, still with no success.
    I even executed the statements in an XMLA sheet in Management Studio with the PropertyList taken from the trace, without seeing anything obvious in the result.
    Any ideas what I could do to get this working?
    Is there any reference explaining OLEDB error codes?
    Thanks
    Frank

    Hi,
    I finally got it working. Re-creating the connection manager solved the issue. There was no difference between the non-working and the working connection in the Connection Manager Editor, but when I compared them at the XML source level in the package, I found that the working version contained Format=Tabular in the connection string, while the non-working version did not.
    Frank
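    For reference, Format=Tabular is set directly in the Analysis Services OLEDB connection string; a hedged sketch with placeholder server and database names:
    Provider=MSOLAP;Data Source=myAsServer;Initial Catalog=myAsDatabase;Format=Tabular;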
