Maintain Cube Not Working

Brijesh,
I built my dimensions, levels, and hierarchies successfully and also created the cube. Now that I've built my measures and run the maintenance, I'm not seeing any values in them, even though I know I should.
Based on my mapping, the keys from the fact are going to the right dimensions (and I even made a simpler, single dimension --> measure cube as well), but no success. There are cases where I know I shouldn't get any data (based on the selected values), but when I make a valid selection I see only 0.00 displayed.
Can you tell me where I may have gone wrong here? Are the values made available for selection (the attributes) ONLY supposed to be the same one-to-one values available in the fact table?
**I'm using the simple SUM aggregate function for my measures, and pretty much all the default configurations given.
Brijesh Gaur
Posts: 416
Registered: 04/03/08
Re: where are dimension attributes in AWM - cube viewer?
Posted: Nov 10, 2009 3:21 AM in response to: mikeyp
An attribute is something related to a dimension: attributes are purely properties of the dimension, not of the fact. Now, you said the data is not visible in the cube and you are getting 0.00 even in a simpler case (a one-dimensional cube). There are many possible causes for values not showing up in a cube; some are mentioned below.
1. All records are rejected in the cube maintenance process. For this you can check the olapsys.xml_load_log table and see if you can find any rejected records (see the query sketch after this list).
2. There is some breakage in the dimension's hierarchy, which also prevents data from summing up.
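For point 1, a minimal query sketch, assuming the log table is olapsys.xml_load_log (the exact name and columns can differ by release):
select *                      -- look for messages mentioning rejected records
from olapsys.xml_load_log
order by 1 desc;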
Did you try the Global sample schema available on OTN? That should be a good starting point for you.
You can check the cube as below to find out whether it is loaded or not. Consider your one-dimensional cube: find a member that exists in the dimension and also has some fact value associated with it.
1. Now limit your dimension to that value, like this:
lmt <dim name> to '<value>'
2. Now check the data in the cube. For a compressed composite cube:
rpr cubename_prt_topvar
For an uncompressed cube:
rpr cubename_stored
Step 2 should show the same value that is available in the fact table.
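If it is an 11g cube, you can also sanity-check it from plain SQL through the relational view that AWM generates for the cube. A minimal sketch; the view and column names below are assumptions, so check the actual names under your cube in AWM:
select *
from mycube_view              -- placeholder: the cube's generated relational view
where my_dim = '<value>';     -- placeholder: the dimension column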
Thanks,
Brijesh
mikeyp
Posts: 14
Registered: 09/22/09
Re: where are dimension attributes in AWM - cube viewer?
Posted: Nov 13, 2009 1:24 PM in response to: Brijesh Gaur
Brijesh,
Thanks for your suggestions, and here are my results based on that.
1. No records rejected after running cube maintenance
2. I didn't limit my dimension to a specific value as you recommended, but made my member the same as my Long and Short description attributes using AWM. (It's a flat dimension, i.e. no levels or hierarchy, since the dimension only has one meaningful value/field.)
Based on those steps, I still didn't get the results I was looking for. The fact table has five values for that one dimension, and I'm seeing 0.00 for four of them and an inaccurate value for the last one (this after comparing against a simple aggregate query on the fact table; see the sketch below).
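For reference, the aggregate query I compared against looks like this (table and column names are placeholders):
select dim_value, sum(msr) as msr_total
from fact_table
group by dim_value;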
Do you have any other possible reasons/solutions?
**Loading the Global Schema into our dev environment is out of my hands unfortunately, so that's the reason for the prolonged learning curve.

Brijesh,
Here are the results of what you suggested:
1. Creating a test dim and fact table with the simple case you provided was successful, and AWM was able to map the same values into a cube created on top of that model (a sketch of the test tables follows this list).
2. I took it a step further and changed the dim values to match the existing dim table.
2.b. I also replaced the test fact table values to mimic the existing values, so they would match what's available in the dim table, and here's where the fun/mystery begins.
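A sketch of the test model, with placeholder names and types:
create table test_dim (dim_value varchar2(30) primary key);
create table test_fact (dim_value varchar2(30) references test_dim, msr number);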
Scenario 1:
I created the fact like this: select dim_value, sum(msr) from <existing fact table> group by dim_value
As you can easily tell, my values were already aggregated in the table, and they also match perfectly in the cube created by AWM - no issue.
Scenario 2:
Created the fact like this: select dim_value, msr from <existing fact table>
Clearly my values are no longer aggregated, but broken down across multiple occurrences of the dim values; I did this so that I could verify that the SUM would actually work when used in AWM.
The results from scenario 2 led me back to the same issue I faced before - i.e. the values weren't rolled up when the cube was created. No records were rejected, only ONE measure value showed up (and it was still incorrect), and everything else was 0.00.
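For clarity, the two test facts were built along these lines (placeholder names again):
-- Scenario 1: pre-aggregated, one row per dim value
create table test_fact_s1 as
select dim_value, sum(msr) as msr
from existing_fact
group by dim_value;
-- Scenario 2: detail level, multiple rows per dim value,
-- which the cube's SUM aggregation should roll up
create table test_fact_s2 as
select dim_value, msr
from existing_fact;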
I retrieved this error from the console program that runs in the background; it was generated right after running the cube maintenance:
<the system time> TRACE: In oracle.dss.metadataManager.............MDMMetadataProviderImpl92::..........MetadataProvider is created
<the system time> PROBLEM: In oracle.dss.metadataManager.........MDMMetadataProviderImpl92::fillOlapObjectModel: Unable to retrieve AW metadata. Reason ORA-942
BI Beans Graph version [3.2.3.0.28]
<the system time> PROBLEM: In oracle.dss.graph.GraphControllerAdapter::public void perspectiveEvent( TDGEvent event ): inappropriate data: partial data is null
<the system time> PROBLEM: In oracle.dss.graph.BILableLayout::logTruncatedError: legend text truncated
Please tell me this helps shed some light on the main reason for no values coming back (I gather ORA-942 means "table or view does not exist", but I can't tell which object it refers to); we really need to move forward with using Oracle cubes where we are.
Thanks
Mike

Similar Messages

  • AWM maintain CUBE not refreshing data

    Hi,
    I have refreshed the data that feeds my CUBE, and it seems as if AWM is not deleting the old CUBE data. When I view the source table for the CUBE in the mapping, I can see there is only 1 row; but after I maintain the CUBE, tick the delete dimension members checkboxes, and run the process immediately in this session, the original data still exists. Any help appreciated.
    Cheers,
    Brandon

    hello brandon,
    my solution is quite awkward, but it works:
    you have to maintain the cube's measure data in a way that results in 0 processed records (weird, huh?).
    To do that, you can empty one of the dimensions' data (members) by emptying the table/view it is mapped to, giving you a memberless dimension. Then maintain the cube's measures (do not include the dimensions). This will give a result of 0 processed records, with all records rejected, all the while emptying the cube.
    Then fill the table/view mapped to the memberless dimension with data again, so it has its members back when you re-maintain it, and then maintain the cube data again.
    There's probably a more elegant solution out there, but this one works for sure.
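    A rough SQL sketch of those steps, with placeholder table names (the maintain runs happen in AWM between the statements):
    -- 1. Empty the table the dimension is mapped to, then maintain the cube's
    --    measures only; every fact record is rejected and the cube is emptied.
    truncate table dim_source;
    -- 2. Put the members back (assuming you kept a backup copy), then maintain
    --    the dimension and the cube again.
    insert into dim_source select * from dim_source_backup;
    commit;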

  • Using FORMAT_STRING from SSAS cube - not working consistently

    Here's the deal.  
    I've got an SSAS cube and I am scoping the format of the measures based on dimension members. 
    Like this.
    scope(( [Account].[Level 5].&[I000900000] )); 
    format_string(This) = "(#,0,);#,0,";   
    end scope; 
    I'm then calling this format in the SSRS report using typical SSRS trickery and hacks.  
    Well, when I run the report for one date (this is a GL financial cube), it works perfectly. When I run the report for a different date, it suddenly doesn't work. When I view the data in Excel through a pivot table, everything is fine.
    It's driving me crazy. Has anyone else experienced something like this?

    Hi Baracus,
    Thank you for your question. 
    I am trying to involve someone more familiar with this topic for a further look at this issue. Some delay might be expected from the job transfer. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Delete overlapping requests from cube not working in processchain.

    In a process chain, the 'deletion of overlapping requests from the cube' step is used.
    Before this step a DTP step runs with a full update to load the cube. This process chain is scheduled every day.
    The issue is: the process chain failed at the DTP step, and after correcting and repeating it, the step got executed. However, the next step after the DTP, 'delete overlapping requests from the cube', gets executed but without deleting the previous day's request. In the step details, the message 'No request for deletion were found' can be seen.
    The next day, when the DTP step executes without any problem, the 'delete overlapping requests from cube' step is successful and the previous requests are deleted from the cube.
    The deletion selections in the 'delete overlapping request from infocube' step are:
    Delete existing requests
    Conditions: only delete requests from same DTP
    Selections: Same or more comprehensive
    Because of this issue, on a day where two days' requests are present, the data gets aggregated and shown as double in the reports.
    Please help.

    Hi Archana,
    When you delete the bad request from the target, before repeating your DTP in the process chain make sure the bad request is also deleted from table RSBKREQUEST.
    If you find the same request in the table, first delete the request from the table, then repeat the DTP in the process chain.
    The Delete Overlapping step should then work.
    As this is not a permanent solution, please raise an OSS message with SAP.
    Regards,
    Venkatesh

  • Event in maintenance view not working in View cluster

    Hello All,
    I have created a view for a student table, which contains three fields. The third field should be automatically incremented based on the entries in the first two fields. The functionality works fine in the view I created, as I used event 01 in the table maintenance generator (Environment -> Events) with code for incrementing the third field automatically.
    After getting this functionality working, I created a view cluster that includes this view.
    The problem now is that when I try to enter values into the table via the view in the view cluster, the same event does not fire for auto-incrementing. While saving the entries, the third field remains blank.
    The internal table 'total' which I am using in the view displays the table entries in the view I created, but the same table shows a null value in the view cluster.
    Can anybody guide me in this?
    Thanks in Advance,
    Shino.

    Hi Shino,
    Let's say you have a table ZTEST_TABLE1. You create a maintenance view on it, ZTEST_VIEW1, and generate the maintenance dialog for the view with function group ZTEST_FUGR1. After this you will be able to maintain the table from SM30 using view ZTEST_VIEW1.
    You create view cluster ZTEST_CLUSTER1 and include your view ZTEST_VIEW1 in it. After this you can maintain the table from the view cluster as well. You then create a subroutine pool for the event routines, e.g. ZTEST_PROG1, and in this program you add the include I mentioned: INCLUDE lsvcmcod.
    Now you create event 04 - 'Before saving the data in the database'. You specify the routine name, e.g. BEFORE_SAVE, and assign the subroutine pool as the main program. There you create the FORM; example code below.
    I also suggest checking the SAP Library; you'll find more details there.
    http://help.sap.com/saphelp_nw04/helpdata/en/62/c302c7de8e11d1a5960000e82deaaa/frameset.htm
    FORM before_save.
      DATA: viewname TYPE vclstruc-object,
            error_flag TYPE vcl_flag_type,
            header TYPE vimdesc,
            wa_total(1000) TYPE c,
            wa_extract(1000) TYPE c.
      FIELD-SYMBOLS: <view>    TYPE ANY,
                     <view_x>  TYPE x,
                     <ent_x>   TYPE x,
                     <viewkey> TYPE x.
    " assign data of current view to global field symbols (<vcl_total>, <vcl_extract> etc.)
      viewname = 'ZTEST_VIEW1'.
      PERFORM vcl_set_table_access_for_obj USING    viewname
                                           CHANGING error_flag.
    " get view name (if you`ll use several views to make it dynamic)
      READ TABLE <vcl_header> INTO header INDEX 1.
    " assign field symbols
      ASSIGN: wa_total                TO <view>    CASTING TYPE (header-maintview),
              <view>                  TO <view_x>  CASTING,
              <view_x>(header-keylen) TO <viewkey>.
    " <vcl_extract> is not sorted
      SORT <vcl_extract>.
      LOOP AT <vcl_total> INTO wa_total.
    " your checks and modifications on the record using <fs_view>
    "    IF <fs_view>-fieldxy...
    " apply the changes to the EXTRACT table as well
        READ TABLE <vcl_extract> INTO wa_extract WITH KEY <viewkey> BINARY SEARCH.
        IF sy-subrc = 0.
    "     apply your changes to <vcl_extract>
        ENDIF.
      ENDLOOP.
    ENDFORM.                    "before_save

  • Call Blocking Based on ANI in a CUBE, Not Working.

    Hello all,
    So I have read the docs I found regarding blocking incoming numbers at the CUBE level. I have configured this, and my calls are not getting blocked. I must be doing something wrong, or maybe it is that I have more than one translation rule on the dial peer. Here is the config; please let me know what you think.
    voice translation-rule 1
     rule 1 /^\(..........$\)/ /+1\1/
    voice translation-rule 400
     rule 1 reject /610XXXXXXX/
     rule 2 reject /215XXXXXXX/
    voice translation-profile BLOCK
     translate calling 400
    voice translation-profile PSTNin
     translate calling 1
    dial-peer voice 297 voip
     description Incoming Telephone Numbers
     translation-profile outgoing PSTNin
     call-block translation-profile incoming BLOCK
     call-block disconnect-cause incoming call-reject
     preference 3
     destination-pattern ^[2-9]..[2-9]......$
     session protocol sipv2
     session target ipv4:192.168.186.11
     incoming called-number .
     voice-class codec 1  
     dtmf-relay rtp-nte
     no vad
    dial-peer voice 298 voip
     description Incoming Telephone Numbers
     translation-profile outgoing PSTNin
     call-block translation-profile incoming BLOCK
     call-block disconnect-cause incoming call-reject
     preference 2
     destination-pattern ^[2-9]..[2-9]......$
     session protocol sipv2
     session target ipv4:192.168.186.12
     incoming called-number .
     voice-class codec 1  
     dtmf-relay rtp-nte
     no vad
    dial-peer voice 299 voip
     description Incoming Telephone Numbers
     translation-profile outgoing PSTNin
     call-block translation-profile incoming BLOCK
     call-block disconnect-cause incoming call-reject
     preference 1
     destination-pattern ^[2-9]..[2-9]......$
     session protocol sipv2
     session target ipv4:172.18.12.100
     incoming called-number .
     voice-class codec 1  
     dtmf-relay rtp-nte
     no vad
    I have verified that I am hitting the correct peer on the inbound call, but it looks like translation rule 1 (applied by translation-profile outgoing PSTNin) is adding the +1, which I believe is causing the block rules not to match.
    Any help would be great.

    Kirill, thanks for the response. I only used the X here to hide the numbers; they are not in the config. Additionally, what you are saying about the +1 makes sense; however, the router will not take /+1610XXXXXXX/.
    Is there a different method that would need to be used?

  • Analytic Workspace Manager possible bug: Maintain Cube hangs

    Hi All,
    I am a newbie; I have been following the tutorial "Building OLAP 11g Cubes": http://www.oracle.com/technology/obe/olap_cube/buildicubes.htm
    After the step "Maintain Cube SALES_CUBE", the information box "Loading facts for cube SALES_CUBE"... has been on the screen for the last 4 hours.
    Is this normal? Should I kill the process and start again?
    I am running Oracle 11g Enterprise Edition Release 11.1.0.7.0 on a Virtual Machine with Windows Server 2008 Standard SP1 with 1GB RAM.
    The Analytic Workspace Manager is 11.2.0.1.0, running on Windows XP SP3.
    Any help is much appreciated

    I'm getting a similar problem; I cannot maintain cubes that worked fine yesterday:
    An error has occurred on the server
    Error class: Express Failure
    Server error descriptions:
    INI: error creating a definition manager, Generic at TxsOqConnection::generic<BuildProcess>
    INI: XOQ-01706: An unexpected condition occurred during the build: "TxsOqLoadCommandProcessor::generatePartitionListSource-unsupported join condition-table"., Generic at xsoqBuild
    at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
    at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
    at oracle.olapi.data.source.DataProvider.executeBuild(Unknown Source)
    at oracle.olap.awm.wizard.awbuild.UBuildWizardHelper$1.construct(Unknown Source)
    at oracle.olap.awm.ui.SwingWorker$2.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    It all started after I tried to add a calculated measure to an existing cube (something I have done before in 11g... a feature I love).
    This is too bad; so far I've been loving 11g OLAP compared to 10g OLAP. But now 11g is starting to turn up these bullshit bugs as well. I guess OLAP is still too crappy to rely on for production... good thing I won't recommend a rollout of this product to my clients. It's a great tool for having fun with development, but using Oracle OLAP + AWM in a real company is career suicide.
    AWM 11.2.0.1

  • 'Get All New Data Request by Request' option not working between DSO and Cube

    Hi BI's..
    Could anyone please tell me why the options 'Get one Request only' and 'Get All New Data Request by Request' are not working in a DTP between a standard DSO and an InfoCube.
    Scenario:
    I loaded the data year-wise, say FY 2000 to FY 2009, via InfoPackage into a write-optimized DSO (10 requests), then loaded it request by request into a standard DSO, activating each request. I selected the 'Get All New Data Request by Request' option in the DTPs, and it works fine between the WDSO and the SDSO, but not between the SDSO and the cube. When I execute that DTP, it takes the data from the SDSO into the cube as a single request (10 requests become a single request).
    Regards,
    Sari.

    Hi,
    What do your DTP settings look like among the options below? It should be Change Log, assuming you are not deleting change log data.
    Delta Init. Extraction from...
    - Active Table (with archive)
    - Active Table (without archive)
    - Archive ( full extraction only)
    - Change Log
    Also, if you want to keep deltas enabled, please do not delete the change log; deleting it could create issues with further updates from the DSO.
    Hope that helps.
    Regards
    Mr Kapadia
    *Assigning points is the way to say thanks*

  • Filter in DTP load from DSO to cube by Forecast version not working?

    Hi All
    Can anyone help with the code below? I wrote it to filter the data updated from the DSO to the cube in the DTP - filtering on the next period's forecast version. The code is not working: it is loading data of the present forecast version as well. Can anyone help, please?
    data: l_idx like sy-tabix.
    data: L_date type sy-datum,
          t_gjahr  type t009b-bdatj,
          t_buper  type t009b-poper,
          l_period(6) type c.
              read table l_t_range with key
                   fieldname = 'ZFCSTVERS'.
              l_idx = sy-tabix.
       clear: t_buper, t_gjahr.
        L_date = sy-datum.
        call function 'DATE_TO_PERIOD_CONVERT'
          EXPORTING
            i_date  = L_date
            i_periv = 'Z1'
          IMPORTING
            e_buper = t_buper
            e_gjahr = t_gjahr.
    *---> Check if the period is 012, then increase the year by 1 and set
    *period to 001.
        if t_buper = '012'.
          t_gjahr = t_gjahr + 1.
          t_buper = '001'.
        else.
    *---> Increase just the period by 1.
          t_buper = t_buper + 1.
        endif.
        concatenate t_gjahr t_buper+1(2) into l_period.
        l_t_range-fieldname = 'ZFCSTVERS'.
        l_t_range-low = l_period.
        l_t_range-sign = 'I'.
        l_t_range-option = 'EQ'.
           append l_t_range.
              p_subrc = 0.
    sk
    Edited by: SK Varma Penmatsa on Jan 23, 2012 2:30 PM

    Hi Praveen/Raj,
    Basically PCS_PER_PACK is a KF I have in the DSO, which I use to calculate the total number of pieces, which I store in a KF in the cube. The transformation rule that calculates TOTAL_PCS is a routine.
    Within this routine I multiply PACKS * PCS_PER_PACK to calculate the figure. I do not store PCS_PER_PACK in the cube.
    Is it this rule you want me to check for whether aggregation is set to SUM or overwrite? This rule should add up the total pieces; if I say overwrite, it might not give me the desired result.
    What I cannot figure out is: since the transformation rules go record by record, why would it add up the PCS_PER_PACK figure before calculating?
    I cannot access the system at the moment; as soon as I can, I will check on it and get back to you.
    Thanks once again for your quick response to my need.
    regards
    dilanke

  • Report on Virtual cube is not working.

    Hi,
    I created a report on a virtual cube. The virtual cube is based on a function module which picks data from a MultiProvider; in the FM, the import parameter is the MultiProvider name.
    The MultiProvider is built on an SPO object, and a BIA index is created on the SPO. If I deactivate the BIA on the SPO, the report built on the virtual cube works in the portal. If I activate the BIA, the report does not work in the portal.
    The report works fine in RSRT and in the Analyzer whether the BIA is active or inactive.
    Regards
    GK

    Hi
    The multicube you created must comprise InfoCubes, DSOs and InfoObjects. Now, this error you are getting - is it for
    - a specific characteristic, or
    - any characteristic that is entered fourth in the series of filter selections, or
    - the fourth value of a specific characteristic?
    Are the characteristic and the value you use in the filter present in all the underlying objects included in the multicube? You can check this in each of the objects in the multicube independently. This will give you an idea whether the error reported by the system is genuine or not.
    Cheers
    Umesh

  • Parent Child [MSAS cube is not working in BO]

    Hi
    Does anyone know if BO is prepared to work with the parent-child concept? I'm asking because I've created a universe connected to an MSAS cube, and it's not working for hierarchies designed using the parent-child concept. When I include the field in the report, it returns a null value (in other words, an error); however, if I remove this field, the report works fine.
    Does anyone suffer from the same issue?
    Thx

    Hi Rody,
    this forum is for the SAP BusinessObjects BI Solution Architecture. I would suggest you post the question into the SAP BusinessObjects product forum for the product that you are trying to use.
    regards
    Ingo Hilgefort

  • ByAccount Aggregation not working correctly in 2 of 3 cubes

    I have 3 financial cubes in the same project. Essentially the cubes are the same, with only the account rollup being different. For the accounts I have 3 separate parent-child account dimensions defined. The account type for each account is defined the same in all 3 dimensions, although there are different rollup groupings. At this point, I have our main account rollup working correctly; however, the other two are not rolling up correctly.
    It appears that these two cubes are treating every account as an "Asset" and aggregating with LastNonEmpty. I have displayed the "Account Type" field in the solution, and I can see that it is set correctly in all 3 cubes.
    I have tried removing the account intelligence configurations and then recreating them with no luck, I consistently get the same result.
    Any ideas on what the problem may be are appreciated.
     

    Hi Jgretton,
    If I understand correctly, the ByAccount aggregation works on only one of the cubes, right?
    ByAccount aggregation can be set up by running the Define Account Intelligence wizard. To do this, go to the Dimension menu, select Add Business Intelligence, and then select 'Define Account Intelligence' from the list of available enhancements.
    In your scenario, please ensure that the settings are correct on the cubes where ByAccount aggregation is not working. Here is an article that describes how to enable it step by step:
    http://www.packtpub.com/article/measures-and-measure-groups-microsoft-analysis-services-part1
    Hope this helps.
    Regards,
    Charlie Liao
    TechNet Community Support

  • RBS maintenance job Orphan cleanup not working

    The problem is that Orphan Cleanup won't complete. It starts up, checks the pools, and always gets stuck on 5 pools, although it can delete the other pools' data.
    The command I'm running:
    cd "C:\Program Files\Microsoft SQL Remote Blob Storage 10.50\Maintainer"
    "Microsoft.Data.SqlRemoteBlobs.Maintainer.exe" -ConnectionStringName RBSMaintainerConnection -Operation GarbageCollection ConsistencyCheck ConsistencyCheckForStores -GarbageCollectionPhases rdo -ConsistencyCheckMode r >> SIS_RBS_NOTIME_Maintainer%date%.log
    The Error I'm getting:
    This task has ended. Processed 334 work units total. 0 Work units were incomplete. Needed to delete 1606751 blobs. Succeeded in deleting 1606751 blobs, 0 blobs were not found in the blob store.
    Starting Orphan Cleanup.
    Starting Orphan Cleanup for pool <PoolId 25, BlobStoreId 1, StorePoolId 0x19000000>.
    Skipping the current unit of work because of an error. For more information, see the RBS Maintainer log.
    Starting Orphan Cleanup for pool <PoolId 270, BlobStoreId 1, StorePoolId 0x0e010000>.
    Skipping the current unit of work because of an error. For more information, see the RBS Maintainer log.
    Starting Orphan Cleanup for pool <PoolId 94, BlobStoreId 1, StorePoolId 0x5e000000>.
    Skipping the current unit of work because of an error. For more information, see the RBS Maintainer log.
    Starting Orphan Cleanup for pool <PoolId 122, BlobStoreId 1, StorePoolId 0x7a000000>.
    Skipping the current unit of work because of an error. For more information, see the RBS Maintainer log.
    Starting Orphan Cleanup for pool <PoolId 154, BlobStoreId 1, StorePoolId 0x9a000000>.
    Skipping the current unit of work because of an error. For more information, see the RBS Maintainer log.
    No work is available at this time.
    Other clients, processes or threads may be currently working on other tasks.
    This task has ended. Processed 6 work units total. 5 Work units were incomplete. Needed to delete 0 blobs. Succeeded in deleting 0 blobs, 0 blobs were not found in the blob store. Enumerated 0 blobs, 0 blobs are being considered for orphan cleanup.
    This task has ended.
    The Event viewer shows:
    Message ID:2, Level:ERR , Process:8244, Thread:1
    Skipping the current unit of work because of an error. For more information, see the RBS Maintainer log.
    Operation: WorkExecute
    BlobStoreId: 0
    Log Time: 2015.02.04 23:43:47
    Message ID:3, Level:ERR , Process:8244, Thread:1
    Skipping unit of work <PoolId 25, BlobStoreId 1, StorePoolId 0x19000000> because of an error. For more information, see the RBS Maintainer log.
    Operation: WorkExecute
    BlobStoreId: 0
    Log Time: 2015.02.04 23:43:47
    Exception: System.Data.SqlClient.SqlException: The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
    Internal error in RBS. rbs_sp_gc_get_slice did not find any work unit.
    at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
    at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
    at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
    at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
    at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
    at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
    at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
    at Microsoft.Data.SqlRemoteBlobs.SqlOperations.ExecuteNonQuery(RemoteBlobCommand commandObject, SqlCommand command)
    at Microsoft.Data.SqlRemoteBlobs.Maintainer.BeginEnumerateCommand.ExecuteDatabaseOperation2()
    at Microsoft.Data.SqlRemoteBlobs.RemoteBlobCommand.ExecuteInternal()
    at Microsoft.Data.SqlRemoteBlobs.RemoteBlobCommand.Execute()
    at Microsoft.Data.SqlRemoteBlobs.Maintainer.OrphanCleanup.ProcessPoolOrSlice(Int32 nestingDepth, SqlRemoteBlobContext context, BlobDetails blobDetails, Request parentRequest)
    at Microsoft.Data.SqlRemoteBlobs.Maintainer.WorkExecutor.ExecutePool(Int32 nestingDepth, SqlRemoteBlobContext context)
    at Microsoft.Data.SqlRemoteBlobs.Maintainer.OrphanCleanup.ProcessWorkUnit(Int32 nestingDepth, SqlRemoteBlobContext context)
    at Microsoft.Data.SqlRemoteBlobs.Maintainer.WorkExecutor.Execute(Int32 nestingDepth)
    Also, in the database, those pools have work_state = 20, while the other pools have work_state = 30.
    Is it possible to repair those pools, or just skip them, so that the database gets cleaned up?
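    For reference, I looked at the pool states with a query along these lines (the RBS internal table name here is from memory and may well differ, so treat it as an assumption):
    SELECT pool_id, work_state
    FROM mssqlrbs.rbs_internal_pools  -- assumed/placeholder name
    WHERE work_state <> 30;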

    One more problem. 
    First starting Reference scan:
    Reference Scan is complete for this database.
    Scanned 39623 blobs. Deleted 163 blobs in the range of 0x000000000437ff8a00000001(exclusive) to 0x0000000004389a5100000001(inclusive).
    This task has ended. Processed 619 work units total. 0 Work units were incomplete.
    Second is Starting Delete Propagation:
    Delete Propagation complete for pool <PoolId 256, BlobStoreId 1, StorePoolId 0xff000000>.
    0 Delete for 0 blobs attempted, 0 blobs were deleted. 0 blobs were not found in the blob store. For more information, see the RBS Maintainer log.
    Delete Propagation is complete for this database.
    This task has ended. Processed 355 work units total. 0 Work units were incomplete. Needed to delete 280542 blobs. Succeeded in deleting 280542 blobs, 0 blobs were not found in the blob store.
    Third one is Orphan Cleanup:
    Starting Orphan Cleanup.
    Starting Orphan Cleanup for pool <PoolId 354, BlobStoreId 1, StorePoolId 0x62010000>.
    Enumerated 172683 blobs, 172683 blobs are being considered for orphan cleanup.
    Orphan Cleanup complete for pool <PoolId 354, BlobStoreId 1, StorePoolId 0x62010000>.
    0 Delete for 0 blobs attempted, 0 blobs were deleted. 0 blobs were not found in the blob store. For more information, see the RBS Maintainer log.
    Starting Orphan Cleanup for pool <PoolId 355, BlobStoreId 1, StorePoolId 0x63010000>.
    Enumerated 15124 blobs, 15124 blobs are being considered for orphan cleanup.
    Orphan Cleanup complete for pool <PoolId 355, BlobStoreId 1, StorePoolId 0x63010000>.
    0 Delete for 0 blobs attempted, 0 blobs were deleted. 0 blobs were not found in the blob store. For more information, see the RBS Maintainer log.
    Orphan Cleanup is complete for this database.
    This task has ended. Processed 2 work units total. 0 Work units were incomplete. Needed to delete 0 blobs. Succeeded in deleting 0 blobs, 0 blobs were not found in the blob store. Enumerated 187807 blobs, 187807 blobs are being considered for orphan cleanup.
    This task has ended.
    Starting RBS consistency check with attempt to repair.
    No RBS consistency issues found.
    Consistency check completed.
    Initializing consistency check for stores.
    Starting basic consistency check on blob store <1:FilestreamProvider_1>.
    Consistency check on blob store <FilestreamProvider_1> returned <Success>.
    This task has ended.
    The problem is that Orphan Cleanup cleans just 2 pools, while in total I have 354 pools. Why is Orphan Cleanup not cleaning all the pools?
    I have 4 TB of data with RBS, of which 2.5 TB is real data and 1.5 TB needs to be deleted through RBS, but RBS is not doing it.
    The database recovery model is Simple. After each RBS maintenance run I execute 'checkpoint;' in the database, and still all the files remain in the filesystem.
    RBS config: 
    config_key config_value
    blob_store_operation_retry_attempts 3
    column_config_version 1
    column_config_version_of_gc_view 1
    complete_delete_propagation_start_time 2015-02-26T19:59:19.357
    complete_orphan_cleanup_start_time 2015-02-26T19:59:22.343
    complete_reference_scan_start_time 2015-02-26T19:47:08.507
    configuration_check_period time 00:00:00
    db_config_version 11
    default_blob_store_name FilestreamProvider_1
    delete_propagation_compatibility_level 105
    delete_propagation_end_time 2015-02-26T19:59:22.337
    delete_propagation_in_progress false
    delete_propagation_start_time 2015-02-26T19:59:19.357
    delete_scan_period time 00:00:00
    disable_pool_slicing false
    garbage_collection_slice_duration days 7
    garbage_collection_time_window time 00:00:00
    gcd_num_blobs_per_iteration 1000
    gcd_work_unit_keep_alive_time time 00:10:00
    gco_enum_num_blobs_per_iteration 1000
    gco_num_blobs_per_iteration 1000
    gco_work_unit_keep_alive_time time 00:10:00
    gcr_num_blobs_per_iteration 1000
    gcr_num_blobs_per_work_unit 100000
    gcr_work_unit_keep_alive_time time 00:02:00
    history_table_max_size 10000
    history_table_trim_size 1000
    index_reorganize_min_fragmentation 10
    index_reorganize_min_page_count 10000
    maintain_blob_id_compatibility false
    max_consistency_issues_found 1000
    max_consistency_issues_returned 100
    min_client_library_version_required 10.0.0.0
    min_client_library_version_supported 10.0.0.0
    orphan_cleanup_compatibility_level 105
    orphan_cleanup_end_time 2015-02-26T19:59:22.610
    orphan_cleanup_in_progress false
    orphan_cleanup_start_time 2015-02-26T19:59:22.343
    orphan_scan_period time 00:00:00
    rbs_filegroup PRIMARY
    rbs_schema_version 10.50.0.0
    reference_scan_compatibility_level 105
    reference_scan_end_time 2015-02-26T19:59:19.320
    reference_scan_in_progress true
    reference_scan_start_time 2015-02-27T07:02:45.193
    set_trust_server_certificate true
    set_xact_abort false

  • Quicklook does not work with WMV files and quick look no longer maintains resized views when viewing from a folder using the up/down arrows

    Quicklook does not work with WMV files and quick look no longer maintains resized views when viewing from a folder using the up/down arrows. Any fixes?

    Same problem here...

  • ODS, Inventory cube, and WM - Trying to run a report but not working! Help

    Hi Gurus,
    Hope somebody can help me here....
    I have an Inventory cube, an ODS on Material/Plant, and another ODS on Warehouse Management. For my reports I need fields from all three, so I created a MultiProvider from them. But only Material and Plant are common to all three, and those are used in defining the MultiProvider. Along with these I have used some more characteristics which are not common. Now when I run a query on this MP, the common characteristics get correct values, but for the uncommon characteristics the values of key figures from the other data targets come out blank, and an additional row appears.
    For example, I have KFs like Blocked Stock in the cube, and in one of the ODSs I have KFs like Standard Price. In the query, for the characteristic Material Group (which comes from the ODS), there are no values for Blocked Stock: I see one row with a value for Blocked Stock and a blank Standard Price, and an additional row with a blank Blocked Stock and a value for Standard Price. This is the case with all the other KFs of the different InfoProviders.
    Is it because they are not common to all the InfoProviders? Is there any way to solve this? RRI will not work, as I need the KFs from both InfoProviders.
    I am stuck with this problem, so any guidance would be a BIG help.
    Thanks

    Is it because they are not common in all the info providers?
    --> Yes.
    Is there any way to solve this?
    --> Yes and no. If there is a join condition between the ODSs that you can use to get the fields on the same row, you can use an InfoSet.
    --> To achieve something like an outer join, you can create RKFs for your base KFs and restrict them on the uncommon chars with 'constant selection'. This may not work in all cases. Search for a blog by Prakash Darzi on 'constant selection' to understand this.
    If logically there is no relation (try to visualise how you would derive your output row if you were given these two sets in a file or spreadsheet - is there a logical link to connect the two sets of data?), you cannot expect the results that you are looking for.
