Greater-than-or-equal filtering not working when retrieving data from an SSAS 2008 cube in an SSRS 2012 report

I have an SSRS 2012 report against an SSAS 2008 cube. My report criteria require filtering on measures, so I created the measures as dimensions. The report needs >= functionality on those measures, but the report's Query Designer only offers =, IN, Within Range, Excluding Range, and MDX as operators.
To achieve this, I added "From" and "To" parameters on the numeric dimension. The "To" parameter is set as internal, and its default value comes from another dataset query that returns the MAX value; I confirmed that the MAX value returned is a member of the measure-converted-to-dimension. Together this works as >=. The user enters only the "From" parameter, and "To" is set internally. The "From" parameter is a textbox, so the user can enter any value, even one that is not a member; I cannot use a list of values for it.
But whenever I run the report after entering all the selection criteria, I keep getting the error "The restrictions imposed by the CONSTRAINED flag in the STRTOMEMBER function were violated". I know this means that the value being passed is not a member.
I did try another approach, but then I get "the syntax for '&' is incorrect".
If I use drop-downs for the "From" and "To" parameters, it works fine, but that is not what the business users need.
Please let me know what options there are to make this work. I did try using parameters that filter the returned dataset; that works, but there is a performance impact.

I think if you use the following method you will be able to compare the members:
CDbl(StrToMember("[Fact RCS CV BLAST].[APPRLIMIT ACH].&[" + @ToFactRCSCVBLASTAPPRLIMITACH + "]").Member_Name)
Since you converted your measures into dimensions, the member name holds an integer value. In that case, use VBA functions inside the MDX to do the data type conversion.
Take a look at the following MDX written against Adventure Works:
WITH MEMBER [Measures].[Member Name] AS
  CInt(Right(CStr([Date].[Calendar Year].CURRENTMEMBER.MEMBER_NAME), 4))
SELECT {[Measures].[Sales Amount], [Measures].[Member Name]} ON COLUMNS,
FILTER(
  [Date].[Calendar Year].[Calendar Year].MEMBERS,
  CInt(Right(CStr([Date].[Calendar Year].CURRENTMEMBER.MEMBER_NAME), 4)) >= 2005 AND
  CInt(Right(CStr([Date].[Calendar Year].CURRENTMEMBER.MEMBER_NAME), 4)) <= 2008
) ON ROWS
FROM [Adventure Works]
I am filtering the years by extracting the integer portion of the member name and applying VBA data type conversion functions. Keep in mind that you have to get rid of the CONSTRAINED clause if your business users can enter anything in the SSRS text box.
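Applied to your cube, the dataset query could be sketched roughly as below. The dimension and parameter names are taken from your expression above; [Measures].[Your Measure], [Your Cube], and the attribute level name are placeholders I am assuming, and I am also assuming the member names hold plain numeric values.

```mdx
-- Sketch only, assuming member names are plain numbers and the
-- "From" parameter arrives as a numeric string.
-- [Measures].[Your Measure] and [Your Cube] are placeholders.
SELECT
  {[Measures].[Your Measure]} ON COLUMNS,
  FILTER(
    [Fact RCS CV BLAST].[APPRLIMIT ACH].[APPRLIMIT ACH].MEMBERS,
    CDbl([Fact RCS CV BLAST].[APPRLIMIT ACH].CURRENTMEMBER.MEMBER_NAME)
      >= CDbl(@FromFactRCSCVBLASTAPPRLIMITACH)
  ) ON ROWS
FROM [Your Cube]
```

Because FILTER compares values rather than resolving the typed text to a member, StrToMember (and therefore the CONSTRAINED restriction) is not needed at all, and the user can type any number into the "From" textbox.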

Similar Messages

  • I pull fifty-four bytes of data from a microprocessor's EEPROM using the serial port. It works fine. I then send a request for 512 bytes and my "read" goes into a loop condition; no bytes are delivered and the system is lost


    You mention that you send a string to the microprocessor that tells it how many bytes to send. Instead of requesting 512 bytes, try reading 10 times and only requesting about 50 bytes at a time.
    If that doesn't help, try directly communicating with your microprocessor through HyperTerminal. If you are not on a Windows system, please let me know. Also, if you are using an NI serial board instead of your computer's serial port, let me know.
    In Windows XP, go to Start, Programs, Accessories, Communications, and select HyperTerminal.
    Enter a name for the connection and click OK.
    In the next pop-up dialog, choose the COM port you are using to communicate with your device and click OK.
    In the final pop-up dialog, set the communication settings for communicating with your device.
    Type the same commands you sent through LabVIEW and observe if you can receive the first 54 bytes you mention. Also observe if data is returned from your 512 byte request or if HyperTerminal just waits.
    If you do not receive the 512 byte request through HyperTerminal, your microprocessor is unable to communicate with your computer at a low level. LabVIEW uses the same Windows DLLs as HyperTerminal for serial communication. Double check the instrument user manual for any additional information that may be necessary to communicate.
    Please let me know the results from the above test in HyperTerminal. We can then proceed from there.
    Grant M.
    National Instruments

  • Error when loading data from DSO to Cube

    After upgrading to 2004s we get the following error message when trying to load data from a DSO to a Cube:
    Has anyone experienced a similar problem or have some guidelines on how to resolve it?
    Kind regards,

    Hi Martin,
    Problem solved

  • Dump when loading Data from DSO to Cube

    I get a short dump DBIF_REPO_PART_NOT_FOUND when trying to load data from a DSO to a cube. From DSO to DSO it works fine.
    Any ideas?

    The dump occurs in the last step of the last data package in the DTP monitor.
    I checked the OSS note, but I don't know how to check the kernel version.
    Now I used another DSO for testing and I got another dump with a different name.
    After that I logged in again and it works with my test DSO data...
    But it is still not working with the original one...
    @Kazmi: What details do you mean exactly?

  • Records increased when sending data from ODS to cube

    Hello experts,
            When I send data from the ODS to the cube, I send 20 records as a full update, but in my cube I can see 200 records, including the old records. Every time I load the data I get the same problem: it fetches all the old records. Please suggest what to do in order to resolve this problem.
    Thanks alot,

    To delete the PSA table records, follow these steps:
    Right-click your ODS/cube and choose "Show data flow diagram"; on the right-side panel turn "Technical Settings" on and you can get your PSA table name. Type this table name in SE11 and delete your records.
    Or in RSA1 --> PSA, choose your InfoSource, right-click the PSA, and choose "Delete PSA data".
    Hope it helps

  • Upload data from Excel to CRM using BDoc - possible or not

    Hi all,
    I need to upload data from Excel to CRM (Opportunity Management). Is it possible using BDoc or not? Please provide the list of methods to upload data from Excel, and also the best method for this scenario.

    BDocs are used to transfer data from one SAP system to another, e.g. from CRM to ECC or R/3.
    If you want to upload data from Excel to CRM, this can be done with the help of IDocs, not BDocs (method 1 - using LSMW).
    Take the help of a CRM technical consultant and define LSMW projects. The project will take care of field mapping and conversion. Once that is done, 4 steps need to be performed:
    1. Read data
    2. Convert data
    3. Generate IDocs
    4. Post IDocs
    Once the IDocs post error-free, the data will be available in the CRM system.
    Another method is transaction SECATT.
    Here you can define test scripts and record all the activities done in a transaction. Then define your test configs, which will contain the Excel sheet data, and upload the data.
    Reward with points if this helps.

  • Job cancelled while loading data from DSO to cube using DTP

    Hi All,
      While I am loading data from the DSO to the cube, the load job is getting cancelled.
    In the job overview I got the following messages:
        SYST: Date 00/01/0000 not expected.
        Job cancelled after system exception ERROR_MESSAGE
    What can be the reason for this error? I have successfully loaded data into 2 layers of DSO before loading the data into the cube.

    Are you loading a flat file to the DSO?
    Then check the data in the date field in the PSA and replace it with the correct date.
    I think you are using a write-optimized DSO, which is similar to the PSA; it will take all the data from the PSA to the DSO.
    So clear the data in the PSA, then load to the DSO and then to the cube.
    If you don't need a date field in the cube, remove the mapping in the cube's transformation, activate it, activate the DTP, and trigger it; it will work.

  • Updating data from PSA to CUBE

    Hi Experts,
    I got a mismatch between R/3 and BW reports; one document number value has not been updated in the PSA and InfoCube, which is why a wrong value shows in the BW report. I know which value is missing and in which field.
    I want to enter that field value in the PSA and then load the data from the PSA to the cube again.
    Is this possible or not? Shall I do it like this?
    Thanks in advance

    It is very much possible in the PSA.
    But it is not the best way to do it.
    Maintain that in the PSA and do a delta load to the cube.
    Assigning points is the way of saying thanks in SDN

  • Greater-than-or-equal symbols are coming across as question marks

    Greater-than-or-equal symbols and less-than-or-equal symbols are coming across as question marks in Jasper Reports. Do you have any suggestions on how to solve this?

    This is the same problem as your other thread; please keep it to one thread:

  • Error when trying to load data from ODS to CUBE

      I am getting a short dump when trying to load data from ODS to cube. The runtime error is 'TYPELOAD_NEW_VERSION' and the short text is 'A newer version of data type "/BIC/AZODS_CA00" was found than one required'. Please help me out.

    Check this thread; Ajeet Singh has given a good solution here:
    Re: Error With Data Load-Getting Canceled Right Away
    Also check SAP Note 382480 for your reference:
    A DART extraction job terminates with runtime error TYPELOAD_NEW_VERSION and error message:
    Data type "TXW_INDEX" was found in a newer version than required.
    The termination occurs in the ABAP/4 program "SAPLTXW2" in "TXW_SEGMENT_RECORD_EXPORT".
    Additional key words
    Cause and prerequisites
    This problem seems to happen when several DART extraction jobs run in parallel and both jobs access table TXW_INDEX.
    If possible, avoid running DART extractions in parallel.
    If you do plan to run such jobs in parallel, please consider the following points:
    In the DART Extract configuration, increase the value of the parameter "Maximum memory allocation for index (MB)" if possible. You can estimate reasonable values with the "File size worksheet" utility.
    Run parallel DART jobs on different application servers.
    As an alternative, please apply Note 400195.
    It may help you.

  • EVDRE error "encountered problem when retrieving data from webserver"

    In an EVDRE we always get the error "encountered problem when retrieving data from webserver".
    Analysing this further, we noticed it is always generated when a base member is selected in the CV and the row expansion for this dimension has one of the following expansion settings:
    If we select SELF, the error disappears and the EVDRE works fine again... is this normal behavior?
    There is no problem with the application or the dimension.
    Solved it

    Note that the keyword "ALL" does not include the member itself. This may cause some confusion, but as Harish implies, when you select a base member it finds no matches if your memberset is "ALL".
    If you want to design a report to handle the user moving back and forth between base and non-base members in their CV, you either need to include SELF,ALL or SELF,DEP, or something similar....
    OR you need to get a little fancier, and use EVPRO to look and see if the CV member is base -- evpro(app,member,"CALC") -- and then use some conditional logic in Excel to choose the correct expansion options. If Calc=N, use Self, otherwise use SELF,DEP or whatever you choose. This can be a bit tricky if you then want the user to be able to drill down in the report (especially if you want the workbook option to insert add'l rows on the expansion, rather than replace), but it's all possible.

  • When transferring data from an old hard drive to a new one, is there a way to keep alias files and folders intact?

    My Mac and I rely HEAVILY on the use of alias files and folders. The simple version of my question is: when transferring data from an old hard drive to a new one, is there a way to keep alias files and folders intact? These are not symlinks or symbolic links; they're all alias files and folders.
    Is there any software out there that might help me with a solution? Are there any tricks or tips you can give me to try?

    Use either Setup Assistant at first start, or Migration Assistant on subsequent occasions, and all will be transferred intact. Given your wording I presume you are Unix-savvy and will appreciate the problems that duplicate user IDs would cause; so if you use MA for the migration, make sure the target Mac has no user ID equal to one on the source and you will be fine.
    SA does not have these issues, since it runs before any user account is created and can simply copy over everything. MA runs after the fact and only solves the issue partially by changing the UID in the user directory, which can lead to permission problems.

  • Error when uploading data from R/3

    hi Gurus,
            I have got a problem when loading data from R/3 to BW.
    I loaded an init request and got more than one lakh (100,000) records.
    I found out that there are 2000 records with the same error in the PSA.
    I used a normal staging scenario: PSA and then to the data target.
    I know we can fix it manually, but is there any other way without retrieving from R/3 again?
    How do I solve this problem?
    Also, please let me know if I can load only the correct records into the data target, preventing the incorrect ones through any specific setting.
    Will award points for good answers.
    Thanks,

    Hi Krishna,
    There was a typo mistake with Rohini. Use transaction RSKC and enter those unpermitted characters, then do the load; this time you will not get that error.
    Try to assign points up to your satisfaction level.

  • When transferring data from my old iMac to a newer one do the old Apple apps like iWorks override the new versions installed on the newer computer?

    When transferring data from an older iMac to a newer iMac (refurbished), do Apple applications like iWork replace the ones on the newer computer?
    I have read how to transfer data using Setup Assistant, but I am concerned about applications from Apple having the same names.

    As you likely know, legacy apps that are PPC-based will not migrate over; however, the data files will. I would urge you to inventory your apps and their data files to see how important they are, and then look app by app at what replacements are available. If you are using a version of Office for Mac prior to 2008, you will need a new license for the app, because the 2004 version was PPC-based. The files will come over and are readable by newer versions of Office for Mac (I recommend the 2011 version) and also by the iWork apps (Pages, Numbers, Keynote), but those are different apps that work differently and do not have all the features of the MS Office apps. This is really only an issue if you share files with MS Office users; if you use them for personal work, iWork may be just fine for your needs.

  • Issue when uploading Sales data from DSO to Cube.

    Dear All,
    I have an issue when uploading sales data from a DSO to a cube. I am using BI 7.0 and have uploaded all sales document-level data to my DSO. Then I use a transformation rule to calculate the sales value when doing the DTP to the cube. The cube has customer-wise aggregated data.
    In the DSO I have NetPrice (KF) and Delivered_QTY (KF). I do a simple multiplication routine in the transformation from DSO to cube.
    At the moment I use the active table (without archive) on the DSO to get the data, since this is my first load.
    The issue is that the figure (sales value) in the cube is incorrect. I am getting very large values, which is impossible.
    Can someone please help me.

    Are you sure that the cube has customer-wise aggregated data? It will always aggregate the key figure values for the same set of characteristics.
    Did you check the other key figures as well? Are they also inflated, or is the problem with this key figure only?
    During the data load, the records may be aggregated first for the same characteristic values, and the multiplication happens afterwards. If that is the case, you may have to multiply the values before they are stored in the data package and then let them aggregate; this can be achieved through a start routine.
    But first verify whether the other key figures have the same issue.
