High TimeDataRetrieval in SSRS, for simple queries

Hi there
We have a SharePoint 2013 instance running PowerView, that is hitting an SSAS Tabular Model instance in the back-end.
We are seeing very slow response times in PowerView. I dug into things and noticed that if I use a Report Data Source connection that impersonates the current user, it runs very slowly; whereas if I set the Report Data Source in SharePoint to
"Use the following credentials" and specify the SAME account I am logged in as (the one that gives the slow results), it all works lightning fast.
SSAS doesn't seem to be the issue (all data is held in an in-memory SSAS Tabular Model, CPU and RAM usage are very low throughout, and in reviewing Profiler results, the query responses appear to be the same on both the SLOW and FAST runs).
I checked with Fiddler, which pointed at the SSRS calls being slow, and on reviewing the SSRS logs I see the following. (The same 3 queries were run in each case, returning the same number of records each time (roughly 51); results here are in ms and are for
the "TimeDataRetrieval" value:)
QUERIES WITHOUT EMBEDDED CREDENTIALS:
Query 1 - 3074
Query 2 - 3085
Query 3 - 84
QUERIES WITH EMBEDDED CREDENTIALS:
Query 1 - 76
Query 2 - 61
Query 3 - 9
I also noted that if I run the FAST connection query, close the IE window, open a new IE window, and then use the OTHER, SLOW connection, it works fast for every request I make for the time being; as if that connection is cached somewhere
for this user?
Any thoughts would be greatly appreciated.
Thanks
David

Hi Jude_44,
Thank you for your question.
I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay is to be expected while the case is transferred. Your patience is greatly appreciated.
Thank you for your understanding and support.
Thanks,
Wendy Fu
TechNet Community Support

Similar Messages

  • Extensive IO and CPU for simple queries

    Hi,
    I have a machine using oracle 9.2.0 and solaris 10.
    Every simple query causes very heavy IO and a lot of CPU. This only happens on one particular machine (we have the same DB version and Solaris on another machine, and it works fine).
    One example query is when I use Enterprise Manager to get the "configuration" information of the instance; it uses 50% IO. I got the trace file and ran tkprof, with the following output:
    SELECT UNIQUE sp.name, sp.sid, DECODE(p.type, 1, 'Boolean', 2, 'String', 3,'Integer', 4, 'Filename', ' '), sp.value, p.issys_modifiable, p.description FROM v$spparameter sp, v$parameter p WHERE sp.name = p.name ORDER BY sp.name,sp.sid
    call      count    cpu  elapsed   disk  query  current  rows
    Parse         4   0.02     0.01      0      0        0     0
    Execute       4   0.00     0.00      0      0        0     0
    Fetch         9   4.36    34.12   7980      0        0   783
    total        17   4.38    34.13   7980      0        0   783
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 5 (SYSTEM)
    Rows Row Source Operation
    261 SORT UNIQUE (cr=0 pr=0 pw=0 time=1214116 us)
    261 HASH JOIN (cr=0 pr=0 pw=0 time=1221296 us)
    361485 MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=370609 us)
    261 FIXED TABLE FULL X$KSPSPFILE (cr=0 pr=0 pw=0 time=19777 us)
    361485 BUFFER SORT (cr=0 pr=0 pw=0 time=6413 us)
    1385 FIXED TABLE FULL X$KSPPCV (cr=0 pr=0 pw=0 time=4180 us)
    1379 FIXED TABLE FULL X$KSPPI (cr=0 pr=0 pw=0 time=7001 us)
    It seems Oracle is doing full table scans of X$KSPPCV and X$KSPPI.
    Can anybody give me some suggestions to improve the performance?
    Thanks.

    Is there a difference in the query plans on the two machines?
    Did you analyze the SYS and SYSTEM schemas on one system and not the other?
    Are there different initialization parameters on the two machines?
    What do you mean by "it use 50% IO"? I'm not sure what that means and I'm not sure how you're measuring that.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • "System Resource Exceeded" for simple select query in Access 2013

    Using Access 2013 32-bit on a Windows Server 2008 R2 Enterprise. This computer has
    8 GB of RAM.
    I am getting:
    "System Resource Exceeded"  errors in two different databases
    for simple queries like:
    SELECT FROM .... GROUP BY ...
    UPDATE... SET ... WHERE ...
    I compacted the databases several times, no result. One database size is approx 1 GB, the other one is approx. 600 MB.
    I didn't have any problems in Office 2010
    so I had to revert to this version.
    Please advise.
    Regards,
    M.R.

    Hi Greg. I too am running Access on an RDP server. Checking Task Manager, I can see many copies of MSACCESS running in the process list, from all users on the server. We typically have 40-60 users on that server. I am only changing the Processor Affinity
    for MY copy, and only when I run into this problem. Restarting Access daily, I always get back to multi-processor mode soon thereafter.
    As this problem only seems to happen on very large Access table updates, and as there are only three of us performing those kinds of updates, we have good control over who might want to change the affinity setting to solve this problem. However, I
    understand that in other environments this might not be a good solution. In my case, we have 16 processors on the server, so I always take #1, my co-worker here in the US always takes #2, etc. This works for us, and I am only describing it here in case it
    works for someone else.
    The big question in my mind is what multi-threading methods are employed by Microsoft for Access that would cause this problem for very large datasets. Processing time for an update query on, say, 2 million records is massively improved by going down
    to 1 processor. The problem is easily reproduced, and so far I have not seen it in Excel even when working with very large worksheets. Also have not seen it in MS SQL. It is just happening in Access.

  • Gtksql pkgbuild. Simple GUI for sql queries.

    This is a simple program for running SQL queries. I made this PKGBUILD some time ago, but I'm not sure how good it is. Gtksql should support both MySQL and PostgreSQL, but the PostgreSQL support is broken (probably too old). The developer promises a brand new GTK2-based application. We will see...
    Since I can't make this PKGBUILD any better, I'm just posting it.
    gtksql PKGBUILD
    pkgname=gtksql
    pkgver=0.4.2
    pkgrel=1
    pkgdesc="Gtk front-end for sql queries"
    url="http://gtksql.sourceforge.net/"
    depends=('lua' 'gtk')
    makedepends=('mysql')
    source=(http://dl.sourceforge.net/sourceforge/gtksql/$pkgname-$pkgver.tar.gz)
    md5sums=('a0ba598027cd49f69f951a31342b51fd')
    build() {
      cd $startdir/src/$pkgname-$pkgver
      ./configure --prefix=/usr \
        --with-mysql \
        --with-lua
      make || return 1
      make prefix=$startdir/pkg/usr install
    }

    Hello:
    I'm trying to do add a calculated variable to my view that isn't an Entity
    Attribute, its simple really but i don't know if I'm leaving something out or
    not.
    I add the "new Attribute" , call it nCompleted, type - number
    I then check off everything as is required(as documented in Help)
    for an Sql Derived Attribute, the following code is entered in the
    Expression Box :
    Select group_services.office, count(client_group_services.clnt_group_services_id) as nCompleted
    Where group_services_id=client_group_services_id(+)
    And client_group_services_id.result="Completed";
    It isn't working. What am I missing?
    Sheena:
    It probably isn't working because of a missing 'group by' clause. I don't know the exact details of your query statement, but suppose you want to count the number of employees in a dept; then your query should look like:
    select dept.deptno, dept.dname, count(emp.empno) as nCount
       from dept, emp where dept.deptno = emp.deptno(+)
       group by dept.deptno, dept.dname  // This group-by is important
    OR if you want to use nested SELECT
    select dept.deptno, dept.dname,
       (select count(emp.empno) from emp where dept.deptno=emp.deptno(+)) as nCount
       from dept

  • Capture @@ROWCOUNT for all QUERIES using SQL AUDIT

    I have a requirement where the customer wants to audit all SELECT queries made to a specific table, and also capture the
    "number of rows returned" for these queries. I was able to capture the various SELECT queries happening in the database/table using SQL AUDIT. I am wondering if anybody can suggest how to capture the number of rows returned along
    with it. Since we have numerous stored procedures in the system, we won't be able to modify the existing stored procedures.

    Good day Vish_SQL,
    There are several options you can use, which fit different cases. For example:
    1. Using Extended Events (my preferred solution for most cases like this)
    2. Using Profiler (an older option)
    3. Using a view (with the same name as the table, after renaming the underlying table) instead of the original table, and adding to the view a simple function or SP (CLR, for example). The function returns the
    column value but, behind the scenes, writes information to another table. It is a very rare case that you need this option, and I can't recommend
    it.
    4. Using an application like GreenSQL, which adds another layer between the application and SQL Server. The users connect to the server, but the server does not listen for remote connections; the external app does.
    In this case the app gets the query and sends it to the server (after security checks or monitoring, if you need them).
    * It would be much simpler if you wanted to monitor DELETE, UPDATE, or INSERT, since in those cases you could work with an AFTER trigger.
    ** I highly recommend you monitor the application that connects to SQL Server rather than SQL Server itself, if you can.
    Check this:
    http://solutioncenter.apexsql.com/auditing-select-statements-on-sql-server/
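    The application-layer auditing Ronen recommends can be sketched generically. The following is a minimal illustration only (Python with sqlite3 as a stand-in for the real client stack; `AuditingCursor` and the sample table are hypothetical names, not part of any real API): a thin wrapper logs each SELECT statement together with the number of rows it returned.

```python
import sqlite3

class AuditingCursor:
    """Hypothetical wrapper that records each SELECT and its row count."""
    def __init__(self, cursor, log):
        self._cur = cursor
        self._log = log
        self._rows = None

    def execute(self, sql, params=()):
        self._cur.execute(sql, params)
        if sql.lstrip().upper().startswith("SELECT"):
            rows = self._cur.fetchall()
            self._log.append((sql, len(rows)))  # the "no of rows returned"
            self._rows = rows
        return self

    def fetchall(self):
        return self._rows

# Demo with an in-memory table.
log = []
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
cur = AuditingCursor(conn.cursor(), log)
rows = cur.execute("SELECT x FROM t WHERE x > ?", (1,)).fetchall()
print(log)  # [('SELECT x FROM t WHERE x > ?', 2)]
```

    The same pattern applies in any client stack that funnels queries through a single data-access layer, which is exactly why auditing at the application is easier than auditing inside the server.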
      Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]

  • Stored Procedures for Simple SQL statements

    Hi Guys,
    We are using Oracle 10g database and Web logic for frontend.
    The Product is previously developed in DotNet and SQL Server and now its going to develop into Java (Web Logic) and Oracle 10g database.
    Since the project was developed in SQL Server, there are a great many procedures written for simple SQL queries. Now I would like to gather your suggestions / pointers on using procedures for simple SELECT statements or INSERTs from Java.
    I have gathered some points on using PL/SQL procedures for simple SELECT queries:
    Cons
    If we use procedures for SELECT statements, a great many ref cursors are opened for simple SELECT statements (cursors opened at a huge rate)
    Simple SELECT statements are much faster than executing them from a procedure
    Pros
    Code changes for modifying select query in PL/SQL much easier than in Java
    Your help in this regard is more valuable. Please post your points / thoughts here.
    Thanks & Regards
    Srinivas
    Edited by: Srinivas_Reddy on Dec 1, 2009 4:52 PM

    Srinivas_Reddy wrote:
    > Cons
    > If we use procedures for select statements there are lot many Ref Cursors opened for Simple select statements (Open cursors at huge rate)
    Not entirely correct. All SQLs that hit the SQL engine are stored as cursors.
    On the client side, you have an interface that deals with this SQL cursor. It can be a Java class, a Delphi dataset, or a PL/SQL ref cursor.
    Yes, cursors are created/opened at a huge rate by the SQL engine. But it is capable of doing that. What you need to do to facilitate that is send it SQLs that use bind variables. This enables the SQL engine to simply re-use the existing cursor for that SQL.
    > Simple select statements are much faster than executing them from Procedure
    Also not really correct. SQL performance is SQL performance. It has nothing to do with how you create the SQL on the client side and what client interface you use. The SQL engine does not care whether you use a PL/SQL ref cursor or a Java class as your client interface. That does not change the SQL engine's performance.
    Yes, this can change the performance on the client side. But that is entirely in the hands of the developer and how the developer chose to use the available client interfaces to interface with the SQL cursor in the SQL engine.
    > Pros
    > Code changes for modifying select query in PL/SQL much easier than in Java
    This is not a pro merely for ref cursors, but for using PL/SQL as the abstraction layer for the data model implemented, and having it provide a "business function" interface to clients, instead of having the clients deal with the complexities of the data model and SQL.
    I would seriously consider ref cursors in your environment. With PL/SQL serving as the interface, there is a single place to tune SQL, and a single place to update SQL. It allows one to make data model changes without changing or even recompiling the client. It allows one to add new business logic and processing rules, again without having to touch the client.
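    The bind-variable point above can be illustrated generically. This is a sketch only (Python with sqlite3 standing in for a Java/Oracle client; in Oracle the placeholders would be named binds like `:deptno`): the statement text stays constant, so the engine can reuse one parsed cursor instead of hard-parsing a new literal-laden statement for every value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, 10), (2, 10), (3, 20)])

# Literal SQL: each deptno value produces a distinct statement text,
# so the engine must parse and cache a new cursor every time.
for deptno in (10, 20):
    conn.execute(f"SELECT count(*) FROM emp WHERE deptno = {deptno}")

# Bind variable: one statement text, re-usable cursor/plan; only the
# bound value changes between executions.
counts = {}
for deptno in (10, 20):
    (n,) = conn.execute("SELECT count(*) FROM emp WHERE deptno = ?",
                        (deptno,)).fetchone()
    counts[deptno] = n
print(counts)  # {10: 2, 20: 1}
```

    The same principle is what makes high cursor-open rates harmless: with binds, "opening a cursor" is a cheap soft parse against an already-cached statement.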

  • What is the best big data solution for interactive queries of rows with up?

    0 down vote favorite
    We have a simple table such as follows:
    | Name | Attribute1 | Attribute2 | Attribute3 | ... | Attribute200 |
    | Name1 | Value1 | Value2 | null | ... | Value3 |
    | Name2 | null | Value4 | null | ... | Value5 |
    | Name3 | Value6 | null | Value7 | ... | null |
    | ... |
    But there could be up to hundreds of millions of rows/names. The data will be populated every hour or so.
    The goal is to get results for interactive queries on the data within a couple of seconds.
    Most queries look like:
    select count(*) from table
    where Attribute1 = Value1 and Attribute3 = Value3 and Attribute113 = Value113;
    The where clause contains arbitrary number of attribute name-value pairs.
    I'm new to big data and am wondering what the best option is in terms of data store (MySQL, HBase, Cassandra, etc.) and processing engine (Hadoop, Drill, Storm, etc.) for interactive queries like the above.

    Hi,
    As always, the correct answer is "it depends".
    - Will there be more reads (queries) or writes (INSERTs)?
    - Will there be any UPDATEs?
    - Does the use case require any of the ACID guarantees, or would "eventual consistency" be fine?
    At first glance, Hadoop (HDFS + MapReduce) doesn't look like a viable option, since you require "interactive queries". Also, if you require any level of ACID guarantees or UPDATE capability, the best (and arguably only) solution is an RDBMS. Keep in mind that millions of rows is pocket change for a modern RDBMS on average hardware.
    On the other hand, if there will be a lot more queries than inserts, VERY few or no updates at all, and eventual consistency will not be a problem, I'd probably recommend you test a key-value store (such as Oracle NoSQL Database). The idea would be to use (AttributeX, ValueY) as the key, and a sorted list of the Names that have ValueY for their AttributeX as the value. This way you do only as many reads as there are attributes in the WHERE clause, and then compute the intersection (very easy and fast with sorted lists).
    Also, I'd do this computation manually. SQL may be comfortable, but I don't think it's Big Data ready yet (unless you choose the RDBMS way, of course).
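    The sorted-list intersection Joan describes can be sketched in a few lines. This is a minimal illustration only (Python as pseudocode; the `index` contents and attribute/value names are hypothetical, mirroring the sample table in the question): one read per WHERE-clause predicate, then a linear-time merge-style intersection.

```python
# Hypothetical key-value layout: (attribute, value) -> sorted list of
# the Names whose attribute has that value.
index = {
    ("Attribute1", "Value1"): ["Name1", "Name4", "Name7"],
    ("Attribute3", "Value3"): ["Name1", "Name7", "Name9"],
}

def intersect_sorted(a, b):
    """Intersect two sorted lists in O(len(a) + len(b))."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

def count_matches(predicates):
    """count(*) for a conjunction of (attribute, value) predicates:
    one key-value read per predicate, then intersect the sorted lists."""
    lists = [index[p] for p in predicates]
    result = lists[0]
    for lst in lists[1:]:
        result = intersect_sorted(result, lst)
    return len(result)

print(count_matches([("Attribute1", "Value1"),
                     ("Attribute3", "Value3")]))  # 2
```

    With hundreds of millions of names, each list would live in the key-value store rather than in memory, but the per-query cost stays proportional to the lengths of the lists actually touched.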
    I hope it helped,
    Joan
    Edited by: JPuig on Apr 23, 2013 1:45 AM

  • Simple queries in portal

    I am having some difficulty writing what I think should be simple queries for the portal. I cannot find any documentation about the tables and how they are linked. For instance, I need a report that gives information about items such as what folder they are in and what content area. I cannot determine how to link the content area into the query.

    Hi,
    The general structure to display an iView is:
    Role -> Workset -> Page -> iView.
    To create these, go to Content Administration. In the list you will find Content Provided by SAP; right-click on it and select Create iView (or Page, or Workset, or Role).
    Select Create Role. You will see a new page; give the role name and role ID, then click Next -> Finish.
    To edit the object, right-click it and choose Object Editing. On the right-hand side you will get a page where the Entry Point option should be enabled. Your role is now created.
    Now assign this role to the particular user: go to User Administration, enter the user name and search. Select the user, then search for available roles, type your role ID and click Search. When your role is displayed, click Add and save. That's it; the role is added to the required user.
    Now log off from the portal and log in again; you will see your role in the top-level navigation.
    If you want, I can send you screenshots.
    Regards,
    venkat

  • Simple queries

    Hiya,
    I'm trying to run simple queries against my database but get the following error:
    Error: Could not fetch DOM element for doc id: 9167 in ..../mydbxml/lib/i386-linux-thread-multi/Sleepycat/DbXml.pm, line 497
    Some simplified test code that still generates this error is:
         $fullQuery = 'collection("/local/scratch/ar283/anthology/dbxml/anthology.dbxml")//PAPER';
         $txn = $mgr->createTransaction();
         $context = $mgr->createQueryContext();
         $expression = $mgr->prepare($txn,$fullQuery,$context);
         $results = $expression->execute($txn,$context);
         my $value;
         while( $results->next($value) ) {
             print "$value\n";
         }
         print $results->size() . " objects returned for expression '$fullQuery'\n";
         $txn->commit();
    I've also tried variations on the $results = $expression->execute($txn,$context); line, e.g.
         $results = $mgr->query($txn,$fullQuery,$context);
         $results = $mgr->query($txn,$fullQuery);
    But the same error occurs. Can anyone tell me what I'm doing wrong?
    Thanks,
    Anna

    > Have you removed any environment files by accident.
    Not that I'm aware of. What files? From within my code? How could I be doing this?
    > Once you are in this situation, it won't be fixed other than by recreating the data.
    Okay, thanks.
    > How far do you get in your application before you hit this problem? I.e. what is working?
    Well, in the case of the test query at the end of the building script, I successfully create the database as follows:
    my $theContainer = "/local/scratch/ar283/anthology/dbxml/anthology.dbxml";
    my $env = new DbEnv(0);
    $env->set_cachesize(0, 64 * 1024, 1);
    $env->open("/local/scratch/ar283/anthology/dbxml/",
        Db::DB_INIT_MPOOL|Db::DB_CREATE|Db::DB_INIT_LOCK|Db::DB_INIT_LOG|Db::DB_INIT_TXN|Db::DB_RECOVER, 0);
    my $mgr = new XmlManager($env);
    my $txn = $mgr->createTransaction();
    if ($mgr->existsContainer($theContainer)){
        $mgr->removeContainer($txn,$theContainer);
        $txn->commit();
        $txn = $mgr->createTransaction();
    }
    my $container = $mgr->openContainer($txn, $theContainer, Db::DB_CREATE);
    $txn->commit();
    Then I populate it in a for loop of transactions like:
    my $xmlDoc = $mgr->createDocument();
    $xmlDoc->setContent( $recordString );
    $xmlDoc->setName( $theName );
    $container->putDocument($txn, $xmlDoc);
    $txn->commit();
    That all seems to work fine. When I tag on the following code for querying the container, I get the error message:
    $fullQuery = 'collection("/local/scratch/ar283/anthology/dbxml/anthology.dbxml")//PAPER';
    $txn = $mgr->createTransaction();
    $context = $mgr->createQueryContext();
    $expression = $mgr->prepare($txn,$fullQuery,$context);
    if ($debug){
         print "$0: full query is $fullQuery, expression is $expression\n";
    }
    $results = $mgr->query($txn,$fullQuery);
    $txn->commit();
    I get my debug output of "full query is collection("/local/scratch/ar283/anthology/dbxml
    /anthology.dbxml")//PAPER, expression is XmlQueryExpression=ARRAY(0x87a7770)" then the next line causes the error.

  • Simple queries, but vital!!

    I have 3 very simple queries which I cannot figure out. I have little experience on Dreamweaver MX, and am in a position of having to edit and launch an existing site...
    1) I am trying to add a basic border around images and tables - just a thin black line to make them look neater - how do I go about this?
    2) to prepare for publishing, I assume I need to save all my pages and images in one place, and assign each link/image an address from my C-drive. At the moment everything is stored in, for instance, C:\Documents and Settings\Adam Lucas\My Documents\website updates\images etc - is this OK or do I need a more simple address to prepare for publishing?
    3) how do I go about publishing my website once all the finalised elements exist on my C-drive?
    any advice massively appreciated...

    > I have little experience on Dreamweaver MX,
    1. What is your knowledge level of HTML and CSS?
    2. Which Dreamweaver - is it 6.0 or 6.1? If it's the former, please update it to the latter from the updater on the Adobe site - there were quite a few quirks in the former that will make your life miserable.
    > 1) I am trying to add a basic border around images and tables - just a thin black line to make them look neater - how do I go about this?
    Andy has your answer - use CSS.
    > 2) to prepare for publishing, I assume I need to save all my pages and images in one place, and assign each link/image an address from my C-drive. At the moment everything is stored in, for instance, C:\Documents and Settings\Adam Lucas\My Documents\website updates\images etc - is this OK or do I need a more simple address to prepare for publishing?
    You *MUST* start with a local defined site. Do you have one? Have you read DW's F1 help about how to define a local and a remote site, and how to PUT and GET files?
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.dreamweavermx-templates.com
    - Template Triage!
    http://www.projectseven.com/go
    - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs,
    Tutorials & Resources
    http://www.macromedia.com/support/search/
    - Macromedia (MM) Technotes
    ==================

  • Light webserver for simple, low traffic, static content site?

    I have a simple site that I would like to host on a machine with minimal resources. What is my best bet?
    lighttpd and nginx sell themselves as being lightweight in high traffic environments... But this is not a high traffic environment.
    Apache seems like overkill.
    thhtpd seems to be what I'm looking for, however the last release was in 2003, and I find that worrisome.
    Can anyone offer me any insight?
    Last edited by gabe_ (2011-05-19 20:41:39)

    Awebb wrote:
    gabe_ wrote:lighttpd and nginx sell themselves as being lightweight in high traffic environments... But this is not a high traffic
    That's a language problem. You assume that a high-traffic environment is a prerequisite for their "lightweightness". You should understand the sentence as "lightweight, even in high traffic environments".
    Not necessarily (though I do see your point).
    I know that they are "full featured" httpd's. One of those features is being able to serve dynamic content across thousands of simultaneous connections. I realize I'm splitting hairs here, but I don't need those features. For this reason, I'm highly considering darkhttpd (thanks for the tip, Wittfella).

  • Need Different Selection screen for different Queries in a Workbook

    Hi,
    I have created a workbook with multiple tabs in BI 7.0. Each tab has a different query, and each query has a different selection screen (variable selections).
    When I open the workbook and refresh it, the selection screen appears for only one query. All the queries are refreshed by this single selection screen, even though each query has different variable selections. What I need is a separate selection screen, i.e. a separate variable selection dialog appearing for each query when I refresh it.
    Is it possible to do this? If anybody has tried this, please help me in solving this issue. Thanks for your time.
    Regards,
    Murali

    Murali,
    If you un-check 'Display Duplicate Variables Only Once', this WILL solve your problem.
    When you refresh, you should be presented with a single variable selection dialog box, but it should contain an area for each query (DataProvider) that is embedded in the workbook.
    This is the case whether the queries are all on the same tab or on different tabs.
    However, if you have multiple tabs each with a query on it, each query must have its own DataProvider. If all queries are based on the same DataProvider, it will not work, as the workbook only 'sees' one query for which it needs variable input.
    If you REALLY want multiple variable selection dialog boxes, then maybe the best way to do this is to have the queries in separate workbooks.
    If you don't want the user to have to open 5 workbooks manually, you could use a macro in each workbook that runs on opening, to open the next workbook in the sequence.
    I hope this makes sense!
    Regards
    Steve

  • How to let add external apps for simple users?

    How can simple users add external applications? Simple users don't have Login Server administrator rights, so how can they add external apps for their own use?

    ...The Edit External Applications portlet says:
    Error: An unexpected error occurred: User-Defined Exception (WWC-43000)
    :(((

  • How to use Value Mappings for simple translations?

    Hello,
    I want to use Value Mappings for simple translations in mappings, e.g. from IDoc to Inhouse structures.
    For example unit of quantity:
    IDOC    -->    INHOUSE
    PCE               P
    ABC               A
    How can I use Value Mapping for this? What should be used as the Agency, and what should be used for the Scheme? What about Groups? I tried the following: I created a new Value Mapping in the Integration Directory:
    - Source Agency: DELVRY05
    - Source Scheme: MENEE (IDoc field name)
    - Target Agency: INHOUSE_DESADV (Name of structure)
    - Target Scheme: UNIT (Name of field)
    Then, in the table, I added several lines translating PCE to P, ABC to A, and so on. But I have to define a group name for each line. I used INHOUSE, but then I get one INHOUSE group for each line.
    This seems very complicated for simple translations from A to B. I don't want to use FixValue in Message mappings.
    Any help appreciated.
    Thanks,
    Christoph
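    For reference, the translation Christoph is after is nothing more than a lookup table. A minimal sketch (Python purely as illustration; `UNIT_MAP` and `translate` are hypothetical names, and only the two sample pairs from the post are shown):

```python
# The unit-of-quantity translation from the post, as a plain lookup
# table: IDoc code (source scheme MENEE) -> in-house code (target
# scheme UNIT). This is what a FixValue / Value Mapping lookup does.
UNIT_MAP = {"PCE": "P", "ABC": "A"}

def translate(idoc_unit, default=None):
    # Return the in-house code, or a default for unmapped values.
    return UNIT_MAP.get(idoc_unit, default)

print(translate("PCE"))  # P
```

    The Value Mapping machinery adds agency/scheme/group metadata on top of exactly this kind of table, which is why a separate group per row feels like overkill for the use case.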

    Hello,
    @pavan kumar: Thanks, but I know all the blogs about Value Mappings. That does not help me. And I am referring to PI 7.1.
    Let's keep it very simple: I want exactly the same functionality as "FixValues", but as Value Mappings. In 7.1, I need to define a group for every row / line in the conversion table (e.g. for unit-of-measure conversion between IDoc and flat file). That does not make sense to me, as the group is always the same, e.g. "Unit of measurement". So I will get dozens of identical groups called "Unit of measurement".
    I don't really understand the concept of groups. Maybe it is just not appropriate for my purposes? Maybe the group has to be defined as one specific value of unit of measurement, e.g. "pieces"?
    CHRISTOPH

  • Create a VC Model for BEx queries with Cell Editor

    Hello All,
    I am trying to create a VC Model for BEx queries with Cell Editor.
    The BW development team has created a BEx query with complex restrictions and calculations using the Cell Editor feature of BI BEx.
    The query output in BEx Analyzer is correct: all values are calculated at each cell level and displayed.
    But while creating the VC model, the system does not display the cells, so no VC model can be created.
    I have executed the steps below:
    1. Created a VC model for the BEx query ZQRY_XYZ
    2. Create iView -> Create a data service -> Provide a table from the output
    In the Column field the system is not showing any of the cells (present in the Cell Editor).
    Please help me to solve this issue.
    Thanks,
    Siva

    Hi
    If 'Cell Editor' has been used, then that query must have a structure in it. You have to select that 'structure' object in your 'VC Table'.
    If you select it, you will get the required result in the output. This will be the structure where the 'cell reference' is used in the BI query; you have to select that structure's name.
    Regards
    Sandeep
