Queries on a multicube: why preferred?

Hello BW Experts,
I am in the process of deciding whether to have the queries on a cube or on a multiprovider. Could anyone provide feedback from your experience on which option is preferable? I am also considering a multiprovider even if there is only one cube.
Thanks very much.
BWer

Hello dear;
Please read this thread for help. If you have only one cube, to me it doesn't make any sense to have a multiprovider, unless you are thinking that another cube or provider may also be necessary for a report in the near future. Even then, I would recommend creating the multiprovider only when you actually reach that situation.
Queries on MultiProviders Vs Underlying InfoProviders
Hope it helps.
Buddhi

Similar Messages

  • Copying queries between multicubes without having the same info objects

    Hello all,
    I have heard that it should be possible to use the debugger to copy a report between two multiproviders that do not have exactly the same info objects. All the info objects used in the report are of course available in both multiproviders.
    Could anyone explain to me how this should be done?
    Best regards,
    Fredrik

    Hi Fredrik.
    Take a look at this thread:
    https://forums.sdn.sap.com/click.jspa?searchID=224199&messageID=1193434
    Hope it helps.
    BR
    Stefan

  • For XI 3.0 the latest is SP18, but for PI 7.0 (NetWeaver 2004s) it is SP12. Why?

    For XI 3.0 the latest is SP18, but for PI 7.0 (NetWeaver 2004s) it is SP12. SP18 is the latest one, right? But for 2004s, SP12 is preferred. Why? What is the main difference?

    Hi,
    Take a look at this thread to see which XI 3.0 SP level is equivalent to which PI 7.0 SPS:
    Re: Service Pack confusion
    Regards,
    Bhavesh

  • SAP interview questions

    Hi,
    BW gurus, a few questions below need answers.
    1. What type of issues do you handle in implementation and production support?
    2. Tell me the procedure for process chains.
    3. Suppose 3 process chains are running and one of them fails. What will you do?
    4. Does an ODS store only the latest data?
    5. By using the Additive option, can't it store historical data?
    6. What is the difference between the new data, active data, and change log tables?
    7. How do we activate an ODS?
    8. What is the difference between an update rule and a transfer rule?
    9. What is the difference between an update routine and a start routine?
    10. What is a structure, and why do we use it?
    11. What are the variable types and processing types?
    12. Explain the steps of generic extraction and LO extraction.
    13. What is meant by a line item dimension?
    14. How do we identify that a dimension is a line item dimension?
    15. What is meant by early delta initialization?
    16. In ABAP, from which tables have you built reports?
    17. How do you gather requirements from users?
    18. How many objects have you created? Give their technical names.
    19. What is the difference between display attributes and navigational attributes?
    20. How do you create an InfoCube?
    21. Can we create a function module in BW?

    Hi Adrian,
    Is there any documentation on the XML required to create your own
    measures and filters at the universe level in a SAP BI universe?
    [http://help.sap.com/businessobject/product_guides/boexir3/en/xi3_sap_olap_universes_en.pdf] has some guidelines.
    SAP BW InfoCubes as data sources
    - Remote InfoCube: While fully supported, building and deploying universes on remote InfoCubes is not recommended for ad-hoc query, reporting, and analysis scenarios. Such an architecture is generally not expected to meet query performance expectations with interactive queries.
    - MultiCubes and Multi-InfoProviders: Building and deploying a Business Objects universe on top of a MultiCube or Multi-InfoProvider is identical to building and deploying a universe on top of an InfoCube.
    SAP BW Queries as recommended data sources
    - Not all BW metadata features can be retrieved on an InfoCube level
    - BW Queries offer a flexible extension to the data modeling environment. InfoCubes require more effort to change.
    - BW Queries offer significant functionality to create customized data sources that meet end-user requirements.

  • Query 1 shows more consistent gets but less cost than Query 2

    Hi ,
    SQL> select dname from scott.dept where deptno not in (select deptno from scott.emp)
    Execution Plan
    Plan hash value: 3547749009
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    1 |    22 | 4 (0)| 00:00:01 |
    |*  1 |  FILTER            |      |       |       |            |          |
    |   2 |   TABLE ACCESS FULL| DEPT |     4 |    88 | 2 (0)| 00:00:01 |
    |*  3 |   TABLE ACCESS FULL| EMP  |    11 |   143 |  2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter( NOT EXISTS (SELECT /*+ */ 0 FROM "SCOTT"."EMP" "EMP"
                  WHERE LNNVL("DEPTNO"<>:B1)))
       3 - filter(LNNVL("DEPTNO"<>:B1))
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
             15 consistent gets
              0  physical reads
              0  redo size
            416  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    SQL> select dname from scott.dept,scott.emp where dept.deptno=emp.deptno(+)
      2    and emp.rowid is null;
    Execution Plan
    Plan hash value: 2146709594
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |      |   12 |   564 | 5 (20)| 00:00:01 |
    |*  1 |  FILTER             |      |       |       |            |          |
    |*  2 |   HASH JOIN OUTER   |      |    12 |   564 | 5 (20)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL| DEPT |     4 |    88 | 2 (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL| EMP  |    12 |   300 | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("EMP".ROWID IS NULL)
       2 - access("DEPT"."DEPTNO"="EMP"."DEPTNO"(+))
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
              6 consistent gets
              0  physical reads
              0  redo size
            416  bytes sent via SQL*Net to client
            384  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    I have two questions:
    1) Which one is preferable: the first, which costs the system less, or the second, which causes fewer consistent gets and so is considered more scalable?
    2) Given that both queries return 1 row, why is there a difference in the estimated Rows values in the two plans (1 and 12 respectively)?
    I use Oracle 10g Release 2.
    Thanks a lot.
    Sim

    The fewer logical I/Os, the better.
    So always write it like your query 2.
    Your example is probably flawed. If I try it in SQL*Plus I get correct results:
    SQL> get t
      1* select dname from dept where deptno not in (select deptno from emp)
    SQL> /
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=3 Bytes=39)
       1    0   FILTER
       2    1     TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=2 Card=4 Bytes=52)
       3    1     TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=2 Card=1 Bytes=3)
    Statistics
              0  recursive calls
              0  db block gets
             15  consistent gets
              0  physical reads
              0  redo size
            537  bytes sent via SQL*Net to client
            660  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> get tt
      1  select dname from dept,emp where dept.deptno=emp.deptno(+)
      2* and emp.rowid is null
    SQL> /
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=14 Bytes=322)
       1    0   FILTER
       2    1     HASH JOIN (OUTER) (Cost=5 Card=14 Bytes=322)
       3    2       TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=2 Card=4 Bytes=52)
       4    2       TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=2 Card=14 Bytes=140)
    Statistics
              0  recursive calls
              0  db block gets
              6  consistent gets
              0  physical reads
              0  redo size
            537  bytes sent via SQL*Net to client
            660  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    I'm wondering, for instance, why you have 11 rows in EMP for query 1 (it should be only 1 row) and why you have only 12 rows in query 2 (it should be 14 rows).
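    For completeness, a third common way to write this anti-join is NOT EXISTS. A minimal sketch against the standard SCOTT schema; note that, unlike NOT IN, it still returns rows when emp.deptno contains NULLs, so check the NULL semantics before swapping one form for the other:
         -- Departments with no employees, written as a NOT EXISTS anti-join.
         select d.dname
         from   scott.dept d
         where  not exists (select null
                            from   scott.emp e
                            where  e.deptno = d.deptno);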

  • bi:QUERY value=dynamic in the WAD. How to do it?

    Hello,
    I need to define the query name somehow when the WAD is executed.
    We have one common web template for all of our queries, which is why I need to say dynamically in the WAD code that if the query technical name is ZTEST, then the real value should be ZTEST_DOC, a separate query where we have to store the documents.
    How to do this ?

    At runtime you can get the query ID using text elements; then, using JavaScript, you can generate the document query name by appending the required text to the query ID.

  • FIFO based pricing issue on sales orders!

    This is the situation:
    All setups in the company are for FIFO: item level, item group level, and company level.
    1. We purchased 1 unit of item1 for 125 dollars. PO invoice completed. First one in the FIFO layer.
    2. We purchased 1 more of the same item1 for 150 dollars. PO invoice completed. Second in the FIFO layer.
    3. We want to sell above items to Customer1. Customer1 has price list set to - Last Evaluated Price.
    4. Ran Inventory Audit Report to update the price list LastEvaluatedPrice.
    5. Start order for Customer1 and add 1 item1 to order -
    Price = 125 <== OK.
    Make the above quantity 2 or add another line for item1.
    Error: Price = 125 <=== NOT OK. It should be 125 for the first unit and 150 for the second unit (same item).
    6. Run the audit report again and repeat step 5.
    Error: No change; the same result as step 5, that is, price = 125 instead of 150.
    NOTE: Customer could end up in loss based on above pricing.
    7. Yes, there are workarounds:
    7.1  Run the Inventory Valuation Report for the item with Moving Average as the price. Then the evaluated price is right.
    7.2  Right-click, or run a query that returns the prices per layer, and then calculate your own (see the sketch after this note).
    Note: Neither workaround is good for the customer. They and their staff do not want to open additional windows or run queries.
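    To illustrate what the query in workaround 7.2 boils down to, here is a rough sketch only; FIFO_LAYERS, open_qty, unit_cost, and received_date are hypothetical placeholder names, not the actual SAP Business One schema:
         -- Hypothetical layer table: under FIFO, the price of the next unit sold
         -- is the cost of the oldest layer that still has open quantity.
         select unit_cost
         from   FIFO_LAYERS
         where  item_code = 'item1'
         and    open_qty  > 0
         order  by received_date
         fetch first 1 row only;  -- oldest open layer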
    QUESTION: Why is FIFO pricing not working for the sales order?
    Note: Why would anyone use FIFO if it can produce big losses or big profits, all unintended?
    Please help. Is this a FIFO problem, a setup problem, or something I do not understand?
    Thanks and Cheers!

    Hi Syed,
    Open the sales order, select that item, right-click in the unit price column and select the "Last Prices" option. The Last Prices form will open; just deselect the BP Code check box and select the vendor. You can see the prices there at detail level; double-click on the row with the price you want.
    Thanks
    Sachin

  • Problem with OBIEE generated query

    Hi All,
    I'm working on OBIEE version 10.1.3.4. For one report I am using five tables; of these, 2 are fact tables and the rest are dimension tables.
    From each of these five tables I am using one or more columns, and the five tables are joined properly (we are not getting any errors on these joins). The problem is that whenever I fetch the query from OBIEE (Administration ---> Manage Sessions), it is not a single query; instead it is split into several queries.
    Why is it not generating a single query? We assume this is why these reports take time to show results.
    Could you please suggest why I am facing this? Also, I have this problem for only one report; for the remaining reports OBIEE generates a single query.

    Hi,
    Please refer : multiple queries for single report
    Why multiple queries ?
    when we can find the multiple queries in view log
    Thanks,
    Saichand.v

  • SSO for Enterprise Portal 6 with different Portal and R/3 userIDs

    Hi there,
    We are using SNC library for SAP GUI logon to R/3 and SPNEGO for Web access to EP. What works for us currently is:
    SSO from Windows logon to Portal using SPNego (LDAP as our datasource with AD)
    However, once we are inside the portal, the SSO to R/3 using SNC is not working. I have my Portal user mapped to my R/3 user, as they are different usernames.
    But if I launch SAP GUI on its own, I can SSO into R/3 with no problem.
    So, I have 3 queries here:
    1) Why am I not able to SSO into R/3 once I have SSO'd into the Portal?
    2) Is there any way around the high maintenance of the user mapping?
    3) I have read on SAP Help about "Using an LDAP Directory Attribute as the ABAP User ID" but this will still require user / administrator to maintain the R/3 password.
    Is it possible to disable the R/3 password and thus have no maintenance as the R/3 (ABAP) User ID will be stored in LDAP attribute?
    Hoping you can help...
    Thanks.

    Answers below:
    1)
    When you say "ITS" I assume you are referring to the Integrated ITS in NetWeaver, not the external ITS product ?
    Anyway, if you are referring to Integrated ITS, then surely you are using webgui, not SAP GUI. The webgui is accessed via browser and is not related to SNC or SAP GUI product. The SAP GUI product is a Windows application that uses SNC to authenticate to SAP systems.
    If you are logged onto the portal, which is a J2EE application, and trying to access webgui, which is running on the ABAP Engine, then this might not work because your SSO2 trust is not set up correctly. Do you see an error in the work process log saying anything about why the SSO2 ticket is not accepted? Also, if ABAP and Java are on the same system and the Java Engine was installed as an add-in, you might need to create new SSO2 certificates to avoid a clash, and change the client number from 000 to something else so SSO2 tickets issued by the J2EE Engine are differentiated from SSO2 tickets issued by the ABAP Engine, but they are still trusted through configuration in the STRUSTSSO2 t-code.
    2)
    You need to use a different product, which is available from a SAP partner to do this. I am not allowed to mention third party products on this forum, so if you want to know more you will have to contact me offline via email.
    3)
    See answer to question 2.
    Thanks,
    Tim

  • Audio Problems in Adobe Presenter: Help Needed Urgently!!

    I am using trial versions of Adobe Presenter 7 and Adobe Connect 8.
    I created a presentation in PowerPoint 2002 and published it through Adobe Presenter to Adobe Connect Pro. This presentation has a voice-over recorded through the Adobe Presenter "record audio" option.
    In Adobe Connect Pro, I created a meeting and shared the above mentioned presentation through the "Share Document" option in Connect Pro.
    Now, when I play the presentation in this Adobe Connect Meeting, I am not getting any audio that was recorded in Adobe Presenter. Whereas if I play the published presentation directly it plays the recorded audio.
    Can anyone help me on this?
    1. Are there any settings to be done in Powerpoint or Adobe Presenter or Connect pro?
    2. Is it the problem of Trial Account or something like that?
    Please let me know the solution. Need to work on this ASAP!!!
    Thanks in Advance,
    Yogini

    This is a weird one. If you created a presentation with audio and published it to AC (Adobe Connect), and everything worked fine on your desktop, then it should work fine in AC, i.e. there are no other settings required on AC's side; it should just play.
    As I was typing this, I thought of similar audio problems that we have experienced in the past. A big one was where another user created a PPT on their laptop and handed it to us. When testing, the audio did not work, the reason being that not all the files were copied over. BUT I must accept that this is not your problem, as it definitely worked on your PC before you published it to AC.
    Wait, there is one setting in the publish settings. In the output options, tick "Upload source presentation with assets", then tick audio.
    I cannot think of anything else. There is a man called Heyward Drummond on the Adobe Connect General forum, who is a real master at answering queries like this. Why not post your query there and see if he comes up with an answer?

  • Set the css style of text in a column according to the value of another col

    I'd like to set the css style of text in a column according to the value of another column. Each field may end up with a different style of text as a result, for instance.
    Any ideas? I looked thru the forums but couldn't find anything.
    Thanks,
    Linda

    Does the class="t7Header" make it into the rendered HTML?
    ---The text class="t7Header" does not show, but the other text is rendered using the style t7Header as defined in the stylesheet! Exactly what I wanted.
    You might want to use a div or a span instead of a p.
    ---Yes -
    What's very cool is that we can create a display column that is dynamically filled with the HTML and style wrappers, based on a lookup that decides which style should be applied for the actual data value (a sketch of the idea is below). This is critical because our tables are all dynamic, so I can't depend on the additional APEX methods to control the display of a column (the number of columns in the view varies from instance to instance), and I did not want the display specs to muddy up my SQL queries.
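    As a minimal sketch of that lookup idea, assuming a hypothetical ORDERS table (t7Header is the class from the stylesheet above; tNormal is a made-up second class): the display column wraps the value in a span whose class is chosen from another column:
         -- Style the amount according to the value of the status column.
         select order_id,
                '<span class="'
                || case when status = 'OVERDUE' then 't7Header' else 'tNormal' end
                || '">' || amount || '</span>' as amount_display
         from   orders;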
    I wonder why this is not well documented. It is so easy!
    Thanks again for your help.
    Linda

  • Time series function TODATE is VERY slow

    While creating a report with data at different aggregation levels, I ran into serious performance problems. The situation is as follows. The data mart consists of a basic star with one fact table, containing sales amounts, and three dimensions:
    - Product
    - Store
    - Time
    Dimension Time has a hierarchy: BookYear --> Period --> Week
    I created a measure: "MyModel"."T_SALES"."SalesAmount"
    I also created a derived measure to calculate the sales as a year-to-date value. In this case, for period, it is a week-to-period measure (week is the lowest time level): PeriodSalesWtD: TODATE( "MyModel"."T_SALES"."SalesAmount", "MyModel"."Dim_Time"."BookPeriod")
    This query is VERY slow.
    In Oracle Answers I select only one week (200806) for which I want to see the SalesAmount for:
    - that week
    - the period that week is in up to that week (200802)
    - the year that week is in up to that week (2008)
    - Only a specific product group number (3). Otherwise the query won't ever return...
    OBIEE issues the following queries to the database:
    Query 1:
         select T2392.DEPBKWK as c3,
         T2392.DEPBKJR as c4,
         T2392.DEPBKPR as c5
         from
         W_TIJDDIM T2392
         order by c4, c3
    This query reads all records from the time dimension table, columns week, year and period. I do not understand why it does not filter on the weeknumber (200806).
    Query 2:
         select T2313.DEVRIWD as c1,
         T2313.TIJDKEY as c5
         from
         W_ARTDIM T2032,
         W_OMZWEEK T2313
         where ( T2032.DEIKANR = 3 and T2032.ARTKEY = T2313.ARTKEY )
         order by c5
    This returns per time dimension key (c5) all SalesAmounts (c1). I do not understand why this is not aggregated nor filtered on the requested weeknumber.
    Query 3:
         select T2392.TIJDKEY as c3,
         T2392.DEPBKJR as c4,
         T2392.DEPBKWK as c5
         from
         W_TIJDDIM T2392
         order by c4, c5
    This query reads all records from the time dimension table, just like query 1, but now key, year and week. I do not understand why it does not filter on the weeknumber (200806).
    Query 4:
         select T2392.DEPBKPR as c3,
         T2392.DEPBKWK as c4,
         T2392.DEPBKJR as c5
         from
         W_TIJDDIM T2392
         order by c3, c4
    Once more the time dimension is read, but this time the columns period, week and year. Cannot see why the time dimension is queried three times.
    Query 5:
         select T2313.DEVRIWD as c1,
         T2313.TIJDKEY as c5
         from
         W_OMZWEEK T2313,
         W_ARTDIM T2032
         where ( T2032.DEIKANR = 3 and T2032.ARTKEY = T2313.ARTKEY )
         order by c5
    Gets SalesAmounts (not aggregated, why?) per time key.
    Query 6:
         select T2392.TIJDKEY as c3,
         T2392.DEPBKPR as c4,
         T2392.DEPBKWK as c5
         from
         W_TIJDDIM T2392
         order by c4, c5
    Again the time dimension is read completely, now returning time key, period and week.
    Query 7:
         select sum(T2313.DEVRIWD) as c1,
         T2392.DEPBKWK as c2,
         T2392.DEPBKPR as c3,
         T2392.DEPBKJR as c4,
         T2392.DEPBKWK * -1 as c5,
         T2392.DEPBKPR * -1 as c6,
         T2392.DEPBKJR * -1 as c7
         from
         W_ARTDIM T2032,
         W_TIJDDIM T2392,
         W_OMZWEEK T2313
         where ( T2032.DEIKANR = 3 and T2392.DEPBKWK = 200806 and T2032.ARTKEY = T2313.ARTKEY and T2313.TIJDKEY = T2392.TIJDKEY )
         group by T2392.DEPBKWK * -1, T2392.DEPBKPR * -1, T2392.DEPBKJR * -1, T2392.DEPBKWK, T2392.DEPBKPR, T2392.DEPBKJR
         order by c2
    Looking over all of these queries, I can understand why this is slow, but what can be done about it? Any help will be appreciated.

    There are examples of this in the OBIA applications. What you need to do is a range join to the time dimension, i.e. txn_date between first_day_of_year and the report_date. However, you need to map along every level of the time dimension, since a query request at a higher level will not return correct results by always mapping to the base table. Hence you need to map day - month - qtr - year as separate logical table sources.
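    For what it's worth, a minimal sketch of the single statement such a range join boils down to, written against the tables and columns from the queries above; the self-join formulation is only illustrative, not what OBIEE would actually generate:
         -- Period-to-date sales for week 200806, product group 3: join the
         -- fact to every week from the start of that period up to that week.
         select sum(f.DEVRIWD) as period_to_date_sales
         from   W_TIJDDIM rep,               -- the week being reported on
                W_TIJDDIM cal,               -- the weeks contributing to it
                W_OMZWEEK f,
                W_ARTDIM  a
         where  rep.DEPBKWK = 200806
         and    cal.DEPBKPR = rep.DEPBKPR    -- same period
         and    cal.DEPBKWK <= rep.DEPBKWK   -- up to and including that week
         and    f.TIJDKEY   = cal.TIJDKEY
         and    a.ARTKEY    = f.ARTKEY
         and    a.DEIKANR   = 3;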

  • Working around DB outages

    Does anyone have a strategy for working around DB outages when calling a database? For example, if I run a query, and that query takes more than 5 seconds, how do I cancel the query so my app doesn't hang waiting for the query to return?
    Sidebar - do you have any idea how to do this with web service calls?

    Thank you, I am looking to see if the DB2 driver supports setQueryTimeout. That would be perfect I think.
    AJchell:
    Q: And how are you going to decide whether to use 5 seconds or not? What if one query takes 10 seconds on average?
    A: The queries are quick, if the database latency gets to be over whatever the threshold is, then I want to lay off it and use cached content instead of the real deal.
    Q: And what if the network is loaded so all queries take 10 seconds?
    A: Then I hit my cache (I don't think this is a possibility, but you never know).
    Q: And what do you do with updates? Telling the user it didn't work after 5 seconds when it actually did work after 15 probably isn't going to be a good idea.
    A: I agree with you, I forgot to mention that this is not transactional, therefore there are no updates. All select queries.
    Q: And why worry about this in the first place? If the user gets a failure back in 5 seconds versus 30 then what are they going to be doing with that extra time? They won't be able to use the database in either case after that until it comes back up.
    A: That's where the cache will come into play. 30 seconds is just too long per the requirements of this app.
    Q: And how often do you expect this to occur? Is it worth worrying about what the users are doing for 30 seconds once a year? (And yes I do know of cases where it is important but I know of many others where it isn't.)
    A: Not often. But the criticality of the app is high enough that 30 seconds (in reality it would be more than that) is too much time for the app to be down. This is a truly HA app.
    My plans -
    I think what I am going to do here is set setQueryTimeout to x seconds, and if that limit is hit, not go to the database for y minutes (using a static variable to flag the DB as down). So if there is an outage, I bypass all DB calls until the DB comes back up, which I check for every y minutes.
    Any thoughts? Thanks for the help.

  • Doubt on Constraints in Oracle?

    Hi Friends,
    Constraints are the business rules that are enforced on the data that is stored in the tables.
    We can define constraints in three forms :
    1.using Constraint feature provided by Oracle i.e CHECK,NOT NULL,PRIMARY KEY .....
    2.Using Triggers in the database
    3.Writing the logic at the Application Level.
    Among all of these, only (1) is preferred. Why?
    What exactly is the difference between the above approaches?
    Are the remaining approaches time-consuming?

    The main reason why CONSTRAINTS are to be preferred over TRIGGERS or other approaches is that it is the accepted approach to solving the problem. The fact that constraints are faster than triggers is noteworthy but irrelevant. Relational theory uses keys to assert relational integrity: constraints are the way to implement those keys. If I was looking at a new system I would expect to see keys in the data model implemented as constraints in the schema. I would not expect nor want to have to look at triggers to see what relational integrity is enforced.
    Using triggers instead of constraints is possible but it is like driving on the wrong side of the motorway: it's harder, it's dangerous and only a fool or a drunk would choose to do it.
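    For illustration, a minimal sketch of approach (1) on hypothetical tables; the trigger-based equivalent would need code for the insert, update, and delete paths plus handling of concurrent sessions, whereas here each rule is one declarative line that Oracle enforces for every session and every DML path:
      create table dept (
        deptno number primary key
      );
      create table emp (
        empno  number primary key,
        deptno number not null
               constraint fk_emp_dept references dept (deptno),  -- key as constraint
        sal    number constraint chk_sal check (sal > 0)         -- business rule
      );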
    Cheers, APC
    blog: http://radiofreetooting.blogspot.com

  • Is SQL Server the new Access?

    This is a condensed summary of the job market as I've been observing it so far this year and maybe in retrospect last year too.  It may or may not be appropriate to this forum, if you are reading it then so far maybe it has relevance.
    But it appears to me as if virtually all new, large-scale work is being moved off of SQL Server and onto cloud and "big data" platforms possibly (but not frequently) including Azure.
    What is being left on SQL Server are legacy systems and smaller applications where generalist developers can totally fend for themselves, with the help of a competent DBA platform staff, and just let the power of the hardware overcome any technical
    issues.  Most of these apps seem to be whatever fits on a standard version and runs under a VM.  In effect, SQL Server is becoming the new Access!
    One additional small attempt at forum relevance: insofar as this market shift might be true, I would attribute it to two engine-related things.  First, the very unfortunate decision by Microsoft that for SQL 2012 licensing for big multicore servers
    became far more expensive.  It seems many shops stayed on SQL 2008 for a long time (still!) because of this and have barely looked at SQL 2014.  Second is that frankly SQL Server is not scaling very well beyond the terabyte range without a lot of
    high-levels of expertise involved - and especially when cheap management tries to run that multi-terabyte database on 32gb production servers!*  SQL Server's reputation for being easy to use, plug-and-play, may be threatening the success of these larger-scale
    apps and is leading to new choices in platform.
    Anyone else have any observations along these lines, pro or con?
    Thanks,
    Josh
    *and if you think that's a myth, it is exactly what happened at my last gig, leading to $20,000,000 in development being thrown away - and no, they simply could not be talked into understanding or mitigating the issue.

    Josh,
    I would think most of this is going to depend on the company, resources, and business models. I added my 2 cents though, enjoy.
    But it appears to me as if virtually all new, large-scale work is being moved off of SQL Server and onto cloud and "big data" platforms possibly (but not frequently) including Azure.
    I personally don't look at Azure as a big data platform. Currently it has its limits in both processing power and disk IO, has its own issues (such as AD integration, which is possible but takes extra hardware and setup) and has an SLA that you may or may
    not be able to live with.
    Do I see more things moving to Azure - yes, but it's mostly because they are small, and it is based on cost + expertise. Most of the time there is no in-house DBA and it is cheaper to have some other company worry about hosting it, providing the services, hardware,
    power, etc. You get redundancy and an out of the box SLA for small applications that are fractions of the cost compared to the licensing + hardware + personnel costs of in-house items.
    Remember when everything was run on mainframe, then personal computers came out, now everything is migrating back toward a mainframe feel. It is just how it cycles. "on prem private clouds" are nothing more than beefy hardware with a single purpose all packaged
    together (such as offering from HP and Dell) and something a shop of sufficient size can do on their own with enterprise class hardware - again if you have the space and expertise.
    What is being left on SQL Server are legacy systems and smaller applications where generalist developers can totally fend for themselves, with the help of a competent DBA platform staff, and just let the power of the hardware overcome any technical
    issues. Most of these apps seem to be whatever fits on a standard version and runs under a VM. In effect, SQL Server is becoming the new Access!
    This could just be the company that you work for, or someone up higher that hears a key word and latches on. I can tell you that I'm actually in the process of migrating multiple Oracle databases to SQL Server with application upgrades. That isn't counting
    the other databases that are currently SQL Server at 1TB+ but it depends on what is considered a small application. To a small company almost all applications are large, versus a large company with 10k+ employees and then anything less than 1k+ users is small.
    It also depends on if you're a dev shop or a 3rd party shop. For example, we bought software that runs on a closed source version of MySQL. It happens, that's what they use.
    First, the very unfortunate decision by Microsoft that for SQL 2012 licensing for big multicore servers became far more expensive. It seems many shops stayed on SQL 2008 for a long time (still!) because of this and have barely looked at SQL 2014.
    Yeah, this made me sad. Following Oracle and IBM was not the way to go, but not our decisions. The interesting thing is, if you read the licensing doc, anything you have with SA can be converted to cores so it isn't a total loss and shouldn't be a reason
    to stay at older versions. I still have instances of 2000 running (omfg) that are somehow alive... it's because the software company is no longer around yet we still use that product, not because we don't want to upgrade (read WE as I insist harshly). In fact
    I just had an install guide emailed to me about a possible new product and their "supported" version of SQL Server was 2005 SP3, it's all about what they develop on and if they test it on newer versions. Since that takes money most of the time they don't and
    it makes me sad with a few drops of angry and then tons of push back until they support something newer.
    This is how it will be, especially in regulated markets such as stock trading and gambling, where products must be "approved" by the boards or commissions before they can be used. I liked to joke that dev shops were 4 years behind, retail and hospitals were
    8 years behind and the government still uses pencil and paper.
    Second is that frankly SQL Server is not scaling very well beyond the terabyte range without a lot of high-levels of expertise involved - and especially when cheap management tries to run that multi-terabyte database on 32gb production servers!
    That is a huge "it depends"; I'm not even going to try to list the myriad of items that can contribute to this. Very rarely when I am investigating is this an issue of scaling with SQL Server. Most of the time I find two main culprits - the first is a database
    design that is terrible and leads to the problems at hand or the other being an application that doesn't properly use the database. When I say properly, I mean HOLDLOCK hints hard coded into application queries and then wondering why concurrency is terrible.
    The developers just don't know, understand, or care as their job is to ship a product.
    If it makes you feel any better, I have successfully run (for years) a 1+ TB db on a server (2008) with 16 GB of RAM. The database lends itself to the type of workload that doesn't require extremely large datasets in memory and is more of an oltp style with
    large and long archives. This is why I say "it depends", because not every database is equal.
    Anyone else have any observations along these lines, pro or con?
    There will always be a new technology and change in the wind. That's inevitable. Some companies and execs understand this and take caution, some stick to old school (still have AS/400 style batch processing for "real-time" data), some always jump on the
    newest thing. YMMV.
    I look at it as another tool available to us.
    Sean Gallardy | Blog |
    Twitter
