Best practices for using Excel functions on Power Pivot Data

Hi
How do you suggest performing calculations on cells in a Power Pivot table? Obviously the ideal approach is to use a DAX measure, but given that DAX doesn't have every function, is there a recommended way of adding an "extra" (i.e. just
adjacent) column to a pivot table? (In particular I want to use BETA.INV.)
One option I can imagine is adding some VBA that would update the extra column as the pivot table itself refreshed (and added or removed rows).
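For illustration, a rough sketch of that VBA idea might look like the following (only a sketch: the handler goes in the code module of the sheet holding the pivot, and the assumption that the probability sits in the pivot's last column, plus the alpha = 2 / beta = 5 parameters, are placeholders to adapt):
    Private Sub Worksheet_PivotTableUpdate(ByVal Target As PivotTable)
        ' Rebuild a helper column immediately to the right of the pivot
        ' every time the pivot table refreshes (rows added or removed).
        Dim body As Range, r As Range
        Dim lastCol As Long, v As Variant

        Set body = Target.TableRange1        ' pivot body, headers included
        lastCol = body.Columns.Count         ' assumed column holding the probability

        body.Columns(lastCol).Offset(0, 1).ClearContents
        For Each r In body.Rows
            v = r.Cells(1, lastCol).Value
            If IsNumeric(v) And Not IsEmpty(v) Then
                If v > 0 And v < 1 Then
                    ' BETA.INV is exposed to VBA as WorksheetFunction.Beta_Inv
                    r.Cells(1, lastCol + 1).Value = _
                        Application.WorksheetFunction.Beta_Inv(v, 2, 5)  ' alpha = 2, beta = 5 (placeholders)
                End If
            End If
        Next r
    End Sub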
thanks
sean

Hi Sean,
I'm not sure exactly what your requirement is here; maybe you can share some sample data or a scenario with us for further investigation. As we know, if we need to add an extra column to the PowerPivot data model, we can directly
create a calculated column:
Calculated columns:
http://www.powerpivot-info.com/post/178-how-to-add-dax-calculations-to-the-powerpivot-workbooks
There are some differences between Excel and DAX functions; here are the lists for your reference:
Excel functions (by category):
http://office.microsoft.com/en-us/excel-help/excel-functions-by-category-HA102752955.aspx
DAX Function Reference:
http://msdn.microsoft.com/en-us/library/ee634396.aspx
Hope this helps.
Regards, 
Elvis Long
TechNet Community Support

Similar Messages

  • Best Practices for using Content Types

    We had a third-party vendor who migrated and restructured contents and sites in SharePoint 2010. I noticed one unusual thing: in most cases they created new, separate site content types for every library in a site, i.e. even if two libraries contain the same set of metadata
    columns, they created separate site content types by duplicating the first one, gave it a unique name and used it in the second library.
    My view of content types is that they exist for reusability, i.e. if another library needs the same set of metadata columns then I would just reuse the existing content type rather than creating another content type with a different name by inheriting
    it from the first one with the same set of columns.
    When I asked the vendor the reason for this approach (for every library they created new content types, and for libraries needing the same set of metadata columns they just inherited from a custom site content type, created another duplicate with the same set of
    metadata columns and gave it a different name, in most cases the name of the library), they said they did it to classify documents, which I did not agree with, because by creating two document libraries the classification is already done.
    I need some expert advice on this and will really appreciate it. I understand content types are useful and provide reusability, but:
    A) Do we need to create new site content types whenever we create a new library? (Even though we are not going to reuse them.)
    B) What is best practice if a few libraries need the same set of metadata columns:
    1) Create a site content type and reuse it in those libraries? or
    2) Create a site content type and create new content types by inheriting from the site content type created first, just giving them different names even though all of them have the same set of columns?
    Here is my own opinion:
    I do not think A) is good practice and it should not be done; we should create a site content type only when we think it will be reused, and we do not need to create a site content type every time we create a document library. I also do not think option 2)
    of B) is good practice.
    Dhaval Raval

    It depends on the nature of the content types and the libraries. If the document types really are shared between document libraries then use the same ones. If the content types are distinct and non-overlapping items that have different processes, rules or
    uses, then breaking them out into separate content types is the way forward.
    As an example of sharing content types: Teams A and B have different document libraries. Both fill in purchase orders, although they work on different projects. In this case they use the same form, and sharing a content type is the obvious approach.
    As an example of different content types: a company has two arms, a consultancy where they send people out to client sites and a manufacturing team who build hardware. Both need to fill in timesheets, but whilst the metadata fields on both are the same, the
    forms are different and/or are processed in a different manner.
    You can make a case either way. I prefer to keep the content types simple and only expand out when there's a proven need and a user base with experience with them. It means that if you wanted to subdivide later you'd have more of a headache, but that's a
    risk I generally think works out.

  • What is best practice for using .MoveLast / .MoveFirst?

    I was told at some point that I should always execute a .MoveLast then a .MoveFirst before starting to work on a recordset, so that I was sure all records were loaded. But if there are NO records, I get a "no current record" error when the .MoveLast statement is executed in the following code. And if I use rstClassList.RecordCount before the .MoveLast, can I count on its being valid?
    Also, I was unable to paste this code into this post.  I had to re-type it.  Is that expected behavior? Not to be able to paste stuff in?
    ls_sql = "Select * from tblStudents"
    Set rstClassList = CurrentDb.OpenRecordset(ls_sql)
    rstClassList.MoveLast
    rstClassList.MoveFirst
    li_count = rstClassList.RecordCount
    TIA
    LAS

    You do a MoveLast in order to make the RecordCount accurate. If you access it before that, the results are unreliable.
    You want to do a MoveLast, then a RecordCount, and only then a MoveFirst if the count is greater than zero.
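    As an illustration, a minimal sketch of that order of operations in DAO (reusing the recordset and table names from your snippet; the EOF check is just one way to guard against an empty recordset):
        Dim rstClassList As DAO.Recordset
        Dim li_count As Long

        Set rstClassList = CurrentDb.OpenRecordset("Select * from tblStudents")
        If rstClassList.EOF Then
            li_count = 0                          ' empty recordset: no current record exists
        Else
            rstClassList.MoveLast                 ' forces all records to be fetched...
            li_count = rstClassList.RecordCount   ' ...so the count is now reliable
            rstClassList.MoveFirst                ' back to the first record before processing
        End If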
    That being said, the DataWindow is how people normally work with a database from PowerBuilder.

  • Best Practices for doing Master Scheduling using SNP

    Hello Gurus ,
    Can you please suggest the best practices for doing Master Scheduling using SNP? Which engine to use, what would that mean, etc.?
    Regards,
    Nick

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Basics: Best practice when using a thesaurus?

    Hi all,
    I currently use a function which returns info for a search on our website; the function is used by the Java code to return hits:
    CREATE OR REPLACE FUNCTION fn_product_search(v_search_string IN VARCHAR2)
      RETURN TYPES.ref_cursor
    AS
      wildcard_search_string VARCHAR2(100);
      search_results         TYPES.ref_cursor;
    BEGIN
      OPEN search_results FOR
        SELECT DCS_PRODUCT.product_id,
               DCS_CATEGORY.category_id,
               hazardous,
               direct_delivery,
               standard_delivery,
               DCS_CATEGORY.short_name,
               priority
        FROM   DCS_CATEGORY,
               DCS_PRODUCT,
               SCS_CAT_CHLDPRD
        WHERE  NOT DCS_PRODUCT.display_on_web = 'HIDE'
        AND    contains(DCS_PRODUCT.search_terms, v_search_string, 0) > 0
        AND    SCS_CAT_CHLDPRD.child_prd_id = DCS_PRODUCT.product_id
        AND    DCS_CATEGORY.category_id = SCS_CAT_CHLDPRD.category_id
        ORDER BY SCORE(0) DESC,
               SCS_CAT_CHLDPRD.priority DESC,
               DCS_PRODUCT.display_name;
      RETURN search_results;
    END;
    I want to develop this function so that it will use a thesaurus when no data is found.
    I have been trying to find any documentation that might discuss 'best practice' for this type of query.
    I am not sure if I should just include the SYN call in this code directly, or whether the use of the thesaurus should be restricted so that it is only used when the existing function does not return a hit against the search.
    I want to keep overheads and response times to an absolute minimum.
    Does anyone know the best logic to use for this?

    Hi.
    You want a lot ("... absolute minimum for response time ...") from Oracle Text on 9.2.x.x.
    First, text queries on 9.2 are much slower than on 10.x. Second, it is a bad idea to try to call query expansion functions directly from the application.
    From my own experience, the best practice with thesaurus usage is:
    1. Write a good search string parser which adds the thesaurus expansion functions (like NT, BT, RT, SYN...) directly into the result string passed through to the DRG engine.
    2. Use efficient text queries: do not use direct or indirect sorts (the DOMAIN_INDEX_NO_SORT hint can help).
    3. Finally, write efficient application code. The code you show is inefficient.
    Hope this helps.
    WBR Yuri

  • Best practices for using AUTOARCHIVING in Exchange 2010

    Hi guys!
    Exchange 2010 SP3 environment with 150 users. We bought two 2 TB disks in RAID 1 and they will be used for the auto-archive DBs. We currently have about 10 production DBs named by department. We have created 10 archive DBs named by department and appended the word
    ARCHIVE to the end of the database name.
    Some of the users (let's say 50) have 20-30 GB of PST files; all the others have about 4-5 GB of PST files.
    What would be the best practice for setting quota restrictions? As I can see, by default a user is limited to 50 GB per archive mailbox. In this scenario, where 50 users have 20 GB+ of PST files and all the others less than 4 GB, what would
    be the best practice for setting up quota limitations on the 10 newly created ARCHIVE DBs to achieve an optimal solution?
    with best regards,
    bostjanc

    Hi,
    As far as I know, by default in Exchange 2010 SP1 the archive warning quota is set to 45 gigabytes (GB) and the archive quota is set to 50 GB. We can use the following command to set the quotas for all archive-enabled mailboxes:
    Get-Mailbox -ResultSize Unlimited | Where {$_.ArchiveDatabase -ne $null} | Set-Mailbox -ArchiveQuota 20GB -ArchiveWarningQuota 19GB
    Please note that this is not at database level but mailbox level.
    For more information, you can refer to the following articles:
    http://technet.microsoft.com/en-us/library/dd979795.aspx#AQ
    http://social.technet.microsoft.com/Forums/exchange/en-US/599b2871-6fcc-482f-845b-b59dec342097/usedatabasequotadefaults-for-archive-mailbox?forum=exchange2010
    Thanks,
    Angela Shi
    TechNet Community Support

  • Best Practice for using multiple models

    Hi Buddies,
    Can you tell me the best practices for using multiple models in a single WD application?
    I mean: I am using 3 RFCs in a single application for my function. Each time I am importing that RFC model under
    WD ---> Models, and I did the model binding separately to the Component Controller. Is this the right way to implement multiple models in a single application?

    It very much depends on your design, but one RFC per model is definitely a no-no.
    Refer to this document to understand how you should use the model in the most efficient way.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/705f2b2e-e77d-2b10-de8a-95f37f4c7022?quicklink=events&overridelayout=true
    Thanks
    Prashant

  • Using Excel function in workflow?

    Hi all,
    Does anybody know if it is possible to use Excel functions in workflows? Specifically, I need to set the style of a metadata field according to the value of another metadata field.
    Thanks in advance.
    fx

    Hi,
    According to your post, my understanding is that you want to set the style of a metadata field according to the value of another field, like an Excel function would.
    I don't think a workflow can do that.
    We can use a workflow to set the data, but there is no action to set the style.
    What did you mean by setting the style of a metadata field? Did you mean the format?
    If so, you can use the conditional formatting.
    With conditional formatting, you can easily create a Data View that applies a style to a selected HTML tag or data value when the data meets criteria that you specify.
    You can also set conditions that change the visibility of an HTML tag or data value, so you can show or hide data altogether.
    You can apply the conditional formatting using SharePoint Designer, there is an article for your reference.
    Conditional Formatting in SharePoint 2013
    Thanks & Regards,
    Jason
    Jason Guo
    TechNet Community Support

  • Best practices for using the knowledge directory

    Anyone know when it is best to store docs in the Knowledge Directory versus Collab? They are both searchable, but I guess you can publish from the Publisher to the KD. Anyone have any best practices for using the KD or setting up taxonomies in the KD?


  • Best practices for using the 'cost details' fields

    Hi
    Please could you advise us on the best practices for using the 'cost details' field within Pricing? Currently I cannot find a way to surface the individual Cost Details fields within the Next Generation UI, even with the tick box for 'display both cost and price' ticked. It seems that these get surfaced when the Next Generation UI is turned off, but I cannot find them when it is turned on. We can see the 'Pricing Summary' field but this does not fulfill our needs, as some of our services have both recurring and one-off costs.
    Attached are some screenshots to further explain the situation.
    Many thanks,
    Richard Thornton

    Hi Richard,
    If you need to configure dynamic pricing that may vary by tenant, and/or if you want to set up cost drivers that are service item attributes, you should configure Billing Tables in the Demand Management module in 10.0.
    The cost detail functionality in 9.4 will likely be merged with the new pricing feature in 10.0. The current plan is not to bring cost detail into the Service Catalog module.

  • Best Practice for connecting to an Ethernet-based device

    Hi,
    I have inherited a system where we have a cDAQ-9181 controlling a vehicle access barrier, with a LabVIEW application on a PC talking to it via Ethernet.
    (The application is very simple - press a button > send a value to the 9181 unit > opens the barrier )
    All works fine most of the time.
    ( We occasionally get network related errors. The LabView application sometimes thinks another PC has reserved the unit, or gives “error 89130 - device not available for routing” )
    The users would now like to be able to easily run the application from a second PC ( not at the same time ), but this seems to be a problem. If I exit the application on PC “A” and run it on PC “B” it struggles to reserve the chassis, and throws the “89130” error and I have to restart the unit via MAC.
    While I'm a "veteran" control programmer, I'm new to LabVIEW, and would be very grateful for any pointers on "best practice" for talking to devices via Ethernet, or any specific suggestions for handling multiple PCs talking to a single device.
    Thank You.
    Tim.

    Hi Tim,
    Thank you for your post and welcome to the NI forums.
    There are lots of knowledgebase articles on our website and you should be able to find documentation for most of our hardware.
    There is a good troubleshooting guide for cDAQ Ethernet here (http://ae.natinst.com/public.nsf/web/searchinternal/e67b4e4749f378ff862577270059bd4b?OpenDocument) - it outlines the steps to take to ensure you have as stable a connection as possible. You may have already seen it, but the quick-start guide for your specific device may also be worth consulting for best practices. Are these helpful?
    As for using more than one PC - this shouldn't be too much of an issue. I would expect that the resource isn't being closed correctly - when you exit the App on PC 'A', how are you closing off the resource?
    Best regards,
    Eden S
    Applications Engineer
    National Instruments UK & Ireland

  • Best Practice for rebooting ISE Nodes?

    Hello Community,
    I administer an ISE installation with two nodes (I am not an ISE specialist; my job is just to manage the users/MAC addresses), but now I have to move my ISE nodes from one VMware cluster to another VMware cluster.
    (Both VMware environments are connected to our enterprise network, but are different environments. vMotion is not possible.)
    I would shut down ISE02, move it to our new VMware environment and start it again.
    Then I would do the same with our ISE01 node...
    Are there any best practices for doing this (shut down the application first, stop replication, etc.)?
    Can I really simply reboot an ISE node, or do I have to consider something before doing this? And after doing this?
    Any tasks after the reboot?
    Thank you for any answer!
    ISE01    Administration, Monitoring, Policy Service    PRI(A), SEC(M)
    ISE02    Administration, Monitoring, Policy Service    SEC(A), PRI(M)

    There is a lot to consider here.  If changing environments means changing IP Address and IP Scopes, then your policies, profiles, and dACLs would also have to change among other things.  If this is the case, create a new ISE VM in the new environment using the built in evaluation license and recreate the deployment from the old environment using the addressing scheme of the new environment.  Then spin-up a new Secondary node and register it on the Primary.  Once this is done, you can re-host the license from your old environment onto your new environment.  You can use this tool to re-host:
    https://tools.cisco.com/SWIFT/LicensingUI/loadDemoLicensee?FormId=3999
    If IP Addressing is to remain the same, it gets simpler. 
    First, and always, perform a configuration and operational backup.
    If downtime is not an issue, or if you have a maintenance window of an hour or so: Simply shut down both nodes.  Transfer them to the New Environment and turn them on, Primary Node first, of course.
    If downtime is an issue, shut down the Secondary Node and transfer it to the New Environment.  Start the Secondary Node and when it is up, shut down the Primary Node.  Once services on the primary node have stopped, promote the Secondary Node to Primary Node.
    Transfer the OLD Primary Node to the New Environment and turn it on.  It should assume the role of Secondary Node.  If it does not, assign that role through the GUI.
    Remember, the correct way to shut down an ISE node is:
    application stop ise
    halt
    By using these commands, the risk of database corruption decreases by about 90% (Remember to always backup).
    Please Rate Helpful posts and mark this question as answered if, in fact, this does answer your question.  Otherwise, feel free to post follow-up questions.
    Charles Moreton

  • Best practices for replication

    Hi,
    I want to know what the best practice is for the replication interval of the database between two Cisco ACS servers.
    Regards,
    Atif.

    Hi Atif,
    The replication time interval should always be fairly long.
    Reason: every time you replicate the data it requires the ACS services to restart, so doing this frequently may affect your production environment.
    However, if you want to replicate internal users' passwords, there is an option to replicate password changes right away without a full replication. You can enable this option under System Configuration -> Local Password Management. With this enabled you could potentially set the replications to a larger interval.
    It also depends on how often you make changes in your ACS. If the rate is normal, I would say set it to every Sunday at 12:00 PM.
    This is how replication happens:
    The primary ACS stops its authentication and creates a copy of the ACS internal database components that it is configured to replicate. During this
    step, if AAA clients are configured properly, those that usually use the primary ACS fail over to another ACS. The primary ACS resumes its authentication service.
    After the preceding events on the primary ACS, the database replication process continues on the secondary ACS. The secondary ACS stops its authentication service and replaces its database components with the database components that it received from the primary ACS. During this step, if AAA clients are configured properly, those that usually use the secondary ACS fail over to another ACS. The secondary ACS resumes its authentication service.
    HTH
    Regards,
    JK
    Plz rate helpful posts-

  • How to get the OBJECT_ID for using the ARCHIV_BARCODE_GLOBAL function?

    Hello experts:
    Do you know how I can get the object_id that I must fill in when using the ARCHIV_BARCODE_GLOBAL function?
    Any help is very welcome.
    Best regards and thanks in advance for your time.
    Miriam

    I think you can get this in the interface. Ideally this gets populated in the TOA01, TOA02 or TOA03 table.
    Also check the tables:
    BDS_BAR_EX
    BDS_BAR_IN
    Thanks
    Arghadip

  • Which is the best way for a called function to identify the caller class name?

    Which is the best way for a called function to identify the caller class name?
    1) Using sun.reflect.Reflection from the called function:
        Class caller = Reflection.getCallerClass(2);
        System.out.println("Caller Class Name :: " + caller.getName());
    2) Analyzing the current thread's stack trace from the called function:
        StackTraceElement[] stElements = Thread.currentThread().getStackTrace();
        System.out.println("Caller Class Name :: " + stElements[3].getClassName());
    Are there any alternative ways to achieve the same? Which is the best way?
    The called function doesn't have any arguments, and I don't want to pass any arguments from the caller function to the called function.
    Please help.
    With kind regards
    Paul

    798185 wrote:
    Which is the best way for a called function to identify the caller class name? Are there any alternative ways to achieve the same?
    Another option is a SecurityManager:
        // 0 is the anonymous SecurityManager class
        // 1 is this class (also works in a static context)
        // 2 is the calling class
        static Class getClass(int i) {
            return new SecurityManager() {
                protected Class[] getClassContext() {
                    return super.getClassContext();
                }
            }.getClassContext()[i];
        }
