Best Practice for Significant Amounts of Data

This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
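For reference, one cell of that SUMIF matrix looks roughly like this (the sheet name and cell addresses are illustrative, not my actual layout). A helper key column on the data sheet concatenates the aggregation fields in order (Year|Month|Country|Base|Category):
Data!F2:  =A2 & "|" & B2 & "|" & C2 & "|" & D2 & "|" & E2
One Count cell of the matrix, where C$4 holds a combined "Year|Month" column heading, $A5 holds the Base row heading, and $B$1/$B$2 hold the selected Country and Category:
C5:  =SUMIF(Data!$F:$F, C$4 & "|" & $B$1 & "|" & $A5 & "|" & $B$2, Data!$G:$G)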
In Excel this matrix works fine and seems to be very fast.  The problem is with Xcelsius.  I have imported the spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing).  I changed Max Rows to 7000 to accommodate the data.  I placed a simple combo box and a grid on the Canvas – BUT NO CHART yet – and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
So, I guess this brings up a few questions:
1)     Am I doing something wrong and did I miss something that would prevent this problem?
2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
a.     Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
b.     Would it even work if it had that many data ranges in it?
c.     Do you aggregate it as a crosstab (months as column headings) and insert that crosstabbed data into Excel?
d.     Other ideas that I'm missing?
FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
Any thoughts or guidance would be appreciated.
Thanks,
David

Hi David,
I would leave your query
"Am I doing something wrong and did I miss something that would prevent this problem?"
to the experts/gurus out here on this forum.
From my end, you can follow
TOP 10 EXCEL TIPS FOR SUCCESS
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
Please follow the Xcelsius Best Practices at
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
In order to reduce the size of xlf and swf files follow
http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
Hope this helps to a certain extent.
Regards
Nikhil

Similar Messages

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must have the category dimension, so you can use this dimension to separate the actual and plan data.
    Hope this helps.

  • In the Begining it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement using whatever means seem expedient in terms of other business constraints to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)
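    If you do go down the class-per-table route, a minimal sketch of one such class might look like this (the table, columns, and factory method are hypothetical):
    // Models one row of a hypothetical ERP "table"; names are illustrative.
    public class CustomerRecord {
        private final String customerId;
        private final String name;

        public CustomerRecord(String customerId, String name) {
            this.customerId = customerId;
            this.name = name;
        }

        // Accessors return the attributes in a report-friendly form.
        public String getCustomerId() { return customerId; }
        public String getName() { return name; }

        // Maps one JDBC result row to an object; column names are made up.
        public static CustomerRecord fromResultSet(java.sql.ResultSet rs)
                throws java.sql.SQLException {
            return new CustomerRecord(rs.getString("CUST_ID"), rs.getString("CUST_NAME"));
        }
    }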

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew


  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is described in
    Re: OSB: What is best practice for reading configuration information
    Another could be uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration XML into it).
    I need expert answers to the following.
    1] I have an .xsd file which represents the configuration data. The structure of the XSD is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property values will change according to the environment.
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to the dev/stage/prod instance:
    OSB Project Folder
    |
    |---Dev
    |
    |--Dev_Config_file.xml
    |
    |---Stage
    |
    |--Stage_Config_file.xml
    |
    |---Prod
    |
    |-Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value that specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running; say, some construct that acts as a global configuration and is accessible inside the OSB message flow. If the value of the global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread Re: OSB: What is best practice for reading configuration information
    suggests designing a web application that serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear. I really appreciate your comments and suggestions.
    Sushil

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then $inbound/ctx:transport/ctx:uri gives you /service/prod/service1 or /service/stage/service1, and by applying the appropriate XPath functions you can extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
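    For example, a sketch of that extraction in XQuery (the segment index assumes URIs shaped like the examples above):
    (: $uri is e.g. /service/stage/service1; the second path segment is the environment :)
    let $uri := $inbound/ctx:transport/ctx:uri/text()
    return fn:tokenize($uri, "/")[3]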

  • Best Practices for converting SAP HR data (4.7 to ECC)

    Hello Experts ...
    We are going from 4.6 to ECC; no upgrade, it will be a new implementation.
    I am looking for best practices to convert SAP HR data from one SAP instance (4.6) to another (ECC).
    I am not sure if direct input, LSMW, or any other method/tool is the best way.
    I will really appreciate it, and award points, if I can get good advice or documentation.
    Let me know if my question is not clear enough.
    Thanks,

    Hi
    You can check the SAP Service Marketplace and follow the download link there. You will require an SAP Marketplace login to download the Best Practices. Fortunately, Best Practices for ECC 5.00 and 6.00 are available there, but they are country-specific versions. I know HCM for the US is available there.
    Reward points, if helpful.
    Regards
    Waz

  • Best practice for phone number column data type

    Hi,
    I hope I have posted this in the right forum; if not, just notify me and I will move it to the correct area.
    What would be considered best practice for storing a phone number:
    NUMBER or VARCHAR2?
    Ben

    Well, I was thinking that NUMBER would have the following disadvantages:
    1. Users entering phone numbers into a form may be tempted to pre-format with spaces, thereby throwing up an error.
    2. Mobile phone numbers may have leading zeros.
    3. Calculations are not carried out on phone numbers.
    I was leaning towards a varchar2 type.
    Ben
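    For what it's worth, a minimal sketch of the VARCHAR2 approach with a sanity-check constraint (the table and pattern are illustrative):
    CREATE TABLE contacts (
      contact_id NUMBER PRIMARY KEY,
      phone      VARCHAR2(20),
      -- Allow digits, spaces, +, - and parentheses only; widen the pattern to suit your formats.
      CONSTRAINT phone_chk CHECK (REGEXP_LIKE(phone, '^[0-9()+ -]+$'))
    );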

  • Best practice for having separate clone data for development purposes?

    Hi
    I am on a hosted Apex environment
    I have a workspace containing two instances/copies of the application: DEV and PROD.
    I would like to be able to develop functionality and data in/with the DEV instance and then promote it to PROD.
    I gather that I can insert pages from DEV to PROD via Create -> New page as copy -> Page in another application
    But I don't know how I can mimic this process with database objects, eg. if I want to create a new table or manipulate the data in an existing table in a DEV environment before implementing in a PROD environment.
    Ideally this would be done in such a way that minimises changing table names etc when elevating pages from DEV to PROD.
    Would it be possible to create a clone schema that could contain the same tables (with the same names) as PROD?
    Any tips, best practices appreciated :)
    Thanks

    Hi,
    ideally you should have a little more separation between your dev and prod environments. At the minimum you should have separate workspaces, each addressing a separate schema. Apex can be a little difficult if you want to move individual Apex application objects, such as pages, between applications (a much-requested improvement), but this can be overcome by exporting and importing the whole application. You should also have some form of version control/backup of export files.
    As far as database objects go, tables etc, if you have tns access to your hosted environment, then you can use SQL Developer to develop, maintain and synchronize between your development and production schemas and objects in the different environments should have identical names. If you don't have that access, then you can use the Apex SQL Workshop features, but these are a little more cumbersome than a tool like SQL Developer. Once again, scripts for creating and upgrading your database schemas should be kept under some sort of version control.
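    For example, the schema scripts could be kept as numbered files that are applied in order; a sketch (the naming convention and table are purely illustrative):
    -- scripts/0042_add_orders_table.sql (hypothetical file kept under version control)
    CREATE TABLE orders (
      order_id    NUMBER PRIMARY KEY,
      customer_id NUMBER NOT NULL,
      created_at  DATE DEFAULT SYSDATE
    );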
    All of this supposes your hosting solution allows more than one workspace and schema; if not, you may have to incur the cost of a second environment. One other option would be to do your development locally in an instance of Oracle XE, ensuring you don't have any version conflicts between the different database object features and the Apex version.
    I hope this helps.
    Regards
    Andre

  • Best practices for administering Oracle Big Data Appliance

    -        Best practices as part of administration of the Oracle Big Data infrastructure
    -        How do we lock down max space usage per project?
    E.g.: project team A can have a max limit of 10 TB of allocated space
    -        Restricting roles and access (read, write); placeholder for common shared artifacts
    -        Template/procedure for code migration across dev, QA, and prod environments, etc.

    Your data is bigger than what I run, but what I have done in the past is to restrict each account to a separate datafile and limit its size to the maximum I want them to use: create the objects in a tablespace restricted to that location.
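    As a rough sketch of that approach (the names and sizes are illustrative):
    -- Give the project its own tablespace with a hard size cap.
    CREATE BIGFILE TABLESPACE proj_a_ts
      DATAFILE '/u01/oradata/proj_a_01.dbf' SIZE 100G
      AUTOEXTEND ON MAXSIZE 10T;
    -- Restrict the project account to that tablespace.
    ALTER USER proj_a_user
      DEFAULT TABLESPACE proj_a_ts
      QUOTA UNLIMITED ON proj_a_ts;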

  • Best Practice for Storing Program Config Data on Vista?

    Hi Everyone,
    I'm looking for recommendations as to where (and how) to best store program configuration data for a LabVIEW executable running under Vista.  I need to store a number of things like window location, values of controls, etc.  Under XP I just stored it right in the VI's own execution path.  But under Vista, certain directories (such as C:\Program Files) are now restricted without administrator rights, so if my program is running from there, I don't think it'll be able to write its config file.
    Also, right now I'm just using the Write to Spreadsheet File block to store my variables.  Does this sound alright, or are there better suggestions?
    Thanks!

    I found some stuff on a Microsoft page. Here's the link and a short excerpt from that document:
    http://www.microsoft.com/downloads/details.aspx?FamilyID=BA73B169-A648-49AF-BC5E-A2EEBB74C16B&displa...
    Application settings that need to be changed at run time should be stored in one of the following locations:
     CSIDL_APPDATA
     CSIDL_LOCAL_APPDATA
     CSIDL_COMMON_APPDATA
    Documents saved by the user should be stored in the CSIDL_MYDOCUMENTS folder.
    Can't tell you more, as I have no Vista around to look at the CSIDL stuff.
    Felix
    www.aescusoft.de
    My latest community nugget on producer/consumer design
    My current blog: A journey through uml
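    For what it's worth, here is a minimal Win32 C sketch of the folder lookup those CSIDL constants describe (illustrative only, not LabVIEW code; "MyApp" is a placeholder):
    /* Resolve the per-user application data folder on Windows. */
    #include <windows.h>
    #include <shlobj.h>
    #include <stdio.h>

    int main(void)
    {
        char path[MAX_PATH];
        /* CSIDL_APPDATA resolves to e.g. C:\Users\<name>\AppData\Roaming on Vista. */
        if (SUCCEEDED(SHGetFolderPathA(NULL, CSIDL_APPDATA, NULL, 0, path)))
            printf("Store config under: %s\\MyApp\\config.ini\n", path);
        return 0;
    }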

  • Best practice for large form input data.

    I'm developing an HIE system with Flex. I'm looking for a strategy and management layout for close to 20-30 TextInput/ComboBox/Grid controls.
    What is the best way to present this in Flex without a lot of clutter, or a given panel being way too large and unwieldy?
    The options that I have come up with so far:
    1) Use lots of Tabs combined with lots of accordions, and split panes.
    2) Use popup windows
    3) Use panels that appear, capture the data, then, make the panel go away.
    I'm shying away from the popup windows, as that strategy ALWAYS results in performance issues.
    Any help is greatly appreciated.
    Thanks.

    In general the Flex navigator containers are the way to go. ViewStack is probably the most versatile, though TabNavigator and Accordion are good. It all depends on your assumed workflow.
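    A bare-bones example of the ViewStack approach (Flex 3 MXML; the form fields are made up):
    <mx:ViewStack id="viewStack" width="100%" height="100%">
        <mx:Form label="Patient">
            <mx:FormItem label="Name"><mx:TextInput id="nameInput"/></mx:FormItem>
        </mx:Form>
        <mx:Form label="Insurance">
            <mx:FormItem label="Carrier"><mx:ComboBox id="carrierCombo"/></mx:FormItem>
        </mx:Form>
    </mx:ViewStack>
    <!-- A LinkBar bound to the ViewStack gives one-click navigation between views. -->
    <mx:LinkBar dataProvider="{viewStack}"/>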
    If this post answers your question or helps, please mark it as such.

  • Best Practice for ViewObjects when inserting data through pl/sql procedure

    My application is an Oracle Forms based enterprise-level application, and we are now developing a new module in ADF 11g, but there is a restriction that all data insertion, updating, and deletion must go through Oracle PL/SQL procedures. Now my question is whether the ADF pages should be bound to view objects based on entity objects, or to view objects not based on an entity object or SQL query. Currently I have developed pages with programmatic view objects that are based on neither. In those view objects I create transient attributes and then use them to build the ADF pages. Then, on save, I extract the data from the view object's current row and pass it to the procedure. This works fine, but I'm just wondering whether this approach is OK or whether there is a better alternative. Ideally I would like to create view objects based on entity objects, but I can't find any way to synchronize entity objects with data inserted through procedures.

    Hi,
    I create an EO for the database view and override the doDML() method. For insert, update, and delete I call the PL/SQL functions.
    See "38.5 Basing an Entity Object on a PL/SQL Package API" in Oracle® Fusion Middleware Fusion Developer's Guide for Oracle Application Development
    Framework.
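    In code the override looks roughly like this (a sketch only; the package, procedure, and attribute names are hypothetical):
    // EntityImpl override that routes all DML through PL/SQL instead of direct SQL.
    protected void doDML(int operation, TransactionEvent e) {
        if (operation == DML_INSERT) {
            callProc("BEGIN my_pkg.ins_row(?, ?); END;",
                     new Object[] { getAttribute("Id"), getAttribute("Name") });
        } else if (operation == DML_UPDATE) {
            callProc("BEGIN my_pkg.upd_row(?, ?); END;",
                     new Object[] { getAttribute("Id"), getAttribute("Name") });
        } else if (operation == DML_DELETE) {
            callProc("BEGIN my_pkg.del_row(?); END;",
                     new Object[] { getAttribute("Id") });
        }
        // Deliberately not calling super.doDML(), so no direct DML is issued.
    }

    private void callProc(String stmt, Object[] bindVars) {
        java.sql.PreparedStatement st = null;
        try {
            st = getDBTransaction().createPreparedStatement(stmt, 0);
            for (int i = 0; i < bindVars.length; i++) {
                st.setObject(i + 1, bindVars[i]);
            }
            st.executeUpdate();
        } catch (java.sql.SQLException ex) {
            throw new oracle.jbo.JboException(ex);
        } finally {
            if (st != null) {
                try { st.close(); } catch (java.sql.SQLException ignore) { /* no-op */ }
            }
        }
    }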

  • Best Practice for Package Implementation of Data Manipulation

    Hi,
    Would like to ask which is the better implementation of data manipulation (insert, update, delete) stored procedures for a single table:
    to create a single procedure with an input parameter for the action, such as 1 for insert, 2 for update and so on,
    or
    to create separate procedures for each, like procedure pInsData for insert, pUpdData for update...

    Hi,
    Whenever you create a procedure, it resides as a separate object in the database.
    In my opinion it's better to create a single procedure which takes care of all the DML concerning a table, rather than creating different procedures for each DML operation.
    If your DML operations are numerous and interrelated, then it's better to create a package and put all the related DML procedures concerning one transaction or table into it. This is because whenever you call a package, the entire package is placed in memory for that session; whereas if you create separate standalone procedures, each one has to be called and loaded individually every time you want it executed.
    Twinkle
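    A skeleton of the package approach (the table and parameters are illustrative):
    -- All DML for one table grouped in a single package.
    CREATE OR REPLACE PACKAGE emp_dml AS
      PROCEDURE pInsData(p_id IN NUMBER, p_name IN VARCHAR2);
      PROCEDURE pUpdData(p_id IN NUMBER, p_name IN VARCHAR2);
      PROCEDURE pDelData(p_id IN NUMBER);
    END emp_dml;
    /
    CREATE OR REPLACE PACKAGE BODY emp_dml AS
      PROCEDURE pInsData(p_id IN NUMBER, p_name IN VARCHAR2) IS
      BEGIN
        INSERT INTO emp (id, name) VALUES (p_id, p_name);
      END pInsData;
      PROCEDURE pUpdData(p_id IN NUMBER, p_name IN VARCHAR2) IS
      BEGIN
        UPDATE emp SET name = p_name WHERE id = p_id;
      END pUpdData;
      PROCEDURE pDelData(p_id IN NUMBER) IS
      BEGIN
        DELETE FROM emp WHERE id = p_id;
      END pDelData;
    END emp_dml;
    /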

  • Best Practice for Using Static Data in PDPs or Project Plan

    Hi There,
    I want to make custom reports using PDPs & Project Plan data.
    What is the Best Practice for using "Static/Random Data" (which is not available in MS Project 2013 columns) in PDPs & MS Project 2013?
    Should I add that data in a custom field (in MS Project 2013) or make PDPs?
    Thanks,
    EPM Consultant
    Noman Sohail

    Hi Dale,
    I have a project-level custom field "Supervisor Name" that is used for Project Information.
    For the purpose of viewing that project-level custom field's data in project views, I have made a task-level custom field "SupName" and used the formula:
    [SupName] = [Supervisor Name]
    That shows the supervisor name in Schedule.aspx.
    ============
    Question: I want that project-level custom field "Supervisor Name" in My Work views (Tasks.aspx).
    The field is enabled in Tasks.aspx, BUT the data is not present / the column is blank.
    How can I get the data into My Work views?
    Noman Sohail

  • Best Practice for disparately sized data

    2 questions in about 20 minutes!
    We have a cache which holds approx 80K objects, which expire after 24 hours. It's a rolling population, so the number of objects is fairly static. We're on a 64-node cluster, with high units set, giving ample space. But... the data has a wide size range, from a few bytes to 30 MB and everywhere in between. This causes some very hot nodes.
    Is there a best practice for handling a wide range of object size in a single cache, or can we do anything on input to spread the load more evenly?
    Or does none of this make any sense at all?
    Cheers
    A

    Angel 1058 wrote:
    2 questions in about 20 minutes!
    We have a cache which holds approx 80K objects, which expire after 24 hours. It's a rolling population, so the number of objects is fairly static. We're on a 64-node cluster, with high units set, giving ample space. But... the data has a wide size range, from a few bytes to 30 MB and everywhere in between. This causes some very hot nodes.
    Is there a best practice for handling a wide range of object size in a single cache, or can we do anything on input to spread the load more evenly?
    Or does none of this make any sense at all?
    Cheers
    A
    Hi A,
    It depends... if there is a relationship between keys and sizes, e.g. if this or that part of the key means that the size of the value will be big, then you can implement a key partitioning strategy, possibly together with key association on the key, in a way that evenly spreads the large entries across the partitions (and have enough partitions).
    Unfortunately you would likely not get a totally even distribution across nodes, because you have a fairly small number of entries compared to the square of the number of nodes (btw, which version of Coherence are you using?)...
    Best regards,
    Robert
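    A minimal sketch of the key association idea (the key fields and grouping rule are hypothetical):
    // Key whose partition routing follows a coarser grouping field, so that large
    // groups can be placed deliberately rather than landing on one hot node.
    import com.tangosol.net.cache.KeyAssociation;

    public class DocKey implements KeyAssociation, java.io.Serializable {
        private final String docId;   // full identity of the entry
        private final String groupId; // the part of the key known to correlate with size

        public DocKey(String docId, String groupId) {
            this.docId = docId;
            this.groupId = groupId;
        }

        // Coherence routes all keys returning the same associated key to one partition.
        public Object getAssociatedKey() { return groupId; }

        public boolean equals(Object o) {
            return o instanceof DocKey && ((DocKey) o).docId.equals(docId);
        }
        public int hashCode() { return docId.hashCode(); }
    }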
