How to analyse what takes memory

Hi,
I have a library with several functions written in LabVIEW and compiled into a shared DLL. The library provides functions to be used in test cases for production (init, write, read, change, deinit, etc.). In the init function some files are created, connections are established, and some functional globals are created to pass data between functions.
This library is ultimately used in TestStand; I test it first in CVI.
When I start the interface prepared in CVI, it takes about 55 MB at startup. When I hit init for the first time, usage increases by ~10 MB. Deinit frees only a few kB. Another init adds ~1-2 MB, and every further init adds another ~1-2 MB. It seems some memory is not released. That is not a problem in the test phase, but production does not restart; they run it on and on, and eventually they might run out of memory.
There are several global variables passed via shift registers, but those should always occupy the same memory. I read an XML file of about 400 kB into an internal structure, but even if it is read at every init, it should be overwritten.
So how can the space for such a global be deallocated?
I tried to use the Desktop Execution Trace Toolkit, but I am unable to analyze 400k lines of output.
Is there any other way to analyze what takes the memory, and what keeps holding it after the VI ends?
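One approach that avoids the trace toolkit entirely: since the DLL can be hosted by any process, you can load it in a small script and log the process working set around each init/deinit cycle; growth that never plateaus confirms the leak and gives you a number to chase. This is only a sketch: the DLL name and the exported Init/Deinit names are placeholders for whatever your library actually exports (check the calling convention and arguments too), and psutil is a third-party package.

```python
# leak_probe.py - log process memory growth across repeated init/deinit
# cycles of a shared library. "mylib.dll", "Init" and "Deinit" are
# hypothetical names; substitute your real exports and signatures.
import ctypes
import psutil  # third-party: pip install psutil

lib = ctypes.CDLL("./mylib.dll")   # hypothetical library name
proc = psutil.Process()            # this process hosts the DLL

def rss_mb() -> float:
    """Resident set size of the current process, in MB."""
    return proc.memory_info().rss / (1024 * 1024)

baseline = rss_mb()
print(f"baseline: {baseline:.2f} MB")

for cycle in range(1, 21):
    lib.Init()                     # hypothetical export
    after_init = rss_mb()
    lib.Deinit()                   # hypothetical export
    after_deinit = rss_mb()
    # the interesting number: how much each full cycle leaves behind
    print(f"cycle {cycle:2d}: init={after_init:8.2f} MB  "
          f"deinit={after_deinit:8.2f} MB  "
          f"leaked so far={after_deinit - baseline:8.2f} MB")
```

If the "leaked so far" column grows linearly with the cycle count, something allocated per init is never freed; if it levels off after a few cycles, it is more likely the LabVIEW run-time caching buffers for reuse, which looks like a leak but stays bounded.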

Hi Piotr,
thank you for the hints.
I used the Request Deallocation function before, but it did not give any better results.
Filtering the results from the Desktop Execution Trace Toolkit still left me thousands of lines to analyse. It is impossible.
I have never used the In Place Element structure, but I can check whether it brings any improvement.
As a clarification:
The INIT function reads some files (and closes them after reading) and stores some of this info in functional globals (in shift registers): an INI file, an XML file, and a log file. It also establishes connections to targets. These refnums are the only ones that are not closed until the DEINIT function is called. The DEINIT function mainly closes the connections. I made a mistake in my first post: DEINIT does not free any memory; it itself takes only a few kB.
I ran some tests.
I removed the reading of the XML file. Memory consumption decreased a lot, as expected, but memory was still allocated and not freed on every start.
I then removed almost all functional globals (except the one responsible for holding the refnums to the targets), but still some kB was taken and not released.
I don't really understand why memory that should be deallocated automatically is not. I write to some registers, and when I write again, it seems as if additional components are created in memory even though I am still talking to the same register.
Or maybe the problem is in the CVI environment? It cannot be right that the memory is not released when the VI finishes execution.
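To narrow it down without removing code from the library, the same harness idea can bracket each suspect sub-step individually (reading the XML, opening connections, and so on) and report the memory delta per call. The exported names below are hypothetical stand-ins for your own:

```python
# bisect_leak.py - attribute memory growth to individual library calls.
# All exported names below are hypothetical; replace with your own.
import ctypes
import psutil  # third-party: pip install psutil

lib = ctypes.CDLL("./mylib.dll")
proc = psutil.Process()

def measure(label, call):
    """Run one call and report the working-set delta it caused."""
    before = proc.memory_info().rss
    call()
    delta_kb = (proc.memory_info().rss - before) / 1024
    print(f"{label:20s} {delta_kb:+10.1f} kB")

# run several rounds: a step that leaks shows a positive delta every
# round, while one-time caching shows a delta only on the first round
for round_no in range(5):
    print(f"--- round {round_no} ---")
    measure("ReadConfigXml", lib.ReadConfigXml)   # hypothetical
    measure("OpenTargets", lib.OpenTargets)       # hypothetical
    measure("CloseTargets", lib.CloseTargets)     # hypothetical
```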

Similar Messages

  • How to analyse the main memory usage in SAP ERP systems?

    Dear expert,
    I'm doing research work on analysing main memory usage in SAP ERP systems.
    I would like to find out what is loaded into the buffers and when; that is, which processes control this memory, which ones are always doing something, which tables are loaded, and so on. I tried to isolate the space needed by a simple web service call (creating one material) in my test system, but even after a $SYNC there is still something stored in the buffers. I use a BAPI to avoid running the SAP GUI and its impact on the system (I know the BAPI call uses resources too, but when I run this BAPI to get the statistics, like ST02 does, I get different values). Could someone help me or recommend something specific to read? Thanks a lot in advance.

    Dear expert,
    Thanks a lot for your answer. The point is that I want to isolate the memory used by a web service that I call; that is, I would like to know how much memory this web service is using in each buffer. And could you tell me where I could read about the order in which things happen in an SAP system when a web service is called (always memory-related), i.e., which steps are taken to store data in buffers and so on? Thanks in advance.

  • iPhone 5c fell in the toilet and won't turn off. I have it in rice; how long will it take?

    My iPhone 5c fell in the toilet and won't turn off. I have it in rice; how long will it take?

    How long will what take? You should leave the phone in rice for at least 4-5 days. At that point, plug the phone in and let it charge for about 15 minutes and see if it turns on. Accept the fact your phone will probably never work properly again.

  • What takes up 'other' memory usage

    I'm not sure if anyone else has this issue, but while syncing my iPod, the 'Other' usage rose almost half a GB in an instant. What takes up so much memory, and how do I stop it?

    iTunes also uses the "Other" section as temporary storage; that's why its size can change while you are syncing. For example, if you add updated versions of apps, the data is stored there until the new app has been transferred.
    Up to 1 GB of "Other" is normal; if it gets bigger, there might be corrupt data inside resulting from a sync that went wrong. You can reduce the size back to normal by restoring the device.

  • What is tuxedo9.1/bin/sql and how do I restrict the amount of memory it uses?

    Hi,
    We have an AIX 5.3 server running an Oracle 10.2 database and Tuxedo. A few days ago the database crashed because
    there was no free memory (this was confirmed by the AIX log).
    According to our monitoring (OpenView), memory was consumed by two 'sql' processes
    (where 'sql' is the process name): within one hour each process went from 24,000 to 800,000 memory pages.
    I searched the box and found an executable named 'sql' in tuxedo9.1/bin:
    ls -l tuxedo9.1/bin/sql
    -r-xr-xr-x 1 abcPadm abcP 73128 Oct 24 2008 tuxedo9.1/bin/sql
    Would you be able to tell me what this 'sql' executable is and how to restrict the amount
    of memory it consumes (other than through ulimit)?
    Thanks
    Sev

    Hi Sev,
    As Ian says, the SQL support in Tuxedo where Tuxedo actually provided a relational database resource manager has long been deprecated and certainly hasn't been supported in over 10 years. The Tuxedo sql program in the Tuxedo bin directory is an interactive SQL utility. I seriously doubt that it is the same executable that caused your memory problem. If it is the same executable, there isn't really much Oracle is going to be able to do for you as that component hasn't been supported for years. You would really need to change the application to use a standard supported SQL utility and relational database.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect
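    For the narrower question of capping a process's memory other than via the shell's ulimit: on POSIX systems a small wrapper can set RLIMIT_DATA before exec'ing the target, so the cap applies only to that process tree rather than the whole login shell. A minimal sketch, assuming Python is available on the box; the 512 MB cap and the command path are illustrative values:

```python
#!/usr/bin/env python3
# capmem.py - run a command with a capped data segment, e.g.:
#   ./capmem.py tuxedo9.1/bin/sql        (path is illustrative)
import os
import resource
import sys

CAP_BYTES = 512 * 1024 * 1024  # illustrative 512 MB cap

def main() -> None:
    if len(sys.argv) < 2:
        sys.exit("usage: capmem.py COMMAND [ARGS...]")
    # cap the data segment; allocations beyond this fail with ENOMEM
    resource.setrlimit(resource.RLIMIT_DATA, (CAP_BYTES, CAP_BYTES))
    # the limit is inherited across exec, so the target runs capped
    os.execvp(sys.argv[1], sys.argv[1:])

if __name__ == "__main__":
    main()
```

    Where supported, RLIMIT_AS caps total address space instead of just the data segment; which one bites first depends on how the process allocates.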

  • What is "installing 1 item" I clicked by mistake and my screen is showing me this. How long does it take to do the update? I'm worried that it doesn't change. What can I do about it?

    What is "installing 1 item" I clicked by mistake and my screen is showing me this. How long does it take to do the update? I'm worried that it doesn't change. What can I do about it?

    Cocoi,
    since we don’t know what this one item being installed even is, how could we know how long it would take to install?

  • What is the average duration of one full SAP life cycle or one end-to-end implementation? How long does it take to prepare DEV, QAS and PRD?

    What is the average duration of one full SAP life cycle or one end-to-end implementation? How long does it take to prepare DEV, QAS and PRD in any company?

    Anand,
    let me start by saying that the question you ask may not help you determine the duration of your project. As Ryan and others stated, the duration of the project is highly dependent on the scope of the solution you are implementing, the geographical scope, the amount of modifications/enhancements, the number of languages, the number of users that need to be trained, the amount of standard processes the customer is able to re-use in the implementation, and many other factors (like the quality of the implementation contractor you choose and the availability of customer and implementor resources). I could probably go on for another couple of lines, but I guess you get the idea.
    With that out of the way, let's talk about some example implementations that will give you an idea - Ryan did a great job outlining what I would call the traditional approach above. I have a couple of examples where customers leveraged innovative deployment strategies that are available today. In particular, the project teams leveraged pre-packaged services like RDS, World Template or Best Practices as their baseline solution and built from there. A second acceleration technique customers now leverage is deployment in the SAP HANA Enterprise Cloud, which shortens the initial setup of the system and thus turns traditional blueprinting into a scope-validation exercise that further shortens the time. There are other acceleration techniques we see applied in some cases, like iterative implementation of the delta requirements on top of the baseline solution.
    Let me offer a few examples to illustrate what I explained above:
    An ERP implementation at Schaidt Innovations with a 3-month deployment of the ERP solution using ERP RDS as a baseline (you can view their testimonial here - Schaidt Innovations: SAP ERP on HANA in the cloud - YouTube).
    A customer in Asia with a global template deployment that leveraged SAP ERP for Manufacturing, deployed to the cloud and rolled out to 9 countries (Japan, Korea, China, Taiwan, Hong Kong, UK, Germany and US). The initial deployment took 4 months for the first country and 2 months for the rollout to the additional 8 countries - so 6 months in total. The original plan using the traditional approach, with a full blueprint and heavy configuration, was estimated at more than double the actual deployment time.
    There are many other examples where customers follow the assemble-to-order delivery model for their project and gain significant benefits doing so. I suggest you review some of the recordings we did in 2013 about this approach, and if you are a member of ASUG, review the Agile ASAP sessions we did for the ASUG PM SIG.
    Link to webinars: SAP A2O Webinar Series Schedule
    Let me know if you have any questions.
    Jan

  • HT2736 I have purchased an iTunes voucher through iTunes, but it has not yet come through to my email. How long does it take? And what do I do if it doesn't come?

    I have purchased an iTunes gift voucher through iTunes, but I have not yet received it. How long do they take to come through? What do I do if it doesn't come?

    Place the iOS device in Recovery Mode and then connect to your computer and restore via iTunes. The iPod will be erased.
    iOS: Wrong passcode results in red disabled screen
    If recovery mode does not work try DFU mode.
    How to put iPod touch / iPhone into DFU mode « Karthik's scribblings

  • How do I move what is on my startup memory over to the other memory modules?

    I installed extra memory into my Mac. I have used up all of the startup memory (per the notice that comes up every time memory is accessed). How do I move what is on my startup memory over to the other memory modules?

    Activity Monitor's window shows the RAM being used; it does not show how much of your hard drive is being used. Here is how you find out how much of your hard drive is being used:
    Control-click on your hard drive icon and you will get a Get Info window:
    On my MBP, the hard drive is 239 GB of which 59 GB are used and 179 GB are available.
    Please post your hard drive get info screenshot.

  • I just got one of the new iMacs. Following the on-screen instructions, I started Migration Assistant, and I guess it connected via Wi-Fi to my old iMac. Now it seems to have stopped. What can I do to not mess it all up? How long will it take on Wi-Fi?

    I just got a new iMac. I was following the on-screen instructions to do Migration Assistant, and I guess it connected to my old iMac via Wi-Fi. It never instructed me to connect via FireWire. Now it seems to have stopped. What can I do that won't completely mess it up? How long will it take to do over Wi-Fi?

    Your new iMac does not have a FireWire port, so you need this adapter:
    http://store.apple.com/us/product/MD464ZM/A/apple-thunderbolt-to-firewire-adapter
    to connect the two machines together. To migrate, simply follow the instructions in:
    http://support.apple.com/kb/HT4413?viewlocale=en_US&locale=en_US#1

  • HT201364 How easy is it to upgrade to Mavericks from 10.6.8? How long does it take and what benefits do you get, please?

    How long does it take to upgrade to Mavericks, and what are the benefits for the 10.6.8 user? Is it a straightforward download? Thank you, Kevin
    What application do I select to facilitate the download?

    Check here for compatibility of 3rd party Software you may be using...
    http://roaringapps.com/apps:table
    Also note that Rosetta is no longer supported in Mavericks, Mountain Lion and Lion.
    kevart wrote:
    How long does it take to upgrade to Mavericks and what benefits for the 10.6.8 user...
    MAVERICKS...
    Check here to see if your Mac is compatible...
    http://www.apple.com/osx/how-to-upgrade/
    If so... would recommend at least 4GB of RAM... if not 8 GB
    It is important to get the Correct and Matching RAM
    See Here  >  OWC RAM  >  http://www.macsales.com
    The above site also has videos on how to Install RAM should you need it...
    NOTE:
    It is both Prudent and Recommended to Backup Before any Major Update or Upgrade.
    Mac World Installing Mavericks.
    http://www.macworld.com/what you need-to-know.html

  • How do I get out of a kernel panic? Kernel panic upon boot. I did that hard drive cleanup and reinstalled Lion. What about memory? Does anyone think I need to upgrade my memory? I have 8 GB.

    I get a kernel panic upon boot. I did that hard drive cleanup and reinstalled Lion. What about memory? Does anyone think I need to upgrade my memory? I have 8 GB of upgraded RAM. (20-inch iMac Core 2 Duo)

    If you're able to boot, launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ If you’re running Mac OS X 10.7 or later, open LaunchPad. Click Utilities, then Console in the page that opens.
    Select the most recent panic log under System Diagnostic Reports. Post the contents — the text, please, not a screenshot. For privacy’s sake, I suggest you edit out the “Anonymous UUID,” a long string of letters, numbers, and dashes in the header and body of the report, if it’s present (it may not be.) Please don't post "shutdownStall" or "hang" reports.
    If you can't boot in the usual way, try a safe boot. The instructions provided by Apple are as follows:
    Be sure your Mac is shut down.
    Press the power button.
    Immediately after you hear the startup tone, hold the Shift key. The Shift key should be held as soon as possible after the startup tone, but not before the tone.
    Release the Shift key when you see the gray Apple icon and the progress indicator (looks like a spinning gear).
    During startup, you’ll see a progress bar, and then the login screen, which appears even if you normally log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin.
    Safe mode is slower than normal, and some things won’t work at all.
    Note: If FileVault is enabled under Mac OS X 10.7 or later, you can’t boot in safe mode.

  • How to identify all the events that are created for background jobs?

    Hi all,
    how can I identify all the events that are created for background jobs? And which events get triggered for a particular job?
    Thanks,
    Haritha

    Hi Haritha,
    A job is a program that starts at a determined point in time and executes some standard programs in the system. Jobs can be planned for a determined point in time on a regular basis (every night, for example) or for discrete moments. So a job can be planned and then started automatically, without a manual start.
    Real-time programs, by contrast, usually mean program execution started by somebody at the current moment.
    Typically, jobs start special processes that should be executed automatically and regularly: for example, IDoc processing, correction reports, statistics updates, etc.
    Standard jobs are those background jobs that should be run regularly in a production SAP system. These jobs usually clean up parts of the system, such as by deleting old spool requests.
    As of Release 4.6C, the Job Definition transaction (SM36) provides a list of important standard jobs, which you can schedule, monitor, and edit.
    For more information you can go through the following link:
    http://help.sap.com/saphelp_nw70/helpdata/en/24/b884388b81ea55e10000009b38f842/frameset.htm
    About Events:
    Events have meaning only in the background processing system. You can use events only to start background jobs.
    Triggering an event notifies the background processing system that a named condition has been reached. The background processing system reacts by starting any jobs that were waiting for the event.
    Types of Events:
    There are two types of events:
    1.) System events are defined by SAP. These events are triggered automatically when system changes such as the activation of a new operation mode take place.
    2.) User events are events that you define yourself. You must trigger these events yourself from ABAP or from external programs. You could, for example, signal the arrival of external data to be read into the SAP system by using an external program to trigger a background processing event. The event scheduler processes an event if the event is defined in the system.
    For example, if a system (System 1) receives an event from another system (System 2), the event scheduler of System 1 processes the event only if it is defined in System 1. That event does not need to be defined in System 2 (the sending system).
    You define an event by assigning a name (EVENTID) to it. When defining an event, you do not define the event arguments.
    For more information you can go through the following link:
    http://help.sap.com/saphelp_nw04s/helpdata/en/fa/096e2a543b11d1898e0000e8322d00/frameset.htm
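    As a language-agnostic illustration of the pattern described above (this is not ABAP and not SAP's actual scheduler): jobs register against a named event ID, and triggering the event starts every job waiting on it, while events not defined in the receiving system are ignored, mirroring the System 1 / System 2 behaviour.

```python
# A toy model of event-driven job scheduling: jobs wait on a named
# EVENTID and run when it is triggered. Purely illustrative.
from collections import defaultdict
from typing import Callable

waiting_jobs: dict[str, list[Callable[[], None]]] = defaultdict(list)
defined_events = {"SAP_DATA_ARRIVED"}  # events must be defined locally

def schedule_after_event(event_id: str, job: Callable[[], None]) -> None:
    """Register a job to start when event_id is triggered."""
    waiting_jobs[event_id].append(job)

def trigger_event(event_id: str) -> None:
    """Like the event scheduler: run jobs only for defined events."""
    if event_id not in defined_events:
        return  # undefined in this system: the event is ignored
    for job in waiting_jobs.pop(event_id, []):
        job()

schedule_after_event("SAP_DATA_ARRIVED", lambda: print("loading IDocs"))
trigger_event("SAP_DATA_ARRIVED")   # runs the waiting job
trigger_event("UNKNOWN_EVENT")      # silently ignored
```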
    When you schedule a process chain or InfoPackages, the associated jobs run in background mode. In case you want to create a job for a specific activity, you can do so in SM36. You would be creating jobs that get executed with any one of the options:
    1. Immediate
    2. Date & time
    3. After event
    4. After job
    5. At operation mode
    In case you want to view the job logs, go to SM37.
    Also, please check DB02 for database performance and ST03 for workload.
    Analyse these and you will have an idea.
    Please assign points if the info is useful.
    Regards
    CSM Reddy

  • How can I disable the memory test when I start up Photoshop? My computer freezes for two minutes and then Photoshop starts. I know there is nothing wrong with the memory in my computer.

    When I start up PS I stare at the startup logo for two minutes and see that my memory is being checked.
    How can I disable this memory check in PS?
    This problem started a few days ago, and this version of PS has been working on my computer for almost a year without problems.
    I have checked my hardware very intensively and there is nothing wrong with my installed RAM.
    Here is my system info (originally in Dutch; I am from the Netherlands and know only a little English).
    Adobe Photoshop Version: 13.0.1 (13.0.1.3 20131024.r.34 2013/10/24:21:00:00) x64
    Operating system: Windows 7 64-bit
    Version: 6.1 Service Pack 1
    System architecture: Intel CPU family: 6, Model: 10, Stepping: 5 with MMX, SSE integer, SSE FP, SSE2, SSE3, SSE4.1, SSE4.2, HyperThreading
    Physical processor count: 4
    Logical processor count: 8
    Processor speed: 3073 MHz
    Built-in memory: 16375 MB
    Available memory: 13944 MB
    Memory available to Photoshop: 14750 MB
    Memory used by Photoshop: 93%
    Image tile size: 128 kB
    Image cache levels: 4
    OpenGL drawing: Enabled.
    OpenGL drawing mode: Standard
    OpenGL Allow Normal Mode: True.
    OpenGL Allow Advanced Mode: True.
    OpenGL Allow Old GPUs: Not Detected.
    Video card vendor: NVIDIA Corporation
    Video card renderer: GeForce GTX 460/PCIe/SSE2
    Display: 2
    Display bounds: top: 0, left: -1600, bottom: 1200, right: 0
    Display: 1
    Display bounds: top: 0, left: 0, bottom: 1200, right: 1920
    Video card number: 1
    Video card: NVIDIA GeForce GTX 460
    OpenCL not available
    Driver version: 9.18.13.4475
    Driver date: 20141112000000.000000-000
    Video card driver: nvd3dumx.dll,nvwgf2umx.dll,nvwgf2umx.dll,nvd3dum,nvwgf2um,nvwgf2um
    Video mode: 1920 x 1200 x 4294967296 colors
    Video card caption: NVIDIA GeForce GTX 460
    Video card memory: 1024 MB
    Video rectangle texture size: 16384
    Serial number: 92298830452574092077
    Application folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\
    Temporary file path: C:\Users\gert\AppData\Local\Temp\
    Photoshop scratch has async I/O enabled
    Scratch volume(s):
    C:\, 223.5 GB, 112.9 GB free
    Required plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\Required\
    Primary plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\Plug-ins\
    Additional plug-ins folder: not set

    Photoshop doesn't do a "memory test".
    When Photoshop says "measuring memory" during startup, it is asking the OS how much RAM is available and what hard disks are available for scratch disks.
    If you have network volumes mounted that are unavailable, then the OS may take a while to time out on accessing those drives.
    The same goes if you have a bad USB drive connected.
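    If you want to confirm that a hung volume (network share or bad USB drive) is what stalls the startup scan, a quick probe can time a simple directory read of each drive root; any drive that takes seconds instead of milliseconds is the likely culprit. A minimal sketch for Windows (the drive-letter scan is an assumption of this example, not anything Photoshop does):

```python
# probe_drives.py - time access to each drive root; a volume that takes
# seconds to answer is likely what stalls Photoshop's startup scan.
import os
import string
import time

for letter in string.ascii_uppercase:
    root = f"{letter}:\\"
    start = time.perf_counter()
    try:
        entries = os.listdir(root)     # touch the volume
        status = f"ok ({len(entries)} entries)"
    except OSError as exc:
        status = f"skipped ({exc.strerror})"   # unassigned or failing
    elapsed = time.perf_counter() - start
    print(f"{root} {elapsed * 1000:8.1f} ms  {status}")
```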

  • What is BI? How do we implement it, and what is the cost to implement?

    What is BI? How do we implement it, and what is the cost to implement?
    Thanks,
    Sumit.

    Hi Sumit,
    below is a description addressing your query.
    Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making. The process is outlined below.
    The five key stages of Business Intelligence:
    1. Data Sourcing
    2. Data Analysis
    3. Situation Awareness
    4. Risk Assessment
    5. Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:
    • probability theory - e.g. classification, clustering and Bayesian networks;
    • statistical methods - e.g. regression;
    • operations research - e.g. queuing and scheduling;
    • artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.).  Situation awareness is the grasp of  the context in which to understand and make decisions.  Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales, customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
    • Data Sources
    • Data Warehouses
    • Data Marts
    • Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and logical inventory was conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory was comprised of a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
    The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
    We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
    This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a "landing zone" for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against appropriate data quality business rules/standards which are required to perform the data, information and statistical modeling described previously.
    Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejected and corrected records to occur in the operational system, if this is not possible then start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
    We have found that a powerful and flexible extraction, transformation and loading (ETL) process is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
    • The extraction of data from operational data stores
    • The transformation of this data
    • The loading of the extracted data into your data warehouse or data mart
    When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
    Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
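    To make the "views as data quality filters without procedural code" idea concrete, here is a minimal, self-contained sketch using SQLite in place of the host DBMS; the table, column and rule names are invented for illustration. The ETL load step then reads from the view rather than the raw table:

```python
# view_filter.py - a SQL view as a declarative data-quality filter:
# only rows passing the rules are visible to the downstream ETL load.
# SQLite stands in for the host DBMS; all names are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE raw_funding (
        project_id TEXT, dept_code TEXT, amount NUMERIC
    );
    INSERT INTO raw_funding VALUES
        ('P001', 'CS',   125000),
        ('P002', NULL,    80000),   -- incomplete hierarchy: rejected
        ('P003', 'MATH',     -50);  -- invalid amount: rejected

    -- the data-quality rules live in the view, not in procedural code
    CREATE VIEW clean_funding AS
    SELECT project_id, dept_code, amount
    FROM raw_funding
    WHERE dept_code IS NOT NULL AND amount >= 0;
""")

# the ETL "load" step reads the view, so rejected rows never arrive
for row in con.execute("SELECT * FROM clean_funding"):
    print(row)   # only ('P001', 'CS', 125000) passes
```

    Whether this SELECT runs on the host (a materialized view) or on the target side is then purely a deployment choice, as the text above describes.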
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
    • Customer "visible" relational tables
    • OLAP cubes
    • Pre-determined parameterized and non-parameterized reports
    • Ad-hoc reports
    • Spreadsheet applications with pre-populated work sheets and pivot tables
    • Data visualization graphics
    • Dashboards/scorecards for performance indicator applications
    Typically a business intelligence (BI) project may be scoped to deliver an agreed-upon set of data marts in a project. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
    Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage of this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
    McMaster's Data Warehouse design
    [Diagram: DB2 and Oracle operational sources flow through a staging area via ETL into development, test and production warehouses; further ETL builds data marts (SAS data sets), which users access through DB2/Oracle BI tools. There is no direct user access to the warehouses themselves.]
    Publication Services
    This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
    Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    • Access and view SAS® data sources
    • Access and view any other data source that is available from a SAS® server
    • Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
    SECURITY - AUTHENTICATION AND AUTHORIZATION
    The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
    • Data identification
    • Data classification
    • Value of the data
    • Identifying any data security vulnerabilities
    • Identifying data protection measures and associated costs
    • Selection of cost-effective security measures
    • Evaluation of effectiveness of security measures
    At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific security policy (who you are). Authorization is the process of determining which permissions the user has for which resources (permissions). Authentication is also a prerequisite for authorization. At McMaster, business intelligence (BI) services that are not public require a sign-on with a single university-wide login identifier which is currently authenticated using Microsoft Active Directory. After a successful authentication the university login identifier can be used by the SAS® Metadata Server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
    At McMaster aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Through using SAS® Information Map Studio which is an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or SAS® Information Delivery Portal (ability to view only). Previously access to data residing in DB-2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent and can cross databases and they hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
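    The role-based filtering described above boils down to: resolve the login to an identity in a common directory, then apply the role-appropriate predicate to the data mart query. A minimal sketch of that flow, in which the directory, roles, and data are all invented for illustration (the real implementation is a SAS® stored process, as the text says):

```python
# role_filter.py - the stored-process idea in miniature: look up the
# login in a directory, then filter the data mart by what that person
# is allowed to see. All names and data here are illustrative.
directory = {  # stands in for the common university directory
    "jsmith": {"employee_no": 1001, "role": "investigator"},
    "mlee":   {"employee_no": 2002, "role": "dean", "faculty": "Science"},
}

research_mart = [  # stands in for the research funding data mart
    {"project": "P7", "pi_employee_no": 1001, "faculty": "Science"},
    {"project": "P8", "pi_employee_no": 3003, "faculty": "Humanities"},
]

def visible_rows(login: str) -> list:
    """Apply the role-appropriate predicate for this login."""
    who = directory[login]
    if who["role"] == "investigator":
        # investigators see only their own projects
        return [r for r in research_mart
                if r["pi_employee_no"] == who["employee_no"]]
    if who["role"] == "dean":
        # deans see everything in their faculty
        return [r for r in research_mart
                if r["faculty"] == who["faculty"]]
    return []  # no recognized role: no rows

print(visible_rows("jsmith"))  # only project P7
print(visible_rows("mlee"))    # all Science projects
```

    Because the predicate is derived from the directory at query time rather than baked into the report, job changes and new appointments are reflected automatically, which is the property the text highlights.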
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform which will allow us to scale horizontally in the future:
    Our development environment consists of a server with 2 x Intel Xeon 2.8 GHz processors and 2 GB of RAM, running Windows 2000 Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
    - 4-way 64-bit 1.5 GHz Itanium 2 server
    - 16 GB RAM
    - 2 x 73 GB drives (RAID 1) for the OS
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for Itanium
    2. Mid-Tier (Web) Server
    - 2-way 32-bit 3 GHz Xeon server
    - 4 GB RAM
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for x86
    3. SAN Drive Array (modular and can grow with the warehouse)
    - 6 x 72 GB drives (RAID 5), 360 GB total for SAS® and data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
    2. Mid-Tier Server
    - SAS® Web Report Studio
    - SAS® Information Delivery Portal
    - BEA Web Logic for future SAS® SPM Platform
    - Xythos Web File System (WFS)
    3. Client-Tier Server
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
    Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to the integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example, by department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
    As well, we will begin the implementation of SAS® Strategic Performance Management software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on the inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and possible detractors, and enable 'evidence-based' decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to
