Improve cube processing time

Hi,
I am processing a cube with ProcessFull, and it takes around 15 minutes. I planned to improve this by adding indexes to the database. I ran a SQL trace while processing and then fed the results to the Database Engine Tuning Advisor to get recommendations. I applied the recommended changes to the database and processed the cube again, but there is no improvement. I even pulled the most expensive queries from the trace and ran them in SSMS with the query execution plan, but I am not able to figure it out. Please help.
Thanks
sush

Hi susheel1347,
According to your description, you want to improve the cube processing. Right?
In Analysis Services, cube processing is performed by executing Analysis Services-generated SQL statements against the underlying relational database. Here is some advice on improving cube processing:
Use integer keys if at all possible
Use query binding to optimize processing
Partition measure groups if you have a lot of data
Use ProcessData and ProcessIndex instead of ProcessFull
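For example, the last point can be issued as an XMLA batch from SSMS. This is only a minimal sketch; the DatabaseID/CubeID values (AdventureWorksDW/Sales) are placeholders for your own object IDs:

  <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <!-- Phase 1: read the relational data into the cube, no aggregations yet -->
    <Process>
      <Object>
        <DatabaseID>AdventureWorksDW</DatabaseID>
        <CubeID>Sales</CubeID>
      </Object>
      <Type>ProcessData</Type>
    </Process>
    <!-- Phase 2: build aggregations and bitmap indexes from the stored data -->
    <Process>
      <Object>
        <DatabaseID>AdventureWorksDW</DatabaseID>
        <CubeID>Sales</CubeID>
      </Object>
      <Type>ProcessIndexes</Type>
    </Process>
  </Batch>

Splitting the two phases lets you tune each separately: ProcessData stresses the relational source (where your indexes matter), while ProcessIndexes is CPU-bound on the Analysis Services box.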
For more information, please refer to links below:
SQL Server Best Practices Article
Improving cube processing time
If you have any question, please feel free to ask.
Simon Hou
TechNet Community Support

Similar Messages

  • Improving WIP Calculation/Settlement/Revaluation Processing Time

    Hi,
    Currently, we need about 2 days to process the above-mentioned jobs in SAP.
    At any point, we have about 50,000 active production orders, each with a routing of more than 300 operations.
    Does anyone have any high-end solutions/options to improve our processing time?
    Thanks,
    Teo

    Hi,
    A runtime of two days is way too long. Please check note 393686 first and maybe 545932.
    Regards, Michael

  • Issue with processing time of JDBC receiver adapter

    Hi all,
    We are using PI 7.1 EHP1.
    We have an issue with JDBC receiver adapter taking too much time to process messages.
    We are using XML SQL format message protocol with INSERT_UPDATE as the document format.
    Each message can contain multiple records, i.e. 1 single message can result in many updates/inserts.
    Currently, the time taken to process is 6-12 seconds, which is quite high considering the messages are not very large in size.
    We sent the statements to the Oracle DBA to see if anything about the queries being used is causing issues. Awaiting inputs.
    In the meantime, I wanted to check if there is anything that can be done from the PI side to help us improve the processing time.
    Thanks in Advance,
    Sailaja.

    Hi,
    I think the main cause is the query taking a long time to execute in the database.
    -> Increase the read timeout and response timeout in the JDBC receiver adapter.
    In the advanced-mode table section of the sender and receiver channel configurations, we can set driver properties for each DB connection. Any such property has to carry the prefix 'driver:' (without quotes).
    For the Oracle JDBC thin driver (version 10.2.0.3), the property oracle.jdbc.ReadTimeout sets the read timeout while reading from the socket, and oracle.net.CONNECT_TIMEOUT sets the login timeout. To set these two properties, add the rows:
    driver:oracle.jdbc.ReadTimeout 1000
    driver:oracle.net.CONNECT_TIMEOUT 1000
    Both timeout properties are in milliseconds.
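    If you want to verify the timeout behaviour outside PI, the same two driver properties can be exercised in a small standalone JDBC test. This is a sketch only; the host, service name, credentials, and timeout values are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class OracleTimeoutTest {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "pi_user");        // placeholder credentials
            props.setProperty("password", "secret");
            // Same properties the channel sets with the 'driver:' prefix; values in ms
            props.setProperty("oracle.jdbc.ReadTimeout", "30000");
            props.setProperty("oracle.net.CONNECT_TIMEOUT", "10000");

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", props)) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }

    Note that in the PI channel the properties must carry the driver: prefix; in plain JDBC code they are passed as ordinary connection properties, as shown.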
    Refer to note 1078420 for more details.
    Please also go through this blog; I hope it will help you:
        http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c059d583-a551-2c10-e095-eb5d95e03747
    regards,
      ganesh

  • Slow processing time, via Command line, with Reader

    I'm using Reader 9, via command line, to process PDFs in a 3rd party application and it's taking much longer to process files this way than with Acroplot or Ghostscript.  Is there a way to improve this processing time?
    thanks

    Ladies and gentlemen.....I have solved my problemo!
    The reason why the rest of the reports were getting the 'could not open file.' error was due to the fact that each discoverer instance launched was
    trying to access the same standard log file simultaneously.
    I modified each cmd file to write logging info to individual files: eg
    /logfile "H:\Projects\DRP Import Modelling\Automation\input2_log.txt"
    Now that there's no contention with logging, the reports are firing off beautifully in parallel :)
    I am definitely having a beer this evening!
    Thanks for everyone's input...kept me on the righteous path :)

  • How to reduce process time in report

    Hi all..
    Is there any technique to reduce report processing time on the programmer's side?
    Please help me...

    Hi
    Check this and ensure that your code is as per the standards:
    1) Don't use nested SELECT statements.
    2) If possible, use the FOR ALL ENTRIES addition.
    3) In the WHERE clause, make sure you supply all the primary key fields.
    4) Use an index for the selection criteria.
    5) You can also use inner joins.
    6) You can put the data from the first SELECT statement into an internal table, and then select the data from the second table using FOR ALL ENTRIES.
    7) Use the runtime analysis (SE30) and SQL Trace (ST05) to measure performance and to identify where the load is heavy, so that you can change the code accordingly.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d0db4c9-0e01-0010-b68f-9b1408d5f234
    ABAP performance depends upon various factors and is divided into three parts:
    1. Database
    2. ABAP
    3. System
    Run any program using SE30 (runtime analysis); to improve performance, refer to the Tips and Tricks section of SE30. Always remember that ABAP performance improves when there is the least load on the database.
    You can get an interactive graph of this in SE30, with a file.
    Also, if you want to measure the runtime of parts of the code, you can switch on the runtime analyzer dynamically within ABAP:
    *To turn runtime analysis on within ABAP code, insert the following line
    SET RUN TIME ANALYZER ON.
    *To turn runtime analysis off within ABAP code, insert the following line
    SET RUN TIME ANALYZER OFF.
    Always check that the driver internal table is not empty when using FOR ALL ENTRIES.
    Avoid FOR ALL ENTRIES in JOINs.
    Try to avoid joins and use FOR ALL ENTRIES instead.
    Try to restrict joins to one level only, i.e. only two tables.
    Avoid using SELECT *.
    Avoid having multiple SELECTs from the same table in the same object.
    Try to minimize the number of variables to save memory.
    The sequence of fields in the WHERE clause must match the primary/secondary index (if any).
    Avoid creating indexes as far as possible.
    Avoid operators like <>, >, < and LIKE '%...' in WHERE clause conditions.
    Avoid SELECT/SELECT SINGLE statements in loops.
    Try to use BINARY SEARCH in READ TABLE; ensure the table is sorted before using BINARY SEARCH.
    Avoid using aggregate functions (SUM, MAX, etc.) in SELECTs (GROUP BY, HAVING).
    Avoid using ORDER BY in SELECTs.
    Avoid nested SELECTs.
    Avoid nested loops over internal tables.
    Try to use field symbols.
    Try to avoid INTO CORRESPONDING FIELDS OF.
    Avoid using SELECT DISTINCT; use DELETE ADJACENT DUPLICATES instead.
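    As a minimal ABAP sketch of the FOR ALL ENTRIES and BINARY SEARCH points above (the standard SD tables VBAK/VBAP are used purely for illustration; adapt the names and the WHERE condition to your own report):

    TYPES: BEGIN OF ty_order,
             vbeln TYPE vbak-vbeln,
           END OF ty_order,
           BEGIN OF ty_item,
             vbeln TYPE vbap-vbeln,
             posnr TYPE vbap-posnr,
             matnr TYPE vbap-matnr,
           END OF ty_item.
    DATA: lt_orders TYPE STANDARD TABLE OF ty_order,
          lt_items  TYPE STANDARD TABLE OF ty_item.

    * Explicit field list instead of SELECT *
    SELECT vbeln FROM vbak
      INTO TABLE lt_orders
      WHERE erdat >= '20110101'.

    * Guard: FOR ALL ENTRIES with an empty driver table selects ALL rows
    IF lt_orders IS NOT INITIAL.
      SELECT vbeln posnr matnr FROM vbap
        INTO TABLE lt_items
        FOR ALL ENTRIES IN lt_orders
        WHERE vbeln = lt_orders-vbeln.
    ENDIF.

    * Sort before BINARY SEARCH, otherwise the READ result is undefined
    SORT lt_items BY vbeln posnr.
    READ TABLE lt_items WITH KEY vbeln = '0000004711' posnr = '000010'
         TRANSPORTING NO FIELDS BINARY SEARCH.
    IF sy-subrc = 0.
      " item exists
    ENDIF.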
    Check the following links:
    Re: performance tuning
    Re: Performance tuning of program
    http://www.sapgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    Performance tuning for Data Selection Statement:
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Check also http://service.sap.com/performance
    and books like
    http://www.sap-press.com/product.cfm?account=&product=H951
    http://www.sap-press.com/product.cfm?account=&product=H973
    http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm
    Debugger
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
    http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
    http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
    Run Time Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
    SQL trace
    http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
    CATT - Computer Aided Testing Tool
    http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
    Test Workbench
    http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
    Coverage Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
    Runtime Monitor
    http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
    Memory Inspector
    http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
    ECATT - Extended Computer Aided testing tool.
    http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
    You can go to transaction SE30 to run a runtime analysis of your program. Also try transaction SCI, which is the SAP Code Inspector.
    Regards
    Anji

  • How to improve the execution time of my VI?

    My VI does data processing for hundreds of files and takes more than 20 minutes to complete. The setup is: first I use the directory LIST function to list all the files in a directory into a string array. Then I index this string array into a for loop, in which each file is opened one at a time, and some other subVIs are called to do data analysis. Is there a way to improve my execution time? Maybe loading all files into memory at once? It would also be nice to know which section of my VI takes the longest time. Thanks for any help.

    Bryan,
    If "read from spreadsheet file" is the main time hog, consider dropping it! It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (you can double-click it and look at the "guts" )
    If the files come from a just-executed "list files", you can assume the files all exist and you want to read them in one single sweep. All that extra detailed error checking for valid filenames is not needed, and you never want it to e.g. pop up a file dialog if a file goes missing, but simply skip it silently. If open generates an error, just skip to the next in line. Case closed.
    I would do a streamlined low-level "open->read->close" for each and do the "spreadsheet string to array" in your own code, optimized to the exact format of your files. For example, notice that "read from spreadsheet file" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.
    Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files, it will read MUCH faster and take up less disk space.
    LabVIEW Champion. Do more with less code and in less time.

  • Processing time in NWBC

    We are seeing real delays in processing time in NWBC compared to the regular SAP GUI. Is there anything we can do to improve this performance?
    Vickie

    Are you aware of the Performance Aspects chapter (and its subchapters) of the application help? The first thing that comes to mind is that NWBC does not use SAP GUI for Windows to render classic dynpro transactions, but SAP GUI for HTML, aka WebGUI. Please also share the version and patch level of your NWBC for Desktop, the SAP_BASIS version and SP level, and the NWBC ABAP runtime version and patch level.

  • SSAS 2008 - How to get processing times per dimension / measure group?

    Hi experts!
    SSAS 2008. I am doing analysis and I'm trying to get information (from dmv or log) about processing times per dimension / measure group. Any ideas how to do that?
    Thanks,

    Also, in the DMVs there is no column recording the processing time, so we suggest using SSAS AMO to programmatically get the state and last-processed date/time. Please see:
    Analysis Management Objects (AMO)
    Hi John,
    Thanks for your info. As Simon suggested, there are no DMV columns available.
    You can use the link below for more information:
    Programming Administrative Tasks with AMO
    Cube partition attributes record the last-processed timestamp and status.
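    As a rough C# sketch with AMO (Microsoft.AnalysisServices.dll); the server and database names are placeholders:

    using System;
    using Microsoft.AnalysisServices;   // AMO

    class LastProcessedReport
    {
        static void Main()
        {
            var server = new Server();
            server.Connect("Data Source=localhost");            // your SSAS instance

            Database db = server.Databases["AdventureWorksDW"]; // placeholder DB name
            foreach (Cube cube in db.Cubes)
                foreach (MeasureGroup mg in cube.MeasureGroups)
                    foreach (Partition p in mg.Partitions)
                        // State is Processed/Unprocessed; LastProcessed is a DateTime
                        Console.WriteLine("{0} / {1}: {2}, last processed {3}",
                            mg.Name, p.Name, p.State, p.LastProcessed);

            server.Disconnect();
        }
    }

    Comparing LastProcessed timestamps across objects processed in sequence gives a rough per-object duration; for exact timings you still need a Profiler/AS trace.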
    Thanks
    Suhas

  • Downsizing a fat ODS to optimize the processing time!

    Hello all,
    Working with SAP NetWeaver BI 7.0: a custom FI-CO OPC ODS with 300,000,000 records extracted from SAP R/3 was enhanced with 2 new fields, and since then it has required very long processing times - about 20 hours - and sometimes the Basis team needs to kill it! This ODS has been loading one cube, and the newly enhanced fields will be loaded to another cube.
    As a newbie, and also new to SAP BI at this project, I've been asked to analyze the model and propose a better option to optimize the loading process. Options include: should this ODS be deleted and the data extracted straight to the cubes, or should the ODS be downsized by removing its old records and saving them in a new ODS?
    Can anyone come up with a bright idea?
    Thanks in advance

    Hi all,
    Thanks for all replies (Alex & Jorge)…
    But I've just learned today that the problem above is not exactly what is going on! In reality the problem is not processing overtime while loading the COPS ODS; rather, after the 2 new attributes were added to the COPS ODS structure, the "Transport" of it is taking very long, caused by BW needing to re-index the ODS since the 300,000,000 records are already there! As I said before, I need to redesign the model and suggest some good options to solve the problem. Some options could be: deleting data from the ODS and keeping the InfoCubes; deleting data prior to the current fiscal year from the COPS ODS and keeping historical InfoCubes; updating everything, then performing logical partitioning, keeping details in the ODS and aggregated data in InfoCubes; or moving all COPS ODS data to a temporary store, applying the OPC changes, and then moving the data back!
    I need to show the advantages and disadvantages of my options and "must choose" the best solution!
    Please, any good suggestion?
    Many thanks in advance
    Bia

  • Report (not Page) caching to improve report loading times

    We are trying out Crystal Reports Server 2008 as a replacement for Crystal Reports XI R2 for an ASP.NET web application. Running some tests, we found that the same report loaded from the server was far slower than loading it locally using the CR XI SDK.
    Since the Clone/Refresh strategy on the standalone SDK is several times faster than reportAppFactory.OpenDocument on the server platform, I think the server is loading the report from scratch each time I execute the OpenDocument method. So what I am searching for is: is there a way to set up or tweak report caching on Crystal Server 2008? What I need is not page caching but whole-report caching; the report uses a different database each time it is executed (implying different data for each execution).
    Some insight on the problem:
    As I posted in a previous thread, we are moving away from CR XI due to a limit on the number of active instances supported by the SDK (74 instances). We reached this limit because of a workaround for improving report execution times: leaving instances of each report active so the next execution just recycles the active instance, greatly reducing the overall process time. Therefore, introducing Crystal Server is a setback for this module (due to its actual performance).
    Part of the problem resides in the complexity of the report. Using the standalone SDK, the report takes several seconds to load from disk and several seconds to execute. Regrettably, we are not at liberty to change the report's structure, so optimizing it is beyond possibility for now.
    Thanks in advance,
    Gustavo

    Hello Maggie,
    >> How can I get an all-encompassing CSV file of the main report (Page 2014)?
    This might be possible using the advanced print server configuration, with BI Publisher, using the same technique that is being used to print master-details reports (which is a type of a multi-region report) - http://www.oracle.com/technology/products/database/application_express/html/configure_printing.html . The standard print server configuration only supports reports with a single region. If you have BIP in your organization, that’s great. Otherwise, CSV files don’t warrant it.
    The only other option, I can see, is to create the CSV file manually, using the technique described in the following Blog entry, by Scott Spendolini - http://spendolini.blogspot.com/2006/04/custom-export-to-csv.html .
    Regards,
    Arie.

  • How can I improve the response time of the user interface?

    I'm after some tips on how to improve the response time to mouse clicks on a VI front panel.
    I have a data acquisition application which used to run fine, but after spending a couple of weeks making a whole bunch of changes to it, I find that the user interface has become a bit sluggish.
    My main GUI VI has a while loop running 16 times a second, updating some waveform charts and polling about a dozen buttons on the front panel.
    There is sometimes a delay (variable, but up to 2 seconds sometimes) from when I click on a button to when it becomes depressed. I have wired the iteration terminal of the while loop to an indicator on the front panel and can see that the while loop is ticking over during the delayed response to the mouse click, so I know that the problem is not that the whole program is running slow, just the response to mouse clicks.
    Also, just for debugging purposes, I have indicators of the iterations of all the main while loops in my program on the front panel, so I can see that there are no loops running abnormally fast either.
    One thing I've tried is to turn off multi-threading, and this does seem to work - the response to mouse clicks is much faster. However, it has the side effect of making the main GUI while loop run less evenly. I was trying to get a fairly smooth waveform scrolling across the screen, and when multi-threading is off it gets a bit jerky.
    Any other suggestion welcome..
    (I am using LabVIEW 7.1, Windows 2000).
    Regards,
    Mark.

    Hi Altenbach,
    Thanks for your reply. In answer to your questions:
    I am doing both DAQ board and serial data acquisition. I am using NIDAQ traditional for the DAQ board, and VISA for the serial. I have other similar versions of this program that do only DAQ board, or only serial, and these work fine. It was only when I combined them both into the same program that I ran into problems.
    The multiple while loops are actually in separate VIs. I have one VI that acquires data from the DAQ card, another VI that acquires data from the serial port, another VI that processes the data and saves to file, and another VI, the GUI VI, that displays the data in graphs and charts. The data is transferred between the VIs via LV2 globals.
    The GUI VI is a bit more complicated than I first mentioned. It has a tab control, with 4 waveform charts on one page, 4 waveform graphs on another page, and 3 waveform graphs on another page. The charts have a history length of 2560, and 16 data points are added 16 times a second. The waveform graphs are only updated once per minute.
    I don't use the value property at all, but I do use lots of property nodes for changing the properties of the graphs and charts e.g. changing plot colours, Y scale range etc. There is only one local variable (for the Tab control). All the graphs and charts have data wired directly to their terminals.
    I haven't done any profiling yet.
    I am building arrays in uninitialised shift registers, but this is all well under control. As the experiment goes on, more data is collected and stored, and so the memory usage does gradually increase, but only to the extent that I would expect.
    The CPU usage is 100%, but I thought this was always the case when using NIDAQ  with DAQ cards. Or am I wrong about this? (As a side note, I am using NIDAQ traditional, but would NIDAQmx be better?)
    Execution priority of the GUI vi (and all the other VIs for that matter) is set to normal.
    The program is a bit large to post here, and I'm not sure if my company would be happy for me to publicise it anyway, so I suspect that this is turning into one of those questions that are going to be impossible to answer.
    Just as a more general question, why would turning off multi-threading improve the user interface response?
    Thanks,
    Mark.

  • Regarding tracking the live progress of cube processing

    Hello,
    Good morning. As we have multiple cube applications as part of our project, there are multiple cubes for which we need to estimate the completion time of cube processing.
    Taking one cube as a reference, processing takes one hour on one day and a different amount of time on another day. Because of this, we are not able to estimate the completion time of cube processing.
    Is there any way to track the live status of cube processing? That is, the expected time for completion, and how many measure group processings are completed and how many are left?
    Please provide your inputs on this.
    Thanks in advance.
    Regards,
    Pradeep.

    Hi Kumar,
    According to your description, you want to monitor the progress of cube processing. Right?
    In Analysis Services, there are several tools that can help us monitor processing performance, like SQL Profiler, AS Trace, Performance Monitor, etc. We can also use XMLA commands or DMVs to get the information. Please see:
    Monitoring processing performance
    Monitoring and Tuning Analysis Services with SQL Profiler
    However, if you want to get exact live data on cube processing, Olaf's script can be an effective way to get the current processing status for some measures or dimensions. But some information still can't be traced, like the expected time for completion. So your requirement can't be fully achieved.
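    For a live view of a running Process command, you can also poll the DISCOVER_COMMANDS DMV from an MDX query window in SSMS. A sketch; note this shows how long the command has been running, not how much is left:

    SELECT SESSION_SPID,
           COMMAND_START_TIME,
           COMMAND_ELAPSED_TIME_MS,
           COMMAND_TEXT
    FROM $SYSTEM.DISCOVER_COMMANDS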
    If you have any question, please feel free to ask.
    Simon Hou
    TechNet Community Support

  • Full Optimize without cube processing

    Hello,
    I'd like to run Full Optimize from SSIS (Administrative Task), but without full processing of the cubes. The reason is performance - I'd just like to move data into FACT tables and then run full process of whole database manually. Is it possible via some attributes, like ProcessMode or ProcessOption?
    Thanks.
    Radim

    Sure, during the whole operation the system is unavailable; the goal is to shorten processing times. It's much faster to do a full process on the database once than to process each cube serially. And I didn't want to reinvent the wheel (wb + fac2 -> fact) with a SQL script.
    For example, for light optimize you can disable processing of fac2, so same option would be nice for full process also.
    Best regards,
    Radim

  • Cube processing approach when processing only the current partition?

    Could you validate my SSAS processing strategy for the given scenario?
    Background about the cube and data:
    A Sales cube has partitions for each year of the "Sales" measure group, which is associated with the dimensions "Product" and "Sales Rep". Both are type 1 dimensions.
    From time to time, users will re-classify products in the product hierarchy (Product -> Sub Category -> Category); similarly, they re-classify sales reps (Sales Rep -> District Manager -> Regional Manager).
    Processing strategy:
    1. Process (full process) only the current partition every day.
    2. Perform "Process Update" for all the dimensions. (Going for Process Update, as a dimension full process would reprocess all the old partitions of the measure groups.)
    Questions:
    1. What are the disadvantages of processing only the current partition?
    2. Will the old partitions' data roll up as per the hierarchy changes when I go for the dimension "Process Update" option?
    Thanks,
    Liyasker Samraj K

    1. What are the disadvantages of processing only the current partition?
    2. Will the old partitions' data roll up as per the hierarchy changes when I go for the dimension "Process Update" option?
    The strategy looks good. Partitioning is the way to go to reduce processing time. However, keep in mind that partitions are only supported in the Enterprise edition.
    1. Other than not being able to refresh older data in the other partitions, I don't see a downside to processing only the most recent partition.
    2. Yes. A Process Update will touch all the dependent partitions.
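    As a sketch of the daily batch (all object IDs here - SalesDB, Product, Sales Rep, Sales, Sales_2014 - are placeholders): ProcessUpdate on the dimensions, then ProcessFull on just the current partition:

    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <!-- Pick up the type 1 re-classifications in the dimensions -->
      <Process>
        <Object>
          <DatabaseID>SalesDB</DatabaseID>
          <DimensionID>Product</DimensionID>
        </Object>
        <Type>ProcessUpdate</Type>
      </Process>
      <Process>
        <Object>
          <DatabaseID>SalesDB</DatabaseID>
          <DimensionID>Sales Rep</DimensionID>
        </Object>
        <Type>ProcessUpdate</Type>
      </Process>
      <!-- Fully reprocess only the current year's partition -->
      <Process>
        <Object>
          <DatabaseID>SalesDB</DatabaseID>
          <CubeID>Sales</CubeID>
          <MeasureGroupID>Sales</MeasureGroupID>
          <PartitionID>Sales_2014</PartitionID>
        </Object>
        <Type>ProcessFull</Type>
      </Process>
    </Batch>

    One caveat: ProcessUpdate can drop flexible aggregations and indexes on the old partitions, so a periodic ProcessIndexes (or ProcessDefault) on the measure group may be needed to restore them.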
    SS

  • How to improve the load time of my swf group

    Hi,
    I need some tricks to improve the load time of my SWF Captivate online training. My training has 6 sections, and it takes 3 minutes to download each time I open the training window. That is too much time, and if there are 50 users at the same time it will consume a lot of my website's bandwidth. Do you have any tips on Captivate settings, or other tips, to help reduce the training download time? I do not understand why all 6 modules load simultaneously instead of loading each time I click to start a new part of the training.
    Can you help me with my problem?
    Thank you

