Cumulating test runs and statistics

Hello,
I have set up acquisition from a piezo force sensor that lets me measure the force of a shock.
I have built a fairly simple VI: acquisition of the measurement is triggered when a certain level is exceeded, the shock is displayed, and a report is generated.
What I am trying to do now, but have not found how, is to accumulate, over one day of testing or even several, every shock seen above a certain level, and to plot the distribution of the forces (the peak force seen on each shock) with the mean and standard deviation.
In short, I would like to avoid manual post-processing, to eliminate any risk of error.
A side question: is it possible to keep this accumulation in the same file over several days, given that the measurement rack has to be powered down in the evening?
Thanks for your answers

Hello,
We would need a bit more information to help you; your problem is rather vague. In any case, you should put the VI that detects a shock inside a loop. The data from each shock can then be stacked into an array. At the end of the day, the loop is stopped by a manual action, followed by automatic post-processing of the information held in the results array. At startup the next morning, just add a dialog box that lets you choose whether to continue the previous day's test or to start a new one. Depending on the choice, either the previous file is opened so you can append to it, or a new file is created. Be careful with data size, though: if the test runs all day and there is a lot of data, you could run into memory problems if the buffers are not flushed.
These are, of course, only working leads...
Francis M
Certified LabVIEW Developer
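
A minimal sketch (in Python, since text code is easier to show here than a LabVIEW diagram) of the append-and-summarize pattern Francis describes; the file name, format, and threshold are hypothetical:

    import csv
    import statistics
    from pathlib import Path

    LOG = Path("shock_peaks.csv")   # hypothetical cumulative log; survives shutdowns
    THRESHOLD = 50.0                # hypothetical trigger level, in newtons

    def record_peak(peak_force):
        """Append one shock's peak force; create the file with a header on first use."""
        is_new = not LOG.exists()
        with LOG.open("a", newline="") as f:
            writer = csv.writer(f)
            if is_new:
                writer.writerow(["peak_force_N"])
            writer.writerow([peak_force])

    def summarize():
        """Cumulative statistics over every logged shock above the threshold."""
        with LOG.open() as f:
            peaks = [float(row["peak_force_N"]) for row in csv.DictReader(f)
                     if float(row["peak_force_N"]) >= THRESHOLD]
        print(f"shocks: {len(peaks)}  mean: {statistics.mean(peaks):.1f} N"
              f"  std dev: {statistics.stdev(peaks):.1f} N")

Because each shock is appended to the same file as it occurs, powering the rack down overnight loses nothing, and the summary can be regenerated at any time without manual post-processing.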

Similar Messages

  • Item category related to Quotation, Statistical value field

    Hi All,
    In my project, item category Y003 is used in the creation of Quotations, and its Statistical value field is 'X', which means "No cumulation - Values cannot be used statistically". My requirement is to change this field value to 'Y', which means "No cumulation - Values can be used statistically". Changing this field value will solve the issue I am currently facing, but I would like to know what the other implications are.
    Could any one of you please let me know what the implications will be if I change the Statistical value from 'X' to 'Y'?
    Thanks in Advance
    Anil

    Hello,
    The main function of this field is to add the item values to the sales document header values:
    If the value is blank, all the item values are added to the header values.
    If the value is X, there is no cumulation and the values are not statistical.
    If the value is Y, there is no cumulation and the values are statistical.
    Apart from the above, there is no impact on the sales document.

  • Cumulation of lower level item prices to higher level items

    Hi,
    We have a specific requirement: we use higher-level materials that are not relevant for pricing, with lower-level items, linked to the higher-level item, that are RA- and billing-relevant. How do we modify the pricing procedure to cumulate (roll up) the lower-level item prices to the higher-level item, which is not price-relevant, so that the statistical value is updated at this level for billing purposes?
    We want to achieve this by using/modifying only the pricing procedure, not via a user exit.
    I request SAP experts to provide some input on this issue.
    BR/Rajasekhar

    Hi Rajasekhar,
    Generally, if you want to transfer the cost of a sub-item to the main item, you can check the Cumulative cost box in copy control VTFL. But your requirement is the opposite, so you need to assign a new requirement / alternative calculation type to the condition type, specifying that the cost of the header item, which is not relevant for pricing, has to be transferred to the sub-item. You will need to work with your ABAPer, giving him these inputs along with the overall requirement.
    Regards
    Srinath

  • Cumulative Moving Average?

    Hi,
    I would like to use Signal Express to create a cumulative moving average running over every 100 data points, from data point to data point, until the end of my recorded signal.
    So, for example, I would be producing a new signal from my initial signal: creating an average from data points 1 to 100, then an average of data points 2 to 101, 3 to 102, and so forth until the end of my recording. Is there a step in Signal Express that can accomplish such a task?
    My hope is in the end to create a sort of running average of the sums of squares, every 100 data points, from my initial recording.
    I have first recorded a voltage signal, and then by using the Formula step I have squared each value of my recorded sample. Now all I seek to accomplish is the "running average" of this newly squared signal.
    Seeking any and all advice. Many thanks.
    GYepes

    I find the description of the task a little confusing.  The title is "cumulative moving average" which seems contradictory to me.  A moving average analysis is a statistical time-series method of smoothing a signal's frequencies with a wavelength equal to the time span of the average.  Frequencies greater or lesser are not smoothed.  The method does not accumulate anything, thus the contradiction.  If the task is to recalculate the average for all samples recorded for every project iteration, then that is more like a cumulative average?  From the more detailed description it seems the former is more likely the task.  Additionally I am unsure if the task is to "process" a signal in real time or to "analyze" a pre-recorded signal?
    Let me encourage the exploration of the Process steps for real time signals, and Analysis steps for pre-recorded signals.  Signal Express has sophisticated numerical methods to meet most needs.  Take a look at "Processing - Analog Signals - Time Averaging" as a first guess to satisfy your needs.  I sincerely hope this helps to clarify the task - a problem well defined is half solved (anon.).
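
    For readers who want to check the numbers outside Signal Express, a minimal NumPy sketch of the 100-point running average of the squared signal described above (the input file name is hypothetical):

        import numpy as np

        signal = np.loadtxt("recorded_signal.txt")  # assumed: one voltage sample per line
        squared = signal ** 2                       # equivalent of the Formula step

        window = 100
        kernel = np.ones(window) / window
        # 'valid' mode yields one mean square per full window: points 1-100, 2-101, ...
        running_mean_square = np.convolve(squared, kernel, mode="valid")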

  • Library cache statistics in trace file

    Hello!
    I'm getting an ORA-00600, and in one of the trace files I see library cache statistics:
    LIBRARY CACHE STATISTICS:
    namespace gets hit ratio pins hit ratio reloads invalids
    My question - is this statistic cumulative or current?
    Please advise.
    Thanks and regards,
    Paul

    Paul wrote:
    I'm getting an ORA-00600, and in one of the trace files I see library cache statistics:
    LIBRARY CACHE STATISTICS:
    namespace gets hit ratio pins hit ratio reloads invalids
    My question - is this statistic cumulative or current?
    In a trace file dump it's the current contents of the structure under v$librarycache, which means it's the stats since instance startup.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core
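
    Since the figures are cumulative from instance startup, interval figures require sampling the view twice and taking the difference; a minimal sketch with the python-oracledb driver (connection details are placeholders):

        import time
        import oracledb

        SQL = """SELECT namespace, gets, pins, reloads, invalidations
                 FROM v$librarycache"""

        conn = oracledb.connect(user="system", password="manager", dsn="localhost/orclpdb")
        with conn.cursor() as cur:
            cur.execute(SQL)
            before = {ns: rest for ns, *rest in cur}
            time.sleep(60)                      # measurement interval
            cur.execute(SQL)
            for ns, *after in cur:
                print(ns, [a - b for a, b in zip(after, before[ns])])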

  • SAS vs Oracle comparison for statistical modeling

    Hi,
    I am working on a project that requires a lot of statistical analysis. As we are in a preliminary phase of determining which way to go, someone recommended using SAS.
    Would you be able to share your experience/comments on a SAS vs Oracle comparison when it comes down to using statistical models within each of these applications? And is there a list of all the statistical models that Oracle offers? (It could be handy to compare it with our requirements.)
    Plus, I would also like to know if there is a way to test these models... just as APEX is offered free for testing purposes at apex.oracle.com.
    Thanks in advance

    You don't go into much detail regarding what types of statistical techniques you might want to use. First, Oracle both partners with SAS and competes with SAS. In terms of Oracle technology for statistics and models, we ship about 50 basic statistical techniques with EVERY Oracle Database for free. Those stats are listed below; see the SQL Reference Guide for details:
    Descriptive Statistics
    DBMS_STAT_FUNCS: summarizes numerical columns of a table and returns count, min, max, range, mean, median, stats_mode, variance, standard deviation, quantile values, +/- n sigma values, top/bottom 5 values
    Correlations
    Pearson’s correlation coefficients, Spearman's and Kendall's (both nonparametric).
    Cross Tabs
    Enhanced with % statistics: chi squared, phi coefficient, Cramer's V, contingency coefficient, Cohen's kappa
    Hypothesis Testing
    Student t-test, F-test, Binomial test, Wilcoxon Signed Ranks test, Chi-square, Mann-Whitney test, Kolmogorov-Smirnov test, One-way ANOVA
    Distribution Fitting
    Kolmogorov-Smirnov Test, Anderson-Darling Test, Chi-Squared Test; Normal, Uniform, Weibull, Exponential distributions
    Ranking functions
    rank, dense_rank, cume_dist, percent_rank, ntile
    Window Aggregate functions (moving & cumulative)
    Avg, sum, min, max, count, variance, stddev, first_value, last_value
    LAG/LEAD functions
    Direct inter-row reference using offsets
    Reporting Aggregate functions
    Sum, avg, min, max, variance, stddev, count, ratio_to_report
    Statistical Aggregates
    Correlation, linear regression family, covariance
    Linear regression
    Fitting of an ordinary-least-squares regression line to a set of number pairs.
    Frequently combined with the COVAR_POP, COVAR_SAMP, and CORR functions
    Additionally, Oracle has a Database Option called Oracle Advanced Analytics which delivers 12+ high-performance data mining algorithms (e.g. clustering, decision trees, regression, association rules, anomaly detection, text mining, etc.) as native SQL functions that can be called from SQL, the R language, or the Oracle Data Miner workflow GUI (ships with SQL Developer). There is a LOT more information on the OAA Option here: http://www.oracle.com/technetwork/database/options/advanced-analytics/index.html?ssSourceSiteId=ocomen.
    Hope this helps. cb
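
    As a quick illustration (not from the original reply) of calling a couple of the statistical aggregates listed above, a minimal python-oracledb sketch in which the connection details and table/column names are placeholders:

        import oracledb

        conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb")
        with conn.cursor() as cur:
            cur.execute("""
                SELECT CORR(x, y),           -- Pearson correlation
                       REGR_SLOPE(y, x),     -- OLS regression slope
                       REGR_INTERCEPT(y, x)  -- OLS regression intercept
                FROM   my_table
            """)
            corr, slope, intercept = cur.fetchone()
        print(f"corr={corr:.3f}; fit: y = {slope:.3f}x + {intercept:.3f}")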

  • Is subtotal values considered for Cumulation

    Dear All,
    I have a basic doubt in the pricing procedure.
    For example we have a condition type PR00 and it has a value 1000.
    Next to that, we have a net value line with subtotal 1 assigned. So obviously this also carries the value of 1000, and it is not statistical.
    In this case, won't the system cumulate both values? If it cumulates them, our calculation is wrong: the system should consider either the value in PR00 or the net value, not both.
    How does the system manage this kind of thing?
    Please explain.

    The system does not add up the price and the subtotal, because of the calculation logic specified in the pricing procedure.
    For example, if
    Step 10 - PR00
    Step 20 - Subtotal (from 10 to 19)
    then for further calculations you reference step 20, not step 10, so the PR00 value is not counted twice.
    If a subtotal is used, it is generally taken as the basis for further calculations.
    As for the net value of the document, the values are considered per condition type based on their condition class (price, discount/surcharge); a subtotal has no condition type or condition class.
    Subtotal 1 also causes the value to be populated into the item tables (VBAP, VBRP).
    Thanks

  • Using Cumulative Condition Type KUMU

    Dear SAP Gurus
    I have a requirement here that when a packaged item is sold with some accessories, the order confirmation and billing output should show the total price only on the main item. We are not using a sales BOM for this purpose.
    For example in SAP order entry screen:
    Line 1      Main item   net price  $100
    Line 2      accessory   net price  $  20
    But on billing output to show:
    Line 1      Main item   net price $120.00 
    Line 2      accessory   net price  $    0.00
    I am planning to use the KUMU condition type to capture the total net price of the 2 items on line 1.
    Is it normal for KUMU to be determined at the sub-item level as well?
    Any ideas on how we could control the output print logic, i.e. to print the KUMU condition type only on the main item if it exists?
    Thanks

    Hi friend,
    As per my information, KUMU is a cumulative condition. If you have a sales BOM, carry out pricing for the sub-items, and want to display the main item in the invoice with the price, this condition can be used to cumulate the sub-item prices and display them at the main-item level.
    I think this might give you an idea of this condition type.
    To my knowledge:
    The standard system condition type KUMU lets us display the total of the net values of an item and all the sub-items belonging to that item.
    Constraints of the KUMU cumulative condition type:
    Cumulative conditions cannot be used as header conditions or processed manually.
    When we copy a sales order to a billing document, the condition rate and the condition value of a cumulative condition are frozen. This means that the condition is not redetermined when it is copied, regardless of the pricing type. The net value of the total is not redetermined even if the individual net values have changed.
    As already suggested, KUMU is used as an item condition. It is a statistical condition, and its alternative calculation type is 36. The item should be a higher-level item, and the prices of all lower-level items assigned to this higher-level item are cumulated.
    Regards,
    Rajeev Sharma

  • SQL query statistics?

    I'd like to know if there is an easy and convenient way to execute an SQL query and calculate these measurements for the query:
    1) Total I/O performed
    2) Number of read I/Os
    3) Number of write I/Os
    (such that 2 + 3 = 1)
    4) Number of buffered reads
    5) Query execution time
    6) Query CPU usage
    I've heard mention of such statistics in views such as V$OSSTAT, etc.
    But these views give the current values, not query-specific cumulative values, such as cumulative CPU usage time since the start of the query, cumulative I/O since the query began, etc.
    What is the right approach to this? Is it through the V$SESSION view? Would you go about it by storing the V$SESSION values before the query, running the query, and then getting the new V$SESSION values?
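
    The snapshot-and-diff idea in the question is workable; a minimal sketch against v$mystat (per-session statistics) with the python-oracledb driver, where the connection details and the query under test are placeholders:

        import oracledb

        STATS = """SELECT sn.name, ms.value
                   FROM   v$mystat ms
                   JOIN   v$statname sn ON sn.statistic# = ms.statistic#
                   WHERE  sn.name IN ('CPU used by this session',
                                      'physical reads', 'physical writes',
                                      'session logical reads')"""

        def snapshot(cur):
            cur.execute(STATS)
            return dict(cur.fetchall())

        conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb")
        with conn.cursor() as cur:
            before = snapshot(cur)
            cur.execute("SELECT COUNT(*) FROM my_table")   # the query under test
            cur.fetchall()
            after = snapshot(cur)
            for name, value in before.items():
                print(f"{name}: {after[name] - value}")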

    Well, actually I stayed here a little longer to try out part 2 of your manual.
    It worked fine. Below is the output, taken from a spool file, of my first experiment:
    Connected.
    SQL> set timing on trimspool on linesize 250 pagesize 999
    SQL>
    SQL> -- system environment can be checked with:
    SQL> -- show parameter statis
    SQL> -- this show a series of parameters related to statistics
    SQL>
    SQL> -- this setting can influence your sorting
    SQL> -- in particular if an index can satisfy your sort order
    SQL> -- alter session set nls_language = 'AMERICAN';
    SQL>
    SQL>
    SQL> rem Set the ARRAYSIZE according to your application
    SQL> set arraysize 15 termout off
    SQL>
    SQL> spool diag2.log
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
    PLAN_TABLE_OUTPUT
    SQL_ID  b4j5rmwug3u8p, child number 0
    SELECT USRID, FAVF FROM  (SELECT ID as USRID, FAVF1, FAVF2, FAVF3,
    FAVF4, FAVF5   FROM PROFILE) P UNPIVOT  (FAVF FOR CNAME IN   ( FAVF1,
    FAVF2, FAVF3, FAVF4, FAVF5)) FAVFRIEND
    Plan hash value: 888567555
    | Id  | Operation           | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   0 | SELECT STATEMENT    |         |      1 |        |      5 |00:00:00.01 |       8 |
    |*  1 |  VIEW               |         |      1 |      5 |      5 |00:00:00.01 |       8 |
    |   2 |   UNPIVOT           |         |      1 |        |      5 |00:00:00.01 |       8 |
    |   3 |    TABLE ACCESS FULL| PROFILE |      1 |      1 |      1 |00:00:00.01 |       8 |
    Predicate Information (identified by operation id):
       1 - filter("unpivot_view_013"."FAVF" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    26 rows selected.
    Elapsed: 00:00:00.14
    SQL>
    SQL> spool off
    SQL>
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
    With the OLAP, Data Mining and Real Application Testing options
    C:\Documents and Settings\Administrator\My Documents\scripts\oracle\99templates_autotrace>my_part2_template.bat
    C:\Documents and Settings\Administrator\My Documents\scripts\oracle\99templates_autotrace>sqlplus /NOLOG @my_part2_template.sql
    SQL*Plus: Release 11.1.0.7.0 - Production on Qui Jul 9 22:00:39 2009
    Copyright (c) 1982, 2008, Oracle.  All rights reserved.
    Connected.
    SQL> set timing on trimspool on linesize 250 pagesize 999
    SQL>
    SQL> -- system environment can be checked with:
    SQL> -- show parameter statis
    SQL> -- this show a series of parameters related to statistics
    SQL>
    SQL> -- this setting can influence your sorting
    SQL> -- in particular if an index can satisfy your sort order
    SQL> -- alter session set nls_language = 'AMERICAN';
    SQL>
    SQL>
    SQL> rem Set the ARRAYSIZE according to your application
    SQL> set arraysize 15 termout off
    SQL>
    SQL> spool diag2.log
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
    PLAN_TABLE_OUTPUT
    SQL_ID  b4j5rmwug3u8p, child number 0
    SELECT USRID, FAVF FROM  (SELECT ID as USRID, FAVF1, FAVF2, FAVF3,
    FAVF4, FAVF5   FROM PROFILE) P UNPIVOT  (FAVF FOR CNAME IN   ( FAVF1,
    FAVF2, FAVF3, FAVF4, FAVF5)) FAVFRIEND
    Plan hash value: 888567555
    | Id  | Operation           | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   0 | SELECT STATEMENT    |         |      1 |        |      5 |00:00:00.01 |       8 |
    |*  1 |  VIEW               |         |      1 |      5 |      5 |00:00:00.01 |       8 |
    |   2 |   UNPIVOT           |         |      1 |        |      5 |00:00:00.01 |       8 |
    |   3 |    TABLE ACCESS FULL| PROFILE |      1 |      1 |      1 |00:00:00.01 |       8 |
    Predicate Information (identified by operation id):
       1 - filter("unpivot_view_013"."FAVF" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    26 rows selected.
    Elapsed: 00:00:00.01
    SQL>
    SQL> spool off
    SQL>
    SQL>
    SQL> -- rem End of Part 2
    SQL> show parameter statis
    NAME                                 TYPE        VALUE
    optimizer_use_pending_statistics     boolean     FALSE
    statistics_level                     string      ALL
    timed_os_statistics                  integer     5
    timed_statistics                     boolean     TRUE
    SQL> quit
    If you notice, at the end of the execution I print my session statistics environment. The statistics_level was set to ALL, as you advised. But the output I obtained seems a lot less complete than the one I got from using the autotrace feature.
    Am I missing something? Could it have something to do with the fact that I am running as SYSTEM and not as SYSDBA? SYSTEM should have enough permissions to access its own session environment statistics.
    Maybe it's just a language issue (I'm not a native speaker either) but your understanding of Oracle's read consistency model seems to be questionable.
    No, you could be right; my understanding is questionable indeed. I am familiar with the general concepts of concurrency.
    Things like reading uncommitted data:
    T1 writes A; T2 reads A -> here is a conflict.
    This is enough to make it impossible to guarantee that the execution is serializable.
    T1 reads A; T2 writes A and commits; T1 reads A again - you get another conflict, the unrepeatable read.
    And so on.
    I am also familiar with the different isolation levels that database systems in general give you.
    Conflict serializability, normally implemented using the strict two-phase locking mechanism.
    Repeatable reads: you lock the rows you access during a transaction. You are guaranteed that the data values you access do not change, but other entries could still be inserted into the table.
    Unrepeatable reads: only the data you modify is guaranteed to stay the same; only your write locks are kept throughout the transaction. And so on.
    But anyway...
    What you explained in your post is more or less what I was saying, only much more clearly in your case than in mine.
    For instance, suppose a thread T1 reads A and a thread T2 then writes A. In Oracle, thread T1 can read A again without getting an unrepeatable-read error. This is strange: in a conventional system you would directly get an exception telling you that your view of the system is inconsistent. But in Oracle you can do so, because Oracle tries to fetch from the undo tablespace the same data blocks, consistent with the view of the system you had when you first accessed them; it looks for a block version with an SCN older than the current SCN, or something like that. The only problem is that those modified blocks do not stay there indefinitely. Once a transaction commits you have a time bomb in your hands - that is, if you are working with data that is not at its most current version.
    But you are quite right, I have not read enough about Oracle concurrency. Still, I have a good enough understanding for my current needs.
    I cannot know everything, nor do I want to :D.
    My memory is very limited.
    My best regards, and deepest thanks for your time and attention.

  • Using cumulative distribution graphs

    Is there a cumulative distribution graph in version 2008?

    You can do a HISTOGRAM in the Graph Expert. Or you can "roll your own" using a line or scatter graph mapped against a function you provide.
    If all else fails (which it shouldn't), track down our CRChart add-in for CR2008, which has better support for statistical charting inside of CR.
    -Dan
    DISCLAIMER: I work for the company, threedgraphics.com, that makes CRChart. This product costs money.
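
    For readers who just need the curve outside Crystal Reports, a minimal sketch of an empirical cumulative distribution plot in Python/Matplotlib (the data array is a stand-in for the report field):

        import numpy as np
        import matplotlib.pyplot as plt

        data = np.random.normal(size=500)      # stand-in for the measured values
        x = np.sort(data)
        y = np.arange(1, len(x) + 1) / len(x)  # fraction of values <= x

        plt.step(x, y, where="post")
        plt.xlabel("value")
        plt.ylabel("cumulative fraction")
        plt.show()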

  • Re-reading several test reports at once

    Hello,
    I am trying to build statistics over a set of tests run with LabVIEW. I have created my analysis VI to compare the data against each other (max values, standard deviations, etc.), but I don't know how to tell this VI that it must run the analysis on all the tests contained in the folder I specify.
    Thanks in advance for your help.
    Aurelien

    Hello,
    Why not use the VI included in LabVIEW?
    Regards,
    Attachments:
    exemple_dossier.vi (8 KB)
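
    The generic pattern Aurelien needs (enumerate every report file in a given folder, run the same analysis on each) looks like this outside LabVIEW; a minimal Python sketch in which the folder name, file format, and statistics are placeholders:

        from pathlib import Path
        import numpy as np

        folder = Path("test_reports")                  # the folder the user specifies
        for report in sorted(folder.glob("*.csv")):
            data = np.loadtxt(report, delimiter=",")   # assumed: numeric columns
            print(report.name, "max:", data.max(), "std dev:", data.std(ddof=1))

    In LabVIEW terms this is the same idea as listing the folder contents and auto-indexing the resulting path array through a For Loop into the analysis subVI.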

  • Non-cumulative Values not showing in Inventory Management Queries

    Hi:
    Has anyone had a problem with the new version where non-cumulative key figures do not show up in BEx for Inventory Management reports? Specifically, they showed up and validated back to ECC in our Development and QA boxes, but now in our Regression box they are all showing zeros. Our cumulative values are all still showing correctly in the Regression box. For example, Total Receipts and Total Issues are populating values correctly, but Total Stock is not.
    I have checked and validated that the configuration in the Regression box matches our Dev and QA boxes.

    Found the problem. It had to do with the compression variant in the delta process chain. The compression was set to 'no marker update'. Since we only started receiving measurable deltas in our Regression box, this is where the incorrect setting showed up. Reinitialized, and the deltas are working correctly now.

  • Stock is increased and an FI entry is posted on statistical GR for third party

    Hello
    I was trying to activate the functionality for statistical GR in the third-party process but am STUCK due to the following issues. To enable this functionality I have checked the "Goods Receipt" and "GR Non-Valuated" boxes in the "Delivery" tab of the PO.
    1. After creating a statistical GR through MIGO (101), stock was posted in the plant. I was expecting no stock movement.
    2. I was hoping there would be no FI posting, but an FI entry is posted by the statistical GR.
    We have been using the third-party process for many years, but without statistical GR. The client's requirement is to post a statistical GR without an FI entry and without any stock update. Please advise if anyone has an idea on this.
    FYI: we have item category TAS and schedule line category CS in customizing. Please also advise whether the above two settings in the PO are enough to turn ON the statistical GR functionality.
    Thanks for your help.
    Regards
    Arvind

    Thanks AKPT for the reply.
    As advised, please find enclosed the screens for OMS2, the PO, and the FI entry. The FI entry might be wrong: while posting MIGO I was getting the error "WE entry is missing in table YFTFI_SUBST236", so I maintained some dummy G/L accounts.
    Regards
    Arvind

  • Aggregates on Non-cumulative InfoCubes (stock key figures)

    Hi Gurus,
    Please let me know if anybody has created aggregates on non-cumulative cubes or key figures (i.e. 0IC_C03, Inventory Management).
    I am facing a performance problem at query execution time on 0IC_C03 (runtime dump).
    I have tried a lot to create an aggregate using the proposal from the query and other options, but it is not working, or the query is not using that aggregate.
    Can somebody tell me about any sample aggregates they are using on 0IC_C03, or about any tool to get better performance when executing queries on the said cube?
    One more clarification request: what is "move the marker pointer" for stock calculation? I have compressed only the two initial data-loading requests. Should I compress all the requests in the cube regularly? If so, is there an option to compress requests automatically after a successful load into the data target?
    We are using all three DataSources, 2LIS_03_BX, BF & UM, for the same.
    Regards,
    Navin

    Hi,
    Compression definitely has a much greater effect on query execution time for inventory cubes than for other, cumulative cubes. So do compress regularly, once you feel that deleting a request will no longer be needed.
    If a query does not need the calendar-day characteristic and only needs the month characteristic, use a snapshot InfoCube (described, with a procedure, in the How-to paper) and divert the month-wise queries (and those with higher granularity on the time characteristic, like quarter and year) to this cube.
    The percentage improvement in query execution time from aggregates is smaller for non-cumulative cubes than for normal (cumulative) cubes, but there is still an improvement in using aggregates.
    With rgds,
    Anil Kumar Sharma .P

  • Statistical condition types not getting displayed in invoice

    We have configured the tax collected at source (TCS) condition types and made them statistical in the pricing procedure. Now the issue is that when we use them, the values for these condition types are calculated correctly, as the total value is correct, but their individual values are not displayed in the invoice.
    In the analysis we can also see that the condition records are found; but even though they are statistical conditions, they should be displayed in the invoice, as the correct total value alone will not serve the purpose. Please help.
    Thanks & regards
    Puneet Agrawal

    Hi
    Put a (hard) breakpoint in these program lines, then debug and check whether the condition type (KSCHL) related to taxes appears there or not.
    If the condition type for taxes is coming through, then you can print the taxes.
    First check the related condition type for taxes in table T685, and then look for it here in the code.
    Regards
    Anji
