Question reg Forecast Functions in Oracle

Hello Everybody,
Is there a function (or functions) in Oracle that's equivalent to the FORECAST function in MS Excel?

This is the forum for Oracle's SQL Developer tool, not for general SQL and PL/SQL questions.
Questions like this will get a better response in the PL/SQL forum.

Similar Messages

  • Reg Forecast functions in Oracle

    Hello Everybody,
    Is there a forecast function in Oracle that's equivalent to the FORECAST function in MS Excel?

    Not sure about MS Excel, but Oracle has the MODEL clause, which can be used for projections.
    Take a look at
    "Example 2 Sales projection that fills in missing data"
    at http://docs.oracle.com/cd/B19306_01/server.102/b14223/advbi.htm
    Also I know that projections can be done in Oracle OLAP Option using OLAP DML.
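    As a side note, Excel's FORECAST is simple linear regression, so it can also be approximated with Oracle's REGR_SLOPE and REGR_INTERCEPT aggregate functions. A minimal sketch, assuming a hypothetical monthly_sales table with period_no and sales columns:
    -- Fit y = intercept + slope * x over the history, then project period 13.
    SELECT REGR_INTERCEPT(sales, period_no)
           + REGR_SLOPE(sales, period_no) * 13 AS forecast_period_13
    FROM   monthly_sales;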

  • Question about BC4J data tags, Oracle sessions and Locking!

    Hi ,
    I have seen numerous examples of JSPs using data tags, and in all the examples the data tag for the application module has the "username" and "password" hardcoded in it.
    My questions are:
    1) For a stateful application, should we be including the username and password in every JSP page? I personally believe that we should not.
    2) If we have a username and password in every JSP page, will it not start a new Oracle user session, and if so, will it not cause locking problems?
    3) If we don't hardcode the username and password in every JSP page, will it reuse the same Oracle session?
    4) How do we avoid locking problems when we use data tags?
    5) I can understand the inclusion of the username and password in every JSP page if it is a stateless application, but again, is there a way we can avoid hardcoding the username and password in every single page?
    I would appreciate it if someone could let me know if any of my assumptions are incorrect.
    JDeveloper Team/Juan, any advice?

    The username and password are optional. They can be provided via the connections.properties file. The multiple entries for username and password don't mean that separate connections are made. The first time the ApplicationModule tag is encountered, your application instance is created. If you are running in reserved mode (look at your releasePageResources tag), the application instance is kept until your HTTP session times out. If you are running in Stateful or Stateless mode, your application instance is returned to the application pool and retrieved the next time you need an instance. Please refer to the application pool documentation and to the source in oracle\jbo\common\ampool provided in jbohtmlsrc.zip.
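    Regarding question 2, one way to check whether separate database sessions are actually being created is to watch v$session while exercising the pages. A minimal sketch (run as a privileged user):
    SELECT username, COUNT(*) AS session_count
    FROM   v$session
    WHERE  username IS NOT NULL
    GROUP  BY username;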

  • How to make a generic JavaScript function in Oracle ADF

    Hi,
    I created a validation function in JavaScript in the resource folder. I am passing the input text id (which is in my ADF page) as a parameter to the JavaScript function, but I want to use that validation function across my whole application. Please help me.

    Hi,
    +"i created some validation funtion in javascript in resource folder.i am passing parameter input text id(which is in my adf page) in javascript funtion .but i want to use tat validation function for my whole application page"+
    1. Put the JS into an external JavaScript file and reference it from af:resource. Don't copy it into all your pages.
    2. If you have common validation to perform, then create a custom validator instead of calling JS functions:
    http://docs.oracle.com/cd/E23943_01/web.1111/b31973/af_validate.htm#autoId20
    Frank

  • System Copy Questions R/3 4.6c + Oracle + Solaris

    dear gurus
    Goal: system copy of R/3 4.6C + Oracle + Solaris.
    I am doing the DB export using R3SETUP (quite slow; I have optimized it using the split SRT files method).
    I have some questions here.
    For the export I downloaded the latest R3szchk, R3load, R3SETUP, R3check, and R3LDCTL.
    I downloaded them from:
    Support Packages and Patches - Archive - Additional Components - SAP Kernel -SAP KERNEL 64-BIT - SAP KERNEL 4.6D 64-BIT
    R3szchk, R3load, R3check, R3LDCTL using SAPEXEDB_2364-10001132.SAR (Kernel Part II for Stack Q1-2008)
    R3SETUP using R3SETUP_29-10001132.CAR
    source system : Oracle 8.1.7 - R/3 4.6C - Solaris 9
    My question is:
    target system: Oracle 10.2.0 with patches - R/3 4.6C - Solaris 10
    Later on, when I do the import into the target system, should I use the same kernel as when I did the export, or should I use the one from
    Support Packages and Patches -> Additional Components -> SAP Kernel -> SAP KERNEL 64-BIT -> SAP KERNEL 4.6D_EX2 64-BIT
    Another question:
    How can I speed up the export? I am stuck with an I/O bottleneck; CPU usage is very low. Please don't recommend anything related to hardware or disk architecture; if possible, recommend something related to Oracle parameters.
    Disk I/O (from iostat) shows 100% busy and service times of 800-1500.
    thanks !

    Later on, when I do the import into the target system, should I use the same kernel as when I did the export, or should I use the one from
    Support Packages and Patches -> Additional Components -> SAP Kernel -> SAP KERNEL 64-BIT -> SAP KERNEL 4.6D_EX2 64-BIT
    Yes - use that one. It uses the Oracle Instant Client instead of the old 9.2 client. Make sure you add "-loadprocedure fast" to the load options; this will enable the fast loader.
    Also make sure you install all the patch sets and interim patches BEFORE you start the data load.
    Another question:
    How can I speed up the export? I am stuck with an I/O bottleneck; CPU usage is very low. Please don't recommend anything related to hardware or disk architecture; if possible, recommend something related to Oracle parameters.
    If your disk I/O subsystem can't deliver more, then you are lost with Oracle parameters alone.
    Markus

  • Questions about JAVA/JBOSS on Oracle VM 2.2.1 - Performance

    I recently upgraded to VM 2.2.1 on an older 1.86 GHz quad-processor (Intel) system (Dell 2950). We have 3 VMs set up: one as an application server running JBoss 5.1, Jira, and Subversion; the second is a 10g Oracle database server; and the third is an 11g database server. This is all for development. The Dell has 32 GB of memory, 1 quad processor, and 2 NIC (TOE) cards. I'm using SATA RAID 5.
    The developers are noting that the startup of JBoss is taking 6-9 minutes, which is substantially longer than before, and they say the application is now slower. I've looked at everything and don't see any one blocking resource. I've increased the number of VCPUs to 4 for the application server and raised its priority in OVM. There was no impact on the speed of the application or the boot process.
    JBoss seems to initiate something every few seconds that raises the CPU usage to 70-99% (out of 400%). On Windows systems that seems to occur as well.
    I'm curious if anyone else has a similar environment and what kind of performance you are seeing with JBoss in an Oracle VM environment on boots.
    Thanks.

    Maybe the following can be useful:
    http://www.mastertheboss.com/en/jboss-howto/42-jboss-config/83-jboss-cpu-monitor.html
    What was the environment when you did not experience this problem? Can you provide more info?
    Network changes? NIC/Switch settings mismatch? Bad cabling? DNS problems?
    Does any of the following apply?
    http://stackoverflow.com/questions/1927071/improving-jboss-5-slow-startup
    Edited by: Markus Waldorf on Aug 25, 2010 2:19 PM

  • Re: Most commonly asked questions in an interview as an Oracle DBA

    Hi All,
    Can you guys send me popular questions asked by interviewers for an Oracle DBA position?
    Can you send the questions that were asked of you for a DBA role?
    Thanks
    Avishesh

    For example:
    http://www.oraclepower.com/WebPortal/webportal?aid=sp&pg=/pages/hiringquestions.htm

  • Reg: SVN client for Oracle Linux

    Hi,
    Can you please provide a link to download and install an SVN client on Oracle Linux?
    Thanks,
    Nitesh

    They have installed the wrong Linux distro on that desktop.
    A server o/s does not provide default support for desktop h/w (like webcams, touchpads, latest video chipsets, etc). Instead, it provides support for server h/w and server environments.
    What you should be looking at is Ubuntu 12.10 or 13.04 (see http://www.ubuntu.com/). It is arguably the best Linux desktop distro. I have been using it for over 10 years now doing development and support. Prior to that I used Fedora, but it never did provide a smooth install and out-of-the-box driver support for desktop h/w. And I doubt that this has changed to be better than what Ubuntu supports and provides.
    If you do use Ubuntu, consider the 64-bit version if you are developing s/w for 64-bit Linux - it makes development and deployment easier, as there are some differences between the 32-bit and 64-bit kernels.
    You will also be able to install 64bit Oracle XE on your desktop (if you do get to that stage and need assistance in getting the XE Redhat RPM installed, post a message on the database general questions forum).

  • Question reg sample init files?

    I was looking for sample init files.
    I found the following files on Windows:
    initsmpl.ora under C:\Oracle10g\product\10.2.0\admin\sample\pfile
    initdw.ora under C:\Oracle10g\product\10.2.0\dbs
    and on UNIX (AIX):
    -rw-r----- 1 oracle dba 8385 Sep 11 1998 init.ora
    -rw-r--r-- 1 oracle dba 12920 May 03 2001 initdw.ora
    under /oracle/OraHome2/dbs
    I can see initdw.ora is for data warehouse databases.
    I have the following questions:
    1. Are the files I have mentioned the only ones, or are there any more
    sample files I have missed?
    2. Which kind of databases are initsmpl.ora and init.ora for? General purpose, OLTP, or something else?
    Thanx
    Gagan

    Hi,
    1. Those are the only sample files provided.
    2. There are no special files created for OLTP or data warehouse databases.
    The same files are used for whichever type of database you choose; only the parameter values are
    adjusted based on the environment in which you create the database. The parameters change during the
    course of tuning your database.
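    If it helps, the parameter values actually in effect can always be checked against v$parameter, whichever sample file you started from. A minimal sketch:
    SELECT name, value, isdefault
    FROM   v$parameter
    WHERE  name IN ('sga_target', 'pga_aggregate_target', 'db_block_size');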
    - Pavan Kumar N

  • Migrating Transact-SQL Stored Procedures & Functions to Oracle 9i Procedures

    Is it possible to migrate Transact-SQL stored procedures and functions from MS SQL Server to Oracle 9i procedures and functions? I am a beginner in Oracle and SQL. Is there any tool available for this?

    This feature is currently available in the Oracle Migration Workbench, with the Microsoft SQL Server plugins.
    Have you tried it?
    Regards,
    Niall

  • Questions About Chapter 2 in Oracle DB 10g: SQL Fundamentals II

    Hello,
    First of all, I'm glad to be a member of your forum. I have joined a beginner Oracle course: Intro to SQL. I'm facing some problems understanding some concepts in Chapter 2 of the Oracle Database 10g: SQL Fundamentals II textbook. I have about 15 questions. However, I will only ask two questions at first. Since I'm a newbie, please answer in a simple form. Excuse me if you see grammatical mistakes.
    Dropping a column can take a while if the column has a large number of values. In this case it may be better to set it to be unused and drop it when the number of users on the system are fewer to avoid extended locks.
    Questions:
    "when the number of users on the system are fewer to avoid extended locks."
    1. Can you explain this to me please?! Fewer than before? Fewer than what? What if the number of users keeps increasing over the years! Then this "fewer" may never happen until the company collapses!
    2. Why do we need to use unused columns? When should we use unused columns?

    Great! .... I have more questions; I just do not want to open a new thread for the same topic. Thus, I will just post the questions here, and I hope I will get help from experts... Please bear with me, guys... The questions are numbered; unnumbered parts are information that helps you understand my question.
    Note: just answer what you are willing to - one question or whatever you want. I'm not expecting to get all the answers from one member :)
    Thanks for understanding.
    Page 2-7:
    Certain columns can never be dropped, such as columns that form part of the partitioning
    key for a partitioned table or columns that form part of the primary key of an index-organized table.
    Questions:
    "columns that form part of the partitioning key for a partitioned table"
    *1. Do they mean one table can be split into two different storage areas? What is the thing that*
    links both parts to make the Oracle server realize these two segments are actually one table? Is it tablespace_name?
    "columns that form part of the primary key of an index-organized table."
    *2. Can you clarify the above sentence please*
    *3. If I have a set of columns holding a large amount of data, I would rather set them unused and then*
    drop them, because the response time is going to be faster! I do not get it, can you
    explain please? What I know is that DROP drops the column and releases the disk space, whilst
    setting a column unused makes it useless without releasing the disk space until we drop it, so
    DROP COLUMN does it in one step, unlike the unused-column process. In brief, I would like to know
    why dropping columns holding a large amount of data via SET UNUSED is faster than dropping them
    directly...
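    For reference, the two approaches being compared look like this (a sketch; emp2 and the comments column are placeholders):
    ALTER TABLE emp2 SET UNUSED (comments);   -- quick: only marks the column in the data dictionary
    ALTER TABLE emp2 DROP UNUSED COLUMNS;     -- later: physically reclaims the space
    -- versus the one-step form, which rewrites the table immediately:
    -- ALTER TABLE emp2 DROP COLUMN comments;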
    Page 2-12
    4. ALTER TABLE emp2 ADD CONSTRAINT emp_dt_fk
    FOREIGN KEY (Department_id)
    REFERENCES departments ON DELETE CASCADE);
    The above query is written in the textbook. I think it should be written as:
    ALTER TABLE emp2 ADD CONSTRAINT emp_dt_fk
    FOREIGN KEY (Department_id)
    REFERENCES departments(dept_id) ON DELETE CASCADE;
    Am I correct?
    *5. Can you tell me what deferring constraints means, in one sentence please? Why do we need it? When do we need it in real life?*
    *7. You can defer checking constraints for validity until the end of the transaction. A*
    constraint is deferred if the system checks that it is satisfied only on commit. If a
    deferred constraint is violated, then commit causes the transaction to roll back.
    I do not understand the above paragraph, please explain. What I know is that an "end of
    transaction" ends with ; or COMMIT.
    Page 2-18
    create table test1 (
    pk NUMBER PRIMARY KEY,
    fk NUMBER,
    col1 NUMBER,
    col2 NUMBER,
    CONSTRAINT fk_constraint FOREIGN KEY (fk) REFERENCES test1,
    CONSTRAINT ck1 CHECK (pk > 0 and col1 > 0),
    CONSTRAINT ck2 CHECK (col2 > 0) );
    -- "CONSTRAINT fk_constraint FOREIGN KEY (fk) REFERENCES test1"
    *8. This is wrong, isn't it? It references test1 but no column is specified.*
    An error is returned for the following statements:
    ALTER TABLE test1 DROP (pk); -- pk is a parent key.
    *9. We cannot drop it because we did not mention ON DELETE CASCADE. Am I right?*
    ALTER TABLE test1 DROP (col1); -- col1 is referenced by multicolumn constraint ck1.
    *10. I do not get it, can you explain please? col1 is not referenced; I see a CHECK constraint is applied,*
    but no references are made. Secondly, is ck1 considered multicolumn because it checks two columns?
    Or does multicolumn here represent something else?
    ALTER TABLE emp2
    DROP COLUMN employee_id CASCADE CONSTRAINTS;
    *11. This drops the employee_id column and all its children. Correct?*
    ALTER TABLE test1
    DROP (pk, fk, col1) CASCADE CONSTRAINTS;
    *12. This drops three columns and all their children, if there are any. Correct?*
    *13. Then what's the difference between ON DELETE CASCADE and CASCADE CONSTRAINTS?*
    For example, what if employee_id in the emp2 table definition does not have ON DELETE CASCADE;
    will CASCADE CONSTRAINTS work? Please explain...
    Page 2-22
    When you are expecting a large data load and want to speed up the operation, you may want
    to disable the constraints while performing the load and then enable them, in which case
    having a unique index on the primary key will still cause the data to be verified during
    the load. So you can first create a nonunique index on the column designated as PRIMARY
    KEY, and then create the PRIMARY KEY column and specify that it should use the existing
    index.
    Example:
    1. create the table
    create table new_emp
    (employee_id number(6),
    first_name varchar2(10) );
    2. create the index
    create index emp_id_idx2 on new_emp(employee_id);
    "You may want to disable the constraints while performing the load and then enable them"
    so I suggest loading all the data we want into new_emp.
    3. create the primary key
    alter table new_emp ADD primary key (employee_id) USING index emp_id_idx2;
    What I understand is the following:
    If we want to load a large amount of data into new_emp, it's better to create the table without any
    constraints - in our case the constraint is the primary key. After that, we create a nonunique
    index on employee_id and then load the data into new_emp. Finally, we specify employee_id
    as the primary key using the nonunique index.
    *14. Is my explanation correct?*
    "in which case having a unique index on the primary key will still cause the data to be
    verified during the load."
    *15. Data to be verified against what? Is it to be verified as to whether it is NULL or NOT NULL? I*
    know a primary key does not take NULL and every value must be unique.
    After loading all the data we want, what if I did
    "alter table new_emp ADD primary key (employee_id);"
    *16. Will I face any problems or an inefficient process?*
    I do not think we need step two; we could do the following:
    1. create the table
    create table new_emp
    (employee_id number(6),
    first_name varchar2(10) );
    "You may want to disable the constraints while performing the load and then enable them"
    so I suggest loading all the data we want into new_emp.
    2. create the primary key
    alter table new_emp ADD primary key (employee_id);
    *17. The above steps are as efficient as the three steps I mentioned above. The only difference*
    is that we let the index be created implicitly. Right? If not, why?
    Page 2-23
    CREATE INDEX upper_dept_name_idx ON dept2(UPPER(department_name));
    The following statement may use the index, but without the WHERE clause the
    Oracle server may perform a full table scan:
    select *
    from employees
    where UPPER(last_name) IS NOT NULL
    ORDER BY UPPER (last_name);
    "but without the WHERE clause the Oracle server may perform a full table scan"
    *18. The above query makes the Oracle server perform a full table scan anyway! Right? It has to go*
    through every field and check whether it is null or not. I know we are using a function-based
    index, but there are a lot of non-null last_name values! So the Oracle server must scan them one by one. If
    we only had one non-null field, then I would say the Oracle server could point to that field
    immediately with the aid of the function-based index we created above. Can you clarify please...
    Another related statement that I do not get yet:
    "The Oracle server treats indexes with columns marked DESC as function-based indexes."
    *19. The above statement is so general. What if we have a column ordered in DESC order and we*
    did not create any function-based indexes; will the statement be true?!
    Let's go back to the above query:
    ORDER BY UPPER (last_name);
    *20. It's not DESC. To me, the above query does not fit with this statement: "The Oracle server treats*
    *indexes with columns marked DESC as function-based indexes."?*
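    For what it's worth, a DESC index is created like any other index (a sketch):
    CREATE INDEX emp_last_name_desc_idx ON employees (last_name DESC);
    -- Oracle stores the DESC key through an internal function, which is why
    -- such an index is treated as a function-based index even though you
    -- never wrote a function yourself.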
    Page 2-27
    Regarding FLASHBACK TABLE, you can invoke a flashback table operation on one or more
    tables, even on tables in different schemas. You specify the point in time to which you
    want to revert by providing a valid timestamp. By default, database triggers are disabled
    for all tables involved. You can override this default behavior by specifying the ENABLE
    TRIGGERS clause.
    "By default, database triggers are disabled for all tables involved. You can override this
    default behavior by specifying the ENABLE TRIGGERS clause."
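    For illustration, the FLASHBACK TABLE syntax being described looks like this (a sketch; the table name and interval are made up):
    ALTER TABLE emp2 ENABLE ROW MOVEMENT;  -- required before a table can be flashed back
    FLASHBACK TABLE emp2 TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE
      ENABLE TRIGGERS;  -- without this clause, triggers stay disabled during the operation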
    *21. What are database triggers?*
    *22. About external tables: What are external tables? When are they used in real life? Why do*
    we want external tables?
    Page 2-30
    The Oracle server provides two major access drivers for external tables: the
    ORACLE_LOADER access driver and the ORACLE_DATAPUMP access driver. ORACLE_DATAPUMP can be used to
    both import and export data using a platform-independent format.
    "platform-independent format."
    *23. What is the format? Is it .dat?*
    Page 2-35
    CREATE TABLE oldemp ( fname char(25), lname char(25) )
    ORGANIZATION EXTERNAL
    (TYPE ORACLE_LOADER
    DEFAULT DIRECTORY emp_dir
    ACCESS PARAMETERS
    (RECORDS DELIMITED BY NEWLINE
    NOBADFILE
    NOLOGFILE
    FIELDS TERMINATED BY ',' (fname POSITION (1:20) CHAR, lname POSITION (22:41) CHAR))
    LOCATION ('emp.dat') )
    PARALLEL 5
    REJECT LIMIT 200;
    *24. Can you please explain the below part:*
    ACCESS PARAMETERS
    (RECORDS DELIMITED BY NEWLINE
    NOBADFILE
    NOLOGFILE
    FIELDS TERMINATED BY ',' (fname POSITION (1:20) CHAR, lname POSITION (22:41) CHAR)
    *25. Can you please explain what PARALLEL 5 is, and why do we need it?*
    Again, any help is appreciated...
    Edited by: user11164565 on Jul 21, 2009 4:41 AM

  • Development Question Pertaining to Discoverer and Oracle Apps

    Hi All,
    I have a Disco Development question. First of all I am in an Oracle Apps environment.
    Here is my situation. I received a specification to make sure our End User Layer has the capability to create a workbook
    which has the following information in it:
    1. Onhand inventory by lot information using the GMF_PERIOD_BALANCES table.
    2. Demand pegging (from the msc_demands table) associated with this onhand inventory.
    3. Accounting cost information (GL_ITEM_CST information).
    4. They also want more detailed item master information (e.g. catalog descriptive element information and costing category).
    Here is a solution to address this:
    -Take existing inventory-by-lot views we have and use the gmf_period_balances table in lieu of the mtl_onhand_quantity_details table. Within this same
    view, add pegging information. This has been implemented in a development environment.
    -Take existing costing views (using the component_cost_details table) and replace that table with gl_item_cst.
    -Take existing view associated with item detail information and make sure all catalog descriptive elements and costing category information have been added.
    My next main dilemma for this problem is to address the best way to join these views in the Discoverer end user layer. Here are the approaches I see:
    1. Encapsulate the item detail information within the onhand/period-end-with-pegging view. Next, create a join with the costing view.
         pros: In my testing of this, the totalling works with extended item costing (item cost x onhand inventory).
         cons: Refresh times and scalability of this solution are concerning (consider a couple of people using this at the same time). Also, there is no nice
         way to join these two views (there is an n x m relationship between them). For instance, the costing calendar is used in the accounting costing
         view, and the general ledger calendar is used in the GMF_PERIOD_BALANCES table.
    2. Abandon a two or three view approach. Create an uber view which encapsulates the onhand quantity, pegging, standard costing and item detail information.
         pros: A simpler end-user experience, which should then allow the totalling to work.
         cons: Very easy for an end user to obtain a result set which is part of a cartesian product (e.g. multiple general ledger periods are included in this workbook
         with multiple costing periods). Doubtful that this workbook would be refreshable.
    3. A fan trap solution, which would be some variation of the onhand view joined to an item details folder, which is in turn joined to the pegging details.
         pros: Without too much work, I am able to get this to work, except totaling.
         cons: Totaling.
         Please let me know what your thoughts are regarding the best approach.
         Thanks,
         Patrick

    Hi,
    Where you have complex joins I would always go for a single view. It keeps Discoverer simple and allows you to concentrate your performance effort on building the view. Generally you can build in conditions to stop a cartesian product.
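    As a rough sketch of that idea (all object names here are hypothetical, not the actual Apps objects), the single view can pin the two calendars together so the end user never sees the n x m join:
    CREATE OR REPLACE VIEW onhand_cost_v AS
    SELECT b.item_id,
           b.period_code,
           b.onhand_qty,
           c.item_cost,
           b.onhand_qty * c.item_cost AS extended_cost
    FROM   period_balances_v b
    JOIN   item_cost_v c
      ON  c.item_id     = b.item_id
      AND c.period_code = b.period_code;  -- joining on the period as well stops the cartesian product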
    Rod West

  • Question regarding shared_pool sizes in Oracle 11gR2

    Hi! I administer an Oracle 11gR2 database that runs on IBM hardware, OS AIX 6.1. The LPAR has 44 GB of available memory and 8 dual-core processors assigned.
    Normally, everything runs fine and we have no problems. Today the database suddenly started rejecting connections. The issue lasted for about 14 minutes and then cleared itself. When checking the alert log, it indicated an out-of-memory problem with the shared_pool: an ORA-4031 error.
    I am utilizing the automatic memory management feature of 11g, and memory_max_target is set to 14 GB. I am going to increase memory_max_target to 18 GB as part of my solution, but will that also increase the size of the shared_pool in the SGA? Is it best to let the database manage the size of the shared_pool, or can (and should) I set a minimum size for it?
    Any help, or links to documentation or MOS notes, is greatly appreciated. Thank you!
    Mark

    Before suggesting you change the memory management in use, I would want more information, such as: how long have you been running the system this way? How long had the system been running since its last startup?
    If you have access to AWR, what does the AWR report for the problem period show? Any unusual activity? Were any new features or major processing added to the application recently?
    When you look at the memory management views, how frequently does Oracle move granules from PGA to SGA, from the buffer cache to the shared pool, and so on?
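    For example, recent resize activity can be listed from the 11g view v$memory_resize_ops (a sketch):
    SELECT component, oper_type, initial_size, final_size, end_time
    FROM   v$memory_resize_ops
    ORDER  BY end_time DESC;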
    The following support note may help with investigating the issues you face in relation to ORA-04031, provided the issue re-occurs:
    Oracle document# 1088239.1 Master Note for Diagnosing ORA-4031
    HTH -- Mark D Powell --

  • Reg Forecasting

    Hi
    I am new to DP.
    Could anybody please clarify these doubts:
    What is a univariate forecast profile, and what is multiple linear regression?
    What are the differences between univariate forecasting and multiple linear regression?
    What is forecast accuracy?
    How do you find out which model is the best fit for forecasting?
    Please explain.
    I am very thankful to you.
    Regards
    Anitha

    Hi Anitha,
    The link below will give you full information on forecasting models, forecast accuracy techniques, etc.:
    http://help.sap.com/saphelp_apo/helpdata/en/33/437a37b70cf273e10000009b38f8cf/frameset.htm
    For MLR refer the following link:
    http://help.sap.com/saphelp_apo/helpdata/en/ac/216ba4337b11d398290000e8a49608/frameset.htm
    Which is the best model to use:
    You should try running the forecast using various models (seasonal, seasonal linear, linear, constant, etc.), compare
    the forecast accuracy, and find out which one is the most suitable.
    Another way is to choose the auto model, which will select the best possible forecast strategy and give the result.
    Regards,
    Chetana

  • Basic question reg. distributed installation

    Hi everybody,
    I have a very basic question, for which I wasn't able to find a simple answer/solution.
    I am planning to set up BEA in a distributed environment. The idea is to have a physical machine for the presentation layer, meaning the web server/JSPs/servlets, in DMZ1, and a machine with the application server holding the EJBs in a different DMZ.
    This results in an architecture where the presentation layer can only be contacted by users via HTTP/HTTPS, and the logic layer communicates with the presentation layer via RMI/T3.
    Is there any documentation on such a setup? Any hints?
    Thanks in advance, I'll keep on searching the docs.
    Berthold Krumeich
