Question on Cube

Hi,
   How are you? Can anyone please let me know what a snapshot cube is?
                  Thank you.

A snapshot captures the data at one moment in time, provided you have that time characteristic in the cube.
For example, open inventory changes from day to day; if you have a date characteristic in the cube and load at the end of every day, that cube could also be called a daily snapshot cube.
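To make the idea concrete, here is a toy sketch (hypothetical Python, not actual BW code): each day's load is appended with the load date as part of the key, so history is preserved instead of overwritten.

```python
from datetime import date

# Fact rows of a "daily snapshot" cube: the snapshot date is part of the key,
# so each day's load adds new rows instead of overwriting yesterday's stock.
snapshot_cube = []

def load_daily_snapshot(snapshot_date, inventory):
    """Append today's open-inventory picture, keyed by snapshot date."""
    for material, quantity in inventory.items():
        snapshot_cube.append(
            {"calday": snapshot_date, "material": material, "quantity": quantity}
        )

# Inventory changes from day to day; every end-of-day load is one snapshot.
load_daily_snapshot(date(2024, 1, 1), {"MAT1": 100, "MAT2": 50})
load_daily_snapshot(date(2024, 1, 2), {"MAT1": 80, "MAT2": 50})

# Querying one day returns the stock as it was at that moment.
day1 = [r for r in snapshot_cube if r["calday"] == date(2024, 1, 1)]
```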
Thanks,
Kalyan

Similar Messages

  • Basic Questions on CUBE & DSO size

    Hi Experts,
    I have a few basic questions regarding Cube and DSO.
    I have a Cube and an ODS live with historical data from 2000 to date.
    1) How can I check the number of data records available in the Cube (2000 to date)?
    2) How can I check the number of data records available in the ODS (2000 to date)?
    3) When I check the data load requests in the Cube (context menu Manage), in the request tab I can see
    Request ID, Available for reporting, Transferred Records and Added Records.
    What is the difference between Transferred Records and Added Records in a request?
    Thanks in advance

    Hi,
    1 & 2. To check ODS/Cube data, go to SE11/SE16 and enter the table name: /BIC/A<ODS name> for the ODS,
    and /BI0/F<cube name> (before compression) or /BI0/E<cube name> (after compression) for the cube.
    3. Transferred records are the records loaded from the source.
        Added records are the records actually added to the cube/ODS. This depends on the transfer/update rules of the cube and ODS, and on the key fields of the ODS, so the added records may be fewer or more than the transferred records.
    Also, FYI: the program SAP_INFOCUBE_DESIGNS displays how many rows each dimension table has.
    Hope this helps.
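    To illustrate the transferred vs. added distinction, here is a toy sketch (hypothetical Python, not actual BW code) of an ODS load where records with the same key overwrite each other, so fewer records are added than transferred:

```python
# Hypothetical sketch of "Transferred" vs "Added" records for an ODS load.
# Transferred = records arriving from the source; Added = records actually
# written, which depends on the ODS key fields (same-key records overwrite).
def load_to_ods(ods, transferred_records, key_fields):
    added = 0
    for rec in transferred_records:
        key = tuple(rec[k] for k in key_fields)
        if key not in ods:
            added += 1
        ods[key] = rec  # overwrite on duplicate key
    return len(transferred_records), added

ods = {}
records = [
    {"doc": "1000", "item": "10", "amount": 50},
    {"doc": "1000", "item": "10", "amount": 75},  # same key: overwrites
    {"doc": "1000", "item": "20", "amount": 30},
]
transferred, added = load_to_ods(ods, records, key_fields=("doc", "item"))
# transferred == 3, added == 2: added can be less than transferred
```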
    Edited by: Ravi kanth on May 19, 2009 5:25 PM

  • Delta and Full Load question for cube and ODS

    Hi all,
    I need to push full load from Delta ODS.
    I have process chain.... in which the steps are like below,
    1. R/3 extractor for ODS1 (delta)
    2. ODS1 to ODS2 (delta)
    3. ODS2 to Cube ---> needs to be full load
    Now when I run the process chain with automatic further processing, ODS2 does an init/delta load into the Cube.
    How can I make this a full load?
    Can anyone guide me on this?
    Thanks,
    KS

    Hi,
    1. R/3 extractor for ODS1 (delta): This is OK; normally you can put the delta InfoPackage in the process chain.
    2. ODS1 to ODS2 (delta): It flows automatically from ODS1 to ODS2 (you need to select 'Update data automatically to data targets' at the time of ODS creation).
    3. ODS2 to Cube, needs to be full load:
    Create update rules from ODS2 to the Cube, then create an InfoPackage between ODS2 and the Cube and do full loads. You can delete the data in the Cube before the load and then do a full load to the Cube.
    Note: In ODS2, do not select 'Update data automatically to data targets'.
    Thanks
    Reddy
    Edited by: Surendra Reddy on Nov 21, 2008 1:57 PM

  • Data loading from DSO to Cube

    Hi,
    I have a question.
    In the book TBW10 I read about the data load from DSO to InfoCube:
    "We feed the change log data to the InfoCube; 10, -10, and 30 add up to the correct value of 30."
    My question is: the cube already has the value 10, so if we send the values 10, -10 and 30 (delta), shouldn't the total be 40 instead of 30?
    Can someone please explain?
    Thanks

    No, it will not be 40.
    It will be 30 only.
    Since the cube already has 10, the before image will nullify it by sending -10, and then the correct value of 30 will be added as the after image.
    So it will be 10 - 10 + 30 = 30.
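    The arithmetic can be sketched in a few lines (illustrative Python only):

```python
# Sketch of how a DSO change log delta nets out in a cube:
# the before image (-10) cancels the old value, the after image (+30)
# adds the new one.
cube_value = 10                   # value already in the cube (from the first +10)

# A delta request carries only the before/after image pair for the change;
# the original +10 is not resent, because it already produced the cube's 10.
delta_request = [-10, 30]
cube_value += sum(delta_request)  # 10 - 10 + 30 = 30, not 40
```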
    Thank-You.
    Regards,
    Vinod

  • Are Cube organized materialized view with Year to Date calculated measure eligible for Query Rewrite

    Hi,
    I would appreciate it if someone could help me with a question regarding cube organized MVs (OLAP).
    Is a cube organized materialized view with calculated measures based on time series (year to date, inception to date), e.g.
    SUM(FCT_POSITION.BASE_REALIZED_PNL) OVER (HIERARCHY DIM_CALENDAR.CALENDAR BETWEEN UNBOUNDED PRECEDING AND CURRENT MEMBER WITHIN ANCESTOR AT DIMENSION LEVEL DIM_CALENDAR."YEAR")
    eligible for query rewrite, or are these considered too advanced for query rewrite purposes?
    I was hoping to find an example with a YTD window function on physical fact/dimension tables with the optimizer rewriting it to a cube organized MV, but without much success.
    Thanks in advance

    I don't think this is possible.
    (My own reasoning.)
    Part of the reason query rewrite works for base measures only (not calculated measures in OLAP, like YTD) is that although the data is staged in OLAP, its lineage is understandable via the OLAP cube mappings. That dependency/source identification is lost when we build calculated measures in OLAP, and I think it is almost impossible for the optimizer to understand the finer points of an OLAP calculation defined via OLAP DML or an OLAP expression and also match it with the equivalent calculation using a relational SQL expression. The difficulty may be because both the OLAP YTD and the relational YTD defined via SUM() OVER (PARTITION BY ... ORDER BY ...) have many non-standard variations of the same calculation/definition. E.g., you can choose to use or not use the IGNORE NULLS option within the SQL analytic function, while the OLAP definition may use NASKIP or NASKIP2.
    I tried to search for query rewrite solutions for inventory stock based calculations (aggregation along time = last value along time) to see whether an OLAP cube with the aggregation option set to "Last non-NA hierarchical value" works as an alternative to the relational calculation. My experience has been that it is not possible. You can do it relationally or you can do it via OLAP, but your application needs to be aware of each and make the appropriate back-end SQL/call. In such cases, you cannot make OLAP (AWs/cubes/dimensions) appear magically behind the scenes to fulfill the query execution while appearing to work relationally.
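    For illustration only, here is the relational-style YTD calculation the poster refers to, sketched in plain Python (the SQL equivalent is the SUM() OVER (...) analytic form discussed above):

```python
# Relational-style year-to-date: a running sum that resets at each year
# boundary, i.e. what SUM() OVER (PARTITION BY year ORDER BY month) computes.
def year_to_date(rows):
    """rows: list of (year, month, value), assumed sorted by (year, month)."""
    ytd, running, current_year = [], 0, None
    for year, month, value in rows:
        if year != current_year:          # reset the window at each new year
            running, current_year = 0, year
        running += value
        ytd.append((year, month, running))
    return ytd

facts = [(2011, 1, 100), (2011, 2, 50), (2011, 3, -20), (2012, 1, 10)]
result = year_to_date(facts)
# [(2011, 1, 100), (2011, 2, 150), (2011, 3, 130), (2012, 1, 10)]
```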
    HTH
    Shankar

  • Cube script parameters

    Hello!
    Finally we are starting our huge migration project from 10.2.0.4 to 11.2.0.3, and we are now studying what to change in our OLAP build scripts. Originally we used DML, and now we would like to build cubes the "proper" way. My question concerns clearing of variables: is there any way to set up a parameter for the dimension status of the clear step in a cube script?
    Let's say I would like to define, before execution, the day I would like to clear from the sales cube. How should I accomplish this without DML programming?
    Thanks in advance!
    Regards,
    Kirill

    Kirill,
    Since you are moving from 10g OLAP to 11g OLAP, here are a few points to keep in mind (for you and for others on this forum):
    (1). Convert AW's calc measures logic from olap dml to new OLAP EXPRESSION syntax. http://docs.oracle.com/cd/E11882_01/olap.112/e23381/toc.htm
    (2). If there is any olap dml program to populate an attribute, move that logic to relational side (i.e., in your dimension source view)
    (3). Try to understand, how XML can be used to manage the AW and its objects. There are lot of posts by David Greenfield on this forum.
    (4). Its very important to understand DBMS_CUBE package. Try to understand all its details, since you will use this a lot. http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_cube.htm
    (5). It is also important to use as little OLAP DML as possible, even for calc measures. Use this forum for any help you need with that. There should be only a few cases now where OLAP DML is necessary. Use the standard cube-aware OLAP DML statements: http://docs.oracle.com/cd/E11882_01/olap.112/e17122/dml_basics.htm#BABFDBDJ
    (6). Logging is much more extensive now, so any code that relied on olapsys.XML_LOAD_LOG needs to be changed as well.
    http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_cube_log.htm#ARPLS72789
    (7). Use compressed cubes now. See David's post about how much to precompute: "Question on cube build / query performance (11.2.0.3)".
    (8). If you have a RAC environment, look at the new functionality that lets you pin dbms_cube.build parallel jobs to a specific node on RAC.
    (9). For any write-back to cube, see if you can use this tip: http://oracleolap.blogspot.com/2010/10/cell-level-write-back-via-plsql.html
    (10). If you had any custom work done to improve looping during queries, then keep in mind that it is provided out-of-the-box now by properties like: $LOOP_VAR and $LOOP_DENSE
    http://docs.oracle.com/cd/E11882_01/olap.112/e17122/dml_properties013.htm
    http://docs.oracle.com/cd/E11882_01/olap.112/e17122/dml_properties014.htm
    (11). Better features for handling security through AWM. http://docs.oracle.com/cd/E11882_01/olap.112/e17123/security.htm
    (12). Cube materialized view features are also available. You may or may not need them.
    http://docs.oracle.com/cd/E11882_01/olap.112/e17123/admin.htm#CHDBCEGB
    (13). OLAP_TABLE function is still fully supported. OBIEE 11.1.1.5 uses OLAP_TABLE to generate queries. So if you like you can continue using OLAP_TABLE, or you can look at the new CUBE_TABLE function: http://docs.oracle.com/cd/E11882_01/server.112/e17118/functions042.htm
    And finally... go through all of David Greenfield's posts. You will find a lot of good ideas on how to do things differently in 11g OLAP.

  • When set attributehierarchyVisible to false, do I have to reprocess cube with full?

    Is there any other way to do that? It takes quite a bit of time to reprocess the cube in full mode.
    Thanks
    thanks
    --Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --

    Hi cat_ca ,
    Adding something useful to the question 'Does the cube need a ProcessFull?':
    Type  | Action                               | Unprocess cube/dimension?
    ------+--------------------------------------+----------------------------------
    Cube  | New measure group                    | Yes
    Cube  | New measure                          | Yes
    Cube  | Edit measure aggregation method      | Yes
    Cube  | Measure format                       | No
    Cube  | Measure name                         | No
    Cube  | Measure display folder               | No
    Cube  | ErrorConfiguration edition           | No
    Cube  | Edit dimension usage                 | Yes
    Cube  | Calculations                         | No
    Cube  | Add, edit or delete KPI              | No
    Cube  | Add, edit or delete action           | No
    Cube  | Edit partition query                 | No, not applied till next process
    Cube  | Add new partition                    | No, new partition unprocessed
    Cube  | Edit partition storage mode          | No, but data is empty
    Cube  | Create, edit and assign aggregations | No, not applied till next process
    Cube  | Add, edit or delete perspective      | No
    Cube  | Add, edit or delete translation      | No
    Dim   | Add attribute to dim                 | Yes
    Dim   | Edit attribute name                  | No
    Dim   | OrderBy property of attribute        | Yes
    Dim   | AttributeHierarchyVisible property   | No
    Dim   | Edit attribute relationship          | Yes
    Dim   | Add or delete translation to dim     | Yes
    Dim   | Edit translation                     | No
    Other | Add, edit, delete role               | No
    Other | Edit data source                     | No, not applied till next process
    Other | Edit DSV                             | No
    Regards, David .

  • Urgent Need of Doc

    Hello!!!!
    Please provide basic frequently asked interview questions on CUBE, CUSP and CUCM.
    I need them urgently so that I can clear my interview.
    I need those docs today, as my interview is tomorrow.
    BR
    Akriti

    Hi,
    I have been searching for ZAP version 1.5 application everywhere. I stumbled across this post & was wondering if you happened to have the De-Zap application version 1.5 or higher. I managed to find online version 1.0.1 but it gives me a header error. A friend of mine just recently passed away & he used to use this program to compress all his audio sessions. I'm trying to re-release his works to help raise money for his children. Any help with this would be greatly appreciated.
    Thank You,
    Chris
    [email protected]

  • ZXRSRU01 (RSR00001 BI: Enhancements for Global Variables in Reporting)

    Hi,
    we are using the user exit RSR00001 to determine the part providers of a multiprovider via variables and a custom table, because our multiproviders contain more than 60 part providers.
    i_step = 1
    SELECT * FROM zvar_cube_userex INTO wa_zvar_cube_userex
      WHERE bex_variable EQ i_vnam.
      IF sy-subrc EQ 0.
        " Return one InfoProvider per matching custom-table entry
        l_s_range-low  = wa_zvar_cube_userex-infoprovider.
        l_s_range-sign = 'I'.
        l_s_range-opt  = 'EQ'.
        APPEND l_s_range TO e_t_range.
      ENDIF.
    ENDSELECT.
    At the end of i_step = 1, in I_T_VAR_RANGE we get the following entries:
    ZINP_CP2     0infoprov   I  EQ  ZCPCRAG
    ZINP_CP2     0infoprov   I  EQ  ZCPCRAG01
    ZINP_CP2     0infoprov   I  EQ  ZCPCRAG02
    ZINP_CP2     0infoprov   I  EQ  ZCPCRAG03
    Now our question:
    Cubes ZCPCRAG, ZCPCRAG01 and so on are yearly cubes, divided by 0fiscalper. In addition to our 0infoprov determination (depending on the query), we are now thinking about using the SAP standard possibility of "logical multiprovider partitioning" via table RRKMULTIPROVIDERHINT.
    - Is that possible?
    - Has anybody used it in the same way?
    Harald

    Thanks Rakesh & Sangeetha.
    In my scenario,
    I created one report, and on the report I created a customer exit variable
    for calday.
    When I execute the report in the 3.x Analyzer I get the breakpoint screen (it takes me to CMOD).
    If I execute the same report in BI 7, it does not stop at the breakpoint
    (without asking for the variable values, it directly executes the report).
    Why is this?

  • Regarding query of infocube additive property

    Hi,
    We have a scenario in our reports: if OLTP retrieves 35 records, then the BW (OLAP) side has to show 35 records.
    My question is about the cube's additive property: if all characteristics are the same but the key figure values differ, the cube adds up the
    information.
    So how can we avoid this summation and load the values into the cube as distinct records?
    Can anyone please let me know, as this is a high-priority ticket at my project.
                                                 Thank you.

    Hi,
    In your special case, you can do so by including a characteristic in your cube (as a line item dimension) whose value gets incremented by 1 for each record updated (use an ABAP routine for this).
    In such a case you will have a unique key combination (based on DIM IDs) in your fact table, and even when all other fields are the same, aggregation will not take place. You will see unique records!
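    As a toy illustration (hypothetical Python, not the actual ABAP routine), this shows why identical characteristic combinations aggregate, and how a unique counter keeps the records separate:

```python
# Sketch of why a fact table aggregates records with identical
# characteristics, and how a unique counter characteristic prevents it.
from collections import defaultdict

def load(records, char_fields):
    """Cube-style load: records with identical characteristics are summed."""
    fact = defaultdict(float)
    for rec in records:
        key = tuple(rec[f] for f in char_fields)
        fact[key] += rec["amount"]
    return fact

recs = [{"customer": "C1", "amount": 10.0}, {"customer": "C1", "amount": 20.0}]

summed = load(recs, ("customer",))          # 1 row: amounts merged to 30
for i, rec in enumerate(recs):              # counter char, as the routine would set
    rec["counter"] = i + 1
kept = load(recs, ("customer", "counter"))  # 2 rows: records stay separate
```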
    -VA
    Edited by: Vishwa  Anand on Sep 27, 2010 2:08 PM
    Edited by: Vishwa  Anand on Sep 27, 2010 2:09 PM

  • With regaards

    Hi Experts
    When I load data into the planning area from the InfoCube, the data is copied into the planning area, but the system shows the following log:
    "380 combinations of the cube not contained in the MPOS."
    I checked the CVCs in the MPOS, but no CVCs are generated for the combinations displayed in the log.
    I regenerated CVCs for the MPOS and tried to load key figure data into the planning area again, but the system shows the same warning in the log.
    Please advise me.

    Hi Venu,
    If I understand your question correctly:
    the cube has the missing combinations, but you are not able to generate those combinations.
    Please ensure that there are no null values for the characteristics in the InfoCube.
    The system won't generate CVCs if there are null values present for the combinations in the InfoCube.
    Please check it.
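    A quick sketch of the suggested check (illustrative Python; the field names are made up):

```python
# Sketch of the null-value check suggested above: combinations in the cube
# that contain an empty/initial characteristic value will not generate CVCs.
cube_combinations = [
    {"product": "P1", "location": "L1"},
    {"product": "P2", "location": ""},     # initial value: no CVC generated
    {"product": "",   "location": "L2"},   # initial value: no CVC generated
]

def combinations_blocking_cvc(combos):
    """Return combinations with a null/initial characteristic value."""
    return [c for c in combos if any(v in ("", None) for v in c.values())]

bad = combinations_blocking_cvc(cube_combinations)  # the 2 offending rows
```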
    regards
    kanth

  • 0FI-CA module, cube 0FC_C08 & 0FC_C09 dataflow? question abt filling tables

    Hello all,
    We are currently working on BI7.0 and ERP6.0. I am trying to understand the data flow for these two cubes 0FC_C08 and 0FC_C09.
    When I look at the data flow in BW, I see there is an InfoSource 0FC_BP_ITEMS, which feeds DSO 0FC_DS05, which feeds DSOs 0FC_DS06 (Open Items) and 0FC_DS07 (Cleared Items), which in turn feed 0FC_C08 (Open Items) and 0FC_C09 (Cleared Items).
    0FC_BP_ITEMS -> 0FC_DS05 -> 0FC_DS06 and 0FC_DS07 -> 0FC_C08 and 0FC_C09...
    Now what I am looking for is: which targets do these two DataSources feed?
    0FC_CI_01    FICA Cleared Items for Interval
    0FC_OP_01  FI-CA Open Items at Key Date
    I also have another question:
    1. Run tcode FPBW to fill table DFKKOPBW; then you will see data in RSA6 for DataSource 0FC_OP_01.
    2. Run tcode FPCIBW to fill table DFKKCIBW; then you will see data in RSA6 for DataSource 0FC_CI_01.
    My question: do we have to do this on a periodic basis, or every time before we run the InfoPackage to load data into BW? And what key dates or date intervals can we use for those two tcodes?
    Please anyone who has worked on it can give some ideas
    Thanks all for your help in advance.
    Kiran
    Edited by: Kiran Mehendale on May 16, 2008 4:40 PM

    0FC_CI_01 FICA Cleared Items for Interval
    --This DataSource feeds InfoCube 0PSCD_C01
    and
    0FC_OP_01 FI-CA Open Items at Key Date
    --This DataSource feeds InfoCube 0PSCD_C02
    http://help.sap.com/saphelp_nw70/helpdata/EN/41/3d30113dfe6410e10000000a114b54/frameset.htm
    From this link you will be able to check out the data sources as well as any information on the infocubes.
    hope this helps.
    Matt

  • Questions regarding aggregates on cubes

    Can someone please answer the following questions.
    1. How do I check whether someone is rebuilding aggregates on a cube?
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually/time consumption) between:
                            A. activating an aggregate?
                            B. switching an aggregate off/on?
                            C. rebuilding an aggregate?
    4. When a user complains that a query is running slow, do we build an aggregate based on the chars in rows & free chars in that query, or is there anything else we need to include?
    5. Do the database statistics in the 'MANAGE' tab of a cube only show statistics, or do they do anything to improve the load/query performance on the cube?
    Regards,
    Srinivas.

    1. How do I check whether someone is rebuilding aggregates on a cube?
    If the aggregate status is red and the aggregate is being filled, it is an initial fill of the aggregate; filling up means loading the data from the cube into the aggregate in full.
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    Rebuilding an aggregate means reloading the data into the aggregate from the cube once again.
    3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually/time consumption) between:
    A. activating an aggregate?
    This means recreating the data structures for the aggregate, i.e. dropping the data and reloading it.
    B. switching an aggregate off/on?
    Switching off an aggregate means that it will not be used by the OLAP processor, but the aggregate still gets rolled up. Rollup means loading changed data from the cube into the aggregate; this is done based on the requests that have not yet been rolled up into the aggregate.
    C. rebuilding an aggregate?
    Reloading data into the aggregate.
    4. When a user complains that a query is running slow, do we build an aggregate based on the chars in rows & free chars in that query, or is there anything else we need to include?
    Run the query in RSRT, take the SQL view of the query, check the characteristics used in the query, and include the same in your aggregate.
    5. Do the database statistics in the 'MANAGE' tab of a cube only show statistics, or do they do anything to improve the load/query performance on the cube?
    Updated statistics improve execution plans on the database. Keeping statistics up to date leads to better execution plans and hence possibly better performance, but it cannot be taken for granted that refreshing statistics will improve query performance.
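    For intuition, an aggregate and its rollup can be sketched as a precomputed group-by (hypothetical Python, not how BW actually implements it):

```python
# Toy sketch of an aggregate: a pre-summarized copy of the cube on fewer
# characteristics. Initial fill reads all requests; rollup adds only the
# requests not yet rolled up.
from collections import defaultdict

def build_aggregate(requests, chars):
    """Initial fill: summarize every request on the aggregate's chars."""
    agg = defaultdict(float)
    for request in requests:
        for rec in request:
            agg[tuple(rec[c] for c in chars)] += rec["amount"]
    return agg

def rollup(agg, new_request, chars):
    """Rollup: add only the newly loaded request to the aggregate."""
    for rec in new_request:
        agg[tuple(rec[c] for c in chars)] += rec["amount"]

req1 = [{"region": "EU", "material": "M1", "amount": 10.0}]
req2 = [{"region": "EU", "material": "M2", "amount": 5.0}]
agg = build_aggregate([req1], chars=("region",))  # initial fill
rollup(agg, req2, chars=("region",))              # roll up the new request
```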

  • Memory Question for G4 Cube

    In accordance with recommendations on a previous question here, I have ordered OS 10.4.6 from Apple to install on my Cube. The Apple rep said it would work with the 384 MB of RAM I now have in the computer. I think I will add more memory, though, and the Apple rep advised I visit <www.crucial.com> to get it, as Apple no longer stocks memory for the Cube. The spec for memory in the Cube manual says to use "PC-100" synchronous DRAM (SDRAM). The crucial.com site recommended a 512 MB module that says it is "PC-133" and implies it will work, guaranteed. Will it?
    It will be 2 weeks at least before Apple will ship the OS 10.4.6 package as it is on backorder. I know I will have some questions on installation when I get it.
    JTB

    PC133 RAM is usually less expensive than PC100. The cube has 3 slots, and the max you can install is 3 x 512 megs. If all three slots are already filled, you will have to remove one of the modules and replace it with the 512 you plan to buy. I would personally stick with PC100, particularly if the modules that will remain are PC100.
    I usually buy my RAM from Ramdirect. Their prices are competitive, and they have always stood behind what they sell - there is a lifetime guarantee. I will recommend them without reservation.
    http://www.ramdirect.com/vcom/index.php?cPath=1723
    (I have no relationship with the company, other as a satisfied customer.)
    Installing the RAM is dead easy, but you might want to download a service manual.
    http://cubeowner.com/manuals/cubesvc_man02.pdf

  • Inventory Cube Loading - Questions....

    This is the process I intend to follow to load InfoCube 0IC_C03 and the questions therein. I have the "how to handle inventory scenarios" document, so please don't suggest I read that.
    1A) Delete Set-up tables for application 03
    1B) Lock Users
    1C) Flush LBWQ and delete entries in RSA7 related to application 03 (optional - only if needed)
    2A) Fill set up tables for 2LIS_03_BX. Do Full load into BW Inventory Cube with "generate initial status" in InfoPackage.
    2B) Compress request with a marker update
    3A) Fill set up table for 2LIS_03_BF with posting date 01/01/2006 to 12/31/2007 (Historical) and load to Inventory Cube with a full update
          QUESTION1: Does this need a marker update?
          QUESTION2: Is this a good strategy - do movement documents that old get updated once posted?
          QUESTION3: Does the posting date restriction depend on how far back in history we want to go and look at stock values?
    3B) Fill set up table for 2LIS_03_BF with posting date 01/01/2008 to 9/9/9999 (Current) and load to Inventory Cube with a delta init with data transfer.
    3C) Compress load in 3B without a marker update
    4A) Fill set up table for 2LIS_03_UM  and load to Inventory Cube with a delta init
    4B) Compress load in 4A without a marker update
          QUESTION4: How should we select the posting date criteria? Do I need to load from 01/01/2006 to 9/9/9999 since that's the range used for BF?
    5) Start V3 update jobs via Job Control
    6) Initiate subsequent delta loads from BF and UM and compress with marker update
    QUESTION 5: Is the sequence of loading BX first, then BF and UM fixed? or can I do BF and then BX, UM
    QUESTION 6: Any tips on minimizing downtime in this particular scenario? Please don't suggest generic steps. If you can suggest something specific to this situation, that'd be great.
    I hope you can help with the 6 questions I asked above.
    Regards,
    Anita S.

    Hi Anita,
    Please find my answers below. I have worked enough with this scenario and hence feel that these would be worth considering for your scenario.
    3A) Fill set up table for 2LIS_03_BF with posting date 01/01/2006 to 12/31/2007 (Historical) and load to Inventory Cube with a full update
    QUESTION1: Does this need a marker update?
    In this step we don't need a marker update while compressing.
    QUESTION2: Is this a good strategy - do movement documents that old get updated once posted?
    I am not able to follow this question clearly.
    QUESTION3: Does the posting date restriction depend on how far back in history we want to go and look at stock values?
    Yes. We need to start from the latest data and then go as far back as we want to see stock values. This holds true when we are using non-cumulative key figures.
    4B) Compress load in 4A without a marker update
    QUESTION4: How should we select the posting date criteria? Do I need to load from 01/01/2006 to 9/9/9999 since that's the range used for BF?
    No need to provide any selection criteria for UM while loading to BW, as this would fetch the same data filled in the setup of revaluations. Unless you are looking for only part of the history, you can fill it for the company code list and bring the whole data to BW with a single full load, as the data won't be as huge as for BF.
    6) Initiate subsequent delta loads from BF and UM and compress with marker update
    QUESTION 5: Is the sequence of loading BX first, then BF and UM fixed? or can I do BF and then BX, UM
    This is fixed in terms of compression with marker updates.
    QUESTION 6: Any tips on minimizing downtime in this particular scenario? Please don't suggest generic steps. If you can suggest something specific to this situation, that'd be great.
    Yes. Most of the time-consuming activity is filling the BF history; it is negligible for BX and UM comparatively.
    Either run multiple selective BF setup fills based on posting date, depending on the available background processes, or fill the open months' setup during downtime and the rest of the history once you unlock the system for postings. The reasoning is that we don't expect any postings for history periods.
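    For intuition only, the marker logic discussed above can be sketched as a toy (hypothetical Python; deliberately simplified, not the actual non-cumulative implementation):

```python
# Toy sketch of the marker logic: BX supplies the current stock "marker";
# historical BF movements compressed WITHOUT marker update are already
# contained in that stock, while new deltas compressed WITH marker update
# move the marker forward.
marker = 0.0  # current stock reference point in the cube

def compress_bx(opening_stock):
    """2LIS_03_BX load compressed WITH marker update: sets the marker."""
    global marker
    marker = opening_stock

def compress_bf(movements, marker_update):
    """BF requests: history skips the marker (its effect is already in the
    BX stock snapshot); new deltas update it."""
    global marker
    if marker_update:
        marker += sum(movements)

compress_bx(100.0)                              # current stock snapshot
compress_bf([-20.0, 5.0], marker_update=False)  # history: no double count
compress_bf([10.0], marker_update=True)         # new delta: marker moves on
```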
    Feel free to ask any further questions about this.
    Naveen.A
