Cube Performance and Data Explosion

Hi Experts,
One of our partners developed a data warehouse application, and the DW application has a performance issue:
when a report queries the high-level dimensions, performance is fine, but when a query drills down to very detailed data in the cube, performance degrades badly.
Both the aggregations and the detail data are stored in the cube, and the cube size explodes quite quickly, since some detailed transaction data also needs to be queried and therefore stored in the cube.
So, experts, do you have any good suggestions on this issue, or is there perhaps a better design for the cube? For example, in the DW the cube could store only aggregations or summaries of coarse-grained data and measures, while fine-grained data is retrieved from the ODS.
Another question: I googled architecture solutions for the above issue, and someone said that if the DW is designed as a hypercube there may be a data explosion problem, and that a multicube should be used instead. So I wonder whether a multicube can solve the data explosion issue, and how; and whether a multicube performs better than a hypercube and can also handle detail-data queries.
Last question: do you have any experience with DW implementations on TB-scale data, and any good suggestions for an architecture design using Oracle OLAP or Essbase that performs well?
Thanks,
Royal.
Edited by: Royal on Nov 4, 2012, 4:01 AM

You have not asked a specific technical question, but in my opinion all Oracle data warehouses should use the Oracle OLAP option for their aggregation strategy. Significant improvements have been made in 11.2.0.2 (and later versions), and it is now much easier to create and maintain dimensions and cubes. On the reporting side, OBIEE 11g now understands OLAP metadata, and other reporting tools can use the CUBE_TABLE views.
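As a small illustration of the CUBE_TABLE approach mentioned above, here is a minimal sketch (my own example, not from this thread) of reading an 11g cube from a Python script via the cx_Oracle driver; the connect string, schema name (SALES_DW) and cube name (SALES_CUBE) are placeholders.

import cx_Oracle

# Placeholder credentials and cube name -- substitute your own environment.
conn = cx_Oracle.connect("dw_user/secret@dbhost:1521/orcl")
cur = conn.cursor()

# CUBE_TABLE exposes the cube (measures plus dimension columns) as a relational
# row set, so ordinary SQL-based reporting tools can read the pre-aggregated data.
cur.execute("""
    SELECT *
      FROM TABLE(CUBE_TABLE('SALES_DW.SALES_CUBE'))
     WHERE ROWNUM <= 10
""")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()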
Here are some links that you may find useful.
Comparing MVs and OLAP... Oracle White paper
http://www.oracle.com/technetwork/database/bi-datawarehousing/comparison-aw-mv-11g-twp-130903.pdf
Oracle OLAP Support page
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1107593.1
Three demos by OLAP Development that explain how OLAP can help in a DW.
http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_1/OLAP_Features_and_Use_Cases_1.html
http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_2/OLAP_Features_and_Use_Cases_2.html
http://download.oracle.com/otndocs/products/warehouse/olap/videos/intro_part_3/OLAP_Features_and_Use_Cases_3.html
Main OLAP page at Oracle OTN site
http://www.oracle.com/technetwork/database/options/olap/index.html
Recommended Releases for Oracle OLAP
http://www.oracle.com/technetwork/database/options/olap/olap-certification-092987.html
Accelerating Data Warehouses using OLAP option
http://www.oracle.com/technetwork/issue-archive/2008/08-may/o38olap-085800.html
What's new in 11.2.0.2 database OLAP option
http://docs.oracle.com/cd/E11882_01/olap.112/e17123/whatsnew.htm
Oracle 11.2 OLAP Documentation (scroll down to OLAP section)
http://www.oracle.com/pls/db112/portal.portal_db?selected=6&frame=#online_analytical_processing_%28olap%29
Excel reporting from OLAP using Simba tool. This was developed in partnership with Oracle.
http://www.simba.com/MDX-Provider-for-Oracle-OLAP.htm
There is a good demo of the Simba Excel tool at:
http://www.simba.com/demos/MDX-Provider-for-Oracle-OLAP-web-demo.html

Similar Messages

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the relevant transaction codes.
    What data loading performance issues do we need to take care of? Please explain and let me know the relevant transaction codes.
    REGARDS
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench (see the selection-splitting sketch after this list).
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage, with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) on the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, which reduces extraction time. If your selection fields are not key fields on the table, the primary index is not much help when accessing data. In this case it is better to create secondary indexes on the selection fields of the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
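    As a rough, non-SAP illustration of the idea behind tip 8 above (splitting one large selection into several smaller, non-overlapping ranges so that separate InfoPackages can load them in parallel), here is a small Python sketch; the period format and the number of packages are assumptions for the example.

    # Split a fiscal year/period range into roughly equal selection ranges,
    # one per InfoPackage. Illustrative only -- not SAP code.
    def fiscal_periods(start_year, start_period, end_year, end_period, periods_per_year=12):
        """Yield (year, period) tuples from start to end, inclusive."""
        year, period = start_year, start_period
        while (year, period) <= (end_year, end_period):
            yield year, period
            period += 1
            if period > periods_per_year:
                year, period = year + 1, 1

    def split_selections(periods, n_packages):
        """Split the period list into n_packages roughly equal chunks."""
        chunk = -(-len(periods) // n_packages)          # ceiling division
        return [periods[i:i + chunk] for i in range(0, len(periods), chunk)]

    periods = list(fiscal_periods(2008, 1, 2009, 5))
    for i, sel in enumerate(split_selections(periods, 4), start=1):
        lo, hi = sel[0], sel[-1]
        print(f"InfoPackage {i}: {lo[1]:03d}.{lo[0]} - {hi[1]:03d}.{hi[0]}")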
    Hope it Helps
    Chetan
    @CP..

  • Cube Partition and Date

    Hi,
    I have a cube which has data from 2000 until now. The data from 2000-2007 is quite small,
    but from 2008 until May 2009 we have a huge amount of data.
    So we decided to partition the cube on Fiscal Year/Period with 19 partitions (12 for 2008, 5 for January to May 2009, 1 for 2007 and earlier, and 1 for June 2009 onwards).
    Now my question is: how should we specify the data range?
    Can anyone please tell me?
    Regards

    Hi AS,
    I suggest you partition up to October or December 2009, so that you won't have to repartition again immediately (this reduces administrative effort).
    For your partitioning request:
    Fiscal year/period - 001/2008 to 005/2009.
    Maximum number of partitions - 19.
    Check the Features subtopic in this link for further details:
    [Partitioning example|http://help.sap.com/saphelp_nw70/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm]
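    A quick worked calculation (my own illustration, assuming the usual BW behaviour of adding one catch-all partition below and one above the chosen range) showing where the 19 comes from:

    # 001/2008 .. 012/2008  -> 12 period partitions
    # 001/2009 .. 005/2009  ->  5 period partitions
    # below 001/2008 and above 005/2009 -> 2 catch-all partitions
    in_range = 12 + 5           # 17 partitions for the periods in the range
    boundary = 2                # the two out-of-range partitions
    print(in_range + boundary)  # 19 -> the maximum number of partitions to enter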
    Hope it helps,
    Best regards,
    Sunmit.

  • Performance and data warehouse

    Hello! 
    How does enabling the data warehouse affect the performance of the SQL Server, and what recommendations exist about it?
    Thank you!

    Are you referring to the Management Data Warehouse that comes with the Data Collector?
    In that case it does not affect the server significantly. If you are collecting data from many different servers, I would recommend placing the management data warehouse database on a separate disk and being careful with the retention period.
    Regards
    Rasmus Glibstrup
    http://blog.sqlguy.dk

  • Problems with Photoshop performance and data transfer speed on iMac

    Two months ago, I started noticing slow performance using Photoshop (above all when using the clone stamp tool) on my 27" iMac (late 2012). I ran the AHT and found that 8 GB of the 32 GB of RAM were broken.
    I removed them, but the problem didn't disappear. I also noticed that data transfer speed (both copy and paste from/to the internal HD and from a CF card/external HD) was really slow.
    I tried many solutions suggested by Apple support; none of them worked out. In the end, I tried uninstalling and re-installing Photoshop: no more problems!
    10 days ago, I received a new 8 GB RAM module and installed it... suddenly, the problem came back. I tried re-installing Photoshop again, but this time the problem still persists!
    Has anyone had the same experience? All other CC programs work well (LR, AE, Premiere...).

    Yes, it does!
    What seems very strange to me is how the data transfer speed could be affected!
    (Just to say, I've already tried resetting the SMC and PRAM, I've tried with different accounts, and I've also re-installed the OS; the next step would be formatting the disk and installing the OS from scratch.)

  • Report performance and data quality

    Hi,
    Can someone help give explanations to following questions :
    1.) Does the BW report show how current my data is?
    2.) What are the reasons why the performance of my BW report is slow?
    3.) What are the reasons why my BW report has missing data?
    4.) Why does my BW report have incorrect data?
    5.) Why doesn't my BW report data match SAP R/3 data?
    Thanks,
    Milind
    Please do not raise generic questions across multiple forums
    Edited by: Arun Varadarajan on Apr 9, 2010 2:08 AM

    Milind,
    1.) Does the BW report show how current my data is?
    You should be able to see the data currency when you run it on the web - which method are you using, BEx or Web?
    2.) What are the reasons why the performance of my BW report is slow?
    It could be due to anything - please search the forums on how to identify possible performance bottlenecks.
    3.) What are the reasons why my BW report has missing data?
    It depends - missing data loads, etc.
    4.) Why does my BW report have incorrect data?
    You should know that...? I can just say that it has incorrect data because "the sun rises in the east..."!!! This is more akin to asking "Why did the chicken cross the road?"
    5.) Why doesn't my BW report data match SAP R/3 data?
    You should ask SAP that question...
    Honestly, I am not sure what the reason behind such generic questions is. If you are looking for answers, you need to be more specific; if these are more like interview questions asked of you, I guess you should be able to answer them or ask further questions to clarify them.

  • Hard drive performance and data throughput

    I am using my MacBook Pro primarily for work, and part of that entails creating/restoring images of other Macs. I've had the best luck with SuperDuper; however, the process is still VERY slow. For instance, at this moment, with no other applications open other than SuperDuper and Firefox, the copy speed is under 5 MB/s from my MBP to an iMac via FireWire.
    I am looking for suggestions to increase the performance/IO in the hope of speeding up the process. When purchasing this system, the 7200 rpm drive was not an option (15"), which is unfortunate. I realize that both hard drives in the operation contribute to the variation in speed, but I want the sending drive to be as fast as possible.
    My thought right now is to purchase a 7200 rpm external drive to store backup images on and also send from. This would cut out any IO that my Mac's internal drive performs to run the operating system. Another thought was to upgrade my Mac to a 7200 rpm drive and use the current 5400 rpm drive as storage for the images, in the hope that it would still provide an increase in restoration speed since it wouldn't be running OS X.
    Any thoughts or ideas? Experiences? My MBP has the 5400 rpm drive, I believe, and 2 GB of RAM.
    Thanks

    I'll try and explain a bit better. I'm not restoring the same image to different types of Macs. I create images of OTHER Macs, using my MacBook Pro to perform the process as well as store the backup image.
    Thanks for the clarification. I do that too, but when I do, I use my Mac Pro to clone a Mac via Target Disk Mode to an external FireWire 800 drive.
    It helps, but it's a USB 2 enclosure with a somewhat older hard drive that is only 30 GB. I am looking at purchasing a FireWire 800 external drive, but I will see how this other unit works for now, since we already have it.
    Part of your throughput problem may be the overhead issues with USB 2. FireWire uses its own chipset, so it is more independent of the CPU and can sustain high-speed transfers at a higher rate. USB is CPU-bound and is more vulnerable to CPU demands from other apps, background processes, or other USB devices. So even though USB 2 has a higher theoretical peak (480 Mbps), FireWire (400 Mbps) actually does better in the real world.
    About USB 2 vs. FireWire 400 performance
    I'm not sure if FireWire 800 would help, because the slowest drive in the chain may not be fast enough to take advantage of it.
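    For what it's worth, a back-of-the-envelope calculation (my own numbers, using the figures quoted above) shows how far the observed speed is below either bus's ceiling, which supports the point that the drives, not the bus, are the bottleneck:

    fw400_mbps = 400                   # FireWire 400 theoretical peak, megabits/s
    usb2_mbps = 480                    # USB 2.0 theoretical peak, megabits/s
    observed_mb_per_s = 5              # copy speed reported in the original post

    print(fw400_mbps / 8)              # 50.0 MB/s theoretical for FireWire 400
    print(usb2_mbps / 8)               # 60.0 MB/s theoretical for USB 2.0
    print(observed_mb_per_s / (fw400_mbps / 8))   # ~0.1 -> only ~10% of the FW400 ceiling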

  • Exchange performance and Data Execution Prevention

    Hi
    Is there any impact on Exchange server performance related to Data Execution Prevention (DEP)? Should I exclude Exchange processes from DEP?
    I searched for any blog/forum post that would discuss this subject but had no luck finding any.
    Regards,

    Hi Grzegorz,
    Data Execution Prevention is a Windows feature, not an Exchange server feature.
    DEP is enabled on computers that are running Microsoft Windows Server by default.
    In our lab, we often turn it on. Sometimes it can result in messaging performance being slower than expected. Besides, there are no official articles or blogs that explain how Data Execution Prevention affects Exchange performance.
    Hope my clarification is helpful.
    Best regards,
    Amy
    Amy Wang
    TechNet Community Support

  • Performance and data types: which to use?

    Hi All,
    I am wondering what data type to use and the effect of them on memory/speed.
    1. What is the difference (if any) between using SGL, DBL, INT, etc.? Looking at the LabVIEW help, there seems to be a range of 8-256 bits of storage according to the data type. Is it basically a matter of choosing the one with the smallest storage that can fit the data?
    2. I currently have a cluster flowing through subVIs. The cluster contains the start time (or frequency), the delta t (or delta f), and the array of data (about 500-5000 elements). I tried to use the waveform data type, but it couldn't handle a delta t of 2 nanoseconds (a 500 MHz signal). Am I OK using the cluster, or should I separate the components and pass them along? What data type should I use for each of the components?
    Thanks

    There are three main issues to consider.
    Range and accuracy. If you need a very high level of accuracy, then you will need to use the extended data type or even create your own, although that's unlikely.
    Memory. Yes, SGL takes less than DBL, but unless you're dealing with really huge amounts of data this won't matter.
    Coercion. Most built in functions work on DBL. If you wire a SGL into them, they will coerce it, possibly creating a copy of the data and increasing your memory usage.
    To sum it up, most of the time it would be best to use the default DBL. It's highly unlikely you'll need one of the others.
    As for your second question, it sounds to me like the data is a single organism, so I would say you should leave it in the cluster, but that really depends on whether the functions need it or not and whether you're constantly bundling and unbundling the cluster. Note that 5000 elements is far from being a large array and you shouldn't have any problems handling it.
    As for the timing unit, if you really only have 5000 elements (that's 10 microseconds of data?) then you should not have a problem with using a U32 with a nanosecond as the base unit. That should give you the ability to measure more than 4 seconds.
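    To put rough numbers on the memory and range arguments above, here is a small sketch using NumPy types as stand-ins for the LabVIEW SGL/DBL/U32 types (the 5000-element array size comes from the question):

    import numpy as np

    n = 5000
    sgl = np.zeros(n, dtype=np.float32)   # SGL: 4 bytes per element
    dbl = np.zeros(n, dtype=np.float64)   # DBL: 8 bytes per element
    print(sgl.nbytes, dbl.nbytes)         # 20000 vs 40000 bytes -- negligible either way

    # A U32 tick count with a 1 ns base unit wraps after 2**32 ns, about 4.29 s,
    # far more than the ~10 microseconds of data in the question (5000 x 2 ns).
    print((2**32) * 1e-9)                 # ~4.295 seconds
    print(5000 * 2e-9)                    # 1e-05 s = 10 microseconds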
    Try to take over the world!

  • How to tune the performance of a cube with multiple date dimensions?

    Hi, 
    I have a cube with a measure. Now, for a turn-time report, I take the difference between two dates and compute the average, max, and min of that date difference. The graph takes a long time to load. I am using Telerik report controls.
    Is there any way to tune the cube's performance with multiple date dimensions? What are the key rules and best practices for a cube to perform well?
    Thanks, 
    Amit

    Hi amit2015,
    According to your description, you want to improve the performance of an SSAS cube with multiple date dimensions, right?
    In Analysis Services, there are many tips for improving the performance of a cube. In this scenario, I suggest you keep only one date dimension and include only the columns that are required for your calculation. Please refer to "dimension design" in the link below:
    http://www.mssqltips.com/sqlservertip/2567/ssas--best-practices-and-performance-optimization--part-3-of-4/
    If you have any question, please feel free to ask.
    Simon Hou
    TechNet Community Support

  • Performance of my query based on a cube or an ODS?

    hi all,
    How can I assess the performance of my query depending on whether it is based on a cube or an ODS? I have a requirement involving a flat-file extraction; the extraction runs only once and the number of records is small. I need to work out whether my query will be faster based on the cube or on the ODS.
    Can anyone let me know how to measure the performance of my query on a cube versus an ODS, and how to find out which one will be faster? I need to explain the whole process of either loading the data directly into the ODS and reporting from there, or loading the data directly into the cube and reporting from the cube.
    Thanks
    haritha

    Hi,
    An ODS is a flat, two-dimensional structure, so avoid reporting on the ODS.
    A cube is multidimensional; for analysis purposes, report on the cube.
    Records in an ODS are overwritten, whereas in a cube records are aggregated.
    You can also compress the cube, which increases query performance, so data retrieval from a cube is faster.
    Thanks

  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems I face very often and would like to get some more info on that topics:
    1. Transfer rules activation. I just finished transporting my cubes, ETL, etc., to the production system and started filling the cubes with data. Very often during a data load it turns out that the transfer rules need to be activated, even though I transported them in an active state and (I think) did not change anything after the transport. Then I again have to create transfer rule transports on DEV, transport the changes to PROD, and execute the data load again.
    It is very annoying. What do you suggest doing about this problem? Activate all transfer rules again before executing the process chain?
    2. Differences between the DEV and PROD systems in data load time.
    On the DEV system (a copy of production made about 8 months ago) I checked how long it takes to extract data from the source system, and it was about 0.5 h for 50,000 records; but when I executed the load on production it was 2 h for 200,000 records, so it was twice as slow as DEV!
    I thought it would be at least as fast as the DEV system. What can influence data load performance, and how can I predict it?
    Regards,
    Andrzej

    Aksik
    1. How frequently does this activation problem occur? If it is a one-time issue, replicate the DataSource and activate the transfer structure (but in general, as you know, activation of the transfer structure should happen automatically after the transport of the object).
    2. One reason for the difference in time is environmental: as you know, in a production system many jobs run at the same time, so obviously system performance will be slower compared to the DEV system. In your case, both systems are actually performing equally. You said that on the DEV system 50,000 records took half an hour, and in production 200,000 records took 2 hours; there are more records in the production system, so it took longer. If it really causes a problem, then you have to do some performance tuning activities (see the quick calculation below).
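    A quick calculation with the numbers from the question (my own check) backs this up:

    dev_records, dev_hours = 50_000, 0.5
    prod_records, prod_hours = 200_000, 2.0

    print(dev_records / dev_hours)    # 100000.0 records per hour on DEV
    print(prod_records / prod_hours)  # 100000.0 records per hour on PROD
    # Identical throughput -- PROD simply loaded four times as much data.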
    Hope this helps
    Thanks
    Sat

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune a data load on an ASO cube?
    We have an ASO cube which loads around 110 million records from a total of 20 data files.
    18 of the data files have 4 million records each, and the last two have around 18 million records each.
    On average, loading 4 million records takes 130 seconds.
    The data file has 157 data columns representing the period dimension.
    With a BSO cube, sorting the data file normally helps, but with ASO it does not seem to have any impact. Any suggestions on how to improve the data load performance for an ASO cube?
    Thanks,
    Lian

    Yes TimG it sure looks identical - except for the last BSO reference.
    Well nevermind as long as those that count remember where the words come from.
    To the Original Poster and to 960127 (come on create a profile already will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO Dense dimension. If you load part of it in one record then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
    The most recent records that can fit in the ASO cache are retained in the cache, so if a record is still there it will not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final .dat file; then the record would still be in the cache.
    BUT WAIT BEFORE YOU GO RAISING YOUR ASO CACHE. All operating systems use memory-mapped IO, so even if a page is not in the ASO cache it will likely still be held in "Standby" memory (the dark blue memory as seen in Resource Monitor); this continues until the system runs out of "Free" memory (light blue in Resource Monitor).
    So in conclusion if your system still has Free memory there is no need (in a data load) to increase your ASO cache. And if you are out of Free memory then all you will do is slow down the other applications running on your system by increasing ASO Cache during a data load - so don't do it.
    Finally, if you have enough memory so that the entire data file fits in StandBY + Free memory then don't bother to sort it first. But if you do not have enough then sort it.
    Of course you have 20 data files so I hope that you do not have compression members spread out amongst these files!!!
    Finally, you did not say whether you were using parallel load threads. If you need to have 20 files, read up on parallel load buffers and parallel load scripts; that will make it faster.
    But if you do not really need 20 files and just broke them up to load in parallel, then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. Heck, these will help even if you do go parallel, and they really help if you don't but still keep 20 separate files.
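    Since sorting on the compression dimension is the key point above, here is a minimal sketch (plain Python, not an Essbase utility) of pre-sorting a load file so that all rows for the same compression-dimension member are adjacent before the import; the file names, delimiter and column position are assumptions, so adjust them to your file layout.

    import csv

    COMPRESSION_DIM_COL = 0     # assumed column holding the compression-dimension member
    DELIMITER = "\t"            # assumed field delimiter of the load file

    # Read, sort on the compression-dimension column, and write back out.
    # (For files too large for memory, use an external/merge sort instead.)
    with open("aso_load.txt", newline="") as f:
        rows = list(csv.reader(f, delimiter=DELIMITER))

    rows.sort(key=lambda r: r[COMPRESSION_DIM_COL])

    with open("aso_load_sorted.txt", "w", newline="") as f:
        csv.writer(f, delimiter=DELIMITER).writerows(rows)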

  • I performed a Time Machine backup without plugging my laptop into a power source. My computer died and all the settings were changed, i.e. the clock and date were changed back to 2001. So I tried to restore my computer using a previous Time Machine backup.

    I performed a Time Machine backup without plugging my laptop into a power source. My computer died and all the settings were changed, i.e. the clock and date were changed back to 2001. So I tried to restore my computer using a previous Time Machine backup (which I now know was wrong). However, when Time Machine tried to restore, it said there was not enough room to do a backup. It seems that it did a half backup, because some essential files such as System Profiler are now missing. Can I undo this restore? What can I do to fix this?

    You need to do a full system restore, per Time Machine - Frequently Asked Question #14.
    If that sends a message, please note the exact wording.

  • Loaded data amount into cube and data monitor amount

    Hi,
    When I load data into the cube, the inserted data amount in the administration section shows 650,000 records. The monitor for that request shows a lot of data packages. When I sum the data packages, the total is about 700,000 records.
    Where is the difference coming from?
    Thanks!

    Hi,
    If it is a full load to the cube, all the transferred records are written to it.
    If it is a delta load and you want to see why there is a difference between the records transferred and the records added to the cube,
    go to the Manage screen of the DSO, open the Contents tab, and click the Change Log button at the bottom. Check the number of entries in that table; these entries are the records added to the cube, since only these are new records - other records with the same key are already present in the cube.
