Solution to a Memory Conflict in LabVIEW

Hello everyone, 
I am having a problem with a LabVIEW run-time memory conflict.
I have finished several VIs, for example A, B, C, and D, and some of their subVIs have the same name (but not the same content).
Now I am planning to combine A, B, C, and D so that each of them becomes a subVI of E, but whenever I load them together, LabVIEW warns me about a memory conflict.
I think that is a serious disadvantage of LabVIEW, which identifies VIs in memory by their name alone.
I would run A, B, C, and D in sequential order, so is there any way to handle this in LabVIEW besides changing all the names to be different?
I am thinking of loading VIs into memory only when I need them, and removing them from memory when each module is done, but I am not sure whether that would be realistic here.
Any idea is well appreciated,
Thanks,
-Kunsheng

Even if you dynamically load them, you'll still have an awful time if you have to do any more debugging and/or development.  You will eventually save things pointing to the wrong VI and make your life miserable.
Sooner or later, you will have to give your VIs unique names.  I would vote for sooner.
-Matt Bradley
************ kudos always appreciated, but only when deserved **************************

Similar Messages

  • How can one use the physical memory of our system rather than virtual memory while running Labview?

    We have a Windows NT system with 2 Gb of physical memory and would like to utilize the RAM fully using Labview. What usually occurs is that Labview uses a tremendous amount of page file space while a majority of the physical memory is unused. Is there a way to configure Labview (or our system) to overcome this problem? It seems that our processes would be much faster if they were mainly using the physical memory. Is it possible to trick the system, by creating a RAM disk and allocating this as virtual memory space?

    LabVIEW the application doesn't know anything about physical versus
    virtual memory. LV asks the OS for general purpose memory and goes
    from there.
    Drivers like DAQ and IMAQ will have a combination of general user
    memory and page-locked physical buffers.
    I'm not up on the details, but this is something that the OS is in
    control of, so that is where you need to look for the solution. One
    of the things to look at is the disk cache. By default, I think that
    NT takes a percentage of the RAM to use for disk cache. With that
    much RAM, this is probably unnecessary and is using too much.
    Similarly, the OS may be attempting to keep the working set size
    to a fraction of the total RAM to reserve space for other things.
    It doesn't make sense to me, but then I don't work for MS.
    Greg McKaskle
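    For what it's worth (this is a general Win32 note, not something LabVIEW exposes): a small native helper can ask the OS to raise the working-set bounds for a process, which addresses exactly the "working set kept to a fraction of RAM" behavior Greg mentions. A rough sketch, with the byte values as made-up examples:

    // raise_working_set.cpp -- illustrative only; the sizes are arbitrary examples
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // Ask the OS to let this process keep between ~256 MB and ~1.5 GB resident.
        SIZE_T minBytes = static_cast<SIZE_T>(256) * 1024 * 1024;
        SIZE_T maxBytes = static_cast<SIZE_T>(1536) * 1024 * 1024;

        if (!SetProcessWorkingSetSize(GetCurrentProcess(), minBytes, maxBytes))
            std::printf("SetProcessWorkingSetSize failed, error %lu\n", GetLastError());
        else
            std::printf("Working set bounds updated.\n");
        return 0;
    }

    The OS is still free to trim the working set under memory pressure; this only changes the limits it works from.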

  • Memory Footprint for LabVIEW 7.1 Full Development System

    Hello,
    I would like to know the memory footprint of the LabVIEW 7.1 Full Development System, both during installation and when developing applications later.
    Can someone help? Thank you.
    Solved!
    Go to Solution.

    There isn't a single answer for that, as this varies considerably.
    As far as disk space goes, if memory serves, the basic 7.x takes up a few hundred MB, but that could inflate considerably depending on which modules and drivers you add to it.
    For RAM, I believe 7.0 used to take up ~20-50 MB immediately after being loaded, but that number would then also change a lot over time depending on what you do. Loading, editing and running code all regularly change the amount of RAM used.
    Why are you asking? Also, are you aware that 7.1 is relatively old by now (came out 7 years ago)?
    Try to take over the world!

  • Is there a major resource (memory) conflict between AdBlock addon and Peerblock?

    Sorry this is so long. I just downloaded FF 3.6.3 for Windows last night and added the NoScripts and Adblock addons while concurrently running Peerblock. Within a few minutes my entire system went haywire, more precisely a complete system failure. I/O, CMOS, IDEport2, ACPI, everything went haywire. The mouse and the cursor became unusable; it was as though someone was mashing buttons or clicking repeatedly. When I tried to open FF again it literally opened more than a hundred terminals. Right click completely failed because it immediately closed. I couldn't use any programs or type anything anywhere on my system. This behavior persisted even when I booted into BackTrack 4 and even under BIOS. The entire system literally became unusable. Then I wasn't able to even make it past the splash screen in most instances. During boot this was accompanied by constant beeps.
    I was able to gather some information, but not much, before the complete insanity. I did see in FF something about a corrupted module. Peerblock also generated an error report indicating I/O errors, but because of the nature of this problem I wasn't able to really view or record either one. There were tons of ACPI and other errors in EventViewer under Windows while all this was occurring. More errors than I had ever seen before. Finally my system wouldn't even boot unless I left it alone for some time. I also noticed excessive heat.
    Finally Windows prompted me, after removing and reseating the RAM, to the repair console. It found zero errors, absolutely nothing, generating 0x0 errors for everything, but did indicate that it located the root source of the problem. When it started up, both FF and Peerblock had been removed. Everything else was fine. I'm using the system now without a single problem. I've been using Peerblock for more than six mos. without any issues. FF runs beautifully under BackTrack 4. It appears to me this was a memory conflict of some nature that corrupted the CMOS or some other memory modules, but I'm not that savvy.
    I would like to continue using both these programs with Windows, but if it's not possible, it's not possible. I'm not crazy and won't be installing and using both these programs together again. This was the most serious problem I've ever encountered, worse than any virus. Also, let me assure you this was not a virus. Since I've gotten back on my system I've run three different AV engines. I am willing to try FF, with AdBlock of course, again and update how it goes. I may even try the alpha5. I know you're busy, but please help if you can. Any feedback or help would be greatly appreciated. I will continue to support the Mozilla community regardless of this issue.
    -Regards
    == This happened ==
    Every time Firefox opened
    == As soon as I used FF for windows.

    I have a new computer and have had problems with just about everything there is to have problems with. I chose FireFox over IE because I just plain like it better. I am at a loss as to what to do with the Error Console. It is full of errors, and I am afraid to use my computer because it may crash. I had to remove my MicroTrend Security System because of all the problems with downloads. I really have never seen any such disgusting things before from a computer. Perhaps I downloaded too many addons. But isn't that what they are for? If not, why not make a simple product with no add-ons that will work efficiently. I know I would buy one.
    This is a beautiful computer with a huge screen that can also be a TV, but what is the use if the browsers are all fighting with one another and cannot get along. Worse than our world. Let's all get along and try to get what everyone would like. We are not all NERDS, although I wish I were right at this moment. I wish some wonderful nerd would email me and take over my computer and take out what I don't need and add in what I do. There is a terrific idea for some brilliant NERD.
    I am desperate here in New York State, USA and need help NOW.
    Thanks to all. I hope Firefox will send us out a Great Big Power Patch to fix all the ills of 3.6.3. I thought this might be the one. It surely sounded great. Thanks, Elizabeth

  • Memory Management in LabView / DLL

    Hi all,
    I have a problem concerning the memory management of LabVIEW. If my data is bigger than 1 GB, LabVIEW crashes with an "Out of Memory" error message (as LabVIEW passes data only by value and not by reference, 1 GB can easily be reached). My idea is to divide the data structure into smaller structures and stream them from the hard disk as they are needed. To do so, I have to go through a DLL which reads this data from disk. As a hard disk is very slow in comparison to RAM, the LabVIEW program gets very slow.
    Another approach was to allocate memory in the DLL and pass the pointer back to LabVIEW...like creating a RAM disk and reading the data from this disk. But the memory is allocated in the context of LabVIEW...so LabVIEW crashes because the memory was corrupted by C++. Allocating memory with the LabVIEW header files included doesn't help because the memory is still allocated in the LabVIEW context. So does anybody know if it's possible to allocate memory in a C++ DLL outside the LabVIEW context, so that I can read my data with a DLL by passing the pointer to this DLL from LabVIEW? It should work the following way:
    -Start LabVIEW program --> allocate an amount of memory for the data, get the pointer back to LabVIEW
    -Work with the program and the data. If some data is needed, a DLL reads from the memory space the pointer is pointing at
    -Stop LabVIEW program --> memory is freed
    Remember: the data structure should be used like a global variable in a DLL or like a RAM disk! (A rough sketch of what I mean follows below.)
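    Just to make the idea concrete, here is a minimal C++ sketch of what such a DLL could look like. The function names are invented for illustration only (this is not an existing library); LabVIEW would call them through Call Library Function Nodes, treating the pointer as a plain 64-bit integer, and no bounds checking is shown:

    // databuffer.cpp -- hypothetical DLL sketch, names invented for illustration
    #include <cstdint>
    #include <cstring>
    #include <new>

    extern "C" {

    // Allocate 'size' bytes on the C++ heap (outside LabVIEW's memory manager)
    // and hand the address back to LabVIEW as a pointer-sized integer.
    __declspec(dllexport) uint64_t AllocateBuffer(uint64_t size)
    {
        char* p = new (std::nothrow) char[static_cast<size_t>(size)];
        return reinterpret_cast<uint64_t>(p);   // 0 on failure
    }

    // Copy 'count' bytes starting at 'offset' into a LabVIEW-allocated array
    // (passed as an Array Data Pointer from the Call Library Function Node).
    __declspec(dllexport) int32_t ReadChunk(uint64_t handle, uint64_t offset,
                                            uint8_t* dest, uint64_t count)
    {
        const char* src = reinterpret_cast<const char*>(handle);
        if (src == nullptr || dest == nullptr)
            return -1;
        std::memcpy(dest, src + offset, static_cast<size_t>(count));
        return 0;
    }

    // Release the buffer when the LabVIEW program stops.
    __declspec(dllexport) void FreeBuffer(uint64_t handle)
    {
        delete[] reinterpret_cast<char*>(handle);
    }

    } // extern "C"

    Because the buffer is created with plain new[] inside the DLL, it lives outside the LabVIEW data space; LabVIEW only ever sees the integer value of the pointer and the small chunks that ReadChunk copies out.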
    Hope you can understand my problem
    Thanks in advance
    Christian
    THINK G!! ;-)
    Using LabView 2010 and 2011 on Mac and Win
    Programming in Microsoft Visual C++ (Win), XCode (Mac)

    If you have multiple subVIs grabbing 200 MB each, you might try using the "Request Deallocation" function so that once a VI is done processing it releases the memory.
    LabVIEW Help: "When a top-level VI calls a subVI, LabVIEW allocates a data space
    of memory in which that subVI runs. When the subVI finishes running, LabVIEW
    usually does not deallocate the data space until the top-level VI finishes
    running or until the entire application stops, which can result in out-of-memory
    conditions and degradation of performance. Use this function to deallocate the
    data space immediately after the VI completes execution."
    Programming >> Application Control >> Memory Control >> Request Deallocation
    I think it first appeared in LabVIEW 7.1.
    Message Edited by Troy K on 07-14-2008 09:36 AM
    Troy
    CLDEach snowflake in an avalanche pleads not guilty. - Stanislaw J. Lec
    I haven't failed, I've found 10,000 ways that don't work - Thomas Edison
    Beware of the man who won't be bothered with details. - William Feather
    The greatest of faults is to be conscious of none. - Thomas Carlyle

  • PXI-8430/8 Cause memory conflict in XP device manager

    Is this normal?  If not, how do I correct this?
    I have already tried:
    Removing all entries related to this card in Device Manager, then removing the card.
    Everything looks OK when rebooted.
    Now, if I put the card back, I see the same problem again.
    Message Edited by ZJZ123 on 11-08-2007 03:12 PM
    Attachments:
    Memory error.png ‏32 KB

    Hey ZJZ123,
    There is some documentation that the 8430 may cause some types of memory conflicts, but the ports should work fine. If they do not work, then it may be a symptom of this; let me know.
    Regards,
    Message Edited by Can W on 11-09-2007 12:25 PM
    Can W.
    PXI and VXI Platform PSE
    National Instruments

  • Wire: class conflict in LABVIEW 7.0 Express

    Hello all,
    recently, we upgraded from LABVIEW 6i to LABVIEW 7.0.
    Everything seems to be fine with one exception.
    In a VI I use an intensity graph (three dimensional graph). The change
    of its properties, in particular some values affecting the intensity
    display, is done in a SubVI.
    This SubVI gets a reference to the intensity graph as handle. The
    corresponding terminal in the SubVI is a RefNum control.
    In LABVIEW 6i this was working without any problems but in LABVIEW 7,
    it does not work anymore.
    I get a broken wire sign (red cross) and the error message
    Wire: class conflict .
    The online help did not bring much light into that situation. There is
    only a somewhat hidden hint that the options for the RefNum control
    should be checked.
    But what is really wrong there? Should I use anything else as the terminal
    in my SubVI, or is there a particular option that needs to be set?
    At the moment, the VI server class of the RefNum control is set to
    Generic|Generic|Object|Control|Graph|IntensityGraph (copied from memory
    at the moment).
    Any hints would be greatly appreciated.
    Thank you very much in advance and best regards
    Gerd

    Hi Gerd,
    I have had problems in the past with passing control references into subVIs. I had these problems because the strict control reference is often incompatible with a non-strict refnum control, or a strict refnum control of the wrong type. For example, if you have a multi-selection listbox, but you try to wire that control reference to a listbox reference associated with a scalar datatype (single-selection), you will get a wiring error.
    The easiest solution for these problems is to right-click on the control reference and choose Create > Control. Then, copy this control into your subVI and use it as the subVI input. I think this will solve your class conflict problems.
    If this doesn't work right, let me know and we can talk further...attaching a sample LLB that demonstrates the problem would be helpful.
    -D
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman

  • PCI-E graphics card and system memory conflict.

     
    Hello,
    I have a bit of a strange problem with my new system build.
    All my parts work fine when used by themselves.
    MSI KN8 diamond.
    Athlon 3200 64bit 939 Winchester.
    Crucial Ballistix PC4000 memory.
    Inno3D 6800GT PCI-E
    Thermaltake purepower 680W.
    The problem arises when I try to use both the memory and the graphics card at the same time; the system does not POST, indicating a memory problem.  I know both work, as I'm able to POST when either is removed, i.e. when running with a standard PCI graphics card or when the memory is swapped. I've checked out a lot of forums over the last 2 weeks and I know there are a lot of people out there running the same memory.  I've tried underclocking the memory with no success. Can anyone shed any light on this problem, as I've never had a conflict between a graphics card and memory before.
    Thanks John 

    Need to make a couple of changes as there appears to be a possible issue with the loading of the Optimized Default setting in the bios, and I quote from MSI:
     Quote
    After loading the Optimized Default, the memory setting values will be optimized to achieve better performance. It is, however, likely to cause problems of compatibility.
    Which seems to be the exact issue you are experiencing.  First thing I would do is to manually set your vdimm to 2.8v in the bios.  This will make sure that your memory is getting the correct voltages as the AUTO setting usually sets memory to 2.5v.  This alone could eliminate your problem, but I think you'll have to comb through your memory timings and reset them, (Crucial lists 2.5-4-4-8 @ DDR500 speeds, so I'd try 2-3-3-6 or 2.5-3-3-6 for DDR400 speeds).
    There is also an official bios 1.1 for your board, and a beta (5.01) from the Diamond Club.  I'd stay with the official releases for the time being (until you get your board up and running properly).

  • Looking for memory leak in labview 6.02

    dear all,
    my LabVIEW program needs an increasing amount of memory when running (at the moment about 12 kB/s), which leads to swapping memory to my HDD.
    I have found out that calling cluster references like Controls[] or decos[] leads to this kind of memory leak and made a workaround for it (simply calling them only once at runtime), but there are more memory leaks which I cannot find. The bugs that I have found searching the LabVIEW resources do not answer my problem.
    Has anybody already found more memory leak problems?
    thanks

    I have been seeing similar behavior; I too have not found all the
    leaks. I think I have slowed the allocation by replacing some of my
    'build array' functions with the combination of 'initialize array'
    followed by 'replace array subset'. The drawback to doing it this way
    is that you have a fixed maximum number of elements allocated just once
    rather than allowing an array to grow incrementally. Please let me know
    if you try removing build arrays and if that helps.
    By the way, there is a checkbox under the options menu to deallocate
    memory as soon as possible - if you don't mind that your program runs
    more slowly, this may help to avoid the problem, at least temporarily.
    -vinny
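    To illustrate what vinny is describing in text-language terms (this is a C++ analogy, not LabVIEW code): growing an array element by element forces repeated reallocation and copying, while preallocating once and replacing elements in place does not.

    // preallocate.cpp -- C++ analogy of Build Array vs. Initialize Array + Replace Array Subset
    #include <vector>
    #include <cstddef>

    int main()
    {
        const std::size_t kMaxElements = 100000;

        // "Build Array" style: the buffer is reallocated as the array grows.
        std::vector<double> growing;
        for (std::size_t i = 0; i < kMaxElements; ++i)
            growing.push_back(static_cast<double>(i));

        // "Initialize Array" + "Replace Array Subset" style: one allocation up
        // front, then elements are replaced in place. Fixed maximum size, no regrowth.
        std::vector<double> preallocated(kMaxElements, 0.0);
        for (std::size_t i = 0; i < kMaxElements; ++i)
            preallocated[i] = static_cast<double>(i);

        return 0;
    }

    The trade-off is the one vinny points out: the preallocated buffer has a fixed maximum number of elements, allocated just once, instead of growing incrementally.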

  • Finding Memory Leak in LabVIEW

    Hi NG,
    we wrote a sort of huge application in LabVIEW that collaborates with our .NET-driven device framework and a variety of unmanaged C++ libs.
    As we are now close to the release, we did performance testing and it showed that the app is continuously consuming more memory.
    We used performance counters to check whether the managed objects might not be getting garbage collected, but this is OK, since the handle count stays at a constant level.
    We tried to figure out whether one of the C++ libs is causing the problem, but this is not the case either, since the memory leak exists even if the lib functions are simply not called.
    Even the LabVIEW profiler is not detecting any suspicious memory consumption in any of our VIs, but the Windows Task Manager shows constant memory growth of the executable (or of labview.exe itself when running in the dev. environment).
    Does anybody know what this might be? Does anybody have a clue what other tools we could use to figure out where the problem is?
    Any help is greatly appreciated.
    Thanks in advance, Sebastian Dau!

    "mfitzsimons" <[email protected]> wrote in message
    news:[email protected]..
    > Sebastian,
    > Memory leaks can be VERY hard to find.&nbsp; I use the divide and conquer
    > technique that involves chunking (make into smaller code segments) your
    > code into smaller pieces and running the smaller pieces until you find the
    > memory leak.&nbsp; It can take weeks to find the source.
    you accelerate this process by using the labview profiler.
    > The most common is arrays or any data that increases in size as the
    > program runs.&nbsp; To fix this you can use a FIFO that allows only so
    > many elements to be added before deleting data out of the array.&nbsp; The
    > second source is opening references repeatedly and not closing them.&nbsp;
    > Look closely at any VI that open anything.&nbsp; Do a search for "Memory
    > Leak" on the discussion forum and you will find about 60 postings that may
    > help.&nbsp;
    > Matt
    > If you are using DAQ-MX <a
    > href="http://digital.ni.com/public.nsf/allkb/BAF29EE03747EE4B86256E9700541436?OpenDocument"
    > target="_blank">http://digital.ni.com/public.nsf/allkb/BAF29EE03747EE4B86256E9700541436?OpenDocument</a>
    > &nbsp;
    >
    arrays and refs have already been checked.
    thanks, Sebastian
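    One additional suggestion (mine, not from the quoted posts): if an unmanaged C++ lib does come back under suspicion, the MSVC CRT debug heap can dump any blocks still allocated when a test process exits, which helps narrow down which lib is leaking. A minimal sketch for a debug build:

    // leakcheck.cpp -- MSVC debug builds only; illustrative sketch
    #include <crtdbg.h>

    int main()
    {
        // Ask the CRT to report leaked heap blocks when the process exits.
        _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

        // ... exercise the suspect library here ...
        int* leaked = new int[16];   // deliberately never freed, to show the report
        (void)leaked;

        return 0;   // the leak report appears in the debugger output window
    }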

  • Subpanel problem & memory leak in Labview 8.0

    Hi,
    I just installed the LabVIEW 8.0 Evaluation package yesterday. I tried to
    code a user interface class using the dqGOOP object-oriented package. It
    seems that LV 8 behaves differently from LV 7.1, which is good. One can
    use the same subVI in multiple subpanels. However, I ran into problems
    with this new feature, i.e. I cannot edit the subVI after it has been
    placed on two subpanels. Editing is possible again after an LV 8 restart.
    In my code I create two similar object instances from the panel class. I have a class method panel_run which puts itself on the subpanel defined by a reference. I have two subpanels on the main VI, one for each of my panel_run instances. I do not stop the panel_run instances cleanly; instead I use the LabVIEW stop button to stop the execution. Once the main VI has run, the panel_run subVI cannot be edited any more. When one opens the panel_run subVI, a clone of the subVI is always opened instead. One cannot modify the clone. After a few runs, I also got a memory segmentation fault from LabVIEW, which then crashed. I was unable to repeat the crash.
    I attached the code. Use panel_test.vi to start the program. Is there any way to get around this problem of not being able to access a subVI after it has been placed on a subpanel?
    Regards,
    Tomi
    p.s. Sorry for the wrong category. This is a LabVIEW 8.0 problem. There was, however, no category for LabVIEW 8, so I selected LabVIEW 7.1.
    Tomi Maila
    Attachments:
    panel_test.zip ‏569 KB

    Open your panel_run VI --> you get a clone
    Press CTRL+M --> Will open your VI (Not the Clone)
    Close the Clone and work on the VI.
    PJM
    Got EasyXML?
    JKI.VIPM.EasyXML.OpenG.LAVA.Builder.blog

  • Intel Hyper-Threading Technology conflicts with LabVIEW utilities (VISA, Scope GUI, IO Trace...)

    I would like to share a pretty hard-to-troubleshoot issue we are experiencing for the last few months.
    Our company used to get DELL T5500s for our engineers. Those PCs work just fine with all LabVIEW utilities. But DELL has discontinued the T5500 series and replaced it with the T5600. I got one of them a few months ago, and after installing LabVIEW I tried to run the VISA console via MAX. It immediately crashes MAX and destroys the MAX database. After that I tried to run other utilities like NI IO Trace, VISA Interactive Control, Scope Soft Front Panel, ... All of them crashed. I am running 64-bit Windows 7 + 64-bit LabVIEW. And we know that most of the NI utilities are 32-bit.
    After a lot of frustration I went down to researching the computer at the BIOS level and tried a side-by-side comparison with the T5500. The T5600 has a much newer CPU and a lot more performance-enhancement features. I tried turning them off/on one by one to see if any of them affect the LabVIEW utilities. To my surprise I found that Intel® Hyper-Threading Technology (Intel® HT Technology) is the culprit. After turning it off, all LabVIEW utilities started to work just fine. All T5600s are shipped with this feature enabled by default.
    We know that DELL Precision PCs are almost an industry standard for engineering departments. I think in the next few years a lot of people will be hit by this issue. I have already notified NI and DELL R&D so they can find a good solution. But I would just like to make this issue Google-searchable so that anybody who sees it may get some help.
    Give me any feedback if you encountered the same problem.
    Thanks,

    This means that you were on a witch hunt and hyperthreading is not the problem. (I always had doubts).
    The original thread was about crashes in the visa console, but your problems seem to be much more generic:
    "- The application stalls unpredictably after some time, sometimes a minute, sometimes hours. After clicking into the GUI it starts working again. This repeats in an unpredictable way. Competitive activities on the computer seemed to increase the stalling-frequency.
    - Sound Input VI stops unpredictably and has to be restarted."
    Are you sure you don't have a general code issue such as race conditions or deadlocks? Maybe you should start a new thread and show us a simplified version of your program that still demonstrates the problem. If there are race conditions, moving to a different CPU can cause slight changes in execution order, exposing them.
    Did you repair the LabVIEW and driver installation? What are the power settings of the computer? Did you update other drivers (such as video, power management, etc.)?
    What is the exact CPU you are using? What third party utilities and security software is running on your PC?
    LabVIEW Champion . Do more with less code and in less time .
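    To illustrate the point about race conditions being exposed by a different CPU (a plain C++ example, since a LabVIEW diagram can't be pasted as text): the snippet below updates a shared counter from two threads without any synchronization, and how much data is lost depends entirely on how the scheduler interleaves the threads, which is exactly the kind of behavior that changes from one machine to the next.

    // race.cpp -- two threads incrementing a shared counter without synchronization
    #include <thread>
    #include <cstdio>

    int counter = 0;   // shared, unprotected

    void bump()
    {
        for (int i = 0; i < 1000000; ++i)
            ++counter;   // read-modify-write race between the two threads
    }

    int main()
    {
        std::thread a(bump), b(bump);
        a.join();
        b.join();
        // Usually prints less than 2000000 because increments are lost.
        std::printf("counter = %d\n", counter);
        return 0;
    }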

  • How to reduce page faults (virtual memory) usage in LabVIEW

    I am running LabVIEW 2010 on a Windows Embedded Standard system with 4 GB (3 GB usable) of RAM.
    The program (an executable running on the Run-Time Engine) continuously samples from a USB-6251 at 1 MS/sec (2 channels @ 500K) and scans the data stream for anomalous events, which are recorded to file.  The system is intended to run 24/7 for at least 2 months, and although everything is working fine (6% CPU, no memory leaks), it generates 6000 page faults / second.  I am concerned that I will kill the hard drive at this rate over long periods of time.  There is plenty of RAM available (LabVIEW is only using 200K, and there is over 2 GB free), but the program is choosing to rely on virtual memory instead.
    Is there a way to force (coerce) a LabVIEW application to consume more RAM and less VM?
    The code is heavy (2 independent routines to collect and process the data stream through a shared double buffer, lots of non-obvious logic...), but I will post it if it would help to answer the question.

    IIRC, the decision to move a page of memory from physical memory to disk is made at the OS level.  There probably isn't any setting you can change in LabVIEW to change this behavior.
    Keep in mind that not every page fault results in a page being loaded from disk.  If your program (or the LabVIEW run-time) is frequently allocating and freeing memory, you could get a lot of soft page faults as the physical memory pages are repeatedly allocated to your process and returned to the OS.  If you're only running at 6% CPU, this wouldn't be a problem.
    You could try disabling the page file altogether, if the machine has enough RAM, but I wouldn't do this unless you actually have a performance (or hard-disk durability) problem.  Having a page file to back up the physical memory is the difference between your program suffering from degraded performance vs. simply crashing if the machine runs out of physical memory.
    Mark Moss
    Electrical Validation Engineer
    GHSP
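    If you want to see whether those faults are soft or are actually hitting the disk, one option (a suggestion of mine, not from Mark's reply) is to watch the process's page-fault counter and working set over time; Performance Monitor shows the same numbers, but a small native helper can log them too. A rough Win32/PSAPI sketch (it samples its own process; in practice you would OpenProcess the LabVIEW executable's PID instead):

    // pagefaults.cpp -- illustrative monitor; link with psapi.lib
    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    int main()
    {
        PROCESS_MEMORY_COUNTERS pmc = {};

        for (int i = 0; i < 10; ++i)
        {
            if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
            {
                std::printf("page faults: %lu  working set: %lu KB  pagefile use: %lu KB\n",
                            pmc.PageFaultCount,
                            static_cast<unsigned long>(pmc.WorkingSetSize / 1024),
                            static_cast<unsigned long>(pmc.PagefileUsage / 1024));
            }
            Sleep(1000);   // sample once per second
        }
        return 0;
    }

    A steadily climbing count of soft faults with little disk activity is usually harmless; sustained hard faults against the page file are what would actually wear the drive.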

  • DAQ Solution Wizard greyed out in Labview 5.1

    My DAQ board seems to work correctly when using MAX, but in LabVIEW the DAQ Solutions Wizard is greyed out. I have a PCI-MIO-16E-4 DAQ board, LabVIEW v5.1 and NI-DAQ v6.9.2f8. I am running Windows 2000.

    Make sure that the LabVIEW 5.1 support files have been installed along with NI-DAQ. Go to the Control Panel >> Add/Remove Programs >> NI-DAQ >> click Change >> select Add/Remove. If there is a red X next to LabVIEW 5.1 support, then it has not been installed. Install the entire feature and the DAQ Solutions Wizard should work.

  • Memory register in labview

    How do I create registers in LabVIEW? Not shift registers, but something like a microcontroller-style register for memory.
    For example, register-based data conversion, where we need registers to hold the data to be converted and to store the result.
    Can we use arrays?
    Srinivas

    That is still no reason not to use shift registers, with an array datatype.
    It is more important that you specify where and how you want to access the data. With shift registers you could create Action Engines (FGVs) to access the data. (A rough text-language sketch of that pattern follows below.)
    I'm sure you will get lots of other suggestions within the next couple of hours.
    Christian
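    If a text-language analogy helps (C++ here, since a LabVIEW diagram can't be posted as text, and the names are invented for illustration): an Action Engine / functional global is roughly the pattern below. One routine owns the storage in static data, and every caller goes through it with an action selector, much like a subVI with an uninitialized shift register.

    // action_engine.cpp -- rough C++ analogy of a LabVIEW Action Engine / FGV
    #include <vector>
    #include <cstddef>
    #include <cstdio>

    enum class Action { Write, Read, Clear };

    double Register(Action action, std::size_t index = 0, double value = 0.0)
    {
        static std::vector<double> storage(16, 0.0);   // persists between calls

        switch (action)
        {
        case Action::Write:
            storage.at(index) = value;
            return value;
        case Action::Read:
            return storage.at(index);
        case Action::Clear:
            storage.assign(storage.size(), 0.0);
            return 0.0;
        }
        return 0.0;
    }

    int main()
    {
        Register(Action::Write, 3, 42.0);
        std::printf("register 3 = %g\n", Register(Action::Read, 3));
        return 0;
    }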
