Bad Readings

I found out that the virtual channel system is rather problematic on my system. One of the NI engineers recommended that I use virtual channels, especially because they perform CJC on every reading automatically, and I did. But I wasted a lot of effort just to discover that the AI Read VI doesn't update the CJC on every reading; it is only done during setup.
Here is the spec of my system:
- SCXI-1000
- SCXI-1200 connected to the parallel port and controlling the chassis, with connector block SCXI-1302, and using its counter resources.
- SCXI-1163, controlling a few switches with connector block SCXI-1326
- SCXI-1120, reading type T thermocouples, with connector block SCXI-1327.
- LabVIEW 5.1f1
- NI-DAQ - 6.9.1f28
- IBM Personal Computer 300 GL, with a Pentium II at 266 MHz.
I discovered that the virtual channels weren't working well by using the "SCXI Scan" VI provided with the package. I configured a single virtual channel and started an acquisition, which returned NaN. Then, using the SCXI Scan VI and configuring everything manually, I could get some results. I have tried so many things to get it working that I am almost out of options.
Anyway, the error I get most often is code 10800. I tried slowing down the scan rate, increasing the samples per channel, increasing the time limit, checking all jumpers, reducing the gain, setting the same gain for all channels, connecting and not connecting the "-" channel to ground, splitting the acquisition into 2 reads of 4 channels instead of 1 of 8 (and maybe some other things I can't remember now), and nothing seems to work. The other modules work well. It's also important to say that I got the same DAQ problems with the VIs provided with LabVIEW. A funny thing is that sometimes the DAQ works and sometimes it doesn't. Of course, I am comparing under exactly the same conditions, i.e., turning everything ON, then loading LabVIEW first and starting an acquisition. This is really strange.
I hope someone can help me!
Thanks,
Guilherme

Since you are getting a timeout error (10800), it would be worth checking that the SCXI-1200 is responding. In Measurement & Automation Explorer, select the SCXI-1200 under Devices and Interfaces, right-click and go to Properties. Click on "Test Resources" and you should get "The device has passed the test". If not, that could be one possible cause of the problem.
Can you get readings directly from the SCXI-1120 by using the SCXI channel string? This does not necessarily mean the CJC sensor:
"You need to use SCXI channel strings to communicate with other modules in the SCXI chassis. The general format of the SCXI channel string is OBz!SCx!MDy!a, where z is the device number of the DAQ board connected to the chassis, x is the SCXI chassis number, y is the module nu
mber, and a is the channel number (use a:b to scan multiple channels). The SCXI channel string is not used for the SCXI-1200 because it is its own DAQ board."
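For example, assuming the SCXI-1200 shows up as device 1 and the SCXI-1120 sits in slot 3 of chassis 1 (those numbers are placeholders; use the ones from your own configuration in MAX), the channel string for all eight thermocouple channels would be OB1!SC1!MD3!0:7.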
The virtual channels should not be a problem, which is why it is important to make sure that the system is working fine without them too.
Thanks for contacting National Instruments!
Arturo Q.

Similar Messages

  • Bad readings from DAQCard-6036E after DAQmxSelfCal() call

    Hi,
    I have a very urgent problem that I would like help investigating.
    I have a DAQCard-6036E and have noticed that, occasionally, after a call to DAQmxSelfCal(), the AI readings are incorrect.
    All channels in my application are configured in differential mode, and the readings returned are occasionally wrong by ~9 mV on channels with a gain of 1, and ~0.9 mV on channels with a gain of 10.
    If I repeatedly call DAQmxSelfCal(), I seem to get the bad readings after about every 3rd call to DAQmxSelfCal(); the next call will then usually correct the calibration coefficients, and so on. I have not been able to establish an exact pattern to the problem, but I have tried with the laptop on mains power and on internal batteries, and the problem is the same.
    Could there be a problem with the self calibration function in DAQmx, or could there be an external influence? When I call DAQmxSelfCal() my signals are connected to the card, but I assume the inputs get isolated from connected signals during the calibration routine.
    Thanks

    Hi Ed,
    I got a loaner DAQCard-6036E from NI and tried that, and had the same problems. I suspect it is something to do with my laptop, maybe the battery monitoring circuitry or something. Assuming there is nothing wrong with the DAQCard-6036E's internal design, my theory is that if the laptop generates noise while the self-cal is in progress, this can affect the calibration coefficients that are generated. This might be a rubbish theory, but believe me, the effect is genuine.
    I would say proceed with caution: put the self-cal in, but test it thoroughly by calling it repeatedly with a high-precision voltage reference attached and checking the returned values. Note that I only think this is an issue with DAQCards; I have not seen it with PCI cards, but then I have not tested those as much.
    By the way, doing a self-cal once per day is more than enough; once per hour is too much. There is an article somewhere on NI.com that talks about the recommended interval. Make sure the laptop has been on for at least 15 minutes first.
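    Something like this C sketch is roughly what I mean by testing it (purely illustrative, and several details are my assumptions rather than anything from your setup: the device name "Dev1", the reference wired differentially to ai0, and a 5 V reference value):
    #include <stdio.h>
    #include <NIDAQmx.h>
    int main(void)
    {
        const double vref = 5.000;   /* assumed value of the precision reference */
        for (int i = 0; i < 10; i++) {
            /* Self-calibrate the card, then take one on-demand reading of the reference. */
            if (DAQmxSelfCal("Dev1") < 0) { puts("self-cal failed"); return 1; }
            TaskHandle task = 0;
            float64 reading = 0.0;
            DAQmxCreateTask("", &task);
            DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Diff,
                                     -10.0, 10.0, DAQmx_Val_Volts, NULL);
            DAQmxReadAnalogScalarF64(task, 10.0, &reading, NULL);
            DAQmxClearTask(task);
            printf("cal %d: read %.6f V (error %+.3f mV)\n",
                   i, reading, (reading - vref) * 1000.0);
        }
        return 0;
    }
    If the reported error jumps around by several millivolts from one iteration to the next, you are seeing the same effect I did.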
    Let me know how you get on.
    Regards
    Jamie

  • FP-TC-120 ok at low temps but gives bad readings at 200 C

    I've been using the FP-TC-120 for years with no problems reading type-K thermocouples at 20 - 150 C temperatures or so.  Recently I wanted to read six (type-K) thermocouples in a furnace at 800 - 1200 C temperatures, but the unit gives me bad readings (in both LabVIEW and in FieldPoint Explorer).  The six thermocouples are fine when I turn the furnace on, but when the first thermocouple gets above about 250 C I start getting almost random values for all six, with the right values occasionally coming through.  If I disconnect all but one of the thermocouples, then I get a good, steady reading.  Sometimes it will work for two, but never for all six.  My firmware is "0030".  The FieldPoint unit and FP-1000 base are all at room temperature.
    Thanks for any help.

    My (firmware) revision is 30.  FieldPoint Explorer is version 3.0.2.
    Further tests today (using a heat gun) gave no problems with a pair of thermocouples at 380 C and several others at room temperature.  I am now wondering whether my problem was due to radiative heating of the FieldPoint module(s), since the furnace exterior (at about 125 C) would be partially visible to them.  I've moved the modules to a sheltered location, added some fans, and plan to try again in a couple of days.  The behavior suggests that the multiplexing chip is not functioning properly; perhaps that is due to it overheating.
    Thanks for your interest.

  • Ch 3 on 6th mod of 1121s w/ 1321 FE gives bad readings if not scanned by itself

    I can scan channel 3 by itself or with channels from another 1121 module and receive a good reading, but trying to scan it with the other channels from the same 1121 module results in a bad reading. I do not have this problem with any of the other 1121 modules. There is an 1141 with a 1305 in slot 7 of the chassis.

    I know it has been a long time since the last post on this issue, but I wanted to give an update.
    It turns out that all 14 of the 1321 front ends I purchased had solid state relays from the same lot (4 per module). All of these lot 0027T1267 relays from Clare do NOT meet the manufacturer's temperature specs, and so can cause erratic problems. At a mere 96F, they change values and eventually change states completely. Since they aren't mechanical in nature, they don't switch hard on or off, but rather have in-between values that can appear to be bad readings. See my other post on 1321's for more.
    Tim Jones

  • FP-TC-120 : Channel 0 gives bad readings after reaching 100F

    I am using a FP-1001 module to communicate with 5 FP-TC-120 modules. The problem I have is that channel 0 on the first three FP-TC-120 modules goes out of range at 100F. All of the other channels are working fine.
    I have checked the configuration settings in MAX and channel 0 is set up the same as the rest. I also checked the readings using Tag Monitor and when the value is below 0F the quality column says "good" but as soon as it reaches 100F it says "limit exceeded".
    I have tried swapping FP-TC-120 modules as well as the FP-TB-3 bases with no luck. The firmware is up to date, Rev. 0030. There are no other modules connected to the FP-1001.
    Any ideas?

    What type of thermocouples are you using? How are they mounted? The [c]FP-TC-120 modules are not designed for a large amount of channel-to-channel common-mode voltage. If several channels are biased with more common-mode voltage than the others, you may observe this type of behavior. One solution is to rearrange the thermocouples so that thermocouples at the same potential are on the same module. Also, make sure that you do not wire the V & C terminals from one FP-TC-120 to the next.
    Regards,
    Aaron

  • Bad readings with CFP-RTD-124

    Hi,
    I have a Compact FieldPoint system to record temperature measurements on a machine (to replace a YOKOGAWA paper recorder system).
    This machine is equipped with thermocouple and RTD sensors.
    The TC acquisition boards are CFP-TC-120. Readings are OK.
    The RTD acquisition boards are CFP-RTD-124. Readings are not OK: see the attached picture.
    With the previous YOKO system readings were stable and did not show any fast variations.
    If I take resistance measurements, the value is also stable.
    The CFP-RTD-124 used is revision H (187208H-02).
    Thanks in advance for your support.
    Regards
    Stéphane
    Attachments:
    CFP RTD 124.JPG 107 KB

    To me, it looks like just a little bit of electrical noise.  You've collected tens to hundreds of data points in a 2 second period and it looks like the results are +/- 1 degree.  (The VI isn't clear as to whether this is °C or °F.)
    I might be wrong, but I think the thermocouple modules have some internal filtering to eliminate noise.  I've never used the RTD module to know, but perhaps it doesn't.  Or maybe there is a setting in MAX to turn it on.
    Does your temperature change so rapidly that you'd need to read numerous data points per second?  The screenshot doesn't suggest it does.  One possibility would be to do some averaging across multiple data points and plot that result.
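    As a purely illustrative sketch (written in C rather than LabVIEW, with made-up readings, and the 10-point window is an arbitrary choice), a simple boxcar average looks like this:
    #include <stdio.h>
    #define N 10  /* number of points to average together */
    /* Return the mean of count samples. */
    static double boxcar(const double *samples, int count)
    {
        double sum = 0.0;
        for (int i = 0; i < count; i++)
            sum += samples[i];
        return sum / count;
    }
    int main(void)
    {
        /* Hypothetical noisy readings around 25 degrees. */
        double readings[N] = {24.2, 25.9, 24.7, 25.3, 24.8,
                              25.1, 24.6, 25.4, 24.9, 25.2};
        printf("averaged: %.2f\n", boxcar(readings, N));
        return 0;
    }
    Plotting the averaged value instead of every raw point should smooth out the +/- 1 degree jitter considerably.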

  • Bad readings when another channel maps ranges

    Two channels read about half the correct voltage, but only if another channel (several A/D channels away) is set to Map Ranges with correct values. If it is set to Map Ranges with defaults, or to No Scaling, then the channels read correctly. This is verified in two different applications reading the same channel list.

    It is a little hard to figure out what exactly might be going on with the information provided. What application are you testing this on? What version of NI-DAQ do you have installed onto your machine? Are you using virtual channels for all of your inputs?
    Using the test panels on my machine, I was able to play around with the Map Ranges setting in Measurement & Automation Explorer (MAX) but was unable to duplicate the results that you are seeing. I would be sure to verify that your virtual channels are indeed referencing separate channels and that the settings are being saved and set as you would expect. Do you see these results in the Test Panels in MAX?
    I was not able to find any report of this type of phenomenon happening, so hopefully it is just a setting that we are overlooking on your system. If you are still having this problem, then provide a little more information and we can see what we can do. Thank you!
    Regards,
    Michael H
    Applications Engineer
    National Instruments

  • Bad reading of thermocouple on SCXI-1102

    Hi,
    Some of the thermocouples plugged into the SCXI-1102 give bad readings (unusual values). Could there be an issue with the use of grounded and ungrounded thermocouples? Does the system need to be calibrated?
    Thanks.

    Hello gperron,
    When you say some thermocouples give bad values, does this mean that you have some that give the correct value?  If so, try switching the ones that give you a correct reading with the ones that are giving a bad reading.  If the bad readings follow the thermocouples, you can conclude that those thermocouples are bad.  Are you using the built-in cold junction compensation on the SCXI-1303 to make sure you are getting the most accurate readings?
    In general, thermocouples are floating sources and you measure them differentially with a bias resistor connected between the negative terminal and ground.  Page 3-4 of the 1102 User Manual describes connecting signals to the 1102.  The 1303 Installation Guide describes how to replace bias resistors, if needed.   
    Thanks,
    Laura

  • Having problems with encoder readings due to motor commutation noise

    Hello everyone,
    I wanted to ask for advice with a hardware problem which I believe is rather common.
    Here I describe my application:
    We are controlling an electric actuator for a robotics application. We are using encoders to take position readings and we need to perform analog acquisition for other measurements (such as force measured with strain gauges).
    The problem is:
    In summary, I am having problems properly acquiring position readings from quadrature linear encoders and also some analog inputs. The cause is the commutation noise generated by the motor drive we are using (a brushless DC motor, Moog BN-23-23).
    Our acquisition platform is a NI PXI-8106 with a PXI-1042Q chassis. We have two possibilities to acquire the signals. We have a multifunction DAQ M series NI PXI-6259 and a FlexRIO NI PXI-7951R with a DIO module NI PXI-6581R.
    The commutation noise has a frequency of 30 kHz. On an oscilloscope we can see a series of noise peaks that are only present during a short period of time (about 1/10 of the period of the noise). The rest of the time the noise is not present.
    The Accelnet amplifier module that feeds the electric motor provides us with a clock signal synchronized with the noise (whose frequency is about 1/4 of the noise frequency). This clock signal provides a means to solve the problem for the analog acquisition: we can use this clock to perform a buffered acquisition with an external clock in LabVIEW, connecting the clock to a PFI pin or to the FPGA card. But the noise is also corrupting this clock signal (we get a DAQmx error warning us about possible glitches in the clock signal, which also stops the acquisition). I believe that by solving the encoder problem we can also solve the analog acquisition problem.
    In the encoder readings, the noise is making our counter count upwards or backwards fairly fast. We can get an increase in position of about 10 cm/second without any appreciable movement in the linear actuator.
    It would be of great help if anyone could post the solution they are using to solve this problem.
    Thanks in advance for your help,
    jespestana
    PS: I maintain my belief that we are having a hardware issue, because we only get bad readings when the electric motor is running. I am convinced because we have already performed encoder and analog readings using other drives, such as hydraulic cylinders. Thus, I think it is not a problem with our software (our LabVIEW VI).

    Hi jespestana,
    I'm not sure why the noise would be causing your encoder measurement to increment more slowly...  However I do have one suggestion on the M Series board (6259):
    The M Series cards have built-in digital filtering on the PFI lines (see the M Series User Manual).  It sounds like the noise is a series of ~3 us pulses (1/10 of 1/30 kHz).  One of the available filter settings on your M Series is 6.425 us, which should ignore any pulses (high or low) that are shorter than 6.425 us. You can set digital filtering with a DAQmx Property Node.
    A caveat is that the driver only allows you to configure digital filtering for counter inputs on M Series devices.  So, you could use digital filtering directly on your encoder task but not for your AI Sample Clock.  A workaround can be found here, which involves configuring a dummy counter task to set the PFI filter for your AI task.  If you're using the same PFI line for your encoder and AI task, you should just be able to set the PFI filter through the encoder task and not worry about the workaround.
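    For reference, here is a rough C-API sketch of the same idea on an encoder task (the device name "Dev1", counter ctr0, X4 decoding and 1 mm per pulse are assumptions of mine, and I am assuming the DAQmxSetCIEncoderAInputDigFltr* property functions are present in your DAQmx version; in LabVIEW the Property Node sets the same properties):
    #include <NIDAQmx.h>
    int main(void)
    {
        TaskHandle task = 0;
        DAQmxCreateTask("", &task);
        /* Linear encoder on ctr0; distance per pulse is application-specific. */
        DAQmxCreateCILinEncoderChan(task, "Dev1/ctr0", "", DAQmx_Val_X4,
                                    0, 0.0, DAQmx_Val_AHighBHigh,
                                    DAQmx_Val_Meters, 0.001, 0.0, NULL);
        /* Enable digital filtering on the encoder's A input and set the minimum
           pulse width so the ~3 us noise spikes are ignored (the B and Z inputs
           have matching DigFltr properties if you use them). */
        DAQmxSetCIEncoderAInputDigFltrEnable(task, "", 1);
        DAQmxSetCIEncoderAInputDigFltrMinPulseWidth(task, "", 6.425e-6);
        DAQmxStartTask(task);
        /* ... read positions with DAQmxReadCounterScalarF64 ... */
        DAQmxClearTask(task);
        return 0;
    }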
    With regard to the FlexRIO, I believe you could implement something similar on the FPGA, but I'm probably not the best person to comment on this.  It would likely be a great deal more work than using the built-in filtering of the DAQmx API.
    Best Regards,
    John Passiak

  • ? about 3D! Turbo Experience readings

    I recently noticed that the 3D! Turbo Experience monitor that came with my GF4Ti4200-VTD8x shows the GPU voltage as 1.7 even though in my motherboard (K7N2-L) BIOS settings the AGP voltage is set to 1.5 volts.
    Is this just a mis-reading of the AGP voltage, or does the video card need 1.7 volts instead of the 1.5 volts the motherboard defaults to?
    MacBrave
    system specs:
    MSI k7n2-l
    MSI GF4Ti4200-VTD8x 128mb
    512mb Crucial PC2700 DDR
    Antec 350w SmartPower PSU

    Don't worry, 3D! Turbo Experience has some really quirky bad readings on my computer too. No need to fret if you are not overclocking anything or doing any voltage modifications.

  • PXI 6070E, Unable to capture waveform on ACH0 while generating a waveform on DAC0

    I'm trying to plot the sinewave being generated continuously (verified with a scope) by DAC0 that is wired to ACH0. All I get is noise. When I disconnect DAC0 (but leave it running) and connect an external function generator to ACH0, it plots just fine. Has anyone seen this problem?

    Hello,
    Have you tried changing the input mode? I'm not sure what input mode you are using, but I would recommend using differential.
    For troubleshooting bad readings, it is always advisable to read the signals in MAX (Measurement & Automation Explorer) first, and then use other programs like LabVIEW, LabWindows/CVI or Measurement Studio.
    If you want to use differential mode, here's how you would connect the signals: Connect DAC0 OUT to ACH0, and AOGND to ACH8.
    Finally, here's a link to an interesting Knowledge Base: Data Acquisition: Troubleshooting Offset, Incorrect, and Noisy Readings
    Good luck with your application!

  • How can I get the program to recognize two different types of thermocouples?

    I am using a PCI-4351 card with a TBX-68T terminal block. I was having trouble writing and finding a program that would give me more than one reading/sec. I found a program on the NI website called "435x_logger_triggering", and so far it is the only program that I have found that will actually collect data at 60 Hz. Unfortunately, this program only lets you specify one mode for your thermocouples. This is a problem because we are using two thermocouples, one type K and the other type R, so we get bad readings from one of the thermocouples depending on which mode it is set to. I would like to know how I can program in a separate mode for each channel in this program. Unfortunately, some of the subVIs in this program are password protected, so I don't know if this is possible. Everything is entered correctly in Measurement and Automation Explorer, so that isn't the problem. I will attach a copy of the program that I am using. I have modified it slightly from the one I got off the NI website, so that the thermocouple readings have a time stamp and are saved to disk. Any assistance you could give me would be greatly appreciated.
    Thanks,
    Jordan
    Attachments:
    Forest_Fire_Thermocouple.vi 140 KB
    435xlogger_triggering.vi 110 KB

    Jordan,
    You should be able to sample two different thermocouple types with the example that ships with the 435x driver called "Getting Started with Multiple Transducers Continuous". Simply put each type of thermocouple in a different index. Each index of the Transducer Group Array can have a different type and specifies the channels that correspond to that type.
    One way that you can speed this VI up is to place a wait inside the while loop. This will reduce the number of times the software polls the card to see if it has available data (which increases overhead). I would suggest about 500 ms. The data that you receive will all have the same delta t because the sampling clock is hardware driven, not software driven, so it does not matter when the data is polled.
    You will not be able to get 60 samples per second when you are measuring multiple channels anyway. The sample rate for multiple channels is about 9/(# channels). This is explained in the 435x User Manual.
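    For example, with the two thermocouple channels you describe (one type K and one type R), that works out to roughly 9/2, i.e. about 4.5 samples per second per channel.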
    I looked at your code and noticed that you tried to change some of the enumeration controls. Unfortunately you will not be able to change these because they are password protected on the low level subVIs, which is where they are defined.
    The way you select the notch filter is in the 435x Config VI, where you specify fast or slow. If it is slow, it will select 10 Hz as the notch filter. If you select fast, it will select either 50 or 60 Hz; you would then use the function "435x Set power line frequency".
    Good luck,
    Mike

  • How do I measure a signal source that is floating?

    I've been trying to measure a source signal that is floating (a voltage around 1 V). I'm using the SC-2345 and I've got the signal coming through a low-pass filter module (SCC-LP02). I've set up the DAQ board to read in NRSE mode. When my data acquisition board and laptop are plugged into a battery power supply I have no problem reading the signal correctly. The problem arises when I connect my laptop OR data acquisition board to the AC outlet. When I do this I get a reading of -5 V. I believe this is because I'm actually grounding everything to building ground when I use the AC outlet. I've tried plugging my DAQ board into the AC outlet without grounding it, and I get a signal close to what I'm looking for but with a lot of noise. The SCC-LP02 manual says that the internal circuitry does have the bias resistors that are required for floating sources, so I'm not sure if I'm still forgetting something. I've read other literature suggesting that I need to connect the ground reference of the floating source to the DAQ device analog input ground, but I'm not sure how to physically do this (is there a screw terminal on the board for AIGND?). I have limited electrical wiring knowledge, but I would think that if I referenced the floating ground to the board, it would essentially ground everything to building ground.
    Hopefully I've overlooked something relatively simple. Thanks,
    Chris

    First of all, what type of power supply are you using for the SC-2345 (e.g. PW01)? When you say "data acquisition board", what exactly are you referring to? The SC-2345? Or do you have an external data acquisition board (e.g. DAQPad 60xx)? It sounds like if nothing is plugged into an AC outlet, then you are getting correct readings. However if one or both of your components are plugged in, then you get bad readings. You mentioned plugging your DAQ board into an AC outlet without grounding it. What exactly are you doing here? How are you accomplishing this? Below is an extensive tutorial on wiring up different types of signals in the field. Take a look at that and see if that helps explain what may be going on here. Another suggestion would be to tie the "-" channel of your signal to the chassis ground of your SC-2345 carrier. This is simply a screw mounted to the board in the corner. Refer to the SC-2345 manual for more information. I hope this helps.
    Field Wiring

  • How to prevent interference between multiple Kinect v2 sensors

    Hi,
    I have an application which used to use 3 Kinect v1 sensors with partially overlapping fields of view to get a large field of view of a scene. I am trying to upgrade this application to use 3 Kinect v2 sensors, as the Kinect v1 is now discontinued.
    I am having problems dealing with interference on the depth stream. Every now and then, the light from one of the Kinects will cause bad readings on another Kinect. This is not new; the Kinect v1 had interference problems too.
    To solve the interference problems with the v1, I used to simply toggle the IR emitters ON and OFF. Since my 3 Kinects don't need to acquire a frame at the same time, 2 of them would turn their emitters OFF and the one Kinect that needed to acquire a frame would turn its IR emitter ON.
    Now, since we can no longer control the IR emitter directly, can anyone give me a good solution for preventing my Kinects from interfering with each other? My Kinects don't need to acquire a frame at the same time, but they may each need to acquire a frame within a short interval (about 100 ms), so closing and opening the Kinects is not an option, as up to 2 seconds can pass between the call to KinectSensor.Open() and the first non-null depth frame.
    Thank you.

    I've been pricing builds on Newegg that I could turn into a small Kinect cluster.  I need three for what I want to do.  This is what I have right now.  It's going to be a while before I can build up three grand, so I don't know if I'll ever get to do it. But on the bright side I would be able to use it as a cluster for other projects as well, so...yeah.  I was thinking i5, but the linked conversation is deterring me from that. Would you be willing to let me know some more of your system's specs? Would love some input from anyone willing to offer any!
    Link to conversation that deterred me from i5, specifically the post from "Carmine Si - MSFT":
    https://social.msdn.microsoft.com/Forums/en-US/dbaeb6a9-2af2-49f9-8cdb-07aff309d5ce/insufficient-frame-rate-detected-from-kinect-kinect-sensor-v2?forum=kinectv2sdk

  • Configuring 1126/1327 in DAQmx

    Hi All,
    I'm having some difficulty finding the right combination of settings, hardware and software, to configure an SCXI-1327 to work with an SCXI-1126 for making frequency measurements of a speed sensor signal. The sensor's signal is a high-going-low pulse, from 12 V to 0 V, 4 pulses per revolution, so it is normally high at 0 RPM. The max RPM is about 6000, which gives us a max frequency of about 400 Hz. I have not been able to find a combination of settings of threshold, hysteresis, low-pass filter, etc., that gives me a reasonable reading. I'm also having difficulty with the 1327 in MAX, where there are configuration settings for the attenuator. I have tried to set the physical attenuator to 1:1 in both the 1327 and its configuration in MAX, but every time I go back into the configuration for the channel it is at the default setting of 100:1. We've tried to resolve this for a few days, without success, and need it running, so any suggestions are welcome.
    LabVIEW 7.1.1, MAX 7.2.
    Thanks,
    Putnam Monroe
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion

    Good Evening,
    The setup has a number of other "modules" in the SCXI chassis. We are reading voltage, current and temperature on four units under test, and, for the most part, getting good readings on those. I say for the most part because I was led to believe that the currents and voltages were straight DC when in actuality they are pulsed DC (the system uses PWM). I've gotten by that with the addition of an RC integrator on each channel to give me the effective DC, dependent on the duty cycle of the pulses, but I can't find the magic numbers to set up the frequency measurements, and I have the question about configuring the 1327's attenuator in MAX. When I set it to 1:1 to match my virtual channels' configuration and then go back into MAX to look at it, the value has reverted to 100:1, so I don't know if it ever was 1:1. I'm also not sure what values of threshold, hysteresis, etc., would be appropriate for measuring a ~200 Hz signal that is a nice square wave, 12 V going to 0 V. As the motors will either be stopped or running at roughly full speed, the band of interest is probably the 100 to 300 Hz range, so the 1126's lower limit of 15 Hz is not a limiting factor. Our bad readings are across the frequency range; I haven't been successful getting it to trigger properly at any frequency.
    WinXP Pro, LabVIEW 7.1.1, NI-Max 7.2
    Putnam Monroe
    Certified LabVIEW Developer
    Senior Engineer
    North Shore Technology, Inc.
