Digital sampling

I am having some timing issues.
I am sampling analog inputs at 1 kHz and also sampling some digital signals "continuously", but I cannot figure out how to increase the sampling rate of the digital signal. I need to get the sampling rate of the digital signal up to 1 kHz to "coincide" with my analog data. I am using the DAQ Assistant Express VI to acquire both the analog and digital signals.
Thanks
Scott Donahue
University Of Illinois Urbana-Champaign 

I tried to use the timed loop, but it did not increase the sampling rate of the digital signal. I tried speeding the loop up (1 ms period) and decreasing my analog frequency and number of samples per loop, but that still did not work.
When I try to run the multi-func-synch AI-read dig chan VI, I keep getting error 200077. It says that I cannot use the sample clock and must use either handshaking or On Demand, which defeats the purpose of using this VI. I need to be able to choose the same sample rate for both inputs.
The format of the VI seems to be what I need. It appears that if I can get it working, I can synchronize my analog and digital signals to both sample at 1000 times per second, or 1000 Hz.
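If I understand the example right, the idea is to drive the digital task from the analog task's sample clock so that both run at 1 kHz. A rough sketch of that configuration using the Python nidaqmx API (the device and channel names are placeholders, and this assumes a board whose digital lines accept a hardware sample clock):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as ai, nidaqmx.Task() as di:
        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")      # placeholder device/channel
        ai.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)

        di.di_channels.add_di_chan("Dev1/port0/line0")      # placeholder line
        # Borrow the AI sample clock so both tasks sample at exactly 1 kHz together.
        di.timing.cfg_samp_clk_timing(1000.0, source="/Dev1/ai/SampleClock",
                                      sample_mode=AcquisitionType.CONTINUOUS)

        di.start()   # start DI first; it waits for the AI clock
        ai.start()   # starting AI produces the shared clock
        analog = ai.read(number_of_samples_per_channel=1000)
        digital = di.read(number_of_samples_per_channel=1000)
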
I am enclosing a link to my file with read and write privileges; I hope that you can help me with my issue. I appreciate your time and patience with this "LabVIEW newbie".
https://netfiles.uiuc.edu/xythoswfs/webui/_xy-11758353_2-t_qhBJ0PKv
Thanks
Scott

Similar Messages

  • Problem with digital samples of NI 9425 in Windows 7 on USB 3.0 ports

    I have a C program that reads digital samples from a cDAQ-9174 chassis with an NI 9425 module on Windows 7 x64. It has no problems on USB 2.0 ports, but it does on USB 3.0.
    When I connect the device to a USB 3.0 port, errors appear in the samples: lines are activated or deactivated uncontrollably.
    The PC is an Artigo A1250, with NI-DAQmx 9.6 drivers.

    Hey vic22,
    Sylvia is correct -- the 9174 has been thoroughly tested on the USB 2.0 standard.  But we still expect it to communicate properly with USB 3.0 ports.  We have seen a few cases in which specific host controllers cause unexpected behavior similar to what you describe, but in general, our USB devices are compatible with USB 3.0.
    Of course, if sticking with a 2.0 port works for your application, that will be the quickest/simplest recommendation that we can make.  If you need to move your cDAQ to a 3.0 port, let us know -- there are some troubleshooting steps we can take to figure out why you're seeing errors.
    There are some additional details in this somewhat related article.  Upgrading the drivers for your USB Host Controller might be worth a shot.
    Kyle B  |  Product Support Engineer  |  ni.com/support

  • How to read multiple Digital samples and plot a chart with time stamps

    Hi,
     Could anyone send me a code that:
    1. Reads multiple samples (let's say 4) from a single digital input, using an NI USB 6009 or NI USB 6251.
    2. Plots the digital data as a chart with time stamps.
    3. Finds the frequency.
    4. Logs the data into a file with time stamps.
    I have attached the code which I tried.
    Thanks,
    LK
    Attachments:
    DigitalNSample.vi ‏27 KB
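
    A minimal sketch of steps 1 and 4 using the Python nidaqmx package (the device name is a placeholder; note that the USB-6009's digital lines are software-timed, so the time stamps come from the host clock):

        import datetime
        import nidaqmx

        N = 4  # number of samples, as in step 1
        with nidaqmx.Task() as task:
            task.di_channels.add_di_chan("Dev1/port0/line0")  # placeholder device name
            readings = []
            for _ in range(N):
                readings.append((datetime.datetime.now(), task.read()))  # software-timed read

        with open("di_log.txt", "w") as log:  # step 4: log with time stamps
            for stamp, level in readings:
                log.write(f"{stamp.isoformat()}\t{int(level)}\n")
        # The frequency (step 3) can then be estimated from the time between rising edges.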

  • Sample rate for digital sampling (cDAQ-9172 & NI 9401)

    Hi!
    I have a cDAQ-9172 with an NI 9401 C-series module (digital). I would like to sample the digital inputs at a rate of e.g. 400 kHz or 200 kHz. My problem is that I can only select the 100kHzTimebase clock, and therefore only get a 100 kHz sample rate (the 20MHzTimebase clock is too fast, since it gives me a sample rate of 20 MHz). Is it possible to get a user-defined sample rate of e.g. 200 kHz, for example by dividing down the 20MHzTimebase clock?
    Solved!
    Go to Solution.

    The cDAQ-9172 chassis does not have an internal timing engine for digital input; however, you can use one of the onboard counters to generate your clock. Set your pulse-train-generation counter to one of the internal counters, such as "cDAQ1/_ctr0", and your digital input sample clock source to "/cDAQ1/Ctr0InternalOutput".
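
    In DAQmx calls that looks roughly like the following (a Python nidaqmx sketch; the chassis and module names are placeholders, and 200 kHz is the rate from the question):

        import nidaqmx
        from nidaqmx.constants import AcquisitionType

        rate = 200000.0  # desired digital sample rate
        with nidaqmx.Task() as clk, nidaqmx.Task() as di:
            # Generate the sample clock with an internal counter on the chassis.
            clk.co_channels.add_co_pulse_chan_freq("cDAQ1/_ctr0", freq=rate, duty_cycle=0.5)
            clk.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

            # Clock the digital module from the counter's internal output.
            di.di_channels.add_di_chan("cDAQ1Mod1/port0/line0:7")  # placeholder module
            di.timing.cfg_samp_clk_timing(rate, source="/cDAQ1/Ctr0InternalOutput",
                                          sample_mode=AcquisitionType.CONTINUOUS)
            di.start()
            clk.start()
            data = di.read(number_of_samples_per_channel=1000)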

  • Digital sample rate

    Is there a way to set the sampling rate for digital inputs?
    I am using a 6032E DAQ board. I use two VIs, Port Config and Port Read, but there I cannot set the sampling rate.
    The only thing I found relating to rate is for handshaking.
    I also have another question: in the same VI, I want to read from an analog input and from a digital input. I already wrote that VI, and it runs, but is there something I must consider when reading from both analog and digital inputs at the same time?

    You cannot set a sampling rate for digital operations on an E-series DAQ board. The port is static, meaning that there is no timing circuitry controlling the port; it is updated or read by software command only.
    You configure the port once, but you must call Port Read each time you want to read from the port, and Port Write each time you want to update the port.
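
    In today's nidaqmx terms, a paced software loop is the best such a board can do; a minimal sketch (placeholder device name, and the actual period is at the mercy of the OS scheduler):

        import time
        import nidaqmx

        with nidaqmx.Task() as port:
            port.di_channels.add_di_chan("Dev1/port0")  # configure the port once
            samples = []
            t0 = time.perf_counter()
            for i in range(1000):
                samples.append(port.read())  # one software-timed read per call
                # Pace the loop at roughly 1 ms; Windows gives no hard guarantee.
                time.sleep(max(0.0, (i + 1) * 0.001 - (time.perf_counter() - t0)))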

  • Digital sampling on PXI-7851R

    Hello,
    I would like to have your help on an issue I have using a PXI-7851R.
    I tried to acquire 48 digital signals at 1 MHz with this card.
    Using the project wizard, I succeeded in getting a working acquisition VI (using a DMA FIFO to pass data from the PXI-7851R to the PC), but it seems to be limited to roughly 100 kHz, which does not change with the number of inputs.
    But, as far as I understand the datasheet, digital acquisition may go as high as 40 MHz.
    Where am I wrong?
    Thank you,
    Gael

    Hi Romain,
    I have a PXI-1042-Q on which I use (for this application) the PXI-7852R (sorry about that, I had indicated that I have a 7851R). The host computer runs LabVIEW 2009 with the FPGA module, on WinXP (no RT kernel).
    So,
    I do a LabVIEW FPGA project of 'R Series Intelligent DAQ on My Computer' type.
    Then, with the FPGA wizard, I add one 'Buffered DMA Input', to which I add one connector (Connector1/DIO1).
    I set the clock of the Buffered DMA Input to 10 MHz, internal clock type, with 40000 samples (to get 4 ms of signal into the buffer).
    I open the project, go to MyFPGACode.vi, and press run to let LabVIEW know what it has to do. Everything is correct and I get no error.
    When it finishes, I start MyHostCode.vi and the 'Buffer Underflow' light comes on. If I go down to 1 MHz in 'Loop Rate', it works.
    What I am trying to do is record a digital signal from a connector to check communication between two parts of a system. This connector has 50 pins: 48 signals and 2 grounds. I would like to acquire at least at 10 MHz (I am not sure that I have to go that high) with all 48 pins at the same time.
    I enclose a dummy project I made with a single connector.
    Thank you for your help,
    Gael
    Message edited by Gael B on 01-18-2010 06:16 PM
    Attachments:
    Test10Mhz.lvproj ‏168 KB
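
    One thing worth checking is the raw transfer bandwidth the requested rate implies; a back-of-the-envelope sketch in Python (the 64-bit element size is an assumption about how the 48 lines are packed into the DMA FIFO):

        sample_rate = 10e6   # the requested 10 MHz loop rate
        element_bits = 64    # assumption: 48 DIO lines packed into a U64 FIFO element
        bytes_per_second = sample_rate * element_bits / 8
        print(f"{bytes_per_second / 1e6:.0f} MB/s")  # 80 MB/s, which the DMA path may not sustain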

  • How to digitally record audio? (iPad to MAC)

    Hello,
    I am generating sounds internally on my iPad 3 and playing them through the earphones. The thing is that I would like to record the generated samples in their digital form using a MacBook Pro. We are doing it in analog form, but we are not achieving good results at low volumes.
    What is the best way to do this?
    We are managing 3 solutions here:
    1- Instead of sending the samples to the audio codec, they could be sent to RAM and saved into a txt file or similar. Then this txt file could be loaded on the MAC.
    2- We could route them to HDMI port using the dock and connect the cable to the HDMI port of the MAC.
    3 - We could route them to USB port using the dock and connect the cable to a USB port of the MAC.
    I don't think that the HDMI/USB ports of the MAC can act as a device (slave) and record the digital samples to a file. Is it possible? How?
    Regards.

    Another way. You can use a USB flash drive & the camera connection kit.
    Plug the USB flash drive into your computer and create a new folder titled DCIM. Then put your movie/photo files into the folder. The files must have a filename exactly 8 characters long (no spaces) plus the file extension (i.e., my-movie.mov; DSCN0164.jpg).
    Now plug the flash drive into the iPad using the camera connection kit. Open the Photos app; the movie/photo files should appear and you can import. (You cannot export using the camera connection kit.)
    Secrets of the iPad Camera Connection Kit
    http://howto.cnet.com/8301-11310_39-57401068-285/secrets-of-the-ipad-camera-connection-kit/
     Cheers, Tom

  • How can I accumulate samples into specified values?

    Hi
    I use VISA Read functions, and I want to use the FFT and Play Sound functions on digital samples. Each time I only have a scalar value, and that cannot be used for the FFT or for playing sound. So, on the recommendation of one of the users, I must accumulate samples and then give them to the FFT and Play Sound functions for good performance.
    Does any of you have an idea how I can accumulate data and then use it while new samples keep arriving one after another?
    Thank you.

    Hi, ajapyy,
    If there's just one sample each time, you can consider using a queue as a buffer and querying the number of elements in the queue to determine whether to dequeue or not.
    Alternatively, you could try FFT PtByPt.vi from the Point By Point VIs in the Signal Processing palette.
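
    The same accumulate-then-process pattern, sketched in Python with a deque standing in for the LabVIEW queue (the block size is an arbitrary assumption):

        from collections import deque
        import numpy as np

        BLOCK = 1024   # samples to accumulate before each FFT (arbitrary choice)
        buf = deque()

        def on_sample(x):
            """Call once per scalar value from VISA Read; returns a spectrum when a block is full."""
            buf.append(x)
            if len(buf) < BLOCK:
                return None
            block = np.array([buf.popleft() for _ in range(BLOCK)])
            return np.fft.rfft(block)  # hand the accumulated block to the FFT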

  • How do I accurately measure digital pulses using a Compact FieldPoint

    Hi All,
    My goal is to measure the speed of an item by measuring the time difference between the operation of 2 switches.
    Pulses can be between 100 ms and hours.
    I require >100 µs accuracy @ 100 ms, 1 ms @ 1 s, etc.
    This architecture is fixed; what I am looking for is a method.
    I am used to using M-series NI DAQs with their synchronous digital sampling or counter acquisition.
    I have looked at a cFP-DI-304, but it looks like it samples asynchronously, dependent on CPU loading.
    The cFP-50x is promising, but I can't find many code examples and there are a few "sorry for the missed pulse events" notes.
    Does anyone have any suggestions or helpful hints for me?
    Thanks in advance.
    iTm - Senior Systems Engineer
    uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT

    As I said in my post, the architecture is fixed. I understand that there are many alternatives; most of them are an "up-sell".
    Very few have extended temperature range, LV RT, small form factor, fair price and industrial 24 V operation.
    We have an existing solution that uses custom hardware, and I am looking to improve the accuracy, reduce size, improve reliability and reduce overall system cost.
    CompactRIO is too expensive to achieve these goals.
    I experimented with a cFP-DI-304 (on-change method) yesterday and got reasonable results around the 1 ms mark.
    I don't have a sig-gen capable of the 12 V required to stimulate the inputs at a precise rate, so I wasn't able to confirm its accuracy.
    What I want to find out is how accurate the on-change timestamps are. Do they go beyond the quoted 1 kHz sampling mode? Are they prone to event loss like other NI devices? Does CPU load affect their function?
    How about the cFP-500/502?
    I would like to be able to help myself, but I haven't been able to find much useful information on the inner architecture of the cFP backplane or its associated modules.
    The most I could find was that the data is transferred a bit like a shared variable, and is subject to losing samples if CPU loading is too high.
    In the case of event-based acquisition, I am hoping that it captures the edges against a high-accuracy timebase and places them in a shared variable/buffer for collection by the software.
    Thanks
    iTm - Senior Systems Engineer
    uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT

  • 1 ms digital acquisition realistic for AI-16XE-50/Pentium/Win98?

    I'm trying to sample digital levels + buffered period measurement at 1 ms or better. Setting the tick counter for 1 ms intervals apparently causes too much overhead on my old laptop, and the samples are at 3-4 ms intervals at best. Running the acquisition loop flat out with no waiting will sample the digitals at barely 1 ms or faster, but some buffered period measurement samples are missing, even though it's contained within the same acquire loop. D'oh! Since the period measurement is apparently the slowest, I'm thinking there should be a way to hold off the digital sampling until the period measurement is done, so they all have the same time correlation, but I dunno how, seein' as I ain't never tried this here LabVIEW stuff before.
    Doug

    Doug,
    Win 98 is not a real-time (deterministic) operating system... you're always going to lose samples, and the interval will never be precise.
    I haven't used it, but I believe NI makes a LabVIEW Real-Time version that actually uses a separate interrupt-driven microprocessor on one of their DAQ boards to do the periodic sampling... at least that's how I think it works?
    Are you trying to trap really narrow spikes, or could an averaging technique be used to average out the varying period?
    Bill

  • Audition 3 seeing a different sample rate setting than what the device shows

    Hi,
    I have just installed Adobe Audition 3, along with the 3.0.1 patch, on a brand-new system running Windows 7 64-bit. The motherboard is an Asus Sabertooth X58 using Realtek High Definition Audio. The device drivers show that the audio sampling rate for line input is set to 24-bit, 192 kHz. I wanted to set it to the maximum that the sound card would allow, to test performance and audio quality.
    The problem is that when I bring up Audition 3 and hit record, I get the message "We do not support recording when your file does not match your hardware sample rate. Your current hardware sample rate is 44100Hz". Clearly this is not the case, since the Line In Properties Advanced tab is displaying "2 channel, 24 bit, 192000 Hz (Studio Quality)".
    Under Audition's Audio Hardware Setup it shows only one choice for Audio Driver: Audition 3.0 Windows Sound. It also displays Sample Rate: 44100 Hz, Clock Source: Internal, Buffer Size: 2048 samples, with no way to change these values.
    If I click on the Control Panel button I get:
    DirectSound Input Ports:
    Device Name: Line In (High Definition Audio Device)
    Audio Channels: 2
    Bits per Sample: 16
    Anyone know of how I can change these settings to get Audition to agree with the device settings?
    Thanks
    Dale

    DaleChamberlain wrote: Anyone know of how I can change these settings to get Audition to agree with the device settings?
    I'm afraid that life is nowhere near that simple. The main issue here is that Audition, in common with most audio software, uses a driver system called ASIO to talk to the sound device - this cuts out a lot of the OS and reduces the latency of the system considerably. There are several problems with ASIO though - the first being that it only supports a single device per system (or sometimes multiple identical devices, if the manufacturer can make them look like a single device), and with software designed to use this driver, to use any other driver (like a native Windows one) you have to use a converter stage like ASIO4ALL. This will convert the ASIO streams to WDM and let you use multiple sound devices - but with increased latency.
    It's the second problem that's really going to stuff you, though - and that is that, quite reasonably, ASIO is limited by its inventors to run at only three sample rates: 44.1k, 48k and 96k. So there's no way you can run at what you think might be a higher-quality setting. All settings above even 48k make your sound device work much harder, and for what? All that happens is that you increase the potential frequency response to way beyond the human hearing range - to no purpose at all. You don't have sources that can produce useful output at these frequencies, and you certainly don't have the means to reproduce them. This has all been well documented and explained before, so I'm not going over all that again. In a nutshell, Nyquist points out that any digital sampling device has a frequency response limited to a maximum of half the sample rate, so 48k gives us a frequency response up to 24 kHz - comfortably higher than any adult can hear, by quite a long way. Anything you sample and record beyond this by using even 96k is nothing but noise as far as humans are concerned, and imperceptible noise at that.
    So what the line input properties tab claims is possible only if you have a non-ASIO driver designed to support all potential rates. You don't have an ASIO driver available, because it's a built-in sound device, and anyway you've already pointed out that it's using the Audition Windows driver (a cut-down version of ASIO4ALL, effectively), so a conversion is already taking place. What Realtek refer to as 'High Definition Audio' is no such thing - all on-board sound devices of this nature are of universally low quality, and to improve this you'd need an external device - of which there are many available, usually with dedicated ASIO drivers. But none of them will work with ASIO beyond 96k, simply because the standard doesn't support any higher rates.
    If you download and use ASIO4ALL (it's free), then you will get an additional control panel which will show you exactly what your sound device is capable of doing as far as Audition or any other ASIO software is concerned, and this is a useful diagnostic tool anyway, so it's worth doing. You just select this option when installed, instead of the Audition Windows Driver.
    I'm sorry to be the bearer of what seems like bad news, but actually, it isn't. You will perceive no quality difference at all running at anything beyond 48k sample rates; all you will be doing is wasting your computer's resources unnecessarily. You waste both processing resources and hard drive space by processing at ridiculously high sample rates, and there are zero returns.

  • I need help with the analog to digital VI

    Hi there,
    I'm trying to create some sort of 13-bit virtual ADC.
    What I want to do is take a simulated signal (e.g. sine, square, etc.), sample it at a frequency of 8 kHz, and send each digital sample over the serial port.
    The thing is that I don't really know how to use the Analog to Digital VI. I know it converts an analog signal into a digital waveform or table, but I want to be able to use that digital data and not just watch it in a table.
    Sorry for my bad English, and thanks in advance.
    Solved!
    Go to Solution.

    Well, I designed a digital telephone switch (PABX) on an FPGA, and I want to test it with LabVIEW signals so I don't have to buy expensive ICs (ADCs and stuff).
    My idea is to use LabVIEW to simulate analog signals and sample them, send that digital information to my PABX, and once it has been switched, send it back to LabVIEW and display the switched information on a waveform chart or something.
    I hope you get my idea; I just don't know how to explain it better.
    Attached is the VI I made to try to understand how the Analog to Digital VI works.
    Thanks for your help
    Attachments:
    P2_10b.vi ‏36 KB
    MAIN IDEA.JPG ‏34 KB
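
    Here is roughly the same pipeline sketched in Python, in case it helps to see the steps spelled out (the tone frequency and port settings are made-up values):

        import numpy as np
        # import serial  # pyserial, if the codes really go out a COM port

        FS = 8000         # 8 kHz sample rate, as in the question
        LEVELS = 2 ** 13  # 13-bit quantizer
        t = np.arange(FS) / FS                  # one second of sample times
        analog = np.sin(2 * np.pi * 440.0 * t)  # simulated analog signal (440 Hz tone, made up)

        # Quantize [-1, 1] into integer codes 0..8191.
        codes = np.round((analog + 1) / 2 * (LEVELS - 1)).astype(np.uint16)

        payload = codes.tobytes()  # each 13-bit code travels as two bytes
        # with serial.Serial("COM3", 115200) as port:  # hypothetical port settings
        #     port.write(payload)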

  • NI Digital Electronics Board DAC and ADC access from Multisim

    Is it possible to use the ADC and DAC functionality of the NI Digital Electronics Board from a circuit designed in Multisim?
    I would like to have my students be able to digitally sample an audio input, perform transformations on the data stream, and listen to the effect.  Nothing particularly fancy.
    Thanks,
    -Bret Wood

    Hello,
    In order to use the ADC/DAC you need to define a protocol, and this goes beyond the scope of the PLD schematic in Multisim.
    LabVIEW can be used to accomplish this; there are some examples that you can use as a reference, which can be found in the NI Example Finder.
    Regards,
    Fernando D.
    National Instruments

  • Sample depth 10 bit vs 8 bit

    Television began processing video digitally with DVEs in the late '70s, with such boxes as the Vital Squeezoom and the ADO from Ampex, as well as entries from NEC, and so on.
    So why the discussion of 8-bit vs. 10-bit? A microcosm of links... check out SMPTE's site as well; all the standards are mentioned there...
    http://documentation.apple.com/en/finalcutpro/usermanual/index.html#chapter=C%26section=11%26tasks=true
    http://en.wikipedia.org/wiki/Serial_digital_interface
    All of those systems were based on composite video and processed data in parallel chunks, with luminance and chrominance sampled together. We'd look at it today and say it was one step above VHS! For the time, though, it was pretty awesome, though it was maintenance-heavy. When SDI was introduced it signalled a paradigm shift in television from slower, lower bit rates to a solid 10-bit sample depth... and there were 10-bit systems in between which proved the concept.
    What was the difference between the 8-bit composite and the 10-bit component sampling, and why 8- and 10-bit SDI?
    Well, there are several differences, but the consumer doesn't see most of them at home. The flavors vary from DVCAM to IMX50 and EVERYTHING in between, including MP4 on satellite uplinks! There are a myriad of formats, transmission styles, modulation standards and compressions! Which is best?
    We've used nearly everything in our own company, from FCP to Quantel eQs in our workflow, and recently purchased both Autodesk and Avid components to complement our table of offerings. What does it all mean, and how is it useful to the digital offering?
    Well, bottom line, mixing and matching creates as many challenges as it does solutions. But that doesn't answer the original question I presented... why 8-bit vs 10-bit? Is there a difference, and why? Where does HD fit in?
    If all the variations are limited to SDI and component video, which is pretty much what our broadcast industry is based on (given that there are internal differences with RED camera RGB, graphics, extreme high-end renderings and whatnot), the workflow pretty much standardizes on Y, R-Y, B-Y video. It's a pretty good flow, and it's close to the RGBY premium we all consider the holy grail of video, no matter the resolution.
    Green defines both the color and the resolution, so the difference channels of R and B are half-sampled to make bandwidth manageable. And that can be compressed, losslessly for the most part, about 6 to 1, as both DVCPro and IMX do. ProRes follows suit with its HBR compression, as does JPEG 2000.
    MPEG-2 at 100 Mbit and MP4 at about 50 Mbit follow suit.
    But the big question is whether there is ANY difference between 8-bit and 10-bit SDI in these flavors. As an engineer, I can unequivocally say yes. In one simple stroke, and in one sense, it boils down to sample depth. And the price is in both the luminance and the color. A color-difference sample, i.e. R-Y/B-Y, is a half-sampled signal for each channel of color; since our eyes are not very perceptive to COLOR noise, the signal is cheated to save money, circuitry, clocks, and so on. Still, if you do get the chance to see a full RED camera RGBY signal on an OLED monitor, it WILL pop your eyes! And that's an extrapolated stream.
    Ten-bit sampling (SMPTE 259M) for the SD-SDI signal yields 1024 steps of sampled video for a "one volt" signal. Digitally, it is data which streams at 270 Mbit/s for 10 bits sans audio, 360 Mbit/s with AES audio embedded. For standard def it would look like your average freighter at about 500 mph, and if you could deliver it home, it would look pretty good. NOT GOOD ENOUGH!
    Eight-bit sampling for the same stream yields 256 steps of video for that same "amplitude". It streams at 143 Mbit/s sans audio and 177 with embedded audio. Roughly a tramp steamer rolling over the waves at about 125 mph... pretty slow these days. An 8-bit SD-SDI signal has chroma sampling equivalent to the 10-bit, but not the luminance or bandwidth, and that same principle holds true for HD!
    And every other compression, down to DVCAM, is a compromise off these broadcast studio standards. Take a look at the Discovery channel sometime, look at the paint-by-the-numbers skin tones, and you'll see what shooting on the cheap gets you!
    Now for the supertanker! HD at 1080i weighs in at a whopping 1.45 Gbit/s of bandwidth! Roughly analogous to half a frame of image from a Canon 5D, HD was still designed for a MINIMUM screen size of 102"... MOM, try THAT in your living room!
    (As a tiny footnote... Canon used the H.264 HD off their 5D and 7D for their theater experience in Vegas, and it was pretty cool.)
    Here is the nasty that several manufacturers DON'T want you to know: it is the rise time on a digital sample that makes ALL the difference in the world. I spoke to the CEO of AJA while at NAB this year (2011). This is a guy who's been through the ranks as a working stiff in the TV biz. When I asked him about the Kona cards, he latched on to what I was saying very quickly... big smile, connection made! "First time anyone's asked me that," he said... "rise time is really important for HD... and it's because it's so hard to get it and not have all kinds of artifacts."
    What he was saying was that if you sample slower, you get less detail, less color, more noise. Yes, Martha, there IS a difference between 8-bit and 10-bit, from old composite TV up to HD!
    Color, luminance, detail. If you put crap in, you get crap out. Incidentally, the rise times for all of these: 8-bit SD-SDI ~375 ns, 10-bit SD-SDI ~226 ns, HD-SDI ~157 ns. The faster you sample, the higher the frequency you can capture, and the crisper the detail. The deeper you sample, the more steps, less overshoot, and richer color. The difference between a 1940s race car and the 200+ mph ones today: design, attention to detail, aerodynamics.
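
    For what it's worth, the 270 Mbit figure falls straight out of the 4:2:2 sampling structure; a quick check in Python:

        # SMPTE 259M component SD-SDI: 4:2:2 sampling at 13.5 MHz luma
        luma = 13.5e6        # Y samples per second
        chroma = 2 * 6.75e6  # Cb and Cr samples per second
        bits = 10
        print((luma + chroma) * bits / 1e6, "Mbit/s")  # 270.0 Mbit/s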

    I am looking for a way to use the Filter panel to sort/filter results by bit depth (16-bit vs 8-bit images). Is there a way to do this in Bridge CS4? I have looked in every part of the sorting mechanisms and can't find a way. Am I missing something?

    Unfortunately there is no such way in the Filter panel. It might be a good feature request; maybe you can add it using the link at the main page of this forum.
    In the meantime there are two possible workarounds. Since a 16-bit file is twice the size of an 8-bit one, you might want to use the sort-by-size criterion in the Path Bar, but with different source files or layered files that might not always be sufficient.
    If you use the Find command (Cmd+F, or menu Edit > Find), you can select the find-by-bit-depth option. Also select the "equals" option and use either the number 8 or 16.

  • Digital Signal Processing Technique

    Dear Members,
    The problem I have in my term paper is to input a sound sample through a sound card (microphone) and acquire the digitally sampled frequencies of that sample, in order to obtain the DFT of the signal for recognition.
    Which API or method should I use to develop this in Java?

    javax.sound might be what you want.
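    For reference, the DFT that the recognition step rests on is X[k] = Σ_{n=0..N−1} x[n]·e^(−j2πkn/N), where the x[n] are the samples the audio API hands you; javax.sound.sampled can capture them from the microphone.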
