Max. sample frequency with "Modulus Breakpoint (on-board re-enable).vi"

Hi all,
I'm using the PCI-7344 board and I'd like to trigger an external device. The trigger frequency should be very high (> 4 kHz).
Is it possible to achieve such a trigger rate with breakpoints and an on-board program like "Modulus Breakpoint (on-board re-enable).vi"?
I don't need a visual indicator and so I don't need the 10 ms delay in the second part.
What is the timeout in the first part for?
Thanks in advance for your help.
Best regards
Attachments:
Modulus_Breakpoint_(on-board_re-enable).vi ‏114 KB

Depending on the movement of an XY stage (controlled by the motion controller) I'd like to trigger a pulsed laser: roughly every 2 µm of travel I need a short TTL pulse to release a laser pulse.
I tried a software-based solution as well as an on-board solution. Both work, but unfortunately the software-based solution is limited to about 20 Hz and the on-board solution to about 50 Hz.
Because of that I have to reduce Vmax in my application to 100 µm/s.
It would be much easier to trigger the laser at a fixed, adjusted frequency, but the problem is the acceleration and deceleration of the stage: during those phases the pulses would overlap much more at the edges, which is exactly what I want to avoid.
Thanks for your help. Best regards
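
As a quick sanity check on the numbers above, a minimal sketch (plain arithmetic, assuming a fixed 2 µm pulse spacing; the velocity and rate values are the ones quoted in the posts):

// Required trigger rate scales linearly with stage velocity: f = v / d.
#include <cstdio>

int main() {
    const double spacing_um  = 2.0;     // desired pulse spacing
    const double v_max_um_s  = 100.0;   // current velocity limit
    const double reenable_hz = 50.0;    // observed on-board re-enable rate

    std::printf("rate needed at %.0f um/s: %.0f Hz\n",
                v_max_um_s, v_max_um_s / spacing_um);        // 50 Hz
    std::printf("velocity supported by %.0f Hz: %.0f um/s\n",
                reenable_hz, reenable_hz * spacing_um);      // 100 um/s
    std::printf("velocity needing 4 kHz triggers: %.0f um/s\n",
                4000.0 * spacing_um);                        // 8000 um/s
    return 0;
}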

Similar Messages

  • Can't set max/min frequencies with cpupower

    Hi all ! And happy easter to everyone !
    I'm having a problem with CPU frequency scaling... I want to set the maximum and minimum frequencies according to my CPU specs, but I'm unable to do this.
    Well, starting from the beginning :
    I've a Samsung NP535U3C laptop with an AMD A6-4455 APU. I've two boost P-states; in fact TurionPowerControl tells me:
    [alastor@NP535U3C ~]$ sudo TurionPowerControl -l
    TurionPowerControl 0.44-rc2 (export)
    Turion Power States Optimization and Control - by blackshard
    Main processor is Family 15h (Bulldozer/Interlagos/Valencia) Processor
    Family: 0xf Model: 0x0 Stepping: 0x1
    Extended Family: 0x15 Extended Model: 0x10
    Package Type: 0x0 BrandId: 0x0
    Machine has 1 nodes
    Processor has 1 cores
    Processor has 7 p-states
    Processor has 2 boost states
    Power States table:
    -- Node: 0 Core 0
    core 0 pstate 0 (pb0) - En:1 VID:80 FID:10 DID:0.00 Freq:2600 VCore:0.5500
    core 0 pstate 1 (pb1) - En:1 VID:80 FID:7 DID:0.00 Freq:2300 VCore:0.5500
    core 0 pstate 2 (p0) - En:1 VID:90 FID:5 DID:0.00 Freq:2100 VCore:0.4250
    core 0 pstate 3 (p1) - En:1 VID:96 FID:2 DID:0.00 Freq:1800 VCore:0.3500
    core 0 pstate 4 (p2) - En:1 VID:108 FID:14 DID:1.00 Freq:1500 VCore:0.2000
    core 0 pstate 5 (p3) - En:1 VID:114 FID:10 DID:1.00 Freq:1300 VCore:0.1250
    core 0 pstate 6 (p4) - En:0 VID:116 FID:2 DID:1.00 Freq:900 VCore:0.1000
    --- Node 0:
    Processor Maximum PState: 6
    Processor Startup PState: 4
    Processor Maximum Operating Frequency: 2600 MHz
    Minimum allowed VID: 123 (0.0125V) - Maximum allowed VID 0 (1.5500V)
    Processor AltVID: 58 (0.8250V)
    Done.
    This is confirmed also by cpupower :
    [alastor@NP535U3C ~]$ sudo cpupower frequency-info
    analyzing CPU 0:
    driver: acpi-cpufreq
    CPUs which run at the same hardware frequency: 0 1
    CPUs which need to have their frequency coordinated by software: 0
    maximum transition latency: 4.0 us.
    hardware limits: 1.30 GHz - 2.10 GHz
    available frequency steps: 2.10 GHz, 1.80 GHz, 1.50 GHz, 1.30 GHz
    available cpufreq governors: conservative, ondemand, performance
    current policy: frequency should be within 1.30 GHz and 2.10 GHz.
    The "ondemand" governor may decide which speed to use
    within this range.
    current CPU frequency is 1.50 GHz (obtained by a direct call to the hardware).
    boost state support:
    Supported: yes
    Active: yes
    Boost States: 2
    Total States: 7
    Pstate-Pb0: 2600MHz (boost state)
    Pstate-Pb1: 2300MHz (boost state)
    Pstate-P0: 2100MHz
    Pstate-P1: 1800MHz
    Pstate-P2: 1500MHz
    Pstate-P3: 1300MHz
    Pstate-P4: 900MHz
    So my question is: why can the ondemand governor only switch between 2.10 GHz and 1.30 GHz? Shouldn't it switch between 2.6 GHz and 900 MHz?
    I've tried to set the lower frequency to 900 MHz with the command:
    cpupower frequency-set -d 900MHz
    but nothing changed; the lower limit still remained 1.30 GHz.
    I've also tried to set the maximum and minimum frequencies in /etc/default/cpupower :
    # Define CPUs governor
    # valid governors: ondemand, performance, powersave, conservative, userspace.
    governor='ondemand'
    # Limit frequency range
    # Valid suffixes: Hz, kHz (default), MHz, GHz, THz
    min_freq="900MHz"
    max_freq="2600MHz"
    # Specific frequency to be set.
    # Requires userspace governor to be available.
    # Do not set governor field if you use this one.
    #freq=
    # Utilizes cores in one processor package/socket first before processes are
    # scheduled to other processor packages/sockets.
    # See man (1) CPUPOWER-SET for additional details.
    #mc_scheduler=
    # Utilizes thread siblings of one processor core first before processes are
    # scheduled to other cores. See man (1) CPUPOWER-SET for additional details.
    #smp_scheduler=
    # Sets a register on supported Intel processore which allows software to convey
    # its policy for the relative importance of performance versus energy savings to
    # the processor. See man (1) CPUPOWER-SET for additional details.
    #perf_bias=
    # vim:set ts=2 sw=2 ft=sh et:
    but again, nothing changed.
    I've tried the ondemand and userspace governors but the situation is the same with both of them; I always get hardware limits of 1.30 - 2.10 GHz (a quick sysfs check of the limits the kernel exposes is sketched at the end of this thread).
    I've also tried to set the processor.ignore_ppc=1 boot flag, but it made no difference.
    Could someone help me? At least to set the lower frequency to 900 MHz so I can save a bit of battery.
    Thank you.

    Hi ! And thank you for your reply !! I'm always on conservative when I'm on battery to save some power. At this point I think I should report the problem to AMD. However here is my /proc/cpuinfo :
    [alastor@NP535U3C ~]$ cat /proc/cpuinfo
    processor : 0
    vendor_id : AuthenticAMD
    cpu family : 21
    model : 16
    model name : AMD A6-4455M APU with Radeon(tm) HD Graphics
    stepping : 1
    microcode : 0x6001119
    cpu MHz : 1300.000
    cache size : 2048 KB
    physical id : 0
    siblings : 2
    core id : 0
    cpu cores : 1
    apicid : 16
    initial apicid : 0
    fpu : yes
    fpu_exception : yes
    cpuid level : 13
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold bmi1
    bogomips : 4193.77
    TLB size : 1536 4K pages
    clflush size : 64
    cache_alignment : 64
    address sizes : 48 bits physical, 48 bits virtual
    power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro
    processor : 1
    vendor_id : AuthenticAMD
    cpu family : 21
    model : 16
    model name : AMD A6-4455M APU with Radeon(tm) HD Graphics
    stepping : 1
    microcode : 0x6001119
    cpu MHz : 1300.000
    cache size : 2048 KB
    physical id : 0
    siblings : 2
    core id : 1
    cpu cores : 1
    apicid : 17
    initial apicid : 1
    fpu : yes
    fpu_exception : yes
    cpuid level : 13
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold bmi1
    bogomips : 4193.77
    TLB size : 1536 4K pages
    clflush size : 64
    cache_alignment : 64
    address sizes : 48 bits physical, 48 bits virtual
    power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro
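
    For reference, a minimal sketch (plain sysfs reads, independent of cpupower and its localized output) that prints which limits and frequency steps the kernel is actually exposing for CPU 0; the paths are the standard cpufreq interface, and boost states handled purely in hardware may not appear in the list at all:

    // Print the cpufreq limits the kernel exposes for CPU 0.
    #include <fstream>
    #include <iostream>
    #include <string>

    static std::string read_line(const std::string& path) {
        std::ifstream f(path);
        std::string line;
        std::getline(f, line);
        return f ? line : "<not available>";
    }

    int main() {
        const std::string base = "/sys/devices/system/cpu/cpu0/cpufreq/";
        std::cout << "cpuinfo_min_freq: " << read_line(base + "cpuinfo_min_freq") << " kHz\n"
                  << "cpuinfo_max_freq: " << read_line(base + "cpuinfo_max_freq") << " kHz\n"
                  << "scaling_min_freq: " << read_line(base + "scaling_min_freq") << " kHz\n"
                  << "scaling_max_freq: " << read_line(base + "scaling_max_freq") << " kHz\n"
                  << "available steps:  " << read_line(base + "scaling_available_frequencies") << "\n";
        return 0;
    }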

  • What is the max sampling frequency of a BNC-2120

    My Windows platform is NT and I have LabVIEW 6.1.

    Hi Ulises,
    I don't quite understand what you mean. The BNC-2120 is a connector block which interfaces to E-Series and 671x boards; it simplifies the connection of real-world signals to the DAQ device. There is no sampling rate you can set on it. The limiting factor for the sampling rate is the DAQ device you're using, not the BNC-2120.
    I hope I haven't misunderstood your question.
    Regards,
    Luca

  • Flex sampling frequency changes when I use it with apple loops

    Flex sampling frequency changes when I use it with apple loops in a 24 bit 88.2 project

    sorry  !
    Flex changes sampling frequency when I use it with apple loops in a 24 bit 88.2 project

  • Low rates of (modulus) breakpoint re-enabling

    Hello!
    I am using the NI 7334 motion controller and I need the breakpoint output to trigger my measurements.
    In a quite simple program I re-enable the (modulus) breakpoint each time it occurs. The fastest repetition rate at which I could re-enable the breakpoint was about 110 Hz; with higher breakpoint frequencies it becomes irregular, meaning I don't get all breakpoints. I'm worried that it will get even worse once my program becomes more complex. Are these low re-enable rates a problem of the LabVIEW code, of Windows XP, or of the communication between the software and the PCI board?
    Attachments:
    motiontest.vi ‏81 KB

    The reason why you can't reach faster breakpoint re-enable rates is a mixture of all the reasons you have suggested, plus some internal delays on the board. There is not much you can do about that.
    There are several options for better breakpoint performance:
    You could contact your local NI branch to upgrade your NI 7334 to an NI 7354. The 735x boards provide buffered breakpoints with a 2 kHz trigger rate and periodic breakpoints at up to 4 MHz; re-enabling is done automatically on the board.
    You could use the counter inputs of a DAQ board like the PCI-6220 (two counters) or a dedicated counter board like the PCI-6601 (four counters) to generate the trigger signals by dividing down the frequency of the encoder signals. As this runs in hardware, you will also be able to generate triggers at rates up to several MHz (a sketch of this divide-down approach follows at the end of this thread).
    I hope that helps,
    Jochen Klier
    National Instruments Germany
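
    A hedged sketch of the divide-down idea using the NI-DAQmx C API (the device, counter and PFI terminal names are placeholders): a counter-output channel uses the encoder pulse train as its tick source, so every few encoder counts produce one hardware trigger pulse with no software in the loop.

    // Sketch only: divide an encoder pulse train down to a trigger signal
    // with a DAQ counter. "Dev1", ctr0 and PFI8 are placeholder names.
    #include <NIDAQmx.h>
    #include <cstdio>

    int main() {
        TaskHandle task = 0;
        DAQmxCreateTask("", &task);

        // One output pulse per (lowTicks + highTicks) ticks of the source
        // terminal. With the encoder A signal wired to PFI8, low=2/high=2
        // gives one trigger every 4 encoder counts.
        DAQmxCreateCOPulseChanTicks(task, "Dev1/ctr0", "", "/Dev1/PFI8",
                                    DAQmx_Val_Low, 0 /* initial delay */,
                                    2 /* low ticks */, 2 /* high ticks */);
        DAQmxCfgImplicitTiming(task, DAQmx_Val_ContSamps, 1000);

        DAQmxStartTask(task);
        std::printf("Generating divided-down triggers on Dev1/ctr0; press Enter to stop\n");
        std::getchar();

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }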

  • Max sampling rates in differential sampling

    I am rather a novice in terms of DAQ and wonder about the maximum
    sampling rate. For the DAQ-cards I use (M-series 6221, E-series 6024)
    the max sampling rates are said to be 250 and 200kS/s respectively. I
    am aware that all channels share a common A/D converter, and that
    sampling several channels concurrently limits the max frequency per
    channel to a smaller value. But, what if I use differential sampling?
    Does this mean I have a reduced max sampling rate, since it uses 2 analog input channels? My guess is that I don't, and that it is handled before A/D conversion, but I can't find the answer anywhere.
    Hope you can help!

    Hello Sirnell!
    You will have the same sampling rate regardless of which connection you use (differential, RSE, NRSE), but with the differential connection you reduce the number of channels you can use. With an E-series board with 16 inputs you will only be able to connect 8 signals using the differential connection (see the short example at the end of this thread).
    For more information about Field wiring take a peek at this link:
    http://zone.ni.com/devzone/conceptd.nsf/webmain/01F147E156A1BE15862568650057DF15
    For more information about DAQ (glossary):
    http://zone.ni.com/devzone/conceptd.nsf/webmain/45ACC30D4A769A3F862568690061D750
    Cheers.
    Ashwani S.
    Applications Engineer
    National Instruments Sweden
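
    A short illustrative sketch of how the board's aggregate rate divides across scanned channels (the 250 kS/s figure is the M-series 6221 number quoted in the question); differential wiring halves how many signals you can physically connect, but each differential pair is still scanned as a single channel, so the per-channel math does not change:

    #include <cstdio>

    int main() {
        const double aggregate_rate = 250e3;   // e.g. an M-series 6221, in S/s
        for (int n_channels = 1; n_channels <= 8; n_channels *= 2)
            std::printf("%d channel(s): up to %.1f kS/s per channel\n",
                        n_channels, aggregate_rate / n_channels / 1e3);
        return 0;
    }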

  • What are supported sampling frequency and digitization rates for Zen Micropho

    Some MP3's are playing back slowly. It is inconsistent, though, since files with the same digitization rate and sampling frequency will behave differently - some right speed, some slow.
    The bulletin board says:
    "My tracks don't play at the correct speed e.g. they play too slowly, why?
    Chances are they are encoded in an unsupported sampling frequency..."
    How do I find out what the supported sampling frequencies and digitization rates are?

    Thanks for the info. I guess I was looking for something more specific - the exact bitrates and sample rates that Creative claims to support. Would you know where official and comprehensive data can be had? There must be a tech spec somewhere.
    It is common these days in business to see a recording of, say, a conference call or seminar presentation at 32k bitrate/025Hz, or even 24k bitrate/8000Hz, posted to a company's website for download by those who could not be there, and MP3 players are increasingly used for their replay. Companies use low digitization rates because there is no need for hi-fi and the files are much smaller: less storage, faster download.
    I'd be surprised to think that Creative don't have compatibility with the standard range of rates offered by ubiquitous programs like Audacity and dBpower, the latter being one they themselves recommend!

  • Converting Units in LabView and specifying sampling rate with Universal Library functions

    Hi,
    I am having trouble converting units in LabVIEW 7.0 and having it write to a file / output on a chart in Nm instead of in volts. I can't seem to find any straightforward instructions, even though it seems like a simple task.
    Another task that would seem rather simple is changing the sampling rate with the XAIn program from the Universal Library for LabVIEW (this is version 5.40).
    If anyone could help me out, I would greatly appreciate it - I have tried other sources without much luck!
    Thanks
    Jenna

    Do you really need to change units, or do you need to scale the voltages read from your sensors? If it's scaling, it would be trivial to set this up in MAX or with the DAQ Assistant if you were using an NI board. Since you are not, you could use the Scaling and Mapping Express VI (a minimal scaling sketch follows at the end of this thread). The y-axis of a chart is just a label; you can use the text tool to change it to anything you want.
    I have no idea how to change sample rate for your board. Have you asked the vendor?
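
    A minimal sketch of the kind of linear scaling the Scaling and Mapping Express VI would apply (torque = slope × voltage + offset); the slope and offset values here are hypothetical and would come from the torque sensor's calibration sheet:

    #include <cstdio>

    // Convert a raw sensor voltage to torque in Nm using a linear calibration.
    static double volts_to_newton_metres(double volts, double nm_per_volt, double offset_nm) {
        return nm_per_volt * volts + offset_nm;
    }

    int main() {
        const double slope  = 10.0;   // hypothetical: 10 Nm per volt
        const double offset = 0.0;    // hypothetical: no zero offset
        for (double v = 0.0; v <= 5.0; v += 1.0)
            std::printf("%.1f V -> %.1f Nm\n", v, volts_to_newton_metres(v, slope, offset));
        return 0;
    }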

  • Measuring frequency with counter or analog input?

    I have some frequencies which I need to measure. Some signals are digital and some are analog.
    I have a PXI-6040E DAQ which has both counters and analog inputs. 
    I heard that you can measure higher frequencies with the counters than with analog inputs.
    What is the range of frequencies an analog input can determine versus a digital counter? 
    Where is the cutoff that you must absolutely use a digital counter to determine the frequency?
    Is the cutoff different for different DAQ cards, or is it generally the same?

    Have you looked at the specs for the device? The max sample rate and the max counter input frequency will be listed there. The max frequency for an analog input is based on the Nyquist sampling theorem. Are you at all familiar with it? It states that your sampling frequency has to be at least twice the frequency you are measuring (see the short example at the end of this thread).
    Since each device can have a different max sample rate, that also answers your last question.
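
    A tiny worked example of that rule (the 250 kS/s figure is just a placeholder analog-input rate, not a spec of the PXI-6040E):

    #include <cstdio>

    int main() {
        const double sample_rate = 250e3;                 // placeholder AI rate, S/s
        const double max_measurable = sample_rate / 2.0;  // Nyquist limit
        std::printf("At %.0f kS/s an analog input can resolve at most ~%.0f kHz.\n",
                    sample_rate / 1e3, max_measurable / 1e3);
        std::printf("Above that you need a counter, whose limit is the max input\n"
                    "frequency given in the device's spec sheet.\n");
        return 0;
    }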

  • Max achievable frequency in HLS code

    I am looking for the maximum frequency (smallest clock period) at which I could run the test code that I attach, targeted for Zynq (ZC702); a rough sketch of this kind of kernel appears at the end of this thread. It is the design of a very simple IP block with an hls::stream input and output, with an s_axilite control.
    It simply computes two multiplications and the expf function, all in single-precision floating point and inside a pipelined for loop with an initiation interval II=1.
    1) If the solution is set with a clock period of 10.0ns, then synthesis and implementation (export) will meet the timing (I am using Vivado HLS 2014.3). But if I lower it to 5.0ns (as an example), then synthesis expects 4.36ns (uncertainty 0.62ns), but implementation will report 6.011ns and fail. (I am aware that synthesis only estimates the timing).
    2) I have also read the post in the HLS forum regarding "registered stages" and the suggestion of trying "reg(...)" available in #include <hls/utils/x_hls_utils.h>. If I apply this idea, then synthesis estimates 5.29ns (uncertainty 0.62ns), but implementation reports 5.681ns.
    2a) If I only compute the reg( expf (...) );  inside the for loop, then implementation will just achieve 4.990 ns.
    Based on this behaviour, I cannot find a procedure to work out the minimum clock period (max clock frequency) at which the IP block will work. (I cannot keep running implementation (RTL export) for every idea I come up with.)
    QUESTION 1: Could you please provide guidance in this regard?
    QUESTION 2: Just because the depth of the pipelined for loop increases should not mean that the minimum clock period increases, am I right? (I would expect the tool to automatically register intermediate results as needed in order to achieve the targeted clock period.) Is there an alternative to reg(...), or any other approach I could follow?
    Thanks in advance,
    Javier
     

    hello
    In your code if you think that the FSM is the issue, then you should remove the for loop!
    I'm not joking: when you remove it, your IP will have an II of 1 and a latency of N; but from your integration perspective this is almost the same, because you still need AXIS IPs to push data to the VHLS IP.
    The fact that your VHLS IP reads an unknown number of samples, then stops and has to be restarted in SW, doesn't help you. It's a bit of overhead, actually.
    If you remove the loop you should find that it goes faster.
    Last point: I don't know what the Fmax achievable in the ZC702's programmable logic is, but I think that 200 MHz should already be plenty!?
    What speed does the rest of the design run at?
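
    For concreteness, a hedged reconstruction of the kind of kernel the question describes (the function name, argument list, loop depth and interface pragmas are illustrative, not the poster's actual code):

    // Simple pipelined streaming kernel: two single-precision multiplies plus
    // expf inside a for loop with II=1, hls::stream I/O and s_axilite control.
    #include <hls_stream.h>
    #include <cmath>

    void test_ip(hls::stream<float>& in, hls::stream<float>& out,
                 float a, float b, int n) {
    #pragma HLS INTERFACE axis      port=in
    #pragma HLS INTERFACE axis      port=out
    #pragma HLS INTERFACE s_axilite port=a
    #pragma HLS INTERFACE s_axilite port=b
    #pragma HLS INTERFACE s_axilite port=n
    #pragma HLS INTERFACE s_axilite port=return

        for (int i = 0; i < n; ++i) {
    #pragma HLS PIPELINE II=1
            float x = in.read();
            float y = expf(a * x) * b;   // two multiplications and expf
            // Variant discussed in the question: register the expf result
            // explicitly, which needs <hls/utils/x_hls_utils.h>:
            //   float y = reg(expf(a * x)) * b;
            out.write(y);
        }
    }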

  • How to adjust line sampling frequency in photoshop CC on surface pro 3

    I just got a Surface Pro 3 so that I could work on my art on the go. I am used to working on a Mac with a Cintiq, but fell for the hype that Photoshop was optimized to work with the SP3. However, when I try to use the brush tool, my lines look AWFUL! No matter what I do, the lines are wavy and messy looking. I read that PSCC raised the sampling frequency rate so that the line would look better, but I think THAT is exactly why it looks awful: the Surface Pro seems to be picking up EVERY microscopic twitch. I can't work like this... and I am surely not the only person using PSCC on the SP3, as they're ADVERTISED to work together. How do I adjust the sampling frequency so that my line looks as smooth as it does on a Wacom in Photoshop? ...because THAT image below looks like crap.

    I'm not an artist, so I don't draw using my Surface Pro 3. Many have the same issue with Wacom tablets and Cintiqs from what I read. They seem to be able to solve their concerns by installing a plugin, Lazy Nezumi Pro - Mouse and Pen Smoothing for PhotoShop and other Apps. Remember, I don't draw and have not installed that plug-in, so I cannot state that it works well on a Surface Pro 3.
    Doing a Web search  I found this I quote
    Hi guys,
    I'm the author of Lazy Nezumi Pro!
    Here's the situation with the Surface Pro 3: I'm not 100% sure, but I think the people who got LNP to work with the SP3 are the ones who have Photoshop CC 2014, which uses the new Windows 8 native tablet API (instead of Wintab, which is very old tech). If you have this version, make sure you enable Windows Ink for it to work right. It's possible that the N-Trig Wintab implementation is just buggy, or that it's doing something very differently from what LNP is expecting (this is probably the case, since I have reports that PS works fine without LNP). Unfortunately I won't know more until I get an SP3 myself and do some testing/debugging.
    I'm sorry I can't help more right now, but I'll keep you updated!  

  • How can I programmatically determine the capabilities of a card under NI-DAQmx (e.g. max sample rate, number of AI/AO/CTR channels, etc.)

    Is there a DAQ_Get_Device_Info() equivalent for NI-DAQmx? I need to iterate through all the devices on my system and build up a list of device capabilities. The system may include M-series and E-series cards.

    Attached is a program I've used in the past to determine the number of AI channels. It could easily be modified to check for AO, digital, or counter channels. Also, there are a ton of properties that you have access to (e.g. max sample rate, max/min voltage inputs, etc.) that are accessed as properties of the type of channel, or as timing properties, as opposed to properties of the board. Check out the DAQmx C Reference Help (usually at Start>>All Programs>>National Instruments>>NI-DAQ). Expand the NI-DAQmx C Properties and look at the List of Channel Properties, Timing Properties, etc. (a short sketch using the device-level property calls follows at the end of this thread).
    -Alan A.
    Attachments:
    Device_Info.vi ‏25 KB
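
    A hedged sketch of the same idea with the NI-DAQmx C API's device-level property calls (buffer sizes are arbitrary; extend it with whatever other DAQmxGetDev* properties you need):

    // Enumerate DAQmx devices and report AI channel count and max AI rate.
    #include <NIDAQmx.h>
    #include <cstdio>
    #include <sstream>
    #include <string>

    // Count entries in a DAQmx comma-separated list such as "Dev1/ai0, Dev1/ai1".
    static int count_items(const std::string& list) {
        if (list.empty()) return 0;
        int n = 1;
        for (char c : list) if (c == ',') ++n;
        return n;
    }

    int main() {
        char buf[8192] = "";
        DAQmxGetSysDevNames(buf, sizeof(buf));            // e.g. "Dev1, Dev2"

        std::stringstream devs(buf);
        std::string dev;
        while (std::getline(devs, dev, ',')) {
            if (!dev.empty() && dev[0] == ' ') dev.erase(0, 1);  // trim space

            char chans[8192] = "";
            float64 maxRate = 0.0;
            DAQmxGetDevAIPhysicalChans(dev.c_str(), chans, sizeof(chans));
            DAQmxGetDevAIMaxSingleChanRate(dev.c_str(), &maxRate);

            std::printf("%s: %d AI channels, max single-channel AI rate %.0f S/s\n",
                        dev.c_str(), count_items(chans), maxRate);
        }
        return 0;
    }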

  • Measure scan frequency using 2 e-series boards and an external clock.

    I am setting up a data acquisition system in which the user has the ability to select between multiple sources for the scan frequency. The user can either choose a hardware-timed scan frequency or an external source, in this case an encoder. When the user selects the external case, I would like to measure the scan frequency. I have a PXI chassis with 3 6071E DAQ boards. I have the encoder pulse train wired into the master board and would like to use one of the slave boards to count the pulses and measure the frequency. I based the code I developed on example code that shipped with LabVIEW. However, when I drive the master board with a known frequency using its internal clock, I measure 0 frequency using the slave board's counter.
    Attachments:
    DAQ_3_Boards_-_State_Machine_-_Scaled_Array.llb ‏1525 KB

    It sounds like you have your program set up to measure the frequency of the clock only when you are using an external source. The counter won't get an input if you are not using an external source, right? So when you drive the master board with an internal clock, the counter will have nothing to count.
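
    For reference, a hedged DAQmx sketch of a one-counter frequency measurement of an externally wired clock ("Dev2/ctr0" and "/Dev2/PFI9" are placeholder names for the slave board's counter and the terminal the scan clock is wired to; the original thread predates DAQmx, so treat this only as the modern equivalent):

    #include <NIDAQmx.h>
    #include <cstdio>

    int main() {
        TaskHandle task = 0;
        float64 freq = 0.0;

        DAQmxCreateTask("", &task);
        // One-counter frequency measurement, 1 Hz - 100 kHz expected range.
        DAQmxCreateCIFreqChan(task, "Dev2/ctr0", "", 1.0, 100000.0, DAQmx_Val_Hz,
                              DAQmx_Val_Rising, DAQmx_Val_LowFreq1Ctr,
                              0.001, 4, "");
        // Route the external scan clock into the counter's measurement terminal.
        DAQmxSetCIFreqTerm(task, "Dev2/ctr0", "/Dev2/PFI9");

        DAQmxStartTask(task);
        DAQmxReadCounterScalarF64(task, 10.0 /* timeout, s */, &freq, NULL);
        std::printf("measured scan clock: %.2f Hz\n", freq);

        DAQmxClearTask(task);
        return 0;
    }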

  • PCI-6023E DAQ card maximum sampling frequency

    Hello
    I am using a PCI-6023E DAQ card in a PC-based ETS solution (and writing the application in LabVIEW 7.1 with the RT module). The card has a 200 kS/s maximum sampling frequency, but it can be set to much higher sampling frequencies and the waveform acquired appears to be correct (i.e. I've tried setting it to 1 MS/s and sampling a 400 kHz sine, which is obviously above the Nyquist frequency for a 200 kS/s card, but on the spectral graph the main peak is at 400 kHz). Is the card driver doing some kind of free/coherent sampling?
    Moreover, when the sampling frequency is set to 200 kS/s, the card seems to be doing the same thing - i.e. for 200 kS/s and a sample block size of 200 kS, the graph should be updated once a second, but it's updating slightly slower.
    I'd really appreciate it if someone could explain to me (or give me a link to materials on) what exactly is happening here. Is the driver doing some background work, or maybe it is a problem with network latency/instability? What is the impact of this effect on real-time acquisition?
    Thanks in advance
    Jan Kienig

    Since the fundamental is 4 times the Nyquist frequency, what you are measuring is an alias of the fundamental. This works well as long as the fundamental is a repetitive signal: sampling every other peak and every other node looks the same as sampling every peak and node. Tektronix exploited this on their 7S series sampling heads. Another use of this phenomenon is the effective demodulation of high-frequency signals as long as the bandwidth meets Nyquist. As with your card, if the input amplifier supported it, I could extract modulation information from a 500 MHz signal so long as the bandwidth of that modulation did not exceed 100 kHz.
    Parker

  • PXI-7831R analog input max sampling rate?

    I'm using 5 of the analog inputs on the 7831R and seem to only be able to get a max sampling rate of 10 kS/s per channel. Looking at the specs it should do at least 10 times this per channel; also, the time for the A/D loop is 228 ticks of the 40 MHz clock, so this would suggest a higher sampling rate than just 10 kS/s. I'm missing something here... Any ideas as to what the deal is?
    pete

    Hi Pete,
    You are right in saying that the 7831R boards should acquire data much faster than 10 kHz. In fact you can take a look at the actual specs from the data sheet http://sine.ni.com/nips/cds/view/p/lang/en/nid/14757 as well as the product manual. However, I am concerned that you may be measuring the rate of acquisition in the RT VI (or any host VI) you are running as opposed to measuring it in the FPGA VI.
    Your time-critical loop may be running at 228 ticks of the 40 MHz clock, but the data is buffered (at that rate) and has to be transferred to the communication loop, which then sends the data to the host VI (which may be your RT VI). Hence, the rate at which the host VI receives the data will be much slower than the rate at which the FPGA VI is acquiring it.
    Hope this helps!
    Prashanth
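
    A quick check of the numbers in this thread (simple arithmetic, nothing board-specific):

    #include <cstdio>

    int main() {
        const double fpga_clock_hz    = 40e6;    // 40 MHz FPGA clock
        const double ticks_per_sample = 228.0;   // A/D loop time from the question
        std::printf("FPGA-side acquisition rate: %.1f kS/s\n",
                    fpga_clock_hz / ticks_per_sample / 1e3);   // ~175.4 kS/s
        // If the host VI only sees data after buffering and transfer, the rate
        // it observes can easily drop to the ~10 kS/s reported above, even
        // though the FPGA loop itself acquires at ~175 kS/s.
        return 0;
    }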
