Ways of synchronizing the PXI FPGA clock

It seems that there are two different ways of synchronizing the code on the PXI-7852R to the clock of the PXI-1062Q chassis. What is the difference between them?
1.
As described in http://digital.ni.com/public.nsf/allkb/D108D4AF937524CA862570FF004C2A48?OpenDocument
I'm not quite sure what this actually does: I have a 40 MHz onboard clock on my FPGA; is that the one that will be synchronized? Will the 10 MHz chassis clock thus automatically be multiplied to 40 MHz?
2.
Use a single-cycle Timed Loop and choose the chassis 10 MHz clock (or a clock derived from it) as the loop clock.
What is the difference between these two ways? What does the first way do exactly (does it also apply to untimed loops)?

Hi tom_ifms,
The first method synchronizes your 40 MHz clock to the 10 MHz PXI clock. With the second method the SCTL uses the PXI clock directly.
Hope this helps
Kind regards
Carsten

Similar Messages

  • How can I change the FPGA clock for my whole code

    Hi, I'm new to LabVIEW and am using the 8.6 demo (Real-Time and FPGA modules).
    I developed some code for practice and want to change the clock frequency from 40 MHz to 200 MHz for all of it.
    I mean, how can I use a derived clock for all my VIs?
    LabVIEW 2009, Windows 7

    PAR done!
    ERROR:Xflow - Program par returned error code 31. Aborting flow execution...
    When I use 200 MHz as the FPGA clock I get this error after compilation.
    Is there a way to run the FPGA at something other than 40 MHz? I don't want to use Timed Loops because I am using loops inside loops.
    LabVIEW 2009, Windows 7

  • How can I drive the PXI backplane clock with a PXI-6608?

    Is the PXI backplane clock the timebase of the PXI trigger bus?
    How can I use a PXI-6608 as the timebase of the PXI trigger bus?
    Thanks!

    Hi,
    The 6608 must be in slot 2 of the chassis. Once you make a call to the DAQ driver, the OCXO on the 6608 will be routed to the PXI_Clk10 line.
    More information on the backplane and routing clock signals can be found in these Knowledge Bases:
    http://digital.ni.com/public.nsf/websearch/5EC370419A5ECA7A86256CFC0061C528?OpenDocument
    http://digital.ni.com/public.nsf/websearch/D5B8D4D3B67DF1E086256BF8007BBF93?OpenDocument
    I hope this helps. Have a Great Day!
    George
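
    For reference, here is a minimal C# sketch of an explicit clock route using the NI-DAQmx .NET class library (DaqSystem.Local.ConnectTerminals mirrors DAQmxConnectTerms from the C API). The device name "PXI1Slot2" and the 20MHzTimebase/PXI_Trig0 terminal names are assumptions and may differ for your board; the OCXO-to-PXI_Clk10 route itself is made by the driver as described above.
    using NationalInstruments.DAQmx;
    class RouteTimebaseSketch
    {
        static void Main()
        {
            // Any call into the DAQ driver loads the device; with the 6608 in slot 2,
            // this is when the driver routes its OCXO onto PXI_Clk10.
            foreach (string dev in DaqSystem.Local.Devices)
                System.Console.WriteLine(dev);
            // Explicitly export the board's 20 MHz timebase onto PXI trigger line 0
            // (terminal names are assumptions - check the device routes table in MAX).
            DaqSystem.Local.ConnectTerminals("/PXI1Slot2/20MHzTimebase", "/PXI1Slot2/PXI_Trig0");
            // ... use the clock on PXI_Trig0, then remove the route when finished.
            DaqSystem.Local.DisconnectTerminals("/PXI1Slot2/20MHzTimebase", "/PXI1Slot2/PXI_Trig0");
        }
    }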

  • Can't replace PXI onboard clock with PXI-6682 oscillator clock on PXI backplane

    I have installed a PXI-6682 timing module in slot 2 of a PXI-1031 chassis with a PXI-8110 controller running Hypervisor and RT.  The 6682 is installed on the RT system under Hypervisor for GPS timing during measurements, but I would like to put the TCXO on the PXI-1031 chassis backplane (replacing the PXI onboard clock).  When I try to run the "Route Clock.vi" that I found in the Example Finder, all I get are error messages indicating that a parameter for this operation is invalid.  The source terminal is "Oscillator" and the destination terminal is "PXI-Clk10_In".  How do I determine which parameter is invalid?  Any suggestions?  Thanks

    Hi vugt,
    I tested this out and have what I believe to be your final answer.
    The short answer:  The PXI-6682 can be used with NI-Hypervisor on the Windows side, but not on the Real-Time side.
    The long answer:
    While the NI-Hypervisor Manager allows you to put the PXI-6682 on two different systems, it is still only one PXI card on the PCI bus, so really only one system can access it at a time.
    Therefore we need to assign both "devices" to either Windows or Real-Time.  For our purposes, let's assume we assign it to Real-Time.
    On an NI-Hypervisor system, each PCI interrupt line can only be assigned to one operating system.  Either it can be assigned to Windows, or it can be assigned to RT.
    Here is where the problem arises: The PXI-6682 needs to be located in a System Timing slot (generally slot 2).  However, this slot (at least in the 1000B chassis I tested in) is located on the same PCI interrupt line as the chipset.  Windows is required to have access to this PCI interrupt line, so devices used by Real-Time cannot be located on this interrupt line.
    This prevents us from being able to use the PXI-6682 in Real-Time.
    The NI-Hypervisor Manager will try to tell you to resolve this conflict by moving the PXI-6682 to slot 3; however, then the System Timing slot requirement is no longer met.
    This does not prevent the card from being used on the Windows side of an NI-Hypervisor system, or on a purely Real-Time system.
    Have a great day,
    Chris V
    Applications Engineer
    National Instruments

  • Best way to synchronize multiple FPGAs

    I have multiple PXI-7833R FPGAs and I need all of the AIs to be sampled at the same times (across all FPGAs). As I sample all of the individual AI channels, I buffer the data (write to DMA), scan it and check for a user defined trigger in a different loop. Once I discover this in one channel, I save the data from all FPGAs. In terms of synchronizing the sampling, I had begun, from one FPGA, to send a signal over the PXI trigger line to tell the others to sample, but I assume this does not guarantee synchronization. If I base all of the separate FPGA VIs off of the PXI clock, how do I synchronize the loops to sample at the same clock times?
    Thanks

    Hi,
    Well, if it's not very time-critical you could pass messages through the host, that's right.
    Another way, which is quite hard to implement though, would be to use the other available PXI trigger lines to send messages directly from one FPGA to the other. You would need something like handshaking: a master that directs which slave is allowed to send, a kind of clock for synchronization, and so on.
    I cannot give you detailed information, since I have never done that myself, but I know of other projects where this works quite well.
    Maybe another forums user can give you some better advice.
    Thanks,
    Christian

  • Source Synchronous Input: Capture clock/Launch Clock analysis

    Hi, I have a source-synchronous LVDS DDR input into a Kintex-7; the launching clock is edge-aligned to the data, and the capture clock should capture on the opposite edge (a launch on the rising edge should be captured by the falling edge). I have designed it to work at 100 MHz by compensating the clock insertion delay with a PLL (to save the MMCM for other purposes), using a BUFH (the timing is not so tight as to need BUFIO/BUFR). The PLL also centers the opposing edge on the data window by shifting -90°. Now the launching clock waveform is {5.0 0.0}, and the waveform generated by the PLL is {2.5 7.5}; this is reported correctly by Vivado. But when Vivado analyzes the setup path (I have set the proper set_input_delays) the following happens:
    - Launching clock rising edge is correctly at 5.0 ns at the input data PIN.
    - Capture clock falling edge is INCORRECTLY at 7.5 ns AT THE CLOCK PIN.
    What I don't get is: why does Vivado, which recognizes that the capture clock is a clock generated by the PLL, use the {2.5 7.5} waveform AT THE CLOCK PIN, and not at the output of the PLL BUFH? I mean the falling edge of the capture clock should be at 7.5 ns at the output of the BUFH, not at the input to the FPGA (I see that the PLL correctly shifts this 7.5 ns to be 5 ns at the BUFH, but this is not what actually happens).
    Doing the calculations manually the interface meets setup/hold with ease. I just want Vivado to make the proper analysis.

    I am not 100% certain I followed all your logic, but I think the issue is that you aren't following how clocks are treated in SDC/XDC.
    In Vivado/SDC/XDC, there are two separate concepts - clock phase, and clock propagation. The launch/capture edge relationship is determined purely from phase - propagation does not factor into it. Regardless of how they are generated, you have two clocks:
      - the primary clock (generated by the create_clock command), which has a waveform of {0.0 5.0} (I am not sure why you say the opposite - {5.0 0} is not a meaningful representation in XDC)
      - the automatically generated clock at the output of the PLL, which has edges at {2.5 7.5}. It's a little irrelevant how you defined it (with a +90 or -90 degree shift, since the interface is DDR) - for the sake of argument, I will say it's +90 degrees.
    First, recognize that when you define a DDR input and use an IDDR to capture it, you are defining four static timing paths - you have two startpoints, one from your set_input_delay and one from your set_input_delay -clock_fall -add_delay. You also have two endpoints, one at the rising edge of the clock at the IDDR and one at the falling edge. This generates four paths:
      a) rising input -> falling IDDR
      b) falling input -> rising IDDR
      c) rising input -> rising IDDR
      d) falling input -> falling IDDR
    All four paths exist, and all are timed.
    Now you need to understand the rules that Vivado uses to determine launch and capture edges. For this system, it is easy - the capture edge is always the earliest edge of the capture clock that follows the launch edge. So in this case (assuming the launch clock is {0 5.0} and the capture clock is {2.5 7.5}) the edges will be:
      a) rise at 0 -> fall at 7.5
      b) fall at 5 -> rise at 12.5
      c) rise at 0 -> rise at 2.5 (this is the most critical one, so a) is irrelevant)
      d) fall at 5 -> fall at 7.5 (this is the most critical one, so b) is irrelevant)
    Now that it has determined the launch and capture edges for all the paths, it starts the propagation at the primary clocks. For your set_input_delay, this is the clock pin. For the capture IDDR, the edge starts at the primary clock, propagates through the PLL (which adjusts it for clock propagation but not for the phase shift), and ultimately to the IDDR clock.
    Now, in a real system this is what is going to happen - I am not sure why you think this is incorrect. If, however, there is some reason to believe that c and d are false paths, then you have to declare them as such (which will then leave a and b as the ones that remain). To do this, you would work with a virtual clock - you would define TWO clocks: the primary clock on the clock pin, and an identical virtual clock for your set_input_delays.
    # Create the real and virtual clock
    create_clock -period 10 -name real_clk [get_ports clk_pin]
    create_clock -period 10 -name virt_clk
    # Define the input delay with respect to both edges of the virtual clock
    set_input_delay <delay> -clock virt_clk [get_ports data_pin]
    set_input_delay <delay> -clock virt_clk [get_ports data_pin] -clock_fall -add_delay
    # Disable the rising to falling and falling to rising paths
    set_false_path -rise_from [get_clocks real_clk] -fall_to [get_clocks virt_clk]
    set_false_path -fall_from [get_clocks real_clk] -rise_to [get_clocks virt_clk]
    Even though "real_clk" and "virt_clk" are different clocks all clocks in Vivado are related by default, so the fact that they have the same period and starting phase (which defaults to {0 5.0}) then they are effectively the same clock.
    The reason for using the virtual clock is to make sure that if there is a rising to falling edge path inside the FPGA, you don't accidentally declare it false too - and these will happen between the IDDR and fabric logic (if it is in OPPOSITE_EDGE mode).
    I hope this is clear... It can be a bit confusing, but it does make sense.
    Avrum

  • Synchronizing PXIe-6556

    Hi,
    I'm trying to do the following two different synchronization tasks (for proof of concept) with two PXIe-6556 cards both in a PXIe-1073 chassis:
    Synchronize two generation sessions with one generation session on each 6556 card.  If I configure one session to have a SoftwareStartTrigger and export StartTrigger on PxiTrig0Str, the second session will successfully trigger from it.  However, they aren't aligned, which I think is due to them not having a shared reference clock?
    Synchronize two generation sessions on a single 6556 card.  With this one I haven't had any luck, and I'm not sure it is possible.
    Thoughts?
    Thanks for the help.

    Hi Jesse_O and PeanutButterOven,
    Thanks for the information.  I was able to sync two different 6556 cards with TCLK.  I only created a generation session on each card.  Tomorrow I'll work on sync'ing two generate and acquire sessions.
    I do have one question for now.  When I'm watching the flashy lights on the front of the 6556s during the execution of the code, I notice the Active LED goes red for a brief period on both cards while the niTCLK.Synchronize() method is executing.  Is this to be expected?  Maybe part of the synchronization process?  When I run the niTCLK.Initiate() method the data shows up synchronized on the o-scope, so right now it appears that it is working correctly.
    Below is the C# code I'm executing for this demo.  Is there anything I have (or am missing) that would cause the Active LED to display red?
    niHSDIO pxiBoxGenerate1;
    niHSDIO pxiBoxGenerate2;
    string resourceName1 = "Dev2";
    string resourceName2 = "Dev3";
    string channelList1 = "0";
    string channelList2 = "0";
    double sampleClockRate = 100000;
    string channel1WordString = "10101010100";
    string channel2WordString = "01010101010";
    string digitalWordString1 = channel1WordString;
    string digitalWordString2 = channel2WordString;
    List<System.IntPtr> sessionsL = new List<IntPtr>();
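    // Convert each '0'/'1' character of the pattern strings into one byte per sample.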
    byte[] digitalWordByte1 = digitalWordString1.Select(n => Convert.ToByte(char.GetNumericValue(n))).ToArray();
    byte[] digitalWordByte2 = digitalWordString2.Select(n => Convert.ToByte(char.GetNumericValue(n))).ToArray();
    System.IntPtr session1;
    pxiBoxGenerate1 = niHSDIO.InitGenerationSession(resourceName1, true, true, "", out session1);
    sessionsL.Add(session1);
    System.IntPtr session2;
    pxiBoxGenerate2 = niHSDIO.InitGenerationSession(resourceName2, true, true, "", out session2);
    sessionsL.Add(session2);
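    // Configure both generation sessions the same way; only the waveform data differs.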
    pxiBoxGenerate1.AssignDynamicChannels(channelList1);
    pxiBoxGenerate1.ConfigureSampleClock(niHSDIOConstants.OnBoardClockStr, sampleClockRate);
    pxiBoxGenerate1.ConfigureGenerationMode(niHSDIOConstants.Waveform);
    pxiBoxGenerate1.ConfigureGenerationRepeat(niHSDIOConstants.Finite, 1);
    pxiBoxGenerate1.WriteNamedWaveformWDT("Test", digitalWordByte1.Length, niHSDIOConstants.GroupByChannel, digitalWordByte1);
    pxiBoxGenerate2.AssignDynamicChannels(channelList2);
    pxiBoxGenerate2.ConfigureSampleClock(niHSDIOConstants.OnBoardClockStr, sampleClockRate);
    pxiBoxGenerate2.ConfigureGenerationMode(niHSDIOConstants.Waveform);
    pxiBoxGenerate2.ConfigureGenerationRepeat(niHSDIOConstants.Finite, 1);
    pxiBoxGenerate2.WriteNamedWaveformWDT("Test", digitalWordByte2.Length, niHSDIOConstants.GroupByChannel, digitalWordByte2);
    pxiBoxGenerate2.Initiate();
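    // The niTCLK calls below treat both sessions as one group: configure homogeneous triggers,
    // synchronize the sessions, then initiate them together.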
    int niTCLKError = 0;
    niTCLKError = niTCLK.ConfigureForHomogeneousTriggers((uint)sessionsL.Count, sessionsL.ToArray());
    niTCLKError = niTCLK.Synchronize((uint)sessionsL.Count, sessionsL.ToArray(), 0);
    niTCLKError = niTCLK.Initiate((uint)sessionsL.Count, sessionsL.ToArray());
    pxiBoxGenerate1.WaitUntilDone(-1);
    pxiBoxGenerate1.DeleteNamedWaveform("Test");
    pxiBoxGenerate2.DeleteNamedWaveform("Test");
    Thanks for your help,
    Harold

  • How to affect the actual FPGA clock rate?

    Hello,
    I developed an FPGA VI which should run with an 80 MHz clock rate on the PCI-7833R board.
    This was no problem at the beginning, but in the meantime the code has grown (slices: 30%). Now the compiler says that there is an error with the timing constraints. The maximum clock it can reach is 74 MHz.
    Unfortunately I need the 80 MHz to be fast enough.
    What parts of the code influence the actual clock rate?
    How can I get it faster?

    I think you misunderstood what I mean:
    The problem is not how many ticks a while loop needs for one cycle.
    The problem is the clock rate setting for the FPGA board. It is set to 80 MHz and not the default 40 MHz.
    When I compile my VI the compiler reports that it can't keep the 80 MHz. That means, for example, that a single-cycle Timed Loop doesn't need 12.5 ns but, for instance, 13.4 ns.
    If I separate my code into two parts and compile each part as a separate project, the compiler can keep the 80 MHz. Only together does it not work.
    So the whole problem has something to do with the size of the project, the depth of the structures, etc.
    So I need to know, for example, which structures slow down the clock most, or something like that.

  • Is there an obvious way to prevent an FPGA multiply from using DSPs?

    In FPGA coding, a High Throughput Multiply function will take advantage of any available DSPs on your FPGA. This is great, unless you have more multipliers than available DSPs. My FPGA code is currently failing to fit: although it's using just 50% of the registers/slices, it's trying to use more DSPs than are available (my Virtex-5 chip has 64 DSPs).
    In the help I'm pretty sure it says the compiler will revert to using slices for multiplications in the absence of sufficient DSPs, but my experience is otherwise.
    If I convert just one of the High Throughput Multiply functions to a standard LabVIEW Math multiply primitive I receive timing errors from the compiler (even though there are minimal timing constraints in my code), and it therefore still fails to compile.
    So my question is: Is there a way to prevent a Multiply function in LabVIEW FPGA from using DSPs?
    Thoric (CLA, CLED, CTD and LabVIEW Champion)

    Dragis wrote:
    Unfortunately, this is a bug/feature of the Xilinx tools. Once the tools have decided to use DSP blocks for the multipliers, they will only use them and not place any in logic. There is a way to disable automatic usage of DSP blocks, but you'll have to work with NI customer support to make that change. If you use this option, you'll have to manually drop the High Throughput Multiply function everywhere you want a DSP48 to be used.
    I've gone through Technical Support with this issue once before, about a year ago, and there was no mention of a 'solution', just the unhelpful suggestion to "use fewer multipliers". I'll raise another support case and link to this thread; maybe they'll need to speak with you directly to understand what this option is.
    I'm perfectly happy (in fact I'd prefer) to be able to specifically select where a DSP is and isn't used for a multiplication. If reasonable, I'd suggest this option to disable automatic usage of DSPs be made public (either through the KnowledgeBase, or as an actual feature in the FPGA toolkit perhaps).
    Thanks Dragis.
    Thoric (CLA, CLED, CTD and LabVIEW Champion)

  • Is there a way to dismiss the alarm clock without having to unlock the phone?

    This has been annoying and, quite honestly, I don't remember it being a problem before the iOS 5 update.
    When an alarm goes off using the default clock / alarm app and the phone is locked, all I get is the option to Snooze. The only way to actually dismiss the alarm so that it does not go off again is to unlock the phone while the alarm is going off.
    Is there a way - either by pressing a button on the phone or getting a dismiss button next to the snooze button - to dismiss an alarm in the lock screen without having to unlock the phone? I know if the phone is unlocked, the alert will give me the option of either "Snooze" or "Ok," but I don't get that same option in lock mode.
    Any help with this would be appreciated.

    The unlock slider at the bottom of the lock screen should turn off the alarm.

  • Is there a way to keep the nano (Clock face) ON all the time?

    Hello all.
    I know this may sound silly, and definitely a drain on the battery life.
    (But) I like my nano Clock face "always on", it's good to look at anyway, with all the new clock faces.
    I think the nano "defaults" and the screen fades to black after a certain amount of time.
    Is there a way to
    keep it "lit" and "always on"? OR
    prolong the "screen lit time"?
    Thanks and cheers

    Hey Everyone,
    DISCOVERED THE ANSWER!!!!!
    I recently got one of these fun little squares and I just figured out (what seems to be) a simple way to keep the clock face on. (Though I certainly agree with the comments about the ultimate impracticality of doing so for a long time, as the battery in this thing only has so much to give.)  I'm running the latest update on it.
    This seemed too easy to believe when I did it! No need to keep it empty of music! No need to keep Fitness running! I think this works because the clock face, stopwatch, and timer are all in the same segment of its little nano operating system.
    If you turn on the stopwatch and then switch back to the clock face, it will stay fully illuminated as long as the stopwatch is running. I think the limit on the stopwatch is 24 hours, but my guess is your battery is going to be dead long before that. I'd recommend setting the brightness super low if you're going to do this.
    Anyway, problem solved if you want to keep it lit for however long you like. (of course you have to sacrifice the stopwatch)
    Enjoy!!

  • 6300 - Any way to change the digital clock appearance?

    Hi all,
    Very happy with my new 6300 except for one thing - the digital clock on the normal standby screen is difficult to read and quite ugly with its pretend LED display. On every other screen, and when the display blanks, the time is shown in the very readable standard system font. Can the clock on the standby screen be configured that way? Or changed in any way?
    Thanks for any help.

    Well, you get two clock options on the 6300: one is digital and the other is analogue. There is only one style for each clock and you can't configure it. While browsing the menu the clock will appear in the system default font only.
    What you can do, if you want to change the clock's appearance for a while, is apply a theme which has an SWF clock set as the wallpaper. You can easily find such themes if you Google for them.

  • VISA Server and PXI FPGA

    I have a PXI-1042 with two PXI-7833R FPGA cards.  I wrote some new code for them, but the PXI doesn't have LabVIEW on it (only the run-time engine), so I want to run my code remotely on my office PC while controlling the device through VISA Server.  On the PXI, I started VISA Server and allowed all access.  At the beginning, I was able to see the PXI backplane as a device remotely, but I wasn't able to see the two FPGA cards.  After a while, not even the backplane showed up.  Does anybody have a solution for this?  Thanks!

    Hello jyang72211,
    Make sure that the National Instruments RIO Server is running on your Windows computer. The RIO Server allows other systems to access RIO devices on your local computer through network connections. You can check the status of the service under Control Panel>>Administrative Tools>>Services. With the RIO Server service running on your computer, please double-check that you have the correct remote access settings configured in Measurement & Automation Explorer. In MAX, go to Tools>>NI-RIO Settings and make sure that you have an "*" entry checked in the Remote Device Access list. I have included links to some documentation with more information regarding this issue. Hope this helps!
    Remote Application Development for Windows based CompactRIO
    http://www.ni.com/white-paper/13050/en
    How do I Configure the RIO Server on my 908x cRIO running Windows Embedded Standard 7?
    http://digital.ni.com/public.nsf/allkb/E10AEC4FFBB784368625795100790AA4?OpenDocument
    Paul-B
    Applications Engineer
    National Instruments

  • Best way to write FPGA data to a TDMS file on an RT cRIO system

    Hi,
    I am struggling to write measured data from an analog input (NI 9215), sampled at up to 20 kHz, to a TDMS file on the RT system (cRIO-9022). I just need to save several periods of 4 arbitrary analog signals at frequencies between 5 Hz and 1 kHz, so storing up to 50k values should already be enough.
    I use a high-priority and a low-priority loop. First I tried to adapt the example from "Getting Started with CompactRIO - Logging Data to Disk" (http://zone.ni.com/devzone/cda/tut/p/id/11198). But when I used this in my high-priority loop (running at 1 ms), the loop runs out of time and the RT system becomes unresponsive. If I change the number of elements to write (the number of elements to wait for in the FIFO read block) it becomes better, but data is still lost because the loop finishes late.
    So I was thinking of creating an RT FIFO, storing all the values from the measurement in this memory inside the high-priority loop, and then writing the values to the TDMS file in the low-priority loop. This time I used the FPGA read/write function instead of the FPGA FIFO. It was already working better, but writing the values to the TDMS file took a lot of time since each value was read and written to the TDMS file individually. Unfortunately I could not find a way to write the whole RT FIFO to the TDMS file at once. Is there a block available, or is it possible to create a big array first and then write the data to the TDMS file at once? The code I tried is in the second picture.
    I hope someone can give me some tips on which method is better for my project and a hint as to what I did wrong or what I can optimize. I have been stuck for days on how to save my measurements on the cRIO system.
    Thank you very much in advance. Have a nice weekend.
    Best regards
    Andy

    Hi Xiebo and Christian,
    thank you very much for your answers. Actually, my high-priority loop is much slower: I run it with a maximum loop time of 50 µs = 20 kHz or slower, depending on the signal I want to measure. So my data-producing rate is at most 4*8*20k = 640 kB/s. My low-priority loop runs at 2 ms to 5 ms (much slower than the high-priority loop), since I am only doing some simple math calculations there and controlling the front panel in this loop.
    I understand that it is much more efficient to write blocks of data (e.g. 1024*32 KB instead of just 32 KB) to a file with TDMS. But is it also the same for a queue or RT FIFO, i.e. does the block size of the data chunks also matter for the queue and RT FIFO?
    @Xiebo: I understand that caching the data read from the FPGA in the high-priority loop first will improve my code. But I do not know how I can cache the data I read. I was thinking of doing it with the FPGA FIFO, but the FPGA read/write functions seem to be faster for me and I do not know why. Can you tell me a block/VI to cache the data I read from the FPGA, or maybe even point to an example?
    @Christian: This NI_MinimumBufferSize property looks like exactly what I was looking for. But my question now is whether I should put the TDMS Write VIs in my high-priority loop and read directly from the FPGA FIFO buffer to the file, as is done in the disk-logging example at http://zone.ni.com/devzone/cda/tut/p/id/11198? Or is it better to read the data from the FPGA via the FPGA read/write function, write the data to an RT FIFO in the high-priority loop, and then write the data to the TDMS file from the RT FIFO in the low-priority loop, using the NI_MinimumBufferSize property option?
    In summary, I am still unsure whether the FPGA FIFO or the FPGA read/write function with a queue or RT FIFO is better for me, and how I can create a cache to build chunks of data blocks to write.
    Thank you very much in advance for your help.
    Best regards
    Andy

  • Hate AM/PM, is there no way to change to 24hr clock in iCal..?

    I would really prefer to use a 24-hour clock for my appointments in iCal. How do I change it from AM/PM..??

    Good answer..! Didn't realize the time digits can be clicked on and changed to a different format. That did the trick, now I'm in a 24 hour world again..!
    Thanks..!!
