Clock domain crossing FIFO sanity check

Hi all,
   I am having an issue producing a one-clock-cycle-wide pulse output. I have dealt with crossing clock domains before, but I just want to make sure I am not doing anything wrong.
I have two clock domains that are mesochronous, both 250 MHz, from two different external devices, deskewed in a DCM.
I generate a 1-clock-cycle-wide pulse in clock domain 1 periodically. I need this pulse to cross into clock domain 2 coherently so that the period remains the same. In other words, the crossing must have a constant latency (the amount of latency does not matter as long as it is always the same).
I am using a coregen-generated asynchronous block RAM FIFO. The write enable is the pulse output from clock domain 1, and the negated empty flag of the FIFO is the output pulse, registered in an IOB FF clocked by clock domain 2's clock. The output FF is actually an FDCPE, since on power-up it is necessary to send an asynchronous '1' to the device. Once the asynchronous '1' is output, it is cleared and never used again. I am guessing that an FDCPE primitive will otherwise behave the same as a plain FF.
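To make the scheme concrete, here is a rough behavioral model in Python. This is illustrative only: the names are invented, and the real coregen FIFO uses Gray-coded read/write pointers and synchronizers internally, none of which is modeled here.

```python
# Behavioral model of the pulse-crossing scheme: a write pulse in clock
# domain 1 bumps the FIFO occupancy, and the registered inverse of the
# empty flag becomes the output pulse in clock domain 2, which also
# drains the (dummy) entry back out.

class PulseCrossFifo:
    def __init__(self):
        self.count = 0    # FIFO occupancy as seen by the read side
        self.out_ff = 0   # domain-2 output register (the FDCPE in the text)

    def wr_clk(self, wr_en):
        # Domain-1 clock edge: each pulse writes one dummy entry.
        if wr_en:
            self.count += 1

    def rd_clk(self):
        # Domain-2 clock edge: register NOT empty; reading drains the entry.
        not_empty = self.count > 0
        if not_empty:
            self.count -= 1
        self.out_ff = int(not_empty)
        return self.out_ff

fifo = PulseCrossFifo()
fifo.wr_clk(wr_en=True)                    # one pulse in domain 1
print([fifo.rd_clk() for _ in range(4)])   # [1, 0, 0, 0]
```

One write pulse produces exactly one read-side pulse, a fixed number of read clocks later, which is the constant-latency behavior being relied on.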
Here is the issue:
  On power-up, if it works, it continues to work at all temperatures for as long as the system is powered on. If it does not work on power-up, it continues to not work for as long as the system is powered on. I am assuming this is a power-on phase issue. Would this have something to do with how I am crossing the clock domains, or should I look elsewhere?
When I say it does not work I mean:
Clock domain 1 is from an ADC and clock domain 2 is from a device similar to a DAC. The output of the DAC feeds back into the ADC. When it works, the output I create from the DAC is coherently read back by the ADC. When it does not work, it looks as if the DAC output starts on different clock cycles (multiple phase shifts). This is why I am assuming it is a signal coherency problem, but we have used this scheme for crossing clock domains multiple times and it has always worked without any issues.
Sorry if this is not enough information.
Thanks

I don't entirely understand the description of the problem you are seeing - we need more context for that. But I will address the clock crossing.
I don't see anything fundamentally wrong with the clock crossing mechanism you are describing. However, it is VERY expensive for what you are using it for. In fact, even though you are using a clock crossing FIFO, you aren't actually using the storage of the FIFO - you are just using the address counters and full/empty flag generation (which is implemented in fabric logic), and completely ignoring/wasting the RAM.
There are many simple circuits for doing this clock crossing. As long as you can ensure that one pulse never arrives fewer than 3 (maybe even 2) clock cycles after the previous one, the circuit described below (a toggle event synchronizer) is simple, cheap, and effective.
This circuit takes your pulse event on the source clock domain, converts it into a toggle event, synchronizes it through a two-stage synchronizer, and then edge-detects it in the destination domain.
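A behavioral Python sketch may make the mechanism concrete. This is an illustration only (the register names are invented), and both domains are ticked once per loop iteration since the two clocks here have the same frequency:

```python
# Toggle event synchronizer: a pulse in the source domain flips a toggle
# flip-flop; the destination domain double-registers the toggle and
# regenerates a one-cycle pulse by edge-detecting it.

class ToggleSynchronizer:
    def __init__(self):
        self.event_toggle = 0   # source-domain toggle FF
        self.signal_meta = 0    # first destination FF (metastability guard)
        self.signal_dst = 0     # second destination FF
        self.signal_dst_d = 0   # delayed copy for edge detection

    def src_clk(self, pulse_in):
        # Source clock edge: flip the toggle on each input pulse.
        if pulse_in:
            self.event_toggle ^= 1

    def dst_clk(self):
        # Destination clock edge: two-stage synchronizer + edge detect.
        self.signal_dst_d = self.signal_dst
        self.signal_dst = self.signal_meta
        self.signal_meta = self.event_toggle
        return self.signal_dst ^ self.signal_dst_d  # 1 for one cycle per event

s = ToggleSynchronizer()
outs = []
for cycle in range(10):
    s.src_clk(pulse_in=(cycle == 2))  # one source pulse at cycle 2
    outs.append(s.dst_clk())
print(outs)  # [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
```

Note the constraint mentioned above: if a second source pulse arrives before the first toggle has propagated through both destination registers, an event is lost, which is why the minimum spacing between pulses matters.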
You don't say what tool you are using (Vivado or ISE) - in either tool it will need some constraints.
In Vivado, you should set the ASYNC_REG property on the two middle flip-flops:
set_property ASYNC_REG TRUE [get_cells {signal_meta_reg signal_dst_reg}]
You will also need some sort of exception on the clock crossing (since all clocks in Vivado are related by default). My preferred one is
set_max_delay -datapath_only 4 -from [get_cells event_toggle_reg] -to [get_cells signal_meta_reg]
You should still use this even though both clocks have 4 ns periods (so the requirement ends up being 4 ns anyway); the -datapath_only flag tells the tools not to analyze the clock insertion delay. Furthermore, on this synchronizer a max_delay isn't strictly needed (since only one signal is being synchronized), so you could declare the path false, but it is a good habit to use this constraint anyway, since other synchronizers need it.
If you can't guarantee that there are 2 clocks between events, then you can use a simple Gray code counter on the source domain to count events, and send the count to the destination side, which will generate one output pulse for each count received - this is basically what the logic in the FIFO is doing, but without carrying around the useless RAM.
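As a sketch of why a Gray-coded count is safe to synchronize, here is a minimal Python model (function names are illustrative; a real implementation would register the Gray count in the source domain and double-register it in the destination domain):

```python
# Gray-code event counter alternative: the source domain counts events in
# Gray code, so only one bit changes per increment and the destination can
# never sample a wildly wrong intermediate value; the destination emits
# one pulse per count received.

def bin_to_gray(n):
    return n ^ (n >> 1)

def gray_to_bin(g):
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Key CDC property: consecutive Gray codes differ in exactly one bit.
for n in range(15):
    diff = bin_to_gray(n) ^ bin_to_gray(n + 1)
    assert bin(diff).count("1") == 1

# Destination side: one output pulse per count consumed (4-bit counter
# width assumed for the wrap).
def dst_pulses(synced_gray, last_count):
    current = gray_to_bin(synced_gray)
    pulses = (current - last_count) % 16
    return pulses, current

print([bin_to_gray(n) for n in range(8)])  # [0, 1, 3, 2, 6, 7, 5, 4]
```

Because at most one bit is in flight at any sampling instant, a late or early capture only makes the destination see the count one increment behind, never a corrupted value.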
And, by the way, if you are going to stick to the FIFO, why not use a distributed RAM based FIFO - you won't need to waste the block RAM... If you make it 64x1, then you will only waste two LUTs for the useless RAM instead of an entire block RAM.
Not that this matters, but you say that the clocks are mesochronous - are they really? To be mesochronous, they need to derive from the same oscillator; they may go through very different paths, but they must come from the same frequency source. Merely both being 250 MHz does not make them mesochronous (but, as I say, that doesn't matter for this clock crosser).
As for the rest of it - I don't think the clock crossing is the source of your problem. It's vaguely possible that you are messing up the FIFO logic by giving it a pulse too close to the deassertion of the reset; the built-in FIFOs have a requirement that WR_EN not be asserted within a handful of clocks after the deassertion of rst. But you say you are using the block-RAM-based one, which probably doesn't need this. So it's probably not the clock crosser...
Avrum
 

Similar Messages

  • Clock Domain Crossing with FIFO

    Hi,
    I have a project for a Spartan-6 xlx16. In this project I have:
    - clk_in: 20 MHz from an external quartz oscillator
    This clk_in (20 MHz) has the following path:
    - 20 MHz -> Clock Wizard -> 20 MHz -> Clock Wizard -> Out1: 22 MHz, Out2: 44 MHz.
    The 22 MHz is used to acquire data, and 44 MHz is used to transmit the data. So I used a FIFO: 
    en_fifo_nempty <= not en_fifo_empty;

    my_fifo : fifo_generator_v9_3
      port map (
        rst    => reset_ien,
        wr_clk => clk22,
        wr_en  => '1',
        din    => data_in,
        rd_clk => clk44,
        rd_en  => en_fifo_nempty,
        dout   => data_out,
        full   => open,
        empty  => en_fifo_empty
      );
    I have an analog channel where I can compare, in parallel, the original data with the acquired + transmitted data. I synthesize once and notice that data bit 7 is not stable. After some changes, data bit 4 is not stable; after more changes, data bit 5, and so on.
    What am I missing here? What am I doing wrong? In the *.ucf file I have no special constraints (only clock constraints).
    There are no timing/setup errors. 
    PS: I'm not allowed to change the Acquisition and Transmission Modules so I have to stick to 22 MHz and 44 MHz.
    Thanks,
    Paul
     

    The input data is acquired synchronously from an ADC and processed at 22 MHz (digital filtering, adders ...). The ADC clock is generated by dividing the 22 MHz clock by 2 (a simple clock divider).
    With 20 MHz and 40 MHz everything is ok.
    I have to increase the dynamic range of the system by 10% and am not allowed to change some VHDL modules, so I tried this overclocking (the ADC can work at much higher frequencies). With 22 MHz and 44 MHz, and absolutely no errors from the ISE tools, it doesn't work correctly any more.

  • FIFO across clock domains

    I'm using a FlexRIO 7966R for digital signal manipulation and need to buffer data across clock domains. By buffer I mean I need to be able to store a variable amount of data in memory before it is read back out, in order to achieve a data delay. I can successfully write to the FIFO in one clock domain and read data from the FIFO in another clock domain, but as soon as I introduce the "Get Number of Elements to Read" function the compilation fails with a timing violation. It appears that this method cannot execute quickly enough.
    I tried moving the "Get Number of Elements to Read" function into another slower clock domain SCTL but the compiler then states that it has to be in the same clock domain as the Read FIFO function, so that doesn't help.
    Any thoughts anyone?
    Thoric (CLA, CLED, CTD and LabVIEW Champion)

    Intaris wrote:
    Correct, BRAM does not cross clock domains. This is why I proposed splitting the work into two parts: domain crossing and delay.
    Using the BRAM on the receiving side only, you can implement a circular buffer of size x with the write index incremented each cycle and the read position relative to it. By changing the offset between write and read (all on the receiver side) you can implement any delay up to x. Your receiver order would be: read FIFO (every cycle), write to BRAM, read from BRAM, and continue.
    That way your FIFO for crossing domains can be much smaller, saving LUTs and registers.
    Regarding reducing the delay: if your sender is sending data as fast as your receiver can read it, reducing the delay sounds like it is always going to be lossy. You can do this with the BRAM by adjusting the offset between write and read accordingly, effectively skipping data.
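    A rough Python model of this circular-buffer delay (sizes and names are illustrative; on the FPGA this would be a BRAM addressed by the two indices):

```python
# Circular-buffer delay: the write index advances every cycle and the
# read index trails it by a fixed offset, giving a delay of `delay`
# samples (anywhere up to the buffer size).

class CircularDelay:
    def __init__(self, size, delay):
        assert 0 < delay < size
        self.buf = [0] * size   # models the BRAM, zero-initialized
        self.size = size
        self.wr = 0
        self.delay = delay

    def cycle(self, sample_in):
        # One receiver-side cycle: write the new sample, read the delayed one.
        self.buf[self.wr] = sample_in
        rd = (self.wr - self.delay) % self.size
        out = self.buf[rd]
        self.wr = (self.wr + 1) % self.size
        return out

d = CircularDelay(size=8, delay=3)
out = [d.cycle(x) for x in range(10)]
print(out)  # [0, 0, 0, 0, 1, 2, 3, 4, 5, 6]
```

    Changing `delay` between cycles shifts the read offset, which is exactly the (lossy) delay adjustment described above.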
    I'm in the mountains on holiday so I can't post code for another week...
    Other topic... I thought the max clock on a 7966 was 326 MHz? I know on a 7965 it's listed as 326 MHz.
    Thanks for the insight Intaris.
    My FIFOs are set to use BRAM, so will your proposal of creating a small FIFO for crossing the clock domains plus a separate BRAM block for buffering achieve much saving in fabric? Isn't that the same amount of BRAM, plus a bit for your FIFO? I might go ahead and create a test implementation to see the difference in FPGA resource usage...
    I'm using a 5782 Module with independent 500MHz clock.
    Thoric (CLA, CLED, CTD and LabVIEW Champion)

  • LabVIEW FPGA: Multiple SCTL versus one SCTL (same clock domain)

    Hello NI forums,
    Question:
    See the attached picture from a modified version of the LabVIEW DRAM FIFO example. It probably explains my question more effectively than the paragraphs below.
    What is the difference to the LabVIEW / Xilinx compilers, if any, between placing two independent branches of code in the same SCTL, versus placing them in individual SCTLs (in the same clock domain)?
    Misc. comments:
    I have briefly experimented with this concept using the included LabVIEW DRAM FIFO example (example finder >> Hardware Input and Output >> FlexRIO >> External Memory >> Simple External Memory FIFO.lvproj).
    I compiled the default example (the read and write interfaces are in separate 40MHz SCTLs) five separate times. Then I put the read and write interfaces in the same 40MHz SCTL and compiled another five times. The result (when both read and write interfaces were in the same SCTL) was a reduction in resource usage (according to the compilation summary).
    However, due to my lack of knowledge I'm hesitant to conclude that placing everything in one SCTL is always the best option. For example, I do not know what is created 'behind the scenes' with each SCTL. Perhaps putting independent branches of code in separate SCTLs makes it possible to route clock, reset, implicit enable, etc. signals more effectively.
    Background information:
    My task involves acquiring 2 channels of analog data using the NI 5772 and PXIe-7966. Data acquisition takes place in a 200MHz SCTL, and downstream processing is performed in a 100MHz SCTL.
    During the vast majority of the 100MHz SCTL processing stages of the FPGA VI, the 2 channels of data do not interact with each other. So it would be easy for me to place them in separate 100MHz loops if doing so would somehow help the design (timing, resource usage, etc.).
    Thanks!
    Attachments:
    question.png ‏76 KB

    Intaris
    ‎10-28-2014 12:11 PM
    Just out of interest, what is the resource usage differential between the two versions?
    In response to the above comment,
    This is a little embarrassing, but it seems the resource usage is more similar than I initially thought for this particular example. I think the previous compilations that I based my assumption on coincidentally used more resources in the 2-SCTL case. I just compiled each version two more times (see below).
    Here's the version with everything in one loop:
    Device Utilization (run 1):
    Total Slices: 17.6% (2587 out of 14720)
    Slice Registers: 9.5% (5583 out of 58880)
    Slice LUTs: 8.2% (4855 out of 58880)
    DSP48s: 0.0% (0 out of 640)
    Block RAMs: 2.5% (6 out of 244)
    Device Utilization (run 2):
    Total Slices: 16.9% (2493 out of 14720)
    Slice Registers: 9.5% (5583 out of 58880)
    Slice LUTs: 8.3% (4858 out of 58880)
    DSP48s: 0.0% (0 out of 640)
    Block RAMs: 2.5% (6 out of 244)
    Here's the version with the read and write in separate loops:
    Device Utilization (run 1):
    Total Slices: 16.4% (2407 out of 14720)
    Slice Registers: 9.5% (5583 out of 58880)
    Slice LUTs: 8.2% (4852 out of 58880)
    DSP48s: 0.0% (0 out of 640)
    Block RAMs: 2.5% (6 out of 244)
    Device Utilization (run 2):
    Total Slices: 19.4% (2859 out of 14720)
    Slice Registers: 9.5% (5583 out of 58880)
    Slice LUTs: 8.3% (4859 out of 58880)
    DSP48s: 0.0% (0 out of 640)
    Block RAMs: 2.5% (6 out of 244)

  • Bailout: failed spill-split-recycle sanity check

    Hi all,
    I hope someone can help me!
    We have an Enterprise Portal production installation on AIX 5.3.
    It is based on NetWeaver 7.0.
    I have just installed a Java dialog instance on a Windows 2003 Server 64-bit machine, which is happily talking to the central instance on AIX, BUT after changing the Java parameters (SAP note 723909) for this new instance, the local Windows dev_server0 file now complains with the following error:
    JHVM_BuildArgumentList: main method arguments of node server0
    210       com.sap.engine.core.configuration.impl.addons.PropertyName::init (154 bytes)
    211  !    com.sap.engine.core.configuration.impl.cache.CachedConfiguration::prepareRead (167 bytes)
    212       java.util.AbstractCollection::toArray (68 bytes)
    213       com.sap.engine.lib.xml.dom.DOMDocHandler1::onCustomEvent (130 bytes)
    214       com.sap.engine.lib.xml.dom.DOMDocHandler1::charData (73 bytes)
    Bailout: failed spill-split-recycle sanity check
    211   COMPILE FAILED
    (there are many lines but i have just included one occurrence)
    I just wondered if anyone could tell me which Java parameter could have caused this?
    I appreciate I could take out each parameter one by one and stop/start the instance to see when it stops complaining (and I will do this if no one can help me).
    Many thanks,
    Kirsty

    OK,
    I guess it was obvious really...
    The flag -XX:+PrintCompilation (which excludes certain classes from compiling in order to prevent possible server VM crashes) is recommended in note 723909 to help avoid crashes caused by certain classes.
    The thing is, this flag seems to make the class fail a sanity check in order to stop it compiling, which then writes to the server log, causing me unnecessary anguish.
    If the flag is there to prevent classes from compiling, it should do so cleanly and quietly, so I'm not sure now whether I want it or not!
    Anyway, apologies for anyone whose time I wasted.
    Kirsty

  • Xnee-2.00 Installation - /lib/cpp fails sanity check

    When I first tried running the configure script to start the installation process for this application, I ran into error after error, and found solutions by installing packages from the SunOS CDs or from the sunfreeware site. But I can't find a solution to this problem.
    The output when running the configure script is as follows for the problem:
    checking how to run the C preprocessor... /lib/cpp
    configure: error: C preprocessor "/lib/cpp" fails sanity check
    The config.log file is very long and full of compile errors; many are repeats, and around the part where it's failing the only problem I can see is a "conftest.c", line 14: Can't find include file assert.h.
    What packages do I need for cpp to function? I have gcc-3.3.2 installed; are there others? Is there something else I'm going to run into later? Any help will be greatly appreciated.

    Have you tried using pkg-get
    to get the binary from Sun?
    If you really need to compile it yourself, get Studio first to make it easier. It will install the developer package libraries on the box and help resolve your dependency problems.

  • Need info on BW system sanity check

    Hi,
    I am new to this area.
    Can anyone tell me what the BW system sanity check is?
    When do we need to do this check?
    On which BW systems do we need to do it?
    Are there predefined steps we need to follow for the sanity check?
    Is there a checklist available for the sanity check?
    Points will be assigned for your feedback.

    Hi Lakshmi,
    Go to transaction code RSRV and see the tests available there.
    Also read the documentation available for each test; it is self-explanatory.
    Hope it helps.
    Regards
    Vikash

  • Db sanity check

    Hi All,
    I would like to develop a sanity check script for our product's Oracle DB.
    It should be put in crontab and run periodically.
    How can I retrieve a simple PL/SQL query's elapsed time (for example, how many connections and users)? Is it possible to achieve this using UNIX scripts (Perl, bash, etc.) and specific PL/SQL statements, or do I have to write an application in Java or C?

    Bash and SQL can be used for that, but you should clarify a bit more what you want to do. Here is a small example:
    $ cat users.sql
    col username for a20
    select sid, serial#, username, to_char(logon_time,'dd/mm/yyyy hh24:mi:ss') logon_time,
            to_number(sysdate - logon_time)*1440 elapsed_minutes
    from v$session
    where username is not null
    order by username, logon_time
    exit
    $ sqlplus -s / as sysdba @users
           SID    SERIAL# USERNAME             LOGON_TIME          ELAPSED_MINUTES
           144         38 SCOTT                02/04/2006 15:18:10      21.6666667
           142         14 SCOTT                02/04/2006 15:28:41           11.15
           143        105 SYS                  02/04/2006 15:39:50               0
           159        113 TEST                 02/04/2006 15:18:43      21.1166667
    $                                                                                                                        

  • Glibc 2.19 & find: sanity check of the fnmatch() library function fail

    Since yesterday's update to glibc 2.19, find doesn't like searches by name. The update threw a whole lot of segmentation faults:
    [2014-02-12 15:06] [PACMAN] Running 'pacman --color auto -Sy'
    [2014-02-12 15:06] [PACMAN] synchronizing package lists
    [2014-02-12 15:07] [PACMAN] Running 'pacman --color auto -S -u'
    [2014-02-12 15:07] [PACMAN] starting full system upgrade
    [2014-02-12 15:08] [PACMAN] upgraded apr-util (1.5.3-1 -> 1.5.3-2)
    [2014-02-12 15:08] [PACMAN] upgraded linux-api-headers (3.12.4-1 -> 3.13.2-1)
    [2014-02-12 15:08] [ALPM] warning: /etc/locale.gen installed as /etc/locale.gen.pacnew
    [2014-02-12 15:08] [ALPM-SCRIPTLET] Generating locales...
    [2014-02-12 15:08] [ALPM-SCRIPTLET] de_DE.UTF-8
    [2014-02-12 15:08] [ALPM-SCRIPTLET] en_US.UTF-8
    [2014-02-12 15:08] [ALPM-SCRIPTLET] Generation complete.
    [2014-02-12 15:08] [PACMAN] upgraded glibc (2.18-12 -> 2.19-1)
    [2014-02-12 15:08] [PACMAN] upgraded binutils (2.24-1 -> 2.24-2)
    [2014-02-12 15:08] [PACMAN] upgraded gcc-libs (4.8.2-7 -> 4.8.2-8)
    [2014-02-12 15:08] [PACMAN] upgraded elfutils (0.157-1 -> 0.158-1)
    [2014-02-12 15:08] [PACMAN] upgraded gcc (4.8.2-7 -> 4.8.2-8)
    [2014-02-12 15:08] [PACMAN] upgraded shared-mime-info (1.2-1 -> 1.2-2)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_Zt9oRn/.INSTALL: line 1: 10554 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded kdelibs (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdegraphics-mobipocket (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded nepomuk-core (4.12.1-2 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kactivities (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kde-base-artwork (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded oxygen-icons (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_lNExqJ/.INSTALL: line 1: 10562 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded kdebase-runtime (4.12.1-3 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdebase-lib (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded nepomuk-widgets (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdebase-dolphin (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdebase-konsole (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdebase-plasma (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_jc0NOD/.INSTALL: line 1: 10571 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded kdepim-runtime (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [ALPM] warning: /usr/share/config/kdm/kdmrc installed as /usr/share/config/kdm/kdmrc.pacnew
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_9emqkf/.INSTALL: line 10: 10582 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded kdebase-workspace (4.11.6-1 -> 4.11.6-2)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_HVYliW/.INSTALL: line 1: 10587 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded libkipi (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_JMyH2D/.INSTALL: line 1: 10591 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded kdegraphics-gwenview (4.12.1-2 -> 4.12.2-1)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_BOmkIm/.INSTALL: line 1: 10595 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded kdegraphics-kcolorchooser (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_V9lYA5/.INSTALL: line 1: 10598 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded kdegraphics-ksnapshot (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded libkexiv2 (4.12.1-2 -> 4.12.2-1)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_NKbRqP/.INSTALL: line 1: 10602 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded kdegraphics-okular (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_1CccYz/.INSTALL: line 1: 10606 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded libkdcraw (4.12.1-2 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdegraphics-thumbnailers (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdemultimedia-ffmpegthumbs (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_PRYdSk/.INSTALL: line 1: 10609 Segmentation fault (core dumped) xdg-icon-resource forceupdate --theme hicolor &>/dev/null
    [2014-02-12 15:08] [PACMAN] upgraded kdemultimedia-kmix (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdemultimedia-mplayerthumbs (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdepimlibs (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded kdeutils-ark (4.12.1-1 -> 4.12.2-1)
    [2014-02-12 15:08] [PACMAN] upgraded lib32-elfutils (0.157-1 -> 0.158-1)
    [2014-02-12 15:08] [PACMAN] upgraded lib32-glibc (2.18-12 -> 2.19-1)
    [2014-02-12 15:08] [PACMAN] upgraded lib32-gcc-libs (4.8.2-7 -> 4.8.2-8)
    [2014-02-12 15:08] [PACMAN] upgraded libsasl (2.1.26-6 -> 2.1.26-7)
    [2014-02-12 15:08] [PACMAN] upgraded mpd (0.18.7-1 -> 0.18.8-1)
    [2014-02-12 15:08] [PACMAN] upgraded nginx (1.4.4-2 -> 1.4.5-1)
    [2014-02-12 15:08] [PACMAN] upgraded openjpeg (1.5.1-1 -> 1.5.1-2)
    [2014-02-12 15:08] [PACMAN] upgraded pam (1.1.8-2 -> 1.1.8-3)
    [2014-02-12 15:08] [PACMAN] upgraded python2-numpy (1.8.0-1 -> 1.8.0-2)
    [2014-02-12 15:08] [PACMAN] upgraded redland (1:1.0.17-1 -> 1:1.0.17-2)
    [2014-02-12 15:08] [PACMAN] upgraded s-nail (14.5.2-3 -> 14.5.2-4)
    [2014-02-12 15:08] [PACMAN] upgraded sudo (1.8.9.p4-1 -> 1.8.9.p5-1)
    [2014-02-12 15:08] [PACMAN] upgraded systemd (208-10 -> 208-11)
    [2014-02-12 15:08] [PACMAN] upgraded systemd-sysvcompat (208-10 -> 208-11)
    [2014-02-12 15:08] [ALPM-SCRIPTLET] /tmp/alpm_7YRIvT/.INSTALL: line 10: 10659 Segmentation fault (core dumped) mkfontdir usr/share/fonts/local
    [2014-02-12 15:08] [PACMAN] upgraded terminus-font (4.38-3 -> 4.38-4)
    [2014-02-12 15:08] [PACMAN] upgraded vim-systemd (20130410-1 -> 20140209-1)
    [2014-02-12 15:08] [PACMAN] upgraded whois (5.1.0-1 -> 5.1.1-1)
    [2014-02-12 15:08] [PACMAN] upgraded xdg-utils (1.1.0.git20140109-1 -> 1.1.0.git20140207-1)
    [2014-02-12 15:08] [PACMAN] upgraded xf86-video-intel (2.99.909-2 -> 2.99.910-1)
    After that nearly all applications segfaulted, but somehow everything works again now. Except for find (which I only recognized after mkinitcpio destroyed my initramfs...):
    # find -name \*.pkg.tar.xz
    find: sanity check of the fnmatch() library function failed.
    Regex, type and all other operators work as expected:
    # find -regex '.*\.pkg\.tar\.xz'
    ./gcc-4.8.2-7-x86_64.pkg.tar.xz
    ./binutils-2.24-1-x86_64.pkg.tar.xz
    ./glibc-2.18-12-x86_64.pkg.tar.xz
    ./gcc-libs-4.8.2-7-x86_64.pkg.tar.xz
    # find -type d
    # pacman -Q findutils glibc
    findutils 4.4.2-5
    glibc 2.19-1
    At first I suspected a locale issue, but everything seems to be fine:
    # diff -u0 {,/}etc/locale.gen
    --- etc/locale.gen 2014-02-07 23:56:45.000000000 +0100
    +++ /etc/locale.gen 2014-02-12 19:43:03.037279970 +0100
    @@ -124 +124 @@
    -#de_DE.UTF-8 UTF-8
    +de_DE.UTF-8 UTF-8
    @@ -161 +161 @@
    -#en_US.UTF-8 UTF-8
    +en_US.UTF-8 UTF-8
    # locale-gen
    Generating locales...
    de_DE.UTF-8
    en_US.UTF-8
    Generation complete.
    Downgrading to glibc 2.18-12 solved the problem for now...
    # pacman -U --noprogressbar --noconfirm *
    loading packages...
    warning: downgrading package binutils (2.24-2 => 2.24-1)
    warning: downgrading package gcc (4.8.2-8 => 4.8.2-7)
    warning: downgrading package gcc-libs (4.8.2-8 => 4.8.2-7)
    warning: downgrading package glibc (2.19-1 => 2.18-12)
    resolving dependencies...
    looking for inter-conflicts...
    Packages (4): binutils-2.24-1 gcc-4.8.2-7 gcc-libs-4.8.2-7 glibc-2.18-12
    Total Installed Size: 134.89 MiB
    Net Upgrade Size: -0.42 MiB
    :: Proceed with installation? [Y/n]
    checking keyring...
    checking package integrity...
    loading package files...
    checking for file conflicts...
    checking available disk space...
    downgrading glibc...
    warning: /etc/locale.gen installed as /etc/locale.gen.pacnew
    downgrading binutils...
    downgrading gcc-libs...
    downgrading gcc...
    # find -name \*.pkg.tar.xz
    ./gcc-4.8.2-7-x86_64.pkg.tar.xz
    ./binutils-2.24-1-x86_64.pkg.tar.xz
    ./glibc-2.18-12-x86_64.pkg.tar.xz
    ./gcc-libs-4.8.2-7-x86_64.pkg.tar.xz
    According to the findutils manual one should file a bug report for this message, but I don't think they expected glibc to be the buggy implementation of fnmatch() that looks enough like the GNU version to fool configure but doesn't work properly.
    Does anybody experience similar problems? Does anybody have suggestions how to solve this?
    Last edited by auti (2014-02-13 22:14:12)

    I've regenerated the locales multiple times but only the upgrade to findutils 4.5.12 worked.
    glibc 2.19-2 works with the current findutils.
    But if a corrupt locale archive file caused this, why didn't findutils 4.5.12 complain about it?
    Anyhow: It works, I'm happy; thanks for your effort, Allan!

  • How to realize cross-plant ATP check in 46c when creating SO

    Hi all
    Since a cross-plant ATP check is not available in 4.6C when creating a sales order, is there another way to realize it? Can it be realized through a user exit, an enhancement, or custom development?
    Can anyone help me?
    Best regards
    Egg

    See these user exits for VA01; the second one can be used for the ATP check.
    Check the enhancements below in transaction SMOD:
    SDAPO001 Activating Sourcing Subitem Quantity Propagation
    SDTRM001 Reschedule schedule lines without a new ATP check
    V45A0002 Predefine sold-to party in sales document
    V45A0003 Collector for customer function modulpool MV45A
    V45A0004 Copy packing proposal
    V45E0001 Update the purchase order from the sales order
    V45E0002 Data transfer in procurement elements (PRreq., assembly)
    V45L0001 SD component supplier processing (customer enhancements)
    V45P0001 SD customer function for cross-company code sales
    V45S0001 Update sales document from configuration
    V45S0003 MRP-relevance for incomplete configuration
    V45S0004 Effectivity type in sales order
    V45W0001 SD Service Management: Forward Contract Data to Item
    V46H0001 SD Customer functions for resource-related billing
    V60F0001 SD Billing plan (customer enhancement) diff. to billing plan
    V45A0001 Determine alternative materials for product selection
    Regards,
    Madan gopal Sharma..
    REWARD POINTS

  • Sanity check - migrating library to new mac

    I'm migrating to a new machine and will use the opportunity to do a fresh install of Aperture. I'd be grateful for a reminder / sanity check on how to migrate my data / settings after the install:
    All I need to do (after Aperture is installed, updated and checked) is replace the default library file with my Aperture library from my previous machine and start Aperture ?
    My library includes both local and referenced (to a network drive) masters. Am I likely to need to re-establish connection with all of the referenced masters ?
    What about keywords and other preferences ? Can someone remind me where these live and what needs to be moved ?
    Many thanks.
    Paul

    macpaul11 wrote:
    I'm migrating to a new machine and will use the opportunity to do a fresh install of Aperture.
    Yes, it is important to do a fresh install of all apps rather than "migrate" apps. Repair Permissions immediately before and immediately after each installation.
    All I need to do (after Aperture is installed, updated and checked) is replace the default library file with my Aperture library from my previous machine and start Aperture ?
    Correct.
    My library includes both local and referenced (to a network drive) masters. Am I likely to need to re-establish connection with all of the referenced masters ?
    No, Aperture should handle that.
    What about keywords and other preferences ? Can someone remind me where these live and what needs to be moved ?
    Again, Aperture should handle that.
    Good luck!
    -Allen Wicks

  • How to configure sanity-checks?

    Hi All
    I went through the support files and found these kinds of log messages during peak hours:
    2010-07-11 11:55:47 | INFO  | CPU #000 | Started filtering packets of type 'TCP Non-SYN' received on interface # 0. Reason: Started filtering due to attack detection
    2010-07-11 12:00:35 | INFO  | CPU #000 | Started filtering packets of type 'TCP No-SYN + RST' received on interface # 0. Reason: Started filtering due to attack detection
    2010-07-11 13:07:25 | INFO  | CPU #000 | Stopped filtering packets of type 'TCP No-SYN + RST' received on interface # 0. Reason: Stopped filtering for an administrative pause
    Basically those logs mean that the SCE detects an attack and, to protect itself, puts the attack traffic in a filter. One hour later, the SCE removes the flows from the filter and checks again; if the attack persists, it puts the attack traffic back in the filter.
    Could we decrease the filtering time, to 10 minutes for example?

    Hello,
    I believe this is what you're looking for:
    SCE8000#>configure
    SCE8000(config)#>interface LineCard 0
    SCE8000(config if)#>sanity-checks attack-filter times filtering-cycle max-attack-time
    SCE8000#>show interface LineCard 0 sanity-checks attack-filter times
    Filtering cycle: 3600 seconds.
    Max attack time: 86400 seconds.
    Hope that helps,
    Best regards.
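
    For the original question (a 10-minute cycle): assuming the filtering-cycle value is given in seconds, as the show output above suggests, the setting would presumably look like the sketch below. The exact argument layout is an assumption on my part, so please verify it with `?` completion on the CLI before applying:

    ```
    SCE8000#>configure
    SCE8000(config)#>interface LineCard 0
    SCE8000(config if)#>sanity-checks attack-filter times filtering-cycle 600 max-attack-time 86400
    ```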

  • MPLS Sanity-check

    Hi,
    I need someone to do a sanity check on a MPLS design.
    I'm doing some consulting in an expanding metro network in Sweden.
    I need to run Ethernet over ATM, so I'm thinking EoMPLS. My question: is it possible to run EoMPLS over the following configuration? If so, can I also run Q-in-Q (if I reconfigure the MTU size on both sides)?
    1Q-TRUNK - 7200VXR - ATM-SWITCH (NON-MPLS) - ATM-SWITCH (NON-MPLS) - 7200VXR - 1Q-TRUNK
    Thanks
    Best regards
    Daniel

    It is certainly possible to run MPLS (frame mode) on the interface/subinterface between the two 7200s, which will allow you to run EoMPLS. You should also be able to configure q-in-q in this configuration.
    Hope this helps,
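
    By way of illustration, a hypothetical sketch of one 7200 side (interface names, addresses, VC numbers, and VLAN IDs are invented for the example; the far router mirrors it):

    ```
    ! ATM subinterface toward the non-MPLS ATM cloud, running frame-mode MPLS
    interface ATM1/0.1 point-to-point
     ip address 10.0.0.1 255.255.255.252
     pvc 1/100
      encapsulation aal5snap
     mpls ip
    !
    ! Customer-facing 802.1Q trunk, cross-connected to the far 7200
    interface GigabitEthernet0/1.100
     encapsulation dot1Q 100
     xconnect 10.0.0.2 100 encapsulation mpls
    ```

    In practice the xconnect peer address is usually the far router's loopback, with LDP reachability between the two loopbacks.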

  • Rebuilding RAID set - need sanity check

    I'm in the process of rebuilding a G5 used as a server that was built by a predecessor. It boots off two 250GB drives that are RAIDed. I don't have the password, so I attempted to change it using a Tiger install disk. Somehow booting off the DVD has destroyed the RAID set. I'm booting off a FW drive we use for diagnostics, and the drives show up in Disk Utility, but no RAID set does. If I do a 'sudo diskutil checkRAID' it says there's no RAID set to check.
    If I list the drives they show up fine. Output (edited) is:
    /dev/disk2
    #:  type                       name  size       identifier
    0:  Apple_partition_scheme           *233.8 GB  disk2
    1:  Apple_partition_map              31.5 KB    disk2s1
    2:  Apple_Driver_OpenFirmware        512.0 KB   disk2s2
    3:  Apple_Boot_RAID                  233.8 GB   disk2s3
    /dev/disk4
    #:  type                       name  size       identifier
    0:  Apple_partition_scheme           *233.8 GB  disk4
    1:  Apple_partition_map              31.5 KB    disk4s1
    2:  Apple_Driver_OpenFirmware        512.0 KB   disk4s2
    3:  Apple_Boot_RAID                  233.8 GB   disk4s3
    So the sanity check is this: If I run a repair mirror (sudo diskutil repairMirror disk2 disk4) will it wipe both drives or just rebuild the existing RAID setup?
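
    (For anyone following along, the two commands in question, roughly as they would be typed; the disk identifiers are the ones from the listing above:)

    ```
    sudo diskutil checkRAID                  # reports whatever RAID sets diskutil can see
    sudo diskutil repairMirror disk2 disk4   # the invocation being asked about
    ```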


  • OES2SP3 to OES11SP1 Sanity Check

    I have an OES2SP3 (SLES10SP4) server that I want to move to OES11SP1 (SLES11SP2). The tree is stable, but I am having some issues with the OES2SP3 server that I don't want to spend the time troubleshooting. My Master replica and all services are on a NW65 server, and I have the luxury of being able to wipe the OES2 server. The OES2 server only does some file backups via scripts to an internal RAID and holds a RW replica. Looking for a sanity check on my process...
    1. copy scripts from server to backup
    2. remove eDir from server using nds-uninstall
    (the OES doc says that ndsconfig rm is not supported in OES2SP3)
    3. wait for replica ring to be stable - monitor via dsrepair on NW65 server
    4. remove any references to the OES2 server via ConsoleOne
    5. wait for replica ring to be stable - monitor via dsrepair on NW65 server
    6. fresh install of SLES11SP2 and OES11SP1 (eDir only) onto server
    7. add other OES11 services to server
    TIA
    -L

    Originally Posted by LarryResch
    I have an OES2SP3 (SLES10SP4) server that I want to move to OES11SP1 (SLES11SP2). The tree is stable, but I am having some issues with the OES2SP3 server that I don't want to spend the time troubleshooting. My Master replica and all services is on a NW65 server and I have the luxury of being able to wipe the OES2 server. The OES2 server only does some file backups via scripts to an internal RAID and holds a RW replica. Looking for a sanity check on my process...
    1. copy scripts from server to backup
    2. remove eDir from server using nds-uninstall
    (the OES doc says that ndsconfig rm is not supported in OES2SP3)
    3. wait for replica ring to be stable - monitor via dsrepair on NW65 server
    4. remove any references to the OES2 server via ConsoleOne
    5. wait for replica ring to be stable - monitor via dsrepair on NW65 server
    6. fresh install of SLES11SP2 and OES11SP1 (eDir only) onto server
    7. add other OES11 services to server
    TIA
    -L
    Hi Larry,
    If this is a regular fileserver (not housing iFolder and such special services)... your steps seem to cover the bases.
    As an extra preparation step, if the OES server has its own eDir replica, I'd remove it at least an hour or so before moving on to removing the OES server from the tree. Running a quick eDir health check after doing so, and before moving on to remove the server, is also a good routine.
    Out of curiosity, do you have a link to the docs specifying 'ndsconfig rm' is unsupported? Have not run into issues with it but also had not seen that reference in the docs (not that I read them daily :P ).
    If you have NSS volumes, you can relink them to the new server using the update NDS option within nssmu. Do note that you will also have to relink object properties to the volumes (like the user home directories).
    Your tree CA configuration is also still valid (and not housed on the OES2 server)?
    Cheers,
    Willem
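
    For what it's worth, step 1 can be as simple as bundling the scripts into one archive before the wipe. A minimal sketch; $HOME/backup-scripts is a hypothetical location (adjust to the real script path), and the stand-in file only simulates the real scripts:

    ```shell
    # Hypothetical script location; the touch just stands in for the real scripts
    SCRIPTS="$HOME/backup-scripts"
    mkdir -p "$SCRIPTS" && touch "$SCRIPTS/nightly-backup.sh"

    # Bundle everything into one archive that can be copied off-box (scp, USB, etc.)
    tar -czf "$HOME/oes2-scripts.tgz" -C "$HOME" backup-scripts
    tar -tzf "$HOME/oes2-scripts.tgz"   # list the archive contents as a quick check
    ```

    Copy the resulting .tgz somewhere off the server before step 6 wipes the disks.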
