Virtex-7 FPGA VC707

My Virtex-7 FPGA VC707 power module could not be started, no matter what I tried. Can the board be sent back to Xilinx for repair?

hi,
you need to go through the debug checklist available at http://www.xilinx.com/support/answers/51233.html
this will help determine if the board is functional.
--hs

Similar Messages

  • Connecting 5V digital inputs to Virtex 5 FPGA

    Hello,
    I would like to connect 5V digital pins of ds1103 dSPACE board to the digital inputs of the Virtex 5 FPGA.
    I have read (http://www.xilinx.com/support/answers/10835.html) that the digital input voltages to the FPGA can actually be a bit higher than 3.3V as long as the current does not exceed 10mA. They recommend connecting a resistor in series in order to guarantee that. However, the digital outputs from the dSPACE board (whose voltage can be lower than 5V depending on the load) cannot provide more than 10mA anyway, so there should not be any limitation problem.
    Still, I am not completely confident about connecting 60 pins of 5V from dSPACE to the FPGA. Can anyone confirm that a direct connection (without the series resistor) will not harm the FPGA? Or is it still preferable to use series resistors (x60)?
    Thank you
    Thomas Geury

    Thank you for your reply.
    The dSPACE board digital pins output an absolute maximum current of 10 mA, so only the latch-up limit is a constraint.
    I will indeed consider using series resistors then. If the (max) voltage from the dSPACE pins is 5V, and the clamp diodes of the FPGA I/Os are forward-driven above about 3.5V (see http://www.xilinx.com/support/answers/10835.html), then the resistors guaranteeing a current limited to 100 mA / 60 inputs should be (5-3.5)/(100e-3/60) = 900 ohms, right? Is that how you got to 2,000 ohms, including a safety margin since a few pins will be used as outputs as well?
    Also, is the latch-up limit of 100 mA for the whole FPGA or per bank? I see on the datasheet (see http://www.xilinx.com/support/documentation/data_sheets/ds100.pdf, page 2) that my model (Virtex 5 XC5VLX50) has 17 I/O banks; would that allow me to use I/O of different banks in order to have an overall higher latch-up limit?
    Thanks
    Thomas Geury
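The arithmetic above can be sanity-checked in a few lines (splitting the 100 mA latch-up budget evenly across 60 pins is the thread's assumption, not a datasheet figure):

```python
def series_resistor(v_src, v_clamp, i_total, n_pins):
    """Resistor needed so each pin's clamp current stays under its share
    of the total latch-up budget (assumes the budget divides evenly)."""
    i_per_pin = i_total / n_pins
    return (v_src - v_clamp) / i_per_pin

# 5 V source, clamp diodes conduct above ~3.5 V, 100 mA budget over 60 pins
r = series_resistor(5.0, 3.5, 100e-3, 60)
print(round(r))  # 900 ohms, matching the figure in the thread
```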
     

  • FPGA Poisson Random Number

    Hello,
    I have a Xilinx Virtex 5 FPGA and I want to implement a Poisson random number generator.
    The FPGA works at a 100 MHz clock, and on every cycle it should produce a Poisson random number. The mean of the Poisson distribution can change on every clock cycle. I want to build a random pulse generator with Poisson-distributed timing. The pulse rate can vary between 1 and 1 billion pulses per second.
    How can I make this generator? Please help.

    b,
    There are two kinds of random number generators commonly implemented in an FPGA device: a true random number generator (very hard to do), and a pseudo-random number generator (trivial to do).
    The pseudo-random generator is built from linear feedback shift registers; its statistics are well understood and do not vary (in fact, the sequence repeats with a period set by the length of the LFSR).
    Attached is a form of true random number generator.
    To get a specific distribution (e.g. Poisson) you would need to verify and perhaps filter the generated numbers.
    Poisson being as close to random as possible (for example, radioactive decay times are Poisson distributed), a true random number generator is where I would start (the attachment).
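A software sketch of the two pieces discussed above: an LFSR as the uniform source, and Knuth's multiplication method to shape it into a Poisson variate. The LFSR taps and the shaping algorithm are illustrative choices, not the attachment's design; Knuth's method is practical only for small means, and for rates up to 1e9 pulses/s a normal approximation or table method would be the hardware-friendly route.

```python
import math

def lfsr32(state):
    """One step of a 32-bit Galois LFSR (taps 32,22,2,1, a maximal-length
    polynomial) - the kind of pseudo-random source that maps naturally
    onto FPGA flip-flops and XOR gates."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0x80200003
    return state

def poisson(mean, state):
    """Knuth's multiplication method: multiply uniforms until the product
    falls below exp(-mean); the number of multiplies is Poisson(mean)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        state = lfsr32(state)
        p *= state / 2**32          # treat the LFSR state as uniform [0, 1)
        if p <= limit:
            return k, state
        k += 1
```

Each call returns the variate together with the advanced LFSR state, mirroring how the shift register would free-run in fabric.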
     

  • A question about correct SEU in Virtex4 LX25 FPGA

    I have a problem when I scrub the DSP, IOB, CLK, and CLB configuration, using the data stored in flash, to correct SEUs in a Virtex4 LX25 FPGA. This is my flow chart:
    step 1: read the status register to verify it
    step 2: read the control register to verify it
    step 3: write a 32-bit value to the FAR register, then read the FAR register back to verify it
    step 4: scrub the DSP, IOB, CLK, and CLB logic in the Virtex4 LX25 FPGA using the data stored in the flash (I mask out the LUT RAM and SRL16 frames)
    Problem: when step 4 is done, if I immediately run step 1, the scrub can disturb the CLB logic. But if I wait 0.1 s after step 4 before running step 1, the scrub has no influence on the CLB logic.
    Does this mean I should wait between scrub passes? Thank you in advance.
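The four steps and the settle delay can be sketched as a loop. `ConfigPort` here is a hypothetical stand-in for the real configuration-access engine, not a Xilinx API:

```python
import time

class ConfigPort:
    """Hypothetical stand-in for the configuration-access engine."""
    def __init__(self):
        self.regs = {"STAT": 0x1, "CTL": 0x0, "FAR": None}
        self.scrubbed = 0
    def read_reg(self, name): return self.regs[name]
    def expected(self, name): return {"STAT": 0x1, "CTL": 0x0}[name]
    def write_reg(self, name, val): self.regs[name] = val
    def scrub_frames(self, golden): self.scrubbed += 1

def scrub_cycle(port, golden, settle_s=0.1):
    assert port.read_reg("STAT") == port.expected("STAT")  # step 1: status
    assert port.read_reg("CTL") == port.expected("CTL")    # step 2: control
    port.write_reg("FAR", 0x0)                             # step 3: FAR
    assert port.read_reg("FAR") == 0x0                     #   read back
    port.scrub_frames(golden)                              # step 4: scrub
    time.sleep(settle_s)  # the settle time the thread found necessary
                          # before the next readback (~0.1 s)
```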
     

    Hi Lesea, Virtex4 is not supported by the Soft Error Mitigation (SEM) core, and I need to protect the Virtex4 LX25 FPGA by simple scrubbing. The configuration management engine I designed is the same as the one in Correcting Single-Event Upsets in Virtex-4 FPGA Configuration Memory. I also ran fault-injection experiments, and the configuration management engine can correct faults in the Virtex4 LX25 FPGA.
    On my board: digital signal A---->encode---->D/A---->A/D---->decode---->digital signal B
    When I am not scrubbing the FPGA, signal B always matches signal A, so I think my board and the logic in the FPGA work well. When I scrub the FPGA too frequently, signal B does not always match signal A (I only scrub the DSP, IOB, CLK, and CLB logic, and I mask out the LUT RAM and SRL16 frames). So I think the problem is caused by my configuration management engine.
    I also find that the time between two scrub passes cannot be too short. For instance, if I delay only a few clocks (2.3825 MHz) before the next scrub pass, signal B does not always match signal A. But if I delay 0.1 s, 1 s, or 10 s before the next scrub pass, signal B always matches signal A. How can I solve this? I can run experiments to find the minimal time between two scrub passes that does not affect my design, but I want to know what causes the problem. Thank you :)
    Regards,
    Zhiyuan Peng
     

  • Write latency on PCIe from PC (host) to FPGA

    Hello,
    In my application, the target is to write 64 Bytes from the PC to the FPGA with the minimum latency.
    I have implemented the “Virtex-7 FPGA Gen3 Integrated Block for PCI Express(3.0)” IP, that seems the best candidate for that and I am using the AXI STREAM “m_axi_cq” interface to write data to internal FPGA memory.
    The configuration is this one:
    PCIe GEN 3 / 8 lanes.
    AXI at 250 MHz with data bus = 256 bits.
    On the host CPU (Xeon E5 v2), the appropriate core is bound to the FPGA.
    I am using a specific Intel instruction to transfer a 256-bit block directly (in my example I use this instruction twice in C code). I do not want to use a DMA, because I am afraid that invoking the DMA would cost more time than a direct memory write.
    In the FPGA, I put an ILA core to monitor the access (see waveforms).
    - First bus is the output of the Xilinx core (“m_axi_cq” bus)
    - Second bus is the “memory write signal” of 8 memories of 32 bits data width => 16 x writes of 32 bits = 64 Bytes
    Question:
    - Even though I use 2 Intel instructions to transfer 2x256 bits, I see 4 transfers of 128 bits => is that normal?
    - It seems that we cannot have a TLP bigger than 256 bits (without DMA) => can you confirm?
    - Between 2 write accesses there are 23 clock cycles = 23*4ns = 92 ns, and I cannot get below that => have I reached the minimum possible?
    Many thanks for your attention

     
    Be careful when measuring the latency.
    It's HIGHLY variable in our case.
    We've got our PCIe block (endpoint on the FPGA) issuing block reads to the CPU host and/or an NVIDIA GPU.
    We've measured, on various systems, average latencies in the 180-240 clock range (clock = PCIe user clock, 250 MHz). That's not great, but for us tolerable.
    The problem was the distribution of the latencies. The WORST-case latencies can be terrible: >1600 clocks in some cases. Plotting a few series of results, we see a bimodal distribution of latencies, i.e. a bunch hovering around 150 and a bunch hovering around 300, the average being the above results. Usually around the point where the latencies are tending to move from one "average" to the other, we get the outlier extra-long latency.
    Probably something to do with cache flushes/fills happening over on the CPU.
    In any event, because of this high variability of latencies, our first designs broke real time. We had to re-architect things to handle it.
    Bandwidth is not a problem for PCIe. Predictable latency, however, is troublesome...
    Regards,
    Mark
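One minimal way to surface this in measurement scripts is to report the worst case alongside the mean, since an average near 200 clocks can coexist with >1600-clock outliers (the sample trace below is hypothetical):

```python
def latency_summary(samples):
    """Mean alone hides the tail: return (mean, worst case, worst/mean),
    so tail blow-ups stand out instead of averaging away."""
    mean = sum(samples) / len(samples)
    worst = max(samples)
    return mean, worst, worst / mean

# hypothetical trace: clusters near 150 and 300 clocks plus one outlier
mean, worst, ratio = latency_summary([150, 148, 155, 152, 300, 295, 305, 310, 1600])
```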
     

  • Spartan 3e Fpga

    Hi guys,
    I have a doubt regarding the Spartan-3E XUP board.
    Does NI support the Spartan-3E FPGA or the XUP board? Does the driver released by NI support only the XUP board, or the FPGA in general?
    Regards, 
    Krish.

    Hi,
     For the Spartan-3E board (which is bought from Xilinx) to work with LabVIEW FPGA, you need to install a driver provided on the National Instruments site: https://lumen.ni.com/nicif/us/infolvfpgaxilsprtn/content.xhtml ... The driver was developed by NI to let researchers and educators use the more affordable Spartan-3E boards without needing to buy NI-RIO boards (on which there is a Xilinx Virtex-5 FPGA).
    So far, I believe this is the only kit supported by NI to work with LabVIEW FPGA, so your other choice is to buy NI-RIO boards.
    Note: the above driver works with LabVIEW FPGA 8.6. You can search the forum to find the 8.5 driver.
    -- Walid F. Abdelfatah 
    Message Edited by wfarid on 08-19-2009 01:43 PM

  • Derived clock problem?

    I am trying to derive a 25MHz clock using a NI PXI-7842R, and the LabVIEW project won't allow that exact clock.
    But when I try doing the same thing for a PXI-7830R target, I am successful.
    What is going on?
    I am choosing a base clock of 40MHz;
    to get 25MHz, the multiplier is 5 and the divisor is 8.
    When I right-click on New Derived Clock, I only get the option of entering a new clock frequency.
    Why does the tool not let me just specify the multiplier and divisor?
    I am using LabVIEW version 2010.
    Solved!
    Go to Solution.
    Attachments:
    derived clocks.lvproj ‏16 KB

    LabVIEW FPGA does use the built-in DCMs on the Virtex-5; however, the parameters for a DCM with a 40 MHz input clock do not allow 25 MHz on the Virtex-5.
    When instantiating a derived clock, there are four possible options: a 1X clock; a 2X clock; CLKDV, which is a phase-aligned clock running at a fraction of the input clock; and CLKFX, which takes the multiplier and divisor and creates a clock at (M/D) x the input rate.
    Obviously, 1X and 2X will not work because they yield 40 MHz and 80 MHz clocks, respectively. The other two are limited by the DCM specs.
    In the Virtex-5 FPGA User Guide, in the Clock Management Technology section, you will find a section on DCM Attributes. CLKDV_DIVIDE (p. 58) is the attribute that tells the DCM what rate the CLKDV output should run at. If you look at the available attribute values, you can see that 5/8 is not one of the valid configurations, so we can't use CLKDV.
    We also cannot use CLKFX, because the Virtex-5 DC and Switching Characteristics Data Sheet shows that the valid CLKFX output ranges are 32 MHz to 140 MHz in Low-Frequency Mode and 140 MHz to 350 MHz in High-Frequency Mode (p. 57). Since 25 MHz is below the minimum rate, we can't create it from this DCM output port.
    You can do this on the PXI-7830R because it has a Virtex-II FPGA with different characteristics and attributes.
    Donovan
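The elimination argument above can be captured in a few lines (frequency limits taken from the figures quoted in this thread; check the data sheet for your own speed grade):

```python
def clkfx_ok(f_in_hz, m, d, f_min=32e6, f_max=350e6):
    """CLKFX produces f_in * M / D, but only within the DCM's supported
    output range (32-350 MHz across both modes, per the thread)."""
    f_out = f_in_hz * m / d
    return f_out, f_min <= f_out <= f_max

print(clkfx_ok(40e6, 5, 8))   # 25 MHz falls below the 32 MHz floor -> rejected
print(clkfx_ok(40e6, 2, 1))   # 80 MHz is in range -> fine
```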

  • National Instruments PXI with IEEE 802.15.4 standard (ZigBee)

    Hello,
    In fact, I am working on a project that aims to implement a reconfigurable baseband Zigbee transceiver on the XUPV5-LX110T evaluation platform, which carries a Virtex 5 FPGA. I am currently in the real-world test phase.
    First, I want to send data from a PC to the FPGA and receive it back (to process my signals in Matlab). Is this feasible? If so, is there a solution using some communication medium (a serial link, for example)?
    Second, does National Instruments offer PXI-based measurement and test equipment that supports the Zigbee wireless protocol, i.e. the IEEE 802.15.4 standard (namely an RF Vector Signal Generator and Vector Signal Analyzer), for the analog front end in both transmission and reception?
    Thanks a lot in advance, everyone.

    Hello,
    I am not sure what data you will be collecting, or how you intend on using the board. Perhaps you can explain your application a little bit more?
    Is the FPGA code already developed for your application with the XUPV5-LX110T board? As long as the developed FPGA code is able to communicate with your PC via whatever protocol you choose, you can use that as a channel to send data back and forth. Since the board is capable of many different I/O connections, you can pretty much send/receive data over whichever connection you prefer: Ethernet, RS-232, etc.
    Just to clear up any confusion: if you do not already have FPGA code for the board, this is not something you would be able to develop with LabVIEW FPGA programming. The XUPV5-LX110T board is not supported for programming its FPGA using LabVIEW FPGA. You can, however, program in LabVIEW to communicate data back and forth over the I/O you have chosen to connect to your PC, such as Ethernet or RS-232, as mentioned above.
    As for the measurement equipment NI offers for testing with the ZigBee (IEEE 802.15.4) wireless protocol on the PXI platform: if your application requires you to both transmit to and receive from the board, then you would need either both a Vector Signal Generator and a Vector Signal Analyzer, or a Vector Signal Transceiver. See the list below for some examples of what we have to offer.
    VSAs: NI PXI-5661, NI PXIe-5663E
    VSGs: NI PXI-5671, NI PXIe-5672/5673E
    VSTs: NI PXIe-5644R/5645R/5646R
    From my knowledge of ZigBee, you would be capable of communicating with the board using any of these devices.
    Matthew R.
    Applications Engineer
    National Instruments

  • Labview freezes during installation

    I have an end user for whom I am setting up a repurposed computer. The computer was originally used by a summer marketing intern, so there was never any NI software on it. I was setting it up and installed 8.6 by accident, as the engineer then instructed me he preferred 2010 SP1. I ran the uninstall from the CLI, as the GUI didn't have a way I could find to uninstall it completely in one swoop.
    Yesterday it froze during the drivers disc, but now, after redoing it following the uninstall, it has frozen on disc two - product 14 of 16: Currently installing Compilation Tools for Virtex-II FPGA Devices. It has been sitting like this for the past hour.
    The end user does have Xilinx Webpack previously installed, v.14. 
    Office 2007.
    Lotus Notes 8.5.3
    Firefox
    Windows 7 x64
    I am using the Labview 2010 SP1 Platform DVD set
    Any advice is greatly appreciated.
    Thanks and regards

    Exactly what happens when it "freezes"? Or, how do you know it freezes?
    If the rest of the code takes negligible time to execute, the 250 ms Wait in the last frame of the innermost sequence structure will require >182 seconds to complete the nested for loops. There are no indicators in those loops, so it would not be obvious whether the program was still running normally during those 3 minutes.
    Try putting indicators on the "i" terminals of the for loops. That will tell you whether the loops are executing, and when it freezes, you will be able to see how many iterations have completed. Also put error indicators inside the loop so that you can see if any errors are reported. Unfortunately the Velmex driver does not report errors.
    Highlight Execution is useful for finding problems in code. But if the freeze occurs after several hundred iterations of the inner loop, you will grow old waiting to see what happens. A combination of Breakpoints and Highlight Execution is more versatile for troubleshooting loops.
    Sequence structures are almost never the appropriate choice for well-written LabVIEW code. I would probably create several subVIs wrapping the Velmex driver, with each subVI performing one task. Include error in and error out terminals (even if the Velmex driver does not return errors for most tasks) so that you can use dataflow to control the order of execution.
    Lynn

  • Can't simulate GTH wrapper generated from scratch

    Hi,
    I need to use some GTH transceivers on the Virtex-6 FPGA (ML630 eval board) to communicate with a high-speed digital-to-analog converter (DAC). I am first trying to simulate the wrapper for the GTH transceiver I created, in order to understand the signal assertion flow. However, in simulation the serial transmit pins are always '1' (both p and n pins).
    The wrapper is created using CORE Generator with the "from scratch" option, to operate at 9.92 Gbps with no line coding. The simulation uses the example design and testbench generated with the wrapper.
    However, if I generate the wrapper from a pre-defined template (e.g. 10GBASE-R), the simulation works fine.
    Does anybody know how I can solve this problem, or have a tip that could help me?
    Additional information:
    - ISE version 14.7
    - GTH transceiver wizard version: 1.11
    - FPGA: xc6vhx565t-2ff1924 (ML630)
    Thanks in advance.

    I solved the problem. It seems to be a bug in the GTH transceiver wizard: even with the "full rate" option set in the wizard, it generates an example design for full rate, but the init.vhd file is parameterized with full rate disabled, so the core never starts.

  • LabVIEW DSC 8.0 examples that deal with events check for valid timestamp.Why?

    Hi folks !
    There are examples that come with LabVIEW DSC 8.0 that deal with alarm events. In these examples - DSC Alarms Event Structure Support.vi contained in DSC Alarms Demo.lvproj, for instance - when an alarm event occurs, the code checks for a valid time stamp - 17:00:00.000 31/12/1975. I'm confused; can anyone help me understand why this is done?
    Thanks !

    Hello marc8470,
    Each Virtex 5 FPGA bank requires an external voltage reference.  The FlexRIO FPGA module provides this reference in the form of Vccoa and Vccob.  Because there are two voltage references available on the FlexRIO FPGA module, each Vcco reference is connected to 2 IO banks.  The Adapter Module Interface and Protocol chapter of the FlexRIO MDK manual has a table that indicates which GPIO banks are referenced to which Vcco reference.  The Vcco levels set in the general section of the adapter module configuration file are not used by the Xilinx compiler, but instead by the fixed FlexRIO logic to configure the external voltage references.  The IO standard constraints section of the adapter module configuration file is used during compile to configure the output drivers in the Virtex 5.  If the general VccoALevel and VccoBLevel values do not match the IO standard constraints, no error will occur during compile, but the hardware will not be configured correctly at runtime.  The logic families used by each general purpose IO (GPIO) line must match the Vcco levels set in the general section of the adapter module configuration file.  A mismatch in values could result in incorrect behavior or possible damage to the FlexRIO FPGA module or the adapter module. 
    In the future, please use the email address included in your NI FlexRIO Adapter Module Development Kit (MDK) User Manual to send your questions directly to the FlexRIO MDK support team.  This group has experience with specific FlexRIO MDK questions such as this one. 
    The FlexRIO MDK manual is designed to provide all of the information a hardware designer will need to create a FlexRIO adapter module.  National Instruments is always improving and working on new releases of the FlexRIO MDK.  Please feel free to use the support email address in the FlexRIO MDK manual to send me any feedback you have on the contents of the manual.
    Regards,
    Browning G
    FlexRIO R&D

  • 16 bits BPI in BPI down mode

    Hi All,
    I am trying to use BPI down mode on a Virtex 6 FPGA.
    I have a 16-bit Spansion 1Gb flash (S29GL01GP).
    I generate the MCS file using: 
    promgen -w -p mcs -c FF -o ./configname_x16 -s 131072 -d 03ffffff configname.bit -bpi_dc parallel -data_width 16 
    iMPACT manages to write and verify the configuration in the flash.
    But I cannot get it to load the configuration into the FPGA (no DONE, PCIe not recognized).
    It configures correctly in BPI up (-u 0000000 instead of -d 03ffffff). 
    I found some posts that mentioned similar issues, but related to EDK. There was an issue about big/little endian. Could that be my problem too?
    Has anyone successfully loaded an FPGA in a similar setup (BPI down, 16-bit, 1Gb flash)?
    Any hints appreciated.
    I'd like to be able to reconfigure the FPGA from a processor, and the BPI u/d pin is the only pin I have access to on my board; the M pins aren't accessible, and changing the board isn't an option.
    Thanks,
    raphael
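If the big/little-endian suspicion pans out, the transform to try would be a byte swap within each 16-bit word of the image before programming. This is purely a guess to test against the BPI data-bus ordering, not a confirmed fix:

```python
def swap16(data: bytes) -> bytes:
    """Swap the two bytes of every 16-bit word (even-length input
    assumed), the usual remedy for a byte-lane/endianness mismatch."""
    out = bytearray(data)
    out[0::2], out[1::2] = data[1::2], data[0::2]
    return bytes(out)

print(swap16(b"\xAA\x55\x01\x02").hex())  # '55aa0201'
```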
     

    No one?
    So should I take it that BPI down with a 16-bit-wide flash interface just isn't used?
     

  • CRIO 9025 Data Logging (128+ Channels at @ 1000Hz)

    Hello,
    I have the following configuration:
    (1) cRIO 9025 controller on a NI 9118 8-slot Virtex-5 FPGA chassis, loaded with AI/TC modules
    (2) NI 9144 8-slot EtherCAT expansion chassis connected to the 9118 via EtherCAT, loaded with AI/TC modules
    I have been tasked with recording data on ALL of these channels at 1000Hz. I have been toying with various types of software architectures and cannot come up with an optimal system. I would like to log data directly to the cRIO 9025's on-board storage (and later retrieve via FTP).
    What is the best way to approach this? What type of architecture should I be using?
    Any help is very appreciated. Thanks!
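As a starting point, it helps to bound the data rate. Assuming 4-byte samples (an assumption, as the module types vary), 128 channels at 1 kHz is only about 0.5 MB/s, which is modest enough that streaming to the controller's local storage with a producer/consumer pattern is plausible:

```python
def log_rate_bytes(channels, rate_hz, bytes_per_sample=4):
    """Back-of-envelope sustained disk rate for the logger
    (bytes_per_sample=4 is an assumed sample width)."""
    return channels * rate_hz * bytes_per_sample

print(log_rate_bytes(128, 1000))  # 512000 bytes/s = 0.5 MB/s
```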

    Hi SR,
    Unfortunately, you won't be able to daisy-chain multiple 9024 controllers. As explained in this Developer Zone article, you can use a 9024 as a master with a series of 9144 slave chassis. If you're using multiple 9024s, you'll want to put them on the same network and write a master application that communicates with all of the systems. Hope that helps.
    Andy H.
    National Instruments

  • PCI-7833 board space

    Hi, I'd like to ask for some help getting a better understanding of how this FPGA works.  I'm using a PCI-7833R.  Since it is a field-programmable gate array, whenever I want to run a new FPGA VI without using emulation, the program is compiled onto the array.  Does that mean a certain portion of the gate array is irreversibly programmed?  For the PCI-7833R, how many times can I compile my FPGA VI onto the board?  How much space does it have?  Could it happen that the whole array gets programmed and I cannot use the card again?  Thank you so much!  I appreciate any help or information.  Any reference or fundamental documents would be very much appreciated as well. 
    Solved!
    Go to Solution.

    soljiang:
    The FPGA is fully reconfigurable, so if you place a new FPGA VI on your 7833R, you will simply overwrite any previous configuration and will not run out of space. Programming the FPGA is NOT irreversible, so don't feel bad if you have to change your VI and recompile it; prototyping like that is precisely what the board is designed for.
    The 7833R has a Virtex-II FPGA, so it has 3 million gates (see the tables in the following articles or the product page here).
    If you're looking for some fundamental documents, here are some good ones on the NI Developer Zone.
    FPGAs - Under the Hood
    Introduction to FPGA Technology: Top Five Benefits
    Caleb Harris
    National Instruments | Mechanical Engineer | http://www.ni.com/support

  • A question about Bar Colors in SSRS Bar Chart

    I have a chart in a report that has 27 categories and no series. The categories all display properly with the correct results, so the chart is informationally (not sure that's a word) correct, but all the bars are the same color. If I try to add a series, each bar shows up as a different color, but now each category has 27 bars in it (actually, it has one really thin bar and room for 26 more that are not showing).
    So, my question is, can I have the colors of the individual bars be different for each category without having to use a series? I know I can use a SWITCH Statement and assign a different color to each value, but that isn't a viable solution as the categories could change and new ones could show up later.
    Oh, in case the information is relevant: I am building the report in Visual Studio 2008.
    This topic first appeared in the Spiceworks Community

