Grrr... Xilinx compile failures

I appreciate the effort NI has put into making FPGA programming available to the unwashed masses.  When things work correctly it's slicker than a greased seal in an oil factory.  When things go bad it's extremely frustrating with few clues about how to fix the problem.
I've been struggling with intermittent compile failures on a cRIO-9074.  The latest problem appeared when I added code to make the FPGA LED flash while the FPGA is running.  That generated a compile error, so I removed that code.  Now it still won't compile, even though it's the exact same code as before.
I've attached the Xilinx log file.  There are several different types of errors, each of which is repeated multiple times.  Links are to the Xilinx KB articles:
ERROR:coreutil - ios failure
ERROR:sim:928 - Could not open destination 'PkgBeatleTypedefs.vhd' for writing.
ERROR:ConstraintSystem:58 - Constraint <INST "*oInputFalling*" TNM =
   "CcInputFallingRegs";> [toplevel_gen.ucf(141)]: INST "*oInputFalling*" does
   not match any design objects.
ERROR:ConstraintSystem:59 - Constraint <TIMESPEC "TS_AsynchMite30"= FROM
   PADS(mIoHWord_n) TO PADS(mIoDmaReq<*>) 0 ns;> [toplevel_gen.ucf(703)]: PADS
   "mIoHWord_n" not found.  Please verify that:
   1. The specified design element actually exists in the original design.
   2. The specified object is spelled correctly in the constraint source file.
According to Xilinx, the first error should be ignored; the design will load and run fine.  Is that possible when compiling within LabVIEW?  Is there a way to run the compiler tools directly, and would that even help?  The second error requires modifying the UCF file, and the third requires various tools and options not available (as far as I know) to LabVIEW developers.
I've been fighting the FPGA compiler for about a month.  Its unpredictability is deadly for small businesses trying to deliver something to a customer.  I'm about ready to throw the whole thing in the trash and go in another direction, simply because I can more accurately estimate how long it will take me to implement on a different platform.
[Edit]
I just tried recompiling the FPGA VI again.  This time I received a new error:
LabVIEW FPGA:  An internal software error in the LabVIEW FPGA Module has occurred.  Please contact National Instruments technical support at ni.com/support.
Click the 'Details' button for additional information.
Compilation Time
Date submitted: 11/28/2012 9:28 AM
Last update: 11/28/2012 9:30 AM
Time waiting in queue: 00:05
Time compiling: 01:55
- PlanAhead: 01:50
- Core Generator: 00:00
- Synthesis - Xst: 00:01
- Translate: 00:01
Attachments:
XilinxLog.txt ‏1302 KB

I'm using a 9237 in slot 3 and setting the sample rate to 1.613 kS/sec.  Slots 1 and 2 have a 9411 and 9422 that I will read using the scan engine.  (Some of my RT test code uses the 9422, some doesn't.  It doesn't seem to be related to this problem.)
Interestingly, I again added a small bit of code to try to get the LED to flash while the FPGA is running.
...and I got all sorts of new compile errors, such as...
ERROR:HDLCompiler:806 -
   "C:/NIFPGA/jobs/Rtxj7d7_KSw9nkc/NiFpgaAG_00000000_WhileLoop.vhd" Line 208:
   Syntax error near "downto".
ERROR:HDLCompiler:806 -
   "C:/NIFPGA/jobs/Rtxj7d7_KSw9nkc/NiFpgaAG_00000000_WhileLoop.vhd" Line 224:
   Syntax error near "3".
ERROR:HDLCompiler:806 -
   "C:/NIFPGA/jobs/Rtxj7d7_KSw9nkc/NiFpgaAG_00000000_WhileLoop.vhd" Line 240:
   Syntax error near "}".
ERROR:HDLCompiler:806 -
   "C:/NIFPGA/jobs/Rtxj7d7_KSw9nkc/NiFpgaAG_00000000_WhileLoop.vhd" Line 244:
   Syntax error near "`".
At the very end of the log it says, "Sorry, too many errors.."  I guess it just gave up.  I know the feeling.
I tried deleting that code and recompiling the VI, and it still won't compile.  I assume that if I create another new VI via copy and paste it will work again, but something weird is going on.
Attachments:
XilinxLog - FPGA Main 2 (DMA) - Added flashing LED.txt ‏141 KB

Similar Messages

  • Stub compile failure when entity has two 1:1 relationships

    I think I've found a bug in the Sun J2EE SDK (surprised? no, but looking for workaround...)
    The stub code that is generated during deployment of my .ear file creates two "__reverse_item_uid" fields for my ItemBean entity.
    This started when I added a second 1:1 relationship to my ItemBean. I have a PurchaseItem-Item relationship and have now added a SalesOrderItem-Item relationship. They are both 1:1 and unidirectional. See the relevant section from ejb-jar.xml:
    <ejb-relation>
    <ejb-relationship-role>
    <ejb-relationship-role-name>PurchaseItem-Item</ejb-relationship-role-name>
    <multiplicity>One</multiplicity>
    <relationship-role-source>
    <ejb-name>PurchaseItem</ejb-name>
    </relationship-role-source>
    <cmr-field>
    <cmr-field-name>item</cmr-field-name>
    </cmr-field>
    </ejb-relationship-role>
    <ejb-relationship-role>
    <ejb-relationship-role-name>Item-PurchaseItem</ejb-relationship-role-name>
    <multiplicity>One</multiplicity>
    <relationship-role-source>
    <ejb-name>Item</ejb-name>
    </relationship-role-source>
    </ejb-relationship-role>
    </ejb-relation>
    <ejb-relation>
    <ejb-relationship-role>
    <ejb-relationship-role-name>SalesOrderItem-Item</ejb-relationship-role-name>
    <multiplicity>One</multiplicity>
    <relationship-role-source>
    <ejb-name>SalesOrderItem</ejb-name>
    </relationship-role-source>
    <cmr-field>
    <cmr-field-name>item</cmr-field-name>
    </cmr-field>
    </ejb-relationship-role>
    <ejb-relationship-role>
    <ejb-relationship-role-name>Item-SalesOrderItem</ejb-relationship-role-name>
    <multiplicity>One</multiplicity>
    <relationship-role-source>
    <ejb-name>Item</ejb-name>
    </relationship-role-source>
    </ejb-relationship-role>
    </ejb-relation>
    When I add my .jar to my application using the deploytool GUI, set everything up (JNDI, generate SQL, ...), run the verifier (which passes), and then deploy, the deploy fails due to a stub compilation failure.
    If I then open the generated sun-j2ee-ri.xml found in my c:\j2sdkee1.3.1\repository\myname\applications\myapp1234.jar, I notice the use of two "__reverse_item_uid" fields in the following sample:
    <ejb>
    <ejb-name>Item</ejb-name>
    <jndi-name>Item</jndi-name>
    <gen-classes />
    <ejb20-cmp>
    <sql-statement>
    <operation>storeRow</operation>
    <sql>UPDATE "ItemBeanTable" SET "__reverse_item_uid" = ? , "__reverse_item_uid" = ? , "itemType" = ? , "location" = ? , "name" = ? , "peachID" = ? , "price1" = ? , "price2" = ? , "price3" = ? , "unitmeasure" = ? , "version" = ? WHERE "uid" = ? </sql>
    </sql-statement>
    GRRRRR!!!!! What do I do to avoid this? Can't the Sun Ref Impl generate this and the stub code more intelligently, so that it does something like use the <ejb-relationship-role-name> as the field name?
    Any help would be greatly appreciated :)
    -Gretel

    I figured it out... here's the answer in case anyone is wondering...
    Use different field names for 'item' in both PurchaseItem and SalesOrderItem. In other words, here is the correct segment of ejb-jar.xml
    <ejb-relation>
    <ejb-relationship-role>
    <ejb-relationship-role-name>SalesOrderItem-Item</ejb-relationship-role-name>
    <multiplicity>One</multiplicity>
    <relationship-role-source>
    <ejb-name>SalesOrderItem</ejb-name>
    </relationship-role-source>
    <cmr-field>
    <cmr-field-name>itemHack</cmr-field-name>
    </cmr-field>
    </ejb-relationship-role>
    <ejb-relationship-role>
    <ejb-relationship-role-name>Item-SalesOrderItem</ejb-relationship-role-name>
    <multiplicity>One</multiplicity>
    <relationship-role-source>
    <ejb-name>Item</ejb-name>
    </relationship-role-source>
    </ejb-relationship-role>
    </ejb-relation>
    The fix was simply changing the <cmr-field-name> from 'item' to 'itemHack' :)
    Sometimes in the land of EJB development, it's the "easy" things that can waste a lot of time.

  • NI Xilinx compile tools 14.4 error

    Hi all,
    I tried to compile my LabVIEW FPGA VI with the latest version of the Xilinx compile tools, 14.4.
    I got the following error
    I re-installed the NI Xilinx compile tools 14.4 TWICE, and the result is the same.
    Help.... please.
    Thanks,

    I would bet that this is an issue with a botched UNINSTALL of Xilinx 11.5. To fix:
    Go to 'C:\NIFPGA\programs'
    Do you have a 'Xilinx11_5' folder? If so, I bet it's empty/corrupt. Move it to another directory for this test.
    If you don't have one... well then I'm wrong and the issue is elsewhere
    Do you expect to have/need Xilinx 11.5 on the system (for compiling with LabVIEW 2010)?
    Cheers!
    TJ G
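The folder check in the reply above can be sketched as a small script (POSIX shell for brevity; on Windows the same test is a glance in Explorer or `dir`). The path and the helper name are taken from and invented for this example respectively:

```shell
#!/bin/sh
# Sketch: flag a leftover, empty Xilinx install folder that can confuse
# the NI compile tools. The path is the one named in the reply above;
# is_empty_leftover is a hypothetical helper name.
is_empty_leftover() {
    # true if the directory exists and contains nothing
    [ -d "$1" ] && [ -z "$(ls -A "$1" 2>/dev/null)" ]
}

DIR='C:/NIFPGA/programs/Xilinx11_5'
if is_empty_leftover "$DIR"; then
    echo "empty leftover folder: $DIR -- move it aside and retry the compile"
fi
```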

  • FPGA - compilation failure

    Hello all, I'm studying LabVIEW FPGA. When I tried to run my first FPGA program, the compilation always failed. Please see the attached picture for details.
    PCI-7830R is installed and works properly in my PC.
    The software I installed:
    LabVIEW 2010
    FPGA Module 2010
    Xilinx 10.1 (I did not install version 11.5, based on the information from http://digital.ni.com/public.nsf/allkb/ed6fc9cf7b983cfd86256dce0072e313?OpenDocument)
    All of the above are evaluation versions.
    Does anyone know the cause of the failure? Thanks in advance for your comments.
    Attachments:
    compilation error.JPG ‏108 KB

    Hello Christian, thank you so much for the comments. Yes, you are right. This time I followed your suggestion to simulate the VI as below, and it works. However, it still cannot pass compilation; it shows the same failure as I mentioned above.
    Thanks again for your help,
    Cole

  • Xilinx Compilation Error: HDLCompiler:432 Formal eiosignal has no actual or default value

    Hi,
    I have compiled several programs for sbRIOs previously but have not run into compilation errors before. I can't seem to find any support to see what is actually going wrong. Any help with this would be appreciated!
    The Compilation Status summary is as follows: 
    LabVIEW FPGA: The compilation failed due to a xilinx error.
    Details:
    ERROR:HDLCompiler:432 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd" Line 87: Formal <eiosignal> has no actual or default value.
    INFO:TclTasksC:1850 - process run : Synthesize - XST is done.
    INFO:HDLCompiler:1408 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000032_CustomNode.vhd" Line 18. eiosignal is declared here
    ERROR:HDLCompiler:432 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd" Line 106: Formal <eiosignal> has no actual or default value.
    INFO:HDLCompiler:1408 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000033_CustomNode.vhd" Line 18. eiosignal is declared here
    ERROR:HDLCompiler:432 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd" Line 125: Formal <eiosignal> has no actual or default value.
    INFO:HDLCompiler:1408 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000034_CustomNode.vhd" Line 18. eiosignal is declared here
    ERROR:HDLCompiler:432 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd" Line 144: Formal <eiosignal> has no actual or default value.
    INFO:HDLCompiler:1408 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000035_CustomNode.vhd" Line 18. eiosignal is declared here
    ERROR:HDLCompiler:432 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd" Line 163: Formal <eiosignal> has no actual or default value.
    INFO:HDLCompiler:1408 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000036_CustomNode.vhd" Line 18. eiosignal is declared here
    ERROR:HDLCompiler:432 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd" Line 182: Formal <eiosignal> has no actual or default value.
    INFO:HDLCompiler:1408 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000037_CustomNode.vhd" Line 18. eiosignal is declared here
    ERROR:HDLCompiler:432 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd" Line 201: Formal <eiosignal> has no actual or default value.
    INFO:HDLCompiler:1408 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000038_CustomNode.vhd" Line 18. eiosignal is declared here
    ERROR:HDLCompiler:432 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd" Line 220: Formal <eiosignal> has no actual or default value.
    INFO:HDLCompiler:1408 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000039_CustomNode.vhd" Line 18. eiosignal is declared here
    ERROR:HDLCompiler:854 - "C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd" Line 50: Unit <vhdl_labview> ignored due to previous errors.
    VHDL file C:\NIFPGA\jobs\R6n310u_Z1R8lYC\NiFpgaAG_00000031_SequenceFrame.vhd ignored due to errors
    -->
    Total memory usage is 189944 kilobytes
    Number of errors : 9 ( 0 filtered)
    Number of warnings : 4 ( 0 filtered)
    Number of infos : 0 ( 0 filtered)
    Process "Synthesize - XST" failed

    Hi DiracDeltaForce,
    As a first pass, I would recommend disabling or deleting a section of code that you suspect may cause the compile error and see if you can get through synthesis.  Once you get through a compile, you have at least isolated the trouble spot.
    Something I would look for in your code is attempts to access the same I/O node in multiple clock domains, i.e. inside and outside of SCTLs (single-cycle timed loops), timed sequence structures, or in multiple timed structures with different clock rates.  Attempting this forces LabVIEW to create arbitration and handshaking logic to safely pass data between clock domains.  This type of logic doesn't work in a timed structure because the handshaking operation takes multiple clock cycles.
    If you are only using traditional sequence structures (rather than the timed sequence structures) I wouldn't suspect this type of issue.
    -spex
    Spex
    National Instruments
    To the pessimist, the glass is half empty; to the optimist, the glass is half full; to the engineer, the glass is twice as big as it needs to be...

  • FPGA Xilinx compilation failed. Access to the path 'C:\NIFPGA\corecache\activity.log' is denied.

    I have just received my cRIO.  I have installed all the necessary software and drivers and have written a VI in FPGA mode.  However, when I try to compile the code I get the following error message (see attachment).  I have contacted NI support and they have been unable to solve this problem as of yet.
    The NI engineer gave me the following advice:
    1. Launch the Registry Editor by selecting Start » Run and then entering regedit in the Run window.  
    2. Press the Enter button to open the editor. 
    3. Find the following registry key in the path below:  
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\National Instruments\LabVIEW\11.0\AddOns\FPGA\CompilerPath_Xilinx12_4 
    * the Wow6432Node may not be correct in all cases. On my PC, the path is HKEY_LOCAL_MACHINE\SOFTWARE\National Instruments\LabVIEW\11.0\AddOns\FPGA\CompilerPath_Xilinx12_4 
    4. Under the Data tab, you should see the path that your CompilerPath_Xilinx12_4 is linked to. Please confirm that it is linked to C:\NIFPGA\programs\Xilinx12_4 
    If you have Xilinx 11.5 installed, make sure it also is linked to the proper path. 
    5. If you have previous versions of LabVIEW installed, you will have to go to those versions as well. For Example if you have LabVIEW 2010 installed, please go to 10.0\AddOns\FPGA 
    Make sure that the CompilerPath_Xilinx12_4 or CompilerPath_Xilinx11_5 or CompilerPath_Xilinx10_1 are all correctly placed under the C drive as well as seen above in step 4. 
    6) Make sure that the "Working Directory" under LabVIEW 11.0 has the correct path of C:\NIFPGA\ 
    7) After this, close out the Registry Editor and navigate to where the FPGA Compile Worker is located. By default it should be here: C:\Program Files (x86)\National Instruments\FPGA\CompileWorker 
    8) In here open the "WorkerRootDirectory.txt" document and change the path. Make sure that it points to the C Drive. 
    9) Save and close the file. Restart your computer.
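For reference, a registry export of the value checked in steps 3-4 would look roughly like this (default paths assumed; your hive may lack the Wow6432Node level, as noted above):

```text
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\National Instruments\LabVIEW\11.0\AddOns\FPGA]
"CompilerPath_Xilinx12_4"="C:\\NIFPGA\\programs\\Xilinx12_4"
```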
    All of this was already correct, and it still cannot compile.  
    Attachments:
    FPGA.png ‏131 KB

    Usually, an "Access to the path ... is denied." error stems from user permissions.
    What Operating System are you using?
    Are you using this tool from an administrative account?
    Is there a custom security policy on your workstation that dictates user rights and permissions?
    Is it possible to run LabVIEW as an administrator?  (e.g., in Windows 7, you would right-click on LabVIEW.exe and select "Run as administrator.")
    Best regards,
    Matthew H.
    Applications Engineer
    National Instruments
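As a quick sanity check on the permission questions above, you can test whether the compiling user can actually write the file the error names (POSIX sketch; on Windows the equivalent is the file's Security tab or `icacls`). The `can_write` helper name is invented for this example; the path is the one from the error message:

```shell
#!/bin/sh
# Sketch: verify write access to the core cache log named in the error.
can_write() {
    # writable existing file, or creatable in a writable parent directory
    [ -w "$1" ] || { [ ! -e "$1" ] && [ -w "$(dirname "$1")" ]; }
}

LOG='C:/NIFPGA/corecache/activity.log'
if can_write "$LOG"; then
    echo "write access OK"
else
    echo "access denied -- fix permissions or run the compile elevated"
fi
```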

  • FPGA compile failure: Process "Map" failed

    Hi All,
    We purchased a new cRIO-9068 and I am trying to get it installed into a new project with some new cards. We also upgraded our projects to LabVIEW 2013 SP1. I have been unable to complete a compile of the FPGA bitfile. It runs for 25 or so minutes, and when it gets to the "final device utilization (map)" step it fails. It looks like I might be missing something, but I have installed and reinstalled LV2013 and the Xilinx versions (10.1, 13.4, and 14.4). That does not seem to fix the issue.
    Any ideas on what it is looking for would also be helpful, since a search for "Map" did not help.
    Thanks in advance.

    Here is the log file from the failed compile.
    Attachments:
    XilinxLog.txt ‏2763 KB

  • Compiler failures with Studio 11 on Solaris 10 x64

    The compiler gives me the following error when I try to compile the larger tests in my environment:
    "Tests.cc", [main]:ube: error: Assert has been violated at '/set/venus_patch/builds.intel-S2/build.0509/intel-S2/lang/ube/opt/src/cfg.c 3150'.
    I am using the following compilation command:
    /opt/SUNWspro/bin/CC -xtarget=opteron -xarch=amd64 -xO0 -library=stlport4 [includes] -c -o obj/lib/Tests.o lib/Tests.cc
    I get a different error if I change the optimization level to 2:
    compiler(iropt) error: connect_labelrefs: undefined label L175 in main
    And yet another message if I use no optimization option at all:
    Assembler: Tests.cc
    "/tmp/yabeAAAX7aqaq", line 20533 : Illegal subtraction in ... ".L209 - .L_y162"
    Failure in /opt/SUNWspro/prod/bin/fbe, status = 0x7f00
    Fatal Error exec'ing /opt/SUNWspro/prod/bin/fbe
    My patches are up to date as of right now, according to smpatch:
    smpatch analyze
    No patches required.
    I am not running compilations in parallel. I have 4 GB free in my swap.
    Any ideas?

    The machine came preinstalled with Solaris 10 and Studio 11. The compiler has been kept up to date by smpatch, as far as I can tell. I remember the update manager installing 120759 and 121018, for example.
    Is this correct?
    comptest> /opt/SUNWspro/bin/CC -V -O hello.cc
    CC: Sun C++ 5.8 Patch 121018-11 2007/05/02
    ir2hf: Sun Compiler Common 11 Patch 120759-14 2007/06/25
    ube: Sun Compiler Common 11 Patch 120759-14 2007/06/25
    /opt/SUNWspro/prod/bin/c++filt: Sun C++ 5.8 2005/10/13
    ccfe: Sun C++ 5.8 Patch 121018-11 2007/05/02
    iropt: Sun Compiler Common 11 Patch 120759-14 2007/06/25
    ld: Software Generation Utilities - Solaris Link Editors: 5.10-1.486
    Thanks for your prompt replies.

  • Inclusion of cxxabi.h and iostream causes compile failure

    stephen@hal:/tmp$ cat test.cpp
    #include <iostream>
    #include <cxxabi.h>
    int main()
      return 0;
    stephen@hal:/tmp$ /home/stephen/solaris-studio/SolarisStudio12.4-linux-x86-bin/solarisstudio12.4/bin/CC -std=c++03 -Wp,-I/usr/include/x86_64-linux-gnu/ -c test.cpp
    "/home/stephen/solaris-studio/SolarisStudio12.4-linux-x86-bin/solarisstudio12.4/lib/compilers/CC-gcc/include/c++/4.8.2/cxxabi.h", line 131: Error: Only one of a set of overloaded functions can be extern "C".
    1 Error(s) detected.
    This happens on at least Ubuntu and Fedora.

    > - don't include <cxxabi.h>, or
    Actually the problematic behavior is more complicated than just a conflict between the external cxxabi.h and the internal definition.
    There is no failure if you "just include" cxxabi.h.
    Our internal definition agrees with what is defined in cxxabi.h.
    And the compiler does emit an error message if you include cxxabi.h after iostream.
    Something fishy happens between exactly these two headers, causing cxxabi.h to define atexit differently.
    It does not happen with other STL headers.
    Say, including <string> and then cxxabi.h works OK.
    > I needed another work around for including fcntl.h and string
    I'm not sure how you came from "fcntl.h and string" to "cxxabi.h and iostream".
    You also mention "stlport" in your cmake commit and I assume stlport is -library=stlport4.
    Which means it has nothing to do with -std=c++03/-std=c++11/-compat=g modes that use G++ STL headers.
    If you want to make your change only for sunCC -library=stlport4 then you can try hacking around with some additional #ifdefs
    but I would really like to know why that rather innocent fcntl.h causes the problem.
    regards,
      Fedor.

  • RH9 Batch Compile Failures

    I have a batch file set up to batch compile numerous projects one after the other for RoboHelp 9. The output is CHM format. When doing the compile, usually at least one project fails to completely compile and the log file shows only this information:
    Adobe (R) RoboHelp Project Command Line Compiler version 9.0.0.228
    Copyright (C) 2006-2007, Adobe Systems Incorporated and its licensors. All rights reserved.
    Project: C:\TechComm\SPIRIT\source\projects\directd\directd.xpj
    Layout: ssl_directd_hdi.
    Output: C:\TechComm\SPIRIT\output\help\directdhdi.chm.
    Scanning project for compilation....
    Scanning finished.
    Warning: No baggage file description.
    There are no files attached to the project. The baggage file contains some CHMs that were used for remote links, but they are not included in the project.
    Additionally, if I recompile this project using the batch file, it will build fine. How can I stop these initial failures? It's not just this project, but others will randomly not generate and the log file shows the same information.
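Since a second run of the same project reportedly succeeds, one blunt workaround is to have the batch file retry the failed step. A generic retry wrapper, sketched in POSIX shell (the RoboHelp compiler invocation itself is whatever your batch file already calls; `retry` is an invented helper name):

```shell
#!/bin/sh
# Sketch: retry a flaky compile command up to N attempts.
# Usage: retry <attempts> <command> [args...]
retry() {
    attempts=$1; shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        echo "attempt $i of $attempts failed, retrying..." >&2
        i=$((i + 1))
    done
    echo "giving up after $attempts attempts" >&2
    return 1
}

# e.g.:  retry 3 <your RoboHelp command-line compiler invocation>
```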

    Thanks for your suggestions.
    Matt, I have tried "Mass Compile" and it has nothing to do with the FPGA. It just opens all your VIs, resaves them, and relinks them to their subVIs. This helps with some cross-linking issues.
    Bruce, the behavior you describe is what I would have expected initially. However, it seems like I do have to recompile all the subVIs. Right now I don't see anything changing on the FPGA side if I recompile my FPGA shell VI, but not my subVIs.
    It makes some sense that compiling the FPGA shell VI would not automatically recompile all the subVIs - the full compile including all subVIs takes about three hours, as compared to less than an hour to recompile the shell VI alone. So there is a strong reason to want subVI compiles to be independent of the shell compile.
    Thanks,
    Dave

  • [WORKAROUND] xxdiff (and xxdiff-hg) compile failures

    I'm getting failures when trying to build both xxdiff or xxdiff-hg (using yaourt as the frontend). Both have failed for the last few months when trying to compile resParser_yacc.cpp:
    g++ -c -pipe -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_REENTRANT -Wall -W -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED -I/usr/share/qt4/mkspecs/linux-g++ -I. -I/usr/include/qt4/QtCore -I/usr/include/qt4/QtGui -I/usr/include/qt4 -I. -I. -o moc_merged.o moc_merged.cpp
    /usr/lib/qt4/bin/moc -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED -I/usr/share/qt4/mkspecs/linux-g++ -I. -I/usr/include/qt4/QtCore -I/usr/include/qt4/QtGui -I/usr/include/qt4 -I. -I. markers.h -o moc_markers.cpp
    g++ -c -pipe -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_REENTRANT -Wall -W -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED -I/usr/share/qt4/mkspecs/linux-g++ -I. -I/usr/include/qt4/QtCore -I/usr/include/qt4/QtGui -I/usr/include/qt4 -I. -I. -o moc_markers.o moc_markers.cpp
    g++ -c -pipe -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_REENTRANT -Wall -W -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED -I/usr/share/qt4/mkspecs/linux-g++ -I. -I/usr/include/qt4/QtCore -I/usr/include/qt4/QtGui -I/usr/include/qt4 -I. -I. -o resParser_yacc.o resParser_yacc.cpp
    resParser.y: In function ‘int resParserparse()’:
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:188:23: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setPreferredGeometry( geometry );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:199:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setMaximize( true );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:209:23: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setStyleKey( styleKey );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:223:26: note: in expansion of macro ‘RESOURCES’
    if ( !RESOURCES->setAccelerator( XxAccel($3), $5 ) ) {
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:235:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setColor(
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:243:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setColor(
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:256:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setBoolOpt( XxBoolOpt( $1 - XxResParser::BOOLKWD_BASE ), $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:281:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setCommand( XxCommand($3), $5 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:288:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setCommandSwitch( XxCommandSwitch($3), $5 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:295:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setInitSwitch( XxCommandSwitch($3), $5 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:302:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setFontApp( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:307:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setFontText( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:314:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setTag( XxTag($3), $5 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:321:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setShowOpt( XxShowOpt($3), $5 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:328:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setTabWidth( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:335:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setOverviewFileWidth( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:342:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setOverviewSepWidth( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:349:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setVerticalLinePos( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:356:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setClipboardHeadFormat( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:363:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setClipboardLineFormat( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:370:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setHordiffType( XxHordiff($3) );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:377:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setHordiffMax( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:384:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setHordiffContext( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:391:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setShowPaneMergedViewPercent( $3 );
    ^
    resParser.y:41:48: error: ‘resources’ was not declared in this scope
    #define RESOURCES ( static_cast<XxResources*>(resources) )
    ^
    resParser.y:398:20: note: in expansion of macro ‘RESOURCES’
    RESOURCES->setMergedFilename( $3 );
    ^
    make: *** [resParser_yacc.o] Error 1
    ==> ERROR: A failure occurred in build().
    Aborting...
    ==> ERROR: Makepkg was unable to build xxdiff.
    ==> Restart building xxdiff ? [y/N]
    Anyone else encountering this? My last successful build was xxdiff-hg 404-1 in April, but it is now failing for me for both packages on three different machines (all x86_64). My systems are all up-to-date. I've not found any comments about this in the forum, wiki, AUR page, or after some fair amount of googling, so I'm at a loss (and not familiar enough with C++ to make any headway). Any thoughts? Thanks.
    Last edited by mike_r (2013-08-26 19:46:37)

    Your first step should be to try without yaourt (or any other AUR helper). Those may be convenient but they are not recommended for troubleshooting. See if you can reproduce it with makepkg.
    EDIT: Post on the AUR page. xxdiff doesn't compile here either.
    Last edited by cfr (2013-08-26 02:14:00)

  • Does the Xilinx compiler "learn" from past compilations?

    Hi all,
    I noticed that if I have a piece of FPGA code that is close to meeting timing, once I can get it to compile, it will keep compiling on subsequent builds as I modify parts of the code not in the path that was causing the violation.
    I also just upgraded to LabVIEW FPGA 2014 and the latest Xilinx tools (14.7 ISE) from LabVIEW FPGA 2012, and my compiles take way longer and are not meeting timing on the same code. Is this because I just flushed all the old data from previous compiles?

    Hey qfman,
    Every time you compile, the resource mapping starts from scratch, so I believe this is most likely a "chance" behaviour you are seeing. Between LabVIEW FPGA versions there are different compiling algorithms/overhead which might come into play for the timing or resource allocation during compile. As for LabVIEW FPGA 2014, are you using your own computer or the Cloud Compile to compile the code? I'd recommend trying the Cloud Compile (http://digital.ni.com/public.nsf/allkb/C272BBA0A6959DB6862578DB00808AC3) and seeing if you get the same behaviour.
    Hope this helps!
    Xavier
    Applications Engineering Specialist
    National Instruments

  • +d flag can cause C++ compilation failure in 12.4 beta refresh

    I was using the July refresh of Solaris Studio 12.4 beta to compile cppunit 1.13.2 and came across a file that fails to compile when the +d flag is used on the C++ compiler.
    The file is XmlOutputter.cpp.  I have put a slightly modified version of the pre-processed source code for this on pastebin here: http://pastebin.com/9gHkYXnX - save it to XmlOutputter.pre.cpp.  (Alternatively you can get the full source code for cppunit 1.13.2 here: http://dev-www.libreoffice.org/src/cppunit-1.13.2.tar.gz )
    The first weird thing is that when compiling the original pre-processed file like this:
    CC -mt -std=c++11 -m64  -O4 -c -o XmlOutputter.lo XmlOutputter.pre.cpp
    I got the error:
    "/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/include/CC/gnu/builtins.h", line 248: Error: Multiple declaration for __sun_va_list.
    1 Error(s) detected.
    which I didn't get when compiling the original un-pre-processed file.  I worked around that by commenting out lines 6-11 in the pre-processed file.  This is what is on pastebin, so if you want to look at this first problem then uncomment lines 6-11.  However, I'm not particularly worried about this as I don't generally have a need to compile pre-processed source code.
    The difference I wanted to report was that if you compile the pre-processed file (exactly as it is on pastebin with lines 6-11 commented out) using:
    CC -mt -std=c++11 -m64  -O4 -c -o XmlOutputter.lo XmlOutputter.pre.cpp
    then everything is fine.  However, if you compile it using:
    CC -mt -std=c++11 -m64  -O4 +d -c -o XmlOutputter.lo XmlOutputter.pre.cpp
    then you get this error:
    "/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/CC-gcc/include/c++/4.8.2/tuple", line 1088: Error: Cannot use unknown type to initialize CppUnit::TestFailure*.
    "/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/CC-gcc/include/c++/4.8.2/tuple", line 1075:     Where: While instantiating "std::pair<CppUnit::Test*const, CppUnit::TestFailure*>::pair<CppUnit::Test*const&, 0>(std::tuple<CppUnit::Test*const&>&, std::tuple<>&, std::_Index_tuple<0>, std::_Index_tuple<>)".
    "/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/CC-gcc/include/c++/4.8.2/tuple", line 1075:     Where: Instantiated from std::_Rb_tree<CppUnit::Test*, std::pair<CppUnit::Test*const, CppUnit::TestFailure*>, std::_Select1st<std::pair<CppUnit::Test*const, CppUnit::TestFailure*>>, std::less<CppUnit::Test*>, std::allocator<std::pair<CppUnit::Test*const, CppUnit::TestFailure*>>>::_M_emplace_hint_unique<const std::piecewise_construct_t&, std::tuple<CppUnit::Test*const&>, std::tuple<>>(std::_Rb_tree_const_iterator<std::pair<CppUnit::Test*const, CppUnit::TestFailure*>>, const std::piecewise_construct_t&, std::tuple<CppUnit::Test*const&>&&, std::tuple<>&&).
    "/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/CC-gcc/include/c++/4.8.2/bits/stl_map.h", line 467:     Where: Instantiated from non-template code.
    1 Error(s) detected.
    Also, the only reason I was using the +d flag at all was to work around the problem reported here: Re: >> Assertion:   (../lnk/foldconst.cc, line 230) (Studio 12.4 Beta, -std=c++11)  I assume that's now fixed, so there is no need for me to use +d, but it would be interesting to know why +d causes an error when compiling XmlOutputter.cpp.
    I'm working on Oracle Solaris 10 1/13 s10x_u11wos_24a X86

    > Error: Multiple declaration for __sun_va_list.
    Preprocessed files do not always behave the same as the original ones.
    But here you hit a bug which is present on the x86-64 platform (Solaris/Linux) only.
    You can just delete the definition of the __sun_va_list type at the start of the preprocessed file and it should compile fine.
    > I assume that's now fixed, so there is no need for me to use +d
    Yes, it is fixed in July Beta.
    > Error: Cannot use unknown type to initialize CppUnit::TestFailure*.
    Bug 19159587 filed (C++11: errors on a simple <map> usage).
    Thanks for reporting a problem.
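    For readers following along, the template trace above bottoms out in libstdc++'s piecewise-construction path for `std::map`, which the filed bug summarizes as "errors on a simple <map> usage". A minimal standalone sketch of that usage pattern (the `Test`/`TestFailure` structs here are hypothetical stand-ins for the CppUnit types, not the original code) that a conforming C++11 compiler accepts:

    ```cpp
    // Sketch of the <map> usage implicated in the trace: operator[] on a
    // std::map with pointer key/value goes through _M_emplace_hint_unique
    // with piecewise construction in libstdc++ headers.
    // Test and TestFailure are placeholder types, not CppUnit's.
    #include <cassert>
    #include <map>

    struct Test {};
    struct TestFailure {};

    int main() {
        Test t;
        TestFailure f;
        std::map<Test*, TestFailure*> failures;
        // operator[] default-constructs the mapped pointer, then assigns;
        // this is the instantiation the 12.4 beta front end rejected with +d.
        failures[&t] = &f;
        assert(failures.size() == 1);
        assert(failures[&t] == &f);
        return 0;
    }
    ```

    If this translation unit alone fails under `+d`, it should make a much smaller test case to attach to the bug than the full preprocessed XmlOutputter.cpp.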

  • Nvidia 173 and kernel 2.6.28 compile failure & nvidia 180 crashes

    **Please tell me if this is the wrong place to post this, or move it, so I know.**
    Hey, I've been having trouble with my Nvidia driver for a long time now; I hope you can help me solve it. So here's the deal:
    My computer is: Nvidia GeForce 9600 GT, Intel E8400 Core 2 Duo; the rest isn't important, I think.
    I used Ubuntu on this computer with the 173 and 177 (nvidia) drivers and it worked just fine. One day Ubuntu upgraded it to the 180 series, and from then on X never started. So I used my Windows XP for a while, where the 180 drivers weren't very functional either and crashed quite a lot. Then I downloaded openSUSE 11, where I hoped my problems would be solved, but instead all I got is:
    [images]
    http://img18.imageshack.us/img18/9379/dsaf.png
    http://img516.imageshack.us/img516/9462/dsaa.png
    [images]
    **This problem occurs randomly after some amount of time.**
    **When this happens on GNOME in Arch, if I'm really lucky I can hit "Alt+F2" and execute "metacity --replace" in time, which gives me approximately 40 seconds before it happens again to save my work and restart X. If I'm not lucky, X freezes, the computer freezes, and I must hard reset.**
    I moved away from openSUSE because of its very slow boot time and because of the graphics drivers. At this point I got myself an Arch x86_64 CD, followed the wiki, configured the network and xorg, and also installed the "nvidia" package via pacman. It worked for some time (one day?) and then the very same problem as on openSUSE occurred. So I tried every driver after 180.22, EVERY one of them up to the latest 185 beta. ALL of them give the same result (except 185, where the GPU won't even produce red signals).
    For the record: my openSUSE was x86_64, my Ubuntu was x86, and with Arch I used x86_64 first and now I'm using x86. The same problem occurs on both x86 and x86_64.
    I wrote all this to give you a better idea of my problem, so maybe someone will have a solution, and maybe someone else will find it useful in the future.
    All I want to do is downgrade back to the 173 drivers, which I know worked with Ubuntu without any problems at all, but I can't; here's the actual problem:
    When trying to install from the package itself (sh NVIDIA-Linux-x86-173.14.12-pkg0.run) I get this:
    ERROR: If you are using a Linux 2.4 kernel, please make sure
    you either have configured kernel sources matching your
    kernel or the correct set of kernel headers installed
    on your system.
    If you are using a Linux 2.6 kernel, please make sure
    you have configured kernel sources matching your kernel
    installed on your system. If you specified a separate
    output directory using either the "KBUILD_OUTPUT" or
    the "O" KBUILD parameter, make sure to specify this
    directory with the SYSOUT environment variable or with
    the equivalent nvidia-installer command line option.
    Depending on where and how the kernel sources (or the
    kernel headers) were installed, you may need to specify
    their location with the SYSSRC environment variable or
    the equivalent nvidia-installer command line option.
    So I went and tried the package build instead; I fetched the PKGBUILD and nvidia.install files from here:
    http://repos.archlinux.org/viewvc.cgi/n … iew=markup
    http://repos.archlinux.org/viewvc.cgi/n … iew=markup
    Then I changed the kernel requirement (my kernel: 2.6.28-ARCH), so both of my files look like this:
    PKGBUILD:
    pkgname=nvidia
    pkgver=173.14.12
    _kernver='2.6.28-ARCH'
    pkgrel=1
    pkgdesc="NVIDIA drivers for kernel26."
    arch=('i686' 'x86_64')
    [ "$CARCH" = "i686" ] && ARCH=x86
    [ "$CARCH" = "x86_64" ] && ARCH=x86_64
    url="http://www.nvidia.com/"
    depends=('kernel26>=2.6.27' 'kernel26<2.6.29' 'nvidia-utils')
    conflicts=('nvidia-96xx' 'nvidia-71xx' 'nvidia-legacy')
    license=('custom')
    install=nvidia.install
    source=(http://us.download.nvidia.com/XFree86/Linux-$ARCH/${pkgver}/NVIDIA-Linux-$ARCH-${pkgver}-pkg0.run)
    md5sums=('76b8eba1b14fc273a1a4044705b0aa56')
    [ "$CARCH" = "x86_64" ] && md5sums=('8675e4ca65033b343c8c77b2ce82e71d')
    build() {
      # Extract
      cd $startdir/src/
      sh NVIDIA-Linux-$ARCH-${pkgver}-pkg0.run --extract-only
      cd NVIDIA-Linux-$ARCH-${pkgver}-pkg0
      # Any extra patches are applied in here...
      cd usr/src/nv/
      ln -s Makefile.kbuild Makefile
      make SYSSRC=/lib/modules/${_kernver}/build module || return 1
      # install kernel module
      mkdir -p $startdir/pkg/lib/modules/${_kernver}/kernel/drivers/video/
      install -m644 nvidia.ko $startdir/pkg/lib/modules/${_kernver}/kernel/drivers/video/
      sed -i -e "s/KERNEL_VERSION='.*'/KERNEL_VERSION='${_kernver}'/" $startdir/*.install
    }
    nvidia.install:
    # arg 1: the new package version
    post_install() {
      KERNEL_VERSION='2.6.28-ARCH'
      depmod -v $KERNEL_VERSION > /dev/null 2>&1
    }
    # arg 1: the new package version
    # arg 2: the old package version
    post_upgrade() {
      post_install $1
      rmmod nvidia || echo 'In order to use the new nvidia module, exit Xserver and unload it manually.'
    }
    # arg 1: the old package version
    post_remove() {
      KERNEL_VERSION='2.6.28-ARCH'
      depmod -v $KERNEL_VERSION > /dev/null 2>&1
    }
    op=$1
    shift
    $op $*
    Installed deps: pacman -S nvidia-utils
    And now I built the package, hoping it will work:
    bash-3.2# makepkg -c --asroot
    ==> Making package: nvidia 173.14.12-1 i686 (Sat Mar 28 11:08:28 IDT 2009)
    ==> WARNING: Running makepkg as root...
    ==> Checking Runtime Dependencies...
    ==> Checking Buildtime Dependencies...
    ==> Retrieving Sources...
    -> Downloading NVIDIA-Linux-x86-173.14.12-pkg0.run...
    --2009-03-28 11:08:28-- http://us.download.nvidia.com/XFree86/Linux-x86/173.14.12/NVIDIA-Linux-x86-173.14.12-pkg0.run
    Resolving us.download.nvidia.com... 212.199.205.201, 212.199.205.216
    Connecting to us.download.nvidia.com|212.199.205.201|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 11587505 (11M) [application/octet-stream]
    Saving to: `NVIDIA-Linux-x86-173.14.12-pkg0.run.part'
    100%[======================================>] 11,587,505 304K/s in 39s
    2009-03-28 11:09:08 (287 KB/s) - `NVIDIA-Linux-x86-173.14.12-pkg0.run.part' saved [11587505/11587505]
    ==> Validating source files with md5sums...
    NVIDIA-Linux-x86-173.14.12-pkg0.run ... Passed
    ==> Extracting Sources...
    ==> Starting build()...
    Creating directory NVIDIA-Linux-x86-173.14.12-pkg0
    Verifying archive integrity... OK
    Uncompressing NVIDIA Accelerated Graphics Driver for Linux-x86 173.14.12....................................................................................................................................
    If you are using a Linux 2.4 kernel, please make sure
    you either have configured kernel sources matching your
    kernel or the correct set of kernel headers installed
    on your system.
    If you are using a Linux 2.6 kernel, please make sure
    you have configured kernel sources matching your kernel
    installed on your system. If you specified a separate
    output directory using either the "KBUILD_OUTPUT" or
    the "O" KBUILD parameter, make sure to specify this
    directory with the SYSOUT environment variable or with
    the equivalent nvidia-installer command line option.
    Depending on where and how the kernel sources (or the
    kernel headers) were installed, you may need to specify
    their location with the SYSSRC environment variable or
    the equivalent nvidia-installer command line option.
    *** Unable to determine the target kernel version. ***
    make: *** [select_makefile] Error 1
    ==> ERROR: Build Failed.
    Aborting...
    bash-3.2#
    After a little searching on Google I understood that my kernel sources are missing (using -e (expert) on the nvidia installer lets you set the location, but no success with the normal /lib/modules/2.6.28-ARCH/build).
    So how can I solve this and compile & install the 173 drivers on my i686 Arch box with kernel 2.6.28?
    Thanks in advance for reading.

    After posting this message to the IRC channel, I was told that I can install the nvidia-173xx drivers. It seems to work, but I can't be sure the problem won't occur again, so I'll just have to watch out. If you have any other fix for this, please post.

  • SQLJ compile failure in derived class

    I have successfully compiled and run the "SimpleExample" defined in the Help Topics "Developing Applications Using SQLJ" page.
    However, if I make a simple modification to make the class derived from another class (the DoNothing class shown below is the simplest case I've tried), I get compilation errors:
    Error (52) Illegal INTO ... bind variables list: illegal
    expression..
    Error (0) SQLJ translation aborted.
    Error (0) sqlj.framework.TranslationException. Error occured in
    SQLJ translation.
    Modified SimpleExample looks like:
    public class SimpleExample extends DoNothing {
    ......as before
    where DoNothing is defined as:
    package RDBInterface; // My SimpleExample is in same package
    public class DoNothing {
    public DoNothing() {
    Any ideas about this?

    Andy,
    I got the answer to that in another thread,
    cheers Jon
    Re: SQLJ-Problem with JDeveloper 2.0
    From: Chris Stead (guest)
    Email: [email protected]
    Date: Tue Feb 02 13:07 CST 1999
    Markus Rosenkranz (guest) wrote:
    : Hi,
    : I tried to rebuild an SQLJ-file with the new JDev. 2.0. Whenever
    : there is an iterator definition in a derived class, compilation
    : failed. By removing the extends clause in the class definition
    : the compilation error could be avoided. It seems that the
    : iterator definition is ignored. With JDev. 1.1 everything worked
    : fine. How can this problem be solved?
    : TIA Markus
    Hi Markus,
    Your question seems similar to the one that was just resolved.
    Here are the specifics:
    I'm using the production SQLJ and getting a frustrating error of:
    -- "Left hand side of assignment does not have a Java type."
    I've reduced my testcase down to the absolute minimum, but maybe I'm missing something obvious...
    package oracle.xml.website;
    import java.sql.SQLException;
    import javax.servlet.http.*;
    #sql iterator empiter ( String empname );
    public class WebXSL extends HttpServlet {
        public void foo() throws SQLException {
            empiter myEmps = null;
            #sql myEmps = { SELECT ename empname FROM EMP ORDER BY sal DESC };
        }
    }
    Hi,
    Could you please check whether the class HttpServlet is available in your CLASSPATH? The type resolver could be failing to find this class in the process of looking for the definition of 'empiter', which is the type of your iterator variable myEmps. The error message is somewhat obscure; we will be working on improving it.
    The SQLJ translator does a full type resolution of Java variables and expressions used in #sql statements, following JLS rules of scoping and precedence for class and interface hierarchies. It looks for classes in the CLASSPATH, as well as in the .sqlj and .java source files specified on the sqlj command line. So, if you have .sqlj and .java files that are mutually dependent, you could do:
    sqlj Foo.sqlj Bar.java
    Please let us know if your problem persists, and see also bug 801780 for a related discussion.
    - Julie
    Julie,
    Your suggestion helped! Thanks.
    With 20/20 hindsight now, it would have been much more helpful if the SQLJ translator reported error messages like:
    -> Left hand side of assignment is not a Java type.
    -> Unable to resolve class "HttpServlet". Check CLASSPATH
    That would have keyed me into the problem many hours ago :-)
    Your suggestion led me to test sqlj-ing my testcase both outside and inside the JDeveloper environment.
    Outside the environment, if I make sure J:\lib\jsdk.jar is in my classpath, then all is well.
    Inside the environment, I had included the named library for "JavaWebServer" in my project libraries and its classpath info was properly set to J:\lib\jsdk.jar, but it appears that somehow JDev is not properly passing this project-level classpath info to the SQLJ translator.
    I was able to solve my problem (a hack!) by adding
    J:\lib\jsdk.jar
    to the:
    IDEClasspath=
    setting in the J:\bin\jdeveloper.ini file, which I shouldn't have to do. I filed Bug 813116 for the JDev team.
