+d flag can cause C++ compilation failure in 12.4 beta refresh

I was using the July refresh of Solaris Studio 12.4 beta to compile cppunit 1.13.2 and came across a file that fails to compile when the +d flag is used on the C++ compiler.
The file is XmlOutputter.cpp.  I have put a slightly modified version of the pre-processed source code for this on pastebin here: http://pastebin.com/9gHkYXnX - save it to XmlOutputter.pre.cpp.  (Alternatively you can get the full source code for cppunit 1.13.2 here: http://dev-www.libreoffice.org/src/cppunit-1.13.2.tar.gz )
The first weird thing is that when compiling the original pre-processed file like this:
CC -mt -std=c++11 -m64  -O4 -c -o XmlOutputter.lo XmlOutputter.pre.cpp
I got the error:
"/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/include/CC/gnu/builtins.h", line 248: Error: Multiple declaration for __sun_va_list.
1 Error(s) detected.
which I didn't get when compiling the original un-pre-processed file.  I worked around that by commenting out lines 6-11 in the pre-processed file.  This is what is on pastebin, so if you want to look at this first problem then uncomment lines 6-11.  However, I'm not particularly worried about this as I don't generally have a need to compile pre-processed source code.
The difference I wanted to report was that if you compile the pre-processed file (exactly as it is on pastebin with lines 6-11 commented out) using:
CC -mt -std=c++11 -m64  -O4 -c -o XmlOutputter.lo XmlOutputter.pre.cpp
then everything is fine.  However, if you compile it using:
CC -mt -std=c++11 -m64  -O4 +d -c -o XmlOutputter.lo XmlOutputter.pre.cpp
then you get this error:
"/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/CC-gcc/include/c++/4.8.2/tuple", line 1088: Error: Cannot use unknown type to initialize CppUnit::TestFailure*.
"/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/CC-gcc/include/c++/4.8.2/tuple", line 1075:     Where: While instantiating "std::pair<CppUnit::Test*const, CppUnit::TestFailure*>::pair<CppUnit::Test*const&, 0>(std::tuple<CppUnit::Test*const&>&, std::tuple<>&, std::_Index_tuple<0>, std::_Index_tuple<>)".
"/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/CC-gcc/include/c++/4.8.2/tuple", line 1075:     Where: Instantiated from std::_Rb_tree<CppUnit::Test*, std::pair<CppUnit::Test*const, CppUnit::TestFailure*>, std::_Select1st<std::pair<CppUnit::Test*const, CppUnit::TestFailure*>>, std::less<CppUnit::Test*>, std::allocator<std::pair<CppUnit::Test*const, CppUnit::TestFailure*>>>::_M_emplace_hint_unique<const std::piecewise_construct_t&, std::tuple<CppUnit::Test*const&>, std::tuple<>>(std::_Rb_tree_const_iterator<std::pair<CppUnit::Test*const, CppUnit::TestFailure*>>, const std::piecewise_construct_t&, std::tuple<CppUnit::Test*const&>&&, std::tuple<>&&).
"/opt/SolarisStudio12.4-beta_jul14-solaris-x86/lib/compilers/CC-gcc/include/c++/4.8.2/bits/stl_map.h", line 467:     Where: Instantiated from non-template code.
1 Error(s) detected.
Also, the only reason I was using the +d flag at all was to work around the problem reported in the thread "Re: >> Assertion: (../lnk/foldconst.cc, line 230) (Studio 12.4 Beta, -std=c++11)".  I assume that's now fixed, so there is no need for me to use +d, but it would be interesting to know why +d causes an error when compiling XmlOutputter.cpp.
I'm working on Oracle Solaris 10 1/13 s10x_u11wos_24a X86

> Error: Multiple declaration for __sun_va_list.
Preprocessed files do not always behave the same as original ones.
But here you have hit a bug that is present only on the x86-64 platform (Solaris/Linux).
You can just delete the definition of the __sun_va_list type at the start of the preprocessed file and it should compile fine.
> I assume that's now fixed, so there is no need for me to use +d
Yes, it is fixed in the July beta refresh.
> Error: Cannot use unknown type to initialize CppUnit::TestFailure*.
Bug 19159587 filed (C++11: errors on a simple <map> usage).
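For reference, the instantiation trace corresponds to std::map's operator[] in the bundled GCC 4.8.2 headers, so a reduced test case along these lines (a sketch with stand-in types, not the actual cppunit source) should exercise the same instantiation when built with -std=c++11:

#include <map>

// Stand-in types; the real code uses CppUnit::Test / CppUnit::TestFailure.
struct Test {};
struct TestFailure {};

int main()
{
    std::map<Test*, TestFailure*> failures;
    Test test;
    TestFailure failure;
    Test* key = &test;
    // operator[] with an lvalue key goes through _M_emplace_hint_unique and
    // the piecewise pair constructor shown in the error trace above.
    failures[key] = &failure;
    return 0;
}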
Thanks for reporting the problem.

Similar Messages

  • Is it true that bundle by name can cause FPGA compiles to fail?

    Dear forum,
    I have attached a full log of my experience with this apparent bug, but I will summarize it here:
    I attempted to compile a LabVIEW Robotics project for a cRIO-9074. The code was all on the FPGA, and I'd tested it all before attempting to compile--much of it was skeleton code proving that the self-test system would turn on blinky lights on the front panel. My first attempted compile received a strange failure message in stage 3 of LabVIEW's pre-Xilinx compilation interface, and the message was strange mostly because it was so intensely vague. It read "An internal software error has occurred. Please contact National Instruments technical support at ni.com/support with the following information: Error 1000 occurred at an unidentified location. Possible reason(s): LabVIEW:  The VI is not in a state compatible with this operation."
    It being Tuesday the 6th of July 2010, I was unable to reach either the support staff, who were on holiday, or the web service, which was down for maintenance, so I began disabling pieces of my code using my debugging conditional disable statements. Eventually I found one skeletal code module, one to interface with analog flood sensors, which did not fail to compile. I compared this with another skeletal code module which did fail to compile, and after several tests noticed that adding a bundle by name from a strict type definition (but not an un-bundle by name or a standard anonymous bundle) would cause the VI to fail the compile. With several more tests I confirmed that while the VI would fail to compile with the bundle by name, with a standard bundle it would compile successfully all the way to the Xilinx stage (by which point I didn't care; the bug was definitively pre-Xilinx).
    So tonight I will be going into my code and doing a lot of replacing bundle by name blocks with anonymous bundles, but I am still haunted by the extreme vagueness of the error, and by the fact that bundle by name is usually a solid part of the FPGA development platform. It was clearly intended to work. Why didn't it?
    Gray Cortright Thomas
    Franklin W. Olin College of Engineering
    Engineering: Robotics
    Class of 2012
    Needham, MA
    Attachments:
    gtlog2010-07-06-18-24-00.txt ‏7 KB

    So I did as Donovan suggested, and this worked for the code example which I posted. But when I started trying to put more of the bundle by names back in, this solution stopped working. I then went on to solve a seemingly unrelated problem with the version that had compiled using only anonymous bundles: it had been unable to run some while loops that were in subVIs. When I took the code from these subVIs and put the loops on the back panel of the main VI, to my utter horror, they began to function again in the compile (running on the hardware). I was afraid that it was subVIs themselves that were causing this failure to run, but was relieved when my next test proved this not to be the case: I took that loop on the front panel where it worked and used edit>create subvi to put it in a subVI. It still worked. Thus I was left to search out the difference between the subVI that successfully compiled, and the subVI which compiled but didn't run--and this was that the failed subVI was part of a library. I placed the new, working, subVI in the same library and the code once again displayed the same symptoms as when it had been tested the first time: the loop would not run even once and the flow of control would never finish with the loop--it just delayed forever. This might be a problem with the way I had used the libraries, but I think everything within them was public. My guess now is that libraries don't behave the same way on the FPGA as they do on the FPGA simulator, and this is why the VIs seemed to work when I simulated them. So... I took all the VIs, typedefs, and globals out of the libraries and into virtual folders of the same name and... Everything works! Even bundle by name!
    The implementation of project libraries in the FPGA compiler may be suboptimal in some respects, but I probed only as deep as necessary to run my code.
    And this code is in 7z format because my zip file did not meet the maximum file size requirements of the forum.
    It is also very possible that I misunderstood the functioning of the libraries, and the FPGA simulator was blowing sunshine up my nose when it showed them working properly in simulation. Either way, it is probably worth looking into. This weekend I might try to find the simplest project file that shows these symptoms just to prove I'm not crazy.
    Gray Cortright Thomas
    Franklin W. Olin College of Engineering
    Engineering: Robotics
    Class of 2012
    Needham, MA

  • Error message:  "document may contain binary EPS file which can cause print failure"

    Can anyone help me with this error message?  I have an .indd file that contains about 20 logos and various images.  I was under the impression that using eps / ai / psd file formats for printing was best in InDesign.  (It is a 25 page document).  When I try to print the document I get the error message that says:
    "This document may contain binary EPS files which can cause the print job to fail.  If the printer produces output then the binary data did not interfere with printing. Do you want to print this document?"
    Well - my document did not print for me, so how can I fix this?
    Thank you,
    Cmol

    There are many ways to fix this.
    Re-saving EPS files as AI is fine.
    Re-saving EPS files as PSD is a bad plan -- EPS files are vector graphics, and PSD typically aren't. Also, AI files are PDF files that are generally readable by many applications. PSD files, not so much (though InDesign certainly can read them).
    (You could also re-save as an EPS file without 8-bit/binary data.)
    What does the error message mean? Well, EPS files are a type of PostScript file, and PostScript has many ways to store image data. It can be stored as 8-bit binary data, which is the most space-efficient. It can also be stored as ASCII -- just letters and numbers and so forth, e.g. base64 -- which takes up more space (roughly a third more in the case of base64).
    Depending on exactly how your computer is connected to your printer, those 8-bit characters can cause problems and may not make it all the way to the printer. If that happens, then you get this problem. The warning message is there to let you know that if you have a non-8-bit-clean printing path, then you may have a problem.
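    Just to put rough numbers on the size difference (my own back-of-the-envelope arithmetic; which ASCII encoding InDesign actually uses here may differ):

    // Toy calculation of how much larger image data gets when binary bytes
    // are re-encoded as printable ASCII.
    #include <cstdio>

    int main()
    {
        const double binaryBytes  = 3000000.0;                 // e.g. a 3 MB image
        const double base64Bytes  = binaryBytes * 4.0 / 3.0;   // 4 ASCII chars per 3 bytes
        const double ascii85Bytes = binaryBytes * 5.0 / 4.0;   // 5 ASCII chars per 4 bytes
        const double hexBytes     = binaryBytes * 2.0;         // 2 hex chars per byte
        std::printf("binary : %.0f bytes\n", binaryBytes);
        std::printf("base64 : %.0f bytes (about 133%% of the binary size)\n", base64Bytes);
        std::printf("ASCII85: %.0f bytes (about 125%% of the binary size)\n", ascii85Bytes);
        std::printf("hex    : %.0f bytes (200%% of the binary size)\n", hexBytes);
        return 0;
    }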

  • FPGA - compilation failure

    Hello all, I'm studying LabVIEW FPGA. When I tried to run my first FPGA program, the compilation always failed. Please see the attached picture for details.
    PCI-7830R is installed and works properly in my PC.
    The software I installed:
    LabVIEW 2010
    FPGA module 2010
    Xilinx 10.1 (I did not install version 11.5 since I got information from http://digital.ni.com/public.nsf/allkb/ed6fc9cf7b983cfd86256dce0072e313?OpenDocument)
    The above software are all evaluation versions.
    Does anyone know the cause of the failure? Thanks in advance for your comments.
    Attachments:
    compilation error.JPG ‏108 KB

    Hello Christian, thank you so much for the comments. Yes, you are right. This time I followed your suggestion to simulate the VI as below, and it works. However, it still cannot pass compilation. It shows the same failure as I mentioned above.
    Thanks again for your help,
    Cole

  • All of a sudden I can no longer compile??

    I'm seeing the strangest thing. When I try to compile with javac, I now get tons of debug output that looks like this:
    count = 0, total = 110
    count = 0, total = 37
    count = 0, total = 149
    count = 0, total = 73
    count = 0, total = 8
    count = 0, total = 213
    count = 0, total = 43
    Additionally, javac no longer seems able to resolve symbols when I specify a jar file in the classpath. (It seems to work OK if I extract the classes from the jar file.) This started appearing out of the blue (it was compiling fine yesterday).
    So basically, I can no longer compile. Anyone have any idea what's going on? I'm using j2sdk1.4.0 on Windows 2000.
    It's probably coincidence, but I installed Forte yesterday. I doubt it has any bearing on what I'm seeing, but it's the only thing I can think of that changed in my build/system environment.

    In my experience, these "count = 0, total = <some-number>" messages are always due to corrupt jar files. Finding out exactly which jar files are corrupt is not that difficult. Take a look at your classpath, and try removing jars one by one until you no longer get the error messages. Then try adding them back and see which jar it is that causes the problem. Remember though that there may be more than one jar that is corrupted on your system.
    Next, open up a copy of WinZip and try dragging the jar file into it. If the jar file is indeed corrupt, WinZip will bomb out with some nasty error message. That's how you know for sure.
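    If you don't have WinZip handy: a jar is just a ZIP archive, so the file should end with a ZIP end-of-central-directory record. Here is a rough standalone checker that looks for that marker (my own sketch, not anything the JDK ships with); a truncated or otherwise damaged jar will usually fail this test:

    #include <algorithm>
    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    // Returns true if a ZIP end-of-central-directory signature (PK\5\6)
    // is found near the end of the file, as required for a valid ZIP/jar.
    bool looksLikeValidZip(const std::string& path)
    {
        std::ifstream in(path.c_str(), std::ios::binary | std::ios::ate);
        if (!in)
            return false;
        const std::streamoff size = in.tellg();
        if (size < 22)                       // smaller than the EOCD record itself
            return false;
        // The EOCD record is 22 bytes plus an optional comment of up to 64 KB,
        // so it has to sit somewhere in the last 22 + 65535 bytes of the file.
        const std::streamoff window = std::min<std::streamoff>(size, 22 + 65535);
        std::vector<char> tail(static_cast<std::size_t>(window));
        in.seekg(size - window);
        in.read(&tail[0], window);
        const char sig[4] = { 0x50, 0x4B, 0x05, 0x06 };
        for (std::streamoff i = window - 22; i >= 0; --i)
            if (std::equal(sig, sig + 4, tail.begin() + static_cast<std::ptrdiff_t>(i)))
                return true;
        return false;
    }

    int main(int argc, char** argv)
    {
        for (int i = 1; i < argc; ++i)
            std::cout << argv[i] << ": "
                      << (looksLikeValidZip(argv[i]) ? "looks OK" : "no ZIP end record (corrupt or truncated?)")
                      << std::endl;
        return 0;
    }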
    One final trick - try compiling some simple "Hello World" java program that contains no import statements (and doesn't use any java packages besides the obvious java.lang).
    Try building that program with your original classpath (containing the corrupt jar files), and count how many "count = 0, total = ..." messages you are getting. That tells you exactly how many jar files are corrupt in your classpath.
    Hope this helps.
    Amr Shalaby.

  • An account failed to log on unknown username or password. Causing Login audit failures

    I have an SBS11 Essentials server that is getting audit failures over and over again. The computer account says it's the SBS11 server itself. It says unknown user name or bad password. I have checked for scheduled tasks, backup jobs, and services, and none of them are using any special user accounts. I have used MS Network Monitor and can't find anything helpful to lead to the issue. All computers in the network are running Windows 7. The domain functional level is 2008 R2.
    I get the 4768 event ID about a Kerberos event and then just after I get an Event ID 4625 account failure with Logon Type 3. I have included the events below. I need to figure out what is causing the audit failures as my GFI Test Hacker alert is catching it every morning. Disabling the Test Hacker alert is not an option. I have used Process Explorer also but can't seem to pin it down. I also enabled Kerberos logging.
    http://support.microsoft.com/kb/262177?wa=wsignin1.0.  All event codes state it's an unknown or non-existent account, but how do I stop it from happening?
    This is from the System Event log
    A Kerberos Error Message was received:
    on logon session TH.LOCAL\thsbs11e$
    Client Time:
    Server Time: 14:59:53.0000 3/4/2014 Z
    Error Code: 0x6 KDC_ERR_C_PRINCIPAL_UNKNOWN
    Extended Error:
    Client Realm:
    Client Name:
    Server Realm: TH.LOCAL
    Server Name: krbtgt/TH.LOCAL
    Target Name: krbtgt/[email protected]
    Error Text:
    File: e
    Line: 9fe
    Error Data is in record data.
    This is from the Security Event log
    A Kerberos authentication ticket (TGT) was requested.
    Account Information:
    Account Name: S-1-5-21-687067891-4024245798-968362083-1000
    Supplied Realm Name: TH.LOCAL
    User ID: NULL SID
    Service Information:
    Service Name: krbtgt/TH.LOCAL
    Service ID: NULL SID
    Network Information:
    Client Address: ::1
    Client Port: 0
    Additional Information:
    Ticket Options: 0x40810010
    Result Code: 0x6
    Ticket Encryption Type: 0xffffffff
    Pre-Authentication Type: -
    Certificate Information:
    Certificate Issuer Name:
    Certificate Serial Number:
    Certificate Thumbprint:
    Certificate information is only provided if a certificate was used for pre-authentication.
    Pre-authentication types, ticket options, encryption types and result codes are defined in RFC 4120.
    I then get the following error in the next event:
    An account failed to log on.
    Subject:
    Security ID: SYSTEM
    Account Name: THSBS11E$
    Account Domain: TH
    Logon ID: 0x3e7
    Logon Type: 3
    Account For Which Logon Failed:
    Security ID: NULL SID
    Account Name:
    Account Domain:
    Failure Information:
    Failure Reason: Unknown user name or bad password.
    Status: 0xc000006d
    Sub Status: 0xc0000064
    Process Information:
    Caller Process ID: 0x25c
    Caller Process Name: C:\Windows\System32\lsass.exe
    Network Information:
    Workstation Name: THSBS11E
    Source Network Address: -
    Source Port: -
    Detailed Authentication Information:
    Logon Process: Schannel
    Authentication Package: Kerberos
    Transited Services: -
    Package Name (NTLM only): -
    Key Length: 0
    This event is generated when a logon request fails. It is generated on the computer where access was attempted.
    The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.
    The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network).
    The Process Information fields indicate which account and process on the system requested the logon.
    The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.
    The authentication information fields provide detailed information about this specific logon request.
    - Transited services indicate which intermediate services have participated in this logon request.
    - Package name indicates which sub-protocol was used among the NTLM protocols.
    - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.

    Well I opened the case for him and he never followed up with Microsoft :-(
    It's a Kerberos issue; we're told to ignore it.  Would you be willing to be patient and stubborn and work with CSS to at least understand what's going on better?  I can tell you it's normal with Essentials, but not the exact technical reason it's happening.
    Unfortunately TechNet isn't coming back, sorry folks :-(

  • Grrr... Xilinx compile failures

    I appreciate the effort NI has put into making FPGA programming available to the unwashed masses.  When things work correctly it's slicker than a greased seal in an oil factory.  When things go bad it's extremely frustrating with few clues about how to fix the problem.
    I've been struggling with intermittent compile failures on a cRIO-9074.  The latest problem appeared when I added code to make the FPGA LED flash while the FPGA is running.  That generated a compile error, so I removed that code.  Now it still won't compile, even though it's the exact same code as before.
    I've attached the Xilinx log file.  There are several different types of errors, each of which is repeated multiple times.  Links are to the Xilinx KB articles:
    ERROR:coreutil - ios failure
    ERROR:sim:928 - Could not open destination 'PkgBeatleTypedefs.vhd' for writing.
    ERROR:ConstraintSystem:58 - Constraint <INST "*oInputFalling*" TNM =
       "CcInputFallingRegs";> [toplevel_gen.ucf(141)]: INST "*oInputFalling*" does
       not match any design objects.
    ERROR:ConstraintSystem:59 - Constraint <TIMESPEC "TS_AsynchMite30"= FROM
       PADS(mIoHWord_n) TO PADS(mIoDmaReq<*>) 0 ns;> [toplevel_gen.ucf(703)]: PADS
       "mIoHWord_n" not found.  Please verify that:
       1. The specified design element actually exists in the original design.
       2. The specified object is spelled correctly in the constraint source file.
    According to Xilinx, the first error should be ignored--the design will load and run fine.  Is that possible when compiling within Labview?  Is there a way to run the compiler tools directly, and would that even help?  The second error requires modifying the UCF file, and the third requires various tools and options not available (afaik) to LV developers.
    I've been fighting the FPGA compiler for about a month.  Its unpredictability is deadly for small businesses trying to deliver something to a customer.  I'm about ready to throw the whole thing in the trash and go in another direction, simply because I can more accurately estimate how long it will take me to implement on a different platform.
    [Edit]
    I just tried recompiling the fpga vi again.  This time I receive a new error:
    LabVIEW FPGA:  An internal software error in the LabVIEW FPGA Module has occurred.  Please contact National Instruments technical support at ni.com/support.
    Click the 'Details' button for additional information.
    Compilation Time
    Date submitted: 11/28/2012 9:28 AM
    Last update: 11/28/2012 9:30 AM
    Time waiting in queue: 00:05
    Time compiling: 01:55
    - PlanAhead: 01:50
    - Core Generator: 00:00
    - Synthesis - Xst: 00:01
    - Translate: 00:01
    Attachments:
    XilinxLog.txt ‏1302 KB

    I'm using a 9237 in slot 3 and setting the sample rate to 1.613 kS/sec.  Slots 1 and 2 have a 9411 and 9422 that I will read using the scan engine.  (Some of my RT test code uses the 9422, some doesn't.  It doesn't seem to be related to this problem.)
    Interestingly, I added a small bit of code again to try and get the LED to flash while the FPGA is running.
    ...and I got all sorts of new compile errors, such as...
    ERROR:HDLCompiler:806 -
       "C:/NIFPGA/jobs/Rtxj7d7_KSw9nkc/NiFpgaAG_00000000_WhileLoop.vhd" Line 208:
       Syntax error near "downto".
    ERROR:HDLCompiler:806 -
       "C:/NIFPGA/jobs/Rtxj7d7_KSw9nkc/NiFpgaAG_00000000_WhileLoop.vhd" Line 224:
       Syntax error near "3".
    ERROR:HDLCompiler:806 -
       "C:/NIFPGA/jobs/Rtxj7d7_KSw9nkc/NiFpgaAG_00000000_WhileLoop.vhd" Line 240:
       Syntax error near "}".
    ERROR:HDLCompiler:806 -
       "C:/NIFPGA/jobs/Rtxj7d7_KSw9nkc/NiFpgaAG_00000000_WhileLoop.vhd" Line 244:
       Syntax error near "`".
    At the very end of the log it says, "Sorry, too many errors.."  I guess it just gave up.  I know the feeling.
    I tried deleting that code and recompiling the vi, and it still won't compile.  I assume if I create another new vi via copy and paste it will work again, but something weird is going on.
    Attachments:
    XilinxLog - FPGA Main 2 (DMA) - Added flashing LED.txt ‏141 KB

  • Intercompany Deliveries were created with zero quantity, causing an idoc failure and the remaining delivery too

    Intercompany deliveries were created with zero quantity, causing an IDoc failure and blocking the remaining deliveries too.
    a. A user tries to create a delivery, but stock is not available.
    b. When stock is not available it should show an error message on the SAP screen (e.g. "stock not available"), but it does not.
    c. FedEx gets a delivery notice for the delivery; if the quantity is more than zero it will create the delivery, otherwise it won't accept it.
    d. But the issue is that the delivery is being created even when the quantity is zero or less.
    Is there anyone who can help me out?

    When one is created with zero quantity it starts blocking our deliveries from getting through to FedEx, because the deliveries will no doubt have been blocked at FedEx. We process our orders in the morning, and if there is an STO blocking orders, they do not go out until the afternoon.
    Any suggestions !!!

  • Error messages in 2651XM GW, cause outbound call failure, reboot fix it

    Cisco 2651XM as gateway: it keeps posting these error messages, and after a period of time it causes outbound call failures.
    A reboot fixes it, but there are still error messages...
    How do I fix it? Is it an IOS bug or a hardware issue? How can I tell?
    Cisco IOS Software, C2600 Software (C2600-IPVOICE-M), Version 12.3(8)T10, RELEASE SOFTWARE (fc2)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2005 by Cisco Systems, Inc.
    Compiled Wed 03-Aug-05 20:45 by hqluong
    ROM: System Bootstrap, Version 12.2(7r) [cmong 7r], RELEASE SOFTWARE (fc1)
    cpchn1-g1 uptime is 6 hours, 56 minutes
    System returned to ROM by reload at 03:52:44 NZST Tue Apr 17 2007
    System restarted at 03:56:27 NZST Tue Apr 17 2007
    System image file is "flash:c2600-ipvoice-mz.123-8.T10.bin"
    Cisco 2651XM (MPC860P) processor (revision 0x100) with 118784K/12288K bytes of memory.
    Processor board ID JAE072000AJ (1555074759)
    M860 processor: part number 5, mask 2
    2 FastEthernet interfaces
    62 Serial interfaces
    2 Channelized E1/PRI ports
    32K bytes of NVRAM.
    32768K bytes of processor board System flash (Read/Write)
    See the attached detailed error messages.

  • Stub compile failure when entity has two 1:1 relationships

    I think I've found a bug in the Sun J2EE SDK (surprised? no, but looking for workaround...)
    The stub code that is generated during deployment of my .ear file creates two "__reverse_item_uid" fields for my ItemBean entity.
    This started when I added a second 1:1 relationship to my ItemBean. I have a PurchaseItem-Item relationship and have now added a SalesOrderItem-Item relationship. They are both 1:1 and unidirectional. See the relevant section from ejb-jar.xml:
    <ejb-relation>
      <ejb-relationship-role>
        <ejb-relationship-role-name>PurchaseItem-Item</ejb-relationship-role-name>
        <multiplicity>One</multiplicity>
        <relationship-role-source>
          <ejb-name>PurchaseItem</ejb-name>
        </relationship-role-source>
        <cmr-field>
          <cmr-field-name>item</cmr-field-name>
        </cmr-field>
      </ejb-relationship-role>
      <ejb-relationship-role>
        <ejb-relationship-role-name>Item-PurchaseItem</ejb-relationship-role-name>
        <multiplicity>One</multiplicity>
        <relationship-role-source>
          <ejb-name>Item</ejb-name>
        </relationship-role-source>
      </ejb-relationship-role>
    </ejb-relation>
    <ejb-relation>
      <ejb-relationship-role>
        <ejb-relationship-role-name>SalesOrderItem-Item</ejb-relationship-role-name>
        <multiplicity>One</multiplicity>
        <relationship-role-source>
          <ejb-name>SalesOrderItem</ejb-name>
        </relationship-role-source>
        <cmr-field>
          <cmr-field-name>item</cmr-field-name>
        </cmr-field>
      </ejb-relationship-role>
      <ejb-relationship-role>
        <ejb-relationship-role-name>Item-SalesOrderItem</ejb-relationship-role-name>
        <multiplicity>One</multiplicity>
        <relationship-role-source>
          <ejb-name>Item</ejb-name>
        </relationship-role-source>
      </ejb-relationship-role>
    </ejb-relation>
    When I add my .jar to my application using the GUI of the deploytool, set everything up (JNDI, generate SQL, ...), run the verifier (which passes), and then deploy, the deployment fails due to a stub compilation failure.
    If I then open up the sun-j2ee-ri.xml that is generated and found in my c:\j2sdkee1.3.1\repository\myname\applications\myapp1234.jar, I notice the use of two "__reverse_item_uid" fields in the following sample:
    <ejb>
      <ejb-name>Item</ejb-name>
      <jndi-name>Item</jndi-name>
      <gen-classes />
      <ejb20-cmp>
        <sql-statement>
          <operation>storeRow</operation>
          <sql>UPDATE "ItemBeanTable" SET "__reverse_item_uid" = ? , "__reverse_item_uid" = ? , "itemType" = ? , "location" = ? , "name" = ? , "peachID" = ? , "price1" = ? , "price2" = ? , "price3" = ? , "unitmeasure" = ? , "version" = ? WHERE "uid" = ? </sql>
        </sql-statement>
    GRRRRR!!!!! What do I do to avoid this? Can't the Sun Ref Impl generate this and the stub code more intelligently, so that it does something like use the <ejb-relationship-role-name> as the field name?
    Any help would be greatly appreciated :)
    -Gretel

    I figured it out... here's the answer in case anyone is wondering...
    Use different field names for 'item' in both PurchaseItem and SalesOrderItem. In other words, here is the correct segment of ejb-jar.xml
    <ejb-relation>
      <ejb-relationship-role>
        <ejb-relationship-role-name>SalesOrderItem-Item</ejb-relationship-role-name>
        <multiplicity>One</multiplicity>
        <relationship-role-source>
          <ejb-name>SalesOrderItem</ejb-name>
        </relationship-role-source>
        <cmr-field>
          <cmr-field-name>itemHack</cmr-field-name>
        </cmr-field>
      </ejb-relationship-role>
      <ejb-relationship-role>
        <ejb-relationship-role-name>Item-SalesOrderItem</ejb-relationship-role-name>
        <multiplicity>One</multiplicity>
        <relationship-role-source>
          <ejb-name>Item</ejb-name>
        </relationship-role-source>
      </ejb-relationship-role>
    </ejb-relation>
    Simply change the <cmr-field-name> from 'item' to 'itemHack' :)
    Sometimes in the land of EJB development, it's the "easy" things that can waste a lot of time.

  • Replicas fill hard discs and cause a total failure

    Hello everyone,
    we have a setup of 2 RG with 3 RNs each. We tried to load around 10'000'000 keys with relatively small values. Unfortunately, this caused our whole cluster to fail. The problem was that the replica nodes got their 40 GB (just for the data) partitions full and gave up. The two masters were still there with only around 5 GB of space taken in their partitions. What could be the reason for this discrepancy between a master and its replicas? The replicas contained warnings like this in their log files:
    121108 17:05:06:073 WARNING [rg1-rn1] Cleaner has 33 files not deleted because they are protected by replication.
    I checked that each replica had 37 JDB files in the env directory, the master just 4 JDB files. I guess that is where the whole storage went missing. Does anyone have an idea what the reason for such behaviour could be? If that would happen in production (which it will sooner or later if we do not know the reasons) it would be a disaster.
    Unfortunately, I have no access to the logs any more because our admins were very fast to clean up the whole mess and setup the kv store anew.
    Cheers,
    Dimo

    Dimo,
    I suspect that the problem is something that we know about and have been actively working on. The problem comes up when a store is run with very significantly undersized caches, and the application does a large number of updates, but then ceases all write activity. In the cases we've looked at, the store's underlying log cleaning falls behind during the time of the heavy application load. It would catch up, except that a bug in some metadata maintenance is gating the log cleanup.
    Your case sounds like that, except that you see asymmetrical behavior on the part of the master node. Does the application load consist of only updates, or of mixed updates and reads? That may have some bearing on the asymmetry.
    Our R2 pre-release has some improvements for this problem, and we are actively working on a complete solution. But there are really two issues at hand. What you've seen is poor handling of the case when log cleaning falls behind, and we will be fixing that because it can cause the sort of catastrophic out of disk failure you see. But more fundamentally, it may also be that the store is not optimally configured for your load. Fixing the log cleaning issue might still leave you with performance that's not optimal.
    We've got some documentation in the Admin Guide on how to come up with starting point configurations to best support the application load and the hardware. If you post more information on your application key and data size, and hardware, we can comment on what might work. For example, it sounds like your application might have large keys and small data. We find that smaller keys are generally more efficient in the NoSQL caches.
    We'd be interested in getting more details about your application so that we can use that as a test case for the fix for the log cleaning issue. In our current test cases, the nodes of the cluster all have symmetrical behavior, unlike what you experienced. If that's possible, please contact me at linda dot q dot lee at oracle dot com.
    Regards,
    Linda

  • Trash will not delete from trash folder! What can cause this effect?

    Trash will not delete from trash folder! What can cause this effect?

    You can actually force it to delete; obviously I assume you don't have the application or data etc. open anywhere else?
    Try restarting your computer, go into Finder and close all your open applications, servers, etc., even clear your data in your web browser, then do a Disk Utility check to verify your drives and repair them if necessary.
    As chamar suggested, look at http://www.thexlab.com/faqs/trash.html
    I had the same problem before, but with a good 5-10 minutes of googling I was able to solve it.
    Best of luck

  • HT4356 My iPad does not see my printer on the network.  The printer says it is connected and I can ping it from my pc, but the iPad does not seem to be able to find it.  I am going though an AOL supplied router, I have read this can cause issues, is this

    I am trying to print from my iPad to an HP 3520 printer using AirPrint. I have connected the printer to my local wireless network and it confirms it is connected. However, my iPad cannot find the printer on the network when it searches for it.  I am using an AOL-supplied router and have read reports that this can cause problems.  Is this the case? How do I resolve this issue?

    Hi,
    That indeed can be caused by a router as Apple's AirPrint uses Bonjour to communicate with the device.
    Bonjour relies on IP multicasting, which has to be supported by your network.
    To isolate the problem, please follow the steps below to enable Wireless Direct and directly connect the iPad to your printer (that is, without using the router).
    Once the iPad is connected to the Wireless Direct network, try printing and check for any difference.
    If the printer can be found via the temporary connection but not through your router, that indicates a lack of multicast support or an incorrect configuration of your router.
    In such a case, be sure to contact AOL and check whether the router supports multicasting; if so, ensure it is enabled within your router configuration, and also ensure it has the latest firmware installed.
    Shlomi

  • Possible cause for Faces failures (migration from an older version)

    I've had no luck with Faces recognizing faces in images.
    I think that the issue is with how the images got to iPhoto.
    I imported 2000+ images from my iBook (iPhoto 04). Of those, Faces failed in ~>95% of the images. Today, I took two junky photos with Photobooth, including a dark and distant shot. Faces recognized that there was a face in the image, although it hadn't learned the face, yet (I know, that takes time).
    I believe the issue might have to do with the fact that I imported images from a distant version (i.e. coming from iPhoto 4 instead of iPhoto 07 or 08, or whatever).
    Or, possibly that I migrated from another machine as opposed to upgrading iPhoto on the same machine?
    Hopefully this line of reasoning will isolate the problem's cause.
    Please reply whether you are having success or not, as well as how your photos got into iPhoto '09 (i.e. migration from another computer and type, prior iPhoto version, etc.)
    I'll start...
    I migrated my images from iPhoto 4 from an iBook (PPC) and have had very little, if any, success with Faces recognizing faces in images.
    Thanks!

    I just imported (from my Treo) three images that were exactly the same as the three successes from Photobooth.
    Being from different sources they cannot be the same. Maybe the face in the photo was the same, but the rest of the file, i.e. the metadata, is different. Some image sources write metadata differently, and that can cause issues. It's probable that iPhoto is expecting the metadata to adhere strictly to the EXIF/IPTC standards and some of the sources don't when writing the metadata to the file.
    Report the problem to iPhoto via http://www.apple.com/feedback/iphoto.html and give the details of the photos as you've done here. That will help them better understand the issue and get a fix out.

  • After Effects can't continue: unexpected failure during application startup

    I'm running Mac OS X 10.10; it was running fine before a minor update came out, and now I cannot open AE CS6. I have uninstalled it, reinstalled it, and installed the trial of CC 2014. No messages besides "After Effects can’t continue: unexpected failure during application startup" come up.
    System log:
    Jun 22 15:30:49 Users-iMac.local AdobeCrashDaemon[976]: WARNING: The Gestalt selector gestaltSystemVersion is returning 10.9.0 instead of 10.10.0. Use NSProcessInfo's operatingSystemVersion property to get correct system version number.
                Call location:
    Jun 22 15:30:49 Users-iMac.local AdobeCrashDaemon[976]: 0   CarbonCore                          0x00007fff82bb637d ___Gestalt_SystemVersion_block_invoke + 113
    Jun 22 15:30:49 Users-iMac.local AdobeCrashDaemon[976]: 1   libdispatch.dylib                   0x00007fff8bf70fa2 _dispatch_client_callout + 8
    Jun 22 15:30:49 Users-iMac.local AdobeCrashDaemon[976]: 2   libdispatch.dylib                   0x00007fff8bf70f00 dispatch_once_f + 79
    Jun 22 15:30:49 Users-iMac.local AdobeCrashDaemon[976]: 3   CarbonCore                          0x00007fff82b5e932 _Gestalt_SystemVersion + 987
    Jun 22 15:30:49 Users-iMac.local AdobeCrashDaemon[976]: 4   CarbonCore                          0x00007fff82b5e51f Gestalt + 144
    Jun 22 15:30:49 Users-iMac.local AdobeCrashDaemon[976]: 5   AdobeCrashDaemon                    0x0000000100002e69 -[MyDaemon GetOSVersionMajor] + 33
    Jun 22 15:30:49 Users-iMac.local AdobeCrashDaemon[976]: 6   AdobeCrashDaemon                    0x0000000100002d4a -[MyDaemon isRunningOnLeopard] + 25

    Well, it looks like a bug in Yosemite. You might want to read the pertinent announcements, anyway. At this point Adobe apps are not compatible with OS X 10.10.
    Mylenium

Maybe you are looking for

  • Firefox profile cannot be loaded it may be missing or inaccessible WINDOWS 8

    I have a Windows 8 OS running on a PC. I have been using Firefox as my browser for several months. Suddenly, when I try to load Firefox, I get an error message, "Your Firefox profile cannot be loaded. It may be missing or inaccessible." I have uninst

  • Planned Costs by Quarter

    Hi all, does anyone have any suggestions on how we could enter planned costs per quarter for a project? I see we can enter planned costs per monthly period per cost element but not per quarter. Preferably, I would not need to use cost element plannin

  • Using "Windows98" DownLoads On An Apple Macintosh SE/30 Computer

    I've downloaded tons -- 150MBs -- of absolutely classic "System 6" software for my APPLE "Macintosh SE/30" computer using "Windows98". My question .... Will I be able to use what's been downloaded on a floppy simply by sliding it into my APPLE "Macin

  • Date Formatting in a Result Field: Crystal Reports 2008

    I'm modifying a report at runtime in c# .Net using Crystal Reports 2008. I'm having trouble modifying a date fields format. I have accessed the DateFieldFormat object, and modified the properties, but only the SystemDefaultType property seems to have

  • OS X Lion 10.7.1 unresponsive/freezing during TimeMachine backup

    Hi together, first I must say: From the features point of view I like Lion very much! But: Since I have installed it, the system periodically freezes! It took me quite a while to figure out the cause... It seems that the TimeMachine backup is the pro