Compilation problem with a very large number of method parameters

I have a Java file which I generated using WSDL2Java. The actual WSDL has a complex type with a large number of elements (around 600) in it, so the resulting Java file has a method that takes 600 parameters of various types. When I try to compile it with javac at the command prompt, it says "Too many parameters" and doesn't compile. The same file compiles successfully in JBuilder X. The only way I could get it to compile at the command prompt was by reducing the number of parameters to around 250, but unfortunately that's not a workable solution. Does Sun specify any upper bound on the number of parameters that can be passed to a method?

"... a method that takes 600 parameters ..."
Not compatible with the spec; see Method Descriptors in the JVM specification. A method descriptor is limited to 255 parameter slots, where long and double each occupy two slots and, for instance methods, this occupies one.
"When I try to compile it using javac at the command prompt, it says "Too many parameters" and doesn't compile."
As it should.
"The same file compiles successfully in JBuilder X."
If JBuilder produces a class file, that class file may very well be invalid.
"The only way I could get it to compile at the command prompt was by reducing the number of parameters to around 250"
Which is what the spec says.
"but unfortunately that's not a workable solution."
Pass an array of objects - an array is just one object.
"Does Sun specify any upper bound on the number of parameters that can be passed to a method?"
Yes.
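Following that advice, the usual workaround is to collapse the generated parameter list into a single request object (an Object[] also works, since an array is one reference). A minimal sketch of the idea - the class and field names are invented for illustration, not what WSDL2Java actually generates:

    import java.math.BigDecimal;

    // Hypothetical bean standing in for the ~600 generated parameters.
    class StockQuoteRequest {
        String symbol;
        BigDecimal limit;
        // ...one field per former method parameter...
    }

    class StockQuoteService {
        // Instead of quote(String symbol, BigDecimal limit, /* 598 more */),
        // the method now consumes a single reference, which costs one slot
        // of the 255-slot method descriptor budget.
        String quote(StockQuoteRequest req) {
            return req.symbol + " limited at " + req.limit;
        }
    }

    public class ParamBundleDemo {
        public static void main(String[] args) {
            StockQuoteRequest req = new StockQuoteRequest();
            req.symbol = "ORCL";
            req.limit = new BigDecimal("12.50");
            System.out.println(new StockQuoteService().quote(req));
        }
    }

If the stubs come from Axis, it is also worth checking whether your WSDL2Java version has an option to turn off "wrapped" support, so that the operation takes the complex type as a single bean instead of unwrapping it into hundreds of parameters.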

Similar Messages

  • Pivot table with very large number of columns

    Hello,
    here is the situation:
    One table contains raw data; from this table I feed another with extracted information (3 fields); I have to turn the content into a pivot table:
    Ro   Co   Va
    A    A    1
    A    B    1
    A    C    2
    B    A    11
    turned into:
         A    B     C ...
    A    1    1     2
    B    11   null  null
    To do this I run a query like:
    select r, sum(decode(c,'A',va)) cola, sum(decode(c,'B',va)) colb, sum(decode(c,'C',va)) colc, ..., sum(decode(c,'XYZ',va)) colxyz from table group by r
    The statement is generated by a script (CFMX) and it works until I reach a query that tries to have 672 values for c, which means 672 columns...
    Oracle doesn't like that: ORA-01467: sort key too long
    I like this approach as it gets the result fast.
    I have tried a different solution at the CFMX level for that specific query, but I got a timeout (querying the table with a loop on Co within a loop on Ro).
    Is there any workaround?
    I am using Oracle 9i.
    Thank you!

    insert into extracted_data select c, r, v, p from full_data where <specific_clause>
    The values for C come from a query: select distinct c from extracted_data
    and it is the same for R
    R and C are varchar2(3999)
    I suppose that I can split on the first letter of the C column as:
    SELECT r, alpha_a.cola, . . ., alpha_a.colm,
           alpha_b.coln, . . ., alpha_z.colz
    FROM (SELECT r, SUM(DECODE(c, 'A', va)) cola, . . .,
                 SUM(DECODE(c, 'M', va)) colm
          FROM table
          WHERE c LIKE 'A%'
          GROUP BY r) alpha_a,
         (SELECT r, SUM(DECODE(c, 'N', va)) coln, . . .,
                 SUM(DECODE(c, 'Z', va)) colz
          FROM table
          WHERE c LIKE 'B%'
          GROUP BY r) alpha_b,
         . . .
         (SELECT r, SUM(DECODE(c, 'ZN', va)) colzn, . . .,
                 SUM(DECODE(c, 'ZZ', va)) colzz
          FROM table
          WHERE c LIKE 'Z%'
          GROUP BY r) alpha_z
    WHERE alpha_a.r = alpha_b.r AND alpha_a.r = alpha_c.r . . . AND alpha_a.r = alpha_z.r
    I will have 27 SELECT statements joined... I have to check whether, even like that, I stay under the limit within each SELECT.
    "in real life"
    select GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1 from
    (select r, sum(decode(C, 'Wall, unspecified',cases)) W0 from tmp_maqueje where upper(C) like 'W%' group by r) GRPW,
    (select r,
    sum(decode(C, 'Ceramic tiles, indoors',cases)) C0,
    sum(decode(C, 'Cement surface, outdoors (Concrete/cement block, see Structural element, A11)',cases)) C1
    from tmp_maqueje where upper(C) like 'C%' group by r) GRPC
    where GRPW.r = GRPC.r
    order by GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1

  • JDev: af:table with a large number of rows

    Hi
    We are developing with JDeveloper 11.1.2.1. We have a VO that returns > 2.000.000 rows and that we display in an af:table with access mode 'Scrollable' (the default) and 'in Batches of' 101. The user can select one row and do CRUD operations on the VO with popups. The application works fine, but I read that scrolling through a very large number of rows is not a good idea because it can cause an OutOfMemory exception if the user uses the scroll bar many times. I have tried access mode 'Range Paging' but the application behaves in strange ways. Sometimes when I select a row to edit, if the selected row is number 430, the popup shows number 512, and when I want to insert a new row it throws this exception:
    oracle.jbo.InvalidOperException: JBO-25053: Cannot navigate with unposted rows in a RangePaging RowSet.
         at oracle.jbo.server.QueryCollection.get(QueryCollection.java:2132)
         at oracle.jbo.server.QueryCollection.fetchRangeAt(QueryCollection.java:5430)
         at oracle.jbo.server.ViewRowSetIteratorImpl.scrollRange(ViewRowSetIteratorImpl.java:1329)
         at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStartWithRefresh(ViewRowSetIteratorImpl.java:2730)
         at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStart(ViewRowSetIteratorImpl.java:2715)
         at oracle.jbo.server.ViewRowSetImpl.setRangeStart(ViewRowSetImpl.java:3015)
         at oracle.jbo.server.ViewObjectImpl.setRangeStart(ViewObjectImpl.java:10678)
         at oracle.adf.model.binding.DCIteratorBinding.setRangeStart(DCIteratorBinding.java:3552)
         at oracle.adfinternal.view.faces.model.binding.RowDataManager._bringInToRange(RowDataManager.java:101)
         at oracle.adfinternal.view.faces.model.binding.RowDataManager.setRowIndex(RowDataManager.java:55)
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlHierBinding$FacesModel.setRowIndex(FacesCtrlHierBinding.java:800)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    <LoopDiagnostic> <dump> [8261] variableIterator variables passivated >>> TrackQueryPerformed def
    <LifecycleImpl> <_handleException> ADF_FACES-60098: The Faces lifecycle receives unhandled exceptions in phase RENDER_RESPONSE 6
    What is the best way to display this amount of data in an af:table and do CRUD operations?
    Thanks

    Hi,
    honestly, the best way is to provide users with an option to filter the result set displayed in the table, reducing the result set size. No one will browse 2.000.000 rows using the table scrollbar.
    So one hint for optimization would be a query form (e.g. af:query)
    To answer your question "Scrollable" vs. "Range Paging", see
    http://docs.oracle.com/cd/E21043_01/web.1111/b31974/bcadvvo.htm#ADFFD1179
    Pay attention to what is written there in the context of "The range paging access mode is typically used for paging through read-only row sets, and often is used with read-only view objects."
    Frank

  • iPod touch 4th generation, running iOS 5, boots up with very large icons, impossible to navigate; how do I get back to the standard-sized home screen?

    iPod touch 4th generation, running iOS 5, boots up with very large icons, impossible to navigate; I need to return to the standard-sized home screen.

    Triple-click the Home button, then go to Settings > General > Accessibility and turn Zoom off. If problems persist, see:
    iPhone: Configuring accessibility features (including VoiceOver and Zoom)

  • JRockit for applications with very large heaps

    I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50GB). Out of the box we got about a 25% performance increase compared to the HotSpot JVM (great work, guys). Once the server starts up, almost all of the objects will be stored in the old generation and a smaller number will be stored in the nursery. The operation that we are trying to optimize needs to visit basically every object in RAM, and we want to optimize for throughput (total time to run this operation, not worrying about GC pauses). Currently we are using huge pages, -XXaggressive and -XX:+UseCallProfiling. We are giving the application 50GB of RAM for both the max and min heap size. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes, including singlepar, which also had negative effects (currently we are using the default, which optimizes for throughput).
    I used JRMC to profile the operation, and here are the results that I thought were interesting:
    liveset 30%
    heap fragmentation 2.5%
    GC pause time average 600ms
    GC pause time max 2.5 sec
    It did 4 young-generation collects, which were very fast, and then 2 old-generation collects, which were each about 2.5s (the entire operation takes 45s).
    For the long old-generation collects, about 50% of the time was spent in mark and 50% in sweep. When you get down to sub-level 2, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
    Heap usage: although 50GB is committed, heap usage fluctuates between 32GB and 20GB. To give you an idea of what is stored in the heap, about 50% of it is char[] and another 20% is int[] and long[].
    My question is: are there any other flags that I could try that might help improve performance, or is there anything I should be looking at more closely in JRMC to help tune this application? Are there any specific tips for applications with large heaps? We can also assume that memory could be doubled or even tripled if that would improve performance, but we noticed that larger heaps did not always improve performance.
    Thanks in advance for any help you can provide.

    Any suggestions for using JRockit with very large heaps?
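    For reference, a consolidated launch line using only the options mentioned above plus the standard heap settings might look like the following (flag spellings as best I recall from the JRockit R27/R28 documentation - verify them against your release; the main class is a placeholder and the values are starting points to measure with JRMC, not recommendations):

        java -Xms50g -Xmx50g -XlargePages -XXaggressive -XX:+UseCallProfiling \
             -XgcPrio:throughput com.example.InMemoryDb

    Given that roughly 1.1s of each old collection went to external compaction, the -XXcompaction family of options (if your release supports them) would be the next knob to try, since they bound how much compaction work a single old collection may do.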

  • Need to hold a very large number - please help

    Hello all,
    I am programming the SSH handshake and need to represent a very large number in order to get it to work correctly. The number is:
    179769313486231590770839156793787453197860296048756011706444423684197180216158519368947833795864925541502180565485980503646440548199239100050792877003355816639229553136239076508735759914822574862575007425302077447712589550957937778424442426617334727629299387668709205606050270810842907692932019128194467627007
    which is about 1.7 * 10^308,
    and we know the largest double is:
    1.7976931348623157E308
    Can someone please help me with representing this number? It is a very large prime number that is used in the key exchange for SSH.
    Thank you all for your time
    Max

    And who's the slowest old sod again?
    This is amazing: I read a new topic, no replies yet,
    I check it again, nothing yet,
    I craft my reply and presto: some quick fingers beat
    me to it again ... Grolsch makes slow :P ;) It's still only 11:45 ... and I'm still waiting ;-)
    Jos
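    For the record, the standard answer here is java.math.BigInteger, which handles arbitrary-precision integers and the modular arithmetic a Diffie-Hellman exchange needs. A minimal sketch using the exact prime quoted above (the string is split across literals purely for readability, and the 256-bit exponent is a toy value for illustration):

        import java.math.BigInteger;
        import java.security.SecureRandom;

        public class LargePrimeDemo {
            // The 1024-bit prime from the question.
            static final BigInteger P = new BigInteger(
                "17976931348623159077083915679378745319786029604875"
              + "60117064444236841971802161585193689478337958649255"
              + "41502180565485980503646440548199239100050792877003"
              + "35581663922955313623907650873575991482257486257500"
              + "74253020774477125895509579377784244424266173347276"
              + "29299387668709205606050270810842907692932019128194"
              + "467627007");

            public static void main(String[] args) {
                System.out.println("bit length: " + P.bitLength());     // 1024
                System.out.println("probably prime: " + P.isProbablePrime(50));
                // Modular exponentiation, as used in the DH key exchange:
                BigInteger g = BigInteger.valueOf(2);
                BigInteger x = new BigInteger(256, new SecureRandom()); // toy secret
                System.out.println("g^x mod p = " + g.modPow(x, P));
            }
        }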

  • Old files with a large number of comments no longer work properly in either Adobe Acrobat DC or Acrobat Reader

    I made a series of interactive ebooks with Adobe Acrobat Pro. They worked fine in the old Adobe Reader. But now the commenting feature does not work properly in either Adobe Reader DC or Adobe Acrobat DC. When I open the comments, everything slows down or else I get blank pages. The tools are not usable. There seems to be some compatibility issue between what was done in these books with the commenting tools and the new applications. Has anybody experienced the same?

    Hi Anubha,
    Thanks for your reply. With the new Acrobat Reader DC, I have the same problem on two computers, one with Windows 7 and one with Windows 8.1. I downloaded a trial version of Adobe Acrobat DC on the computer with 8.1. Then I found out that I had the same problem with Acrobat DC as I did with Reader DC.
    I made these e-books with Adobe Acrobat Pro 11. I never had a problem with Pro 11 or with previous Readers. Also, I have sent these e-books out to other people who had Reader 11, and they had no problems. Now it seems that the books are not usable (at least in the fullest way) for me or for anyone else using the latest version of Reader.
    Any help would be appreciated. I have put a lot of work into these e-books.

  • "A problem was detected with your serial number"

    HELP! Every time I open Dreamweaver 8, Fireworks, or any other software package, I get the message "A problem was detected with your serial number." It then gives me the option to retype my serial number. After I type in the serial number I get the little checkmark like everything is great. But when I press "Continue" it comes back with that same message.
    I called Adobe support and they said it sounded like my registration key got corrupted and that I would have to follow TechNote 4e7826b7, which steps you through removing Studio from the computer and deleting orphaned files. I've done this TWICE, and when I tried to call Adobe technical support, they sent me to customer support (with a long hold time in between) and then they sent me back to technical support. When I mentioned I had already been down this route, the Adobe employee said she would inform the supervisor and they would get back to me. Guess what. Never happened.
    Suffice it to say I'm still getting the message, and I don't want to uninstall a third time because I know the outcome. Does anyone have any suggestions? Pleeeeeeeease!
    Thanks,
    Glenn Lauderdale

    Same problem here. I just hit 'Cancel' and the program opens fine.
    Adobe posted a hot fix that allegedly fixes the problem. I've run the thing several times, and the annoying box still shows up. You can find it here:
    http://www.adobe.com/cfusion/knowledgebase/index.cfm?event=view&id=KC.tn_18976&extid=tn_18976&dialogID=53494061&iterationID=1&sessionID=4830e245130c574b2b37&stateID=1+0+53482525&mode=simple
    I'll keep checking to see if any fix comes up.
    John

  • How to handle a large number of query parameters for a Browse screen

    I need to implement an advanced search functionality in a browse screen for a large table.  The table has 80+ columns and therefore will have a large number of possible query parameters.  The screen will be built on a modeled query with all
    of the parameters marked as optional.  Given the large number of parameters, I am thinking that it would be better to use a separate screen to receive the parameter input from the user, rather than a Popup.  Is it possible for example to have a search
    button on the browse screen (screen a) open a new screen (screen b) that contains all of the search parameters, have the user enter the parameters they want, then click a button to send all of the parameters back to screen a where the query is executed and
    the search results are returned to the table control?  This would effectively make screen b an advanced modal window for screen a.  In addition, if the user were to execute the query, then want to change a parameter, they would need to be able to
    re-open screen b and have all of their original parameters still set.  How would you implement this, or otherwise deal with a large number of optional query parameters in the html client?  My initial thinking is to store all of the parameters in
    an object and use beforeShown/afterClosed to pass them between the screens, but I'm not quite sure how to make that work.  TIA

    Wow Josh, thanks.  I have a lot of reading to do.  What I ultimately plan to do with this (my other posts relate to this too), is have a separate screen for advanced filtering that also allows the user to save their queries if desired. 
    There is an excellent way to get at all of the query information in the Query_Executed() method.  I just put an extra Boolean parameter in the query called "SaveQuery" and when true, the Query_Executed event triggers an entry into a table with
    the query name, user name, and parameter value pairs that the user entered.  Upon revisiting the screen, I want the user to be able to select from their saved queries and load all the screen parameters (screen properties) from their selected query. 
    I almost have it working.  It may be as easy as marking all of the screen properties that are query parameters as screen parameters (not required), then passing them in from the saved query data (filtered by username, queryname, and selected
    item).  I'll post an update once I get it.  Probably will have some more questions as I go through it.  Thanks again! 

  • Any suggestions on calculating with very large or small numbers?

    It seems that double values have about 17 decimal digits of precision.
    Is there a way in iPhone calculations to get more precision for very large and small numbers, like 10^80 and so forth? I know that's more than the entire number of atoms in the universe, but still.
    I tried "long double" but that didn't seem to make any difference.
    Just a limitation?
    Thanks,
    doug

    Hmmm... maybe I was just having a problem with my formatted string then?
    I was using the NSString %g format, which is supposed to print in exponential notation if the number is greater than 1e4 or less than 1e-4, or something like that.
    But I was not getting any exponents greater than 1e17, and then I was apparently getting overflows because the numbers were coming out with negative mantissas.
    All the variables involved were double...
    How did you "look at" z?
    Thanks,
    doug
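    For what it's worth, the limit being hit here is precision, not range: an IEEE 754 double reaches magnitudes up to about 1.8e308 but carries only 15-17 significant decimal digits. A small sketch of the distinction (written in Java for consistency with the rest of this page; double behaves the same way in C/Objective-C):

        public class DoubleRangeDemo {
            public static void main(String[] args) {
                System.out.printf("1e80 via %%g: %g%n", 1e80); // fine, well within range
                System.out.println(Double.MAX_VALUE);          // 1.7976931348623157E308
                // Precision is the real ceiling: ~15-17 significant digits.
                double a = 1e80;
                double b = a + 1e60;          // 1e60 is 20 orders of magnitude below a
                System.out.println(a == b);   // true: the addition is silently lost
            }
        }

    So 10^80 itself is representable as a double; what you cannot do is distinguish it from 10^80 + 10^60. For more digits you need an arbitrary-precision library rather than long double.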

  • Status and messaging for systems with a large number of classes

    A very common dilemma we coders face when creating
    systems involving a large number of classes:
    is there any standard framework to take care of the global status of the whole application and of GUI subsystems,
    for both status handling and reporting messages to users?
    Something light if possible... and not too many threads.

    Ah, I see,
    I found a JPanel with CardLayout or a JTabbedPane very good for controlling several GUIs in an application - as an alternative organization tool I use a JTree, which is used for both selecting and organizing certain tasks or data in the application - tasks are normally done with data associated with them (that is what an object is for), so basically a click on a node in this JTree invokes an interface method of that object (the userObject) associated with this node.
    Event handling should be done by the event-handling thread only, as far as possible - it is responsible for it, so leave this job to it. This will give you control over the order in which the events are handled. Sometimes it needs a bit more work to obey this rule - for example, communication coming from the outside (think of a chat channel) must normally be converted to an event source driven by a thread. As soon as it is an event source, you can leave its event handling to the event-handling thread again, and problems with concurrent programming are minimized.
    It is the same with manipulating components or models of components - leave it to the event handling thread using a Runnable and SwingUtilities.invokeLater(Runnable). This way you can be sure that each manipulation is done after the other in the order you have transferred it to the event handling thread.
    When you do this consistently, most of your threads will idle most of the time - so give them a break using Thread.sleep(...) - not all platforms provide preemptive multitasking, and this way it is guaranteed that the event-handling thread will get a chance to run most of the time - which results in fast GUI updates and fast event handling.
    Another thing is that you should use "divide and conquer" also within a single GUI panel - place components in subpanels and transfer the responsibility for the components in a panel to exactly that subpanel - think of a team manager who makes his employees work together. He reports up to his own manager and turns global orders from his boss into specific tasks by delegation to the components he is managing. If you have this in mind when you design classes, you will have fewer problems - each class is responsible for a certain task - define it clearly, and define to whom it reports (its listeners) and what these listeners may be interested in.
    When you design the communication structure within your hierarchy of classes (directors, managers, team managers and workers), keep in mind that the communication structure should not break the management hierarchy. A director gives global orders to a manager, who delegates several tasks to the team managers, who make their workers do what is needed. This structure makes a big company controllable by directors - the same principles can also keep control within an application.
    greetings Marsian
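    To make the invokeLater point concrete, here is a minimal sketch (names invented) of a worker thread reporting status while every component manipulation stays on the event-dispatch thread:

        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;

        public class StatusDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    final JLabel status = new JLabel("starting...");
                    JFrame frame = new JFrame("Status");
                    frame.add(status);
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setSize(240, 80);
                    frame.setVisible(true);

                    // Worker thread: does the slow work, but hands every
                    // label update back to the event-dispatch thread.
                    new Thread(() -> {
                        for (int i = 1; i <= 5; i++) {
                            try {
                                Thread.sleep(1000); // simulate work
                            } catch (InterruptedException e) {
                                return;
                            }
                            final int step = i;
                            SwingUtilities.invokeLater(
                                () -> status.setText("step " + step + " done"));
                        }
                    }).start();
                });
            }
        }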

  • How to design Storage Spaces with a large number of drives

    I am wondering how one might go about designing a storage space for a large number of drives; specifically, I've got 45 x 4TB drives. As I am not extremely familiar with Storage Spaces, I'm a bit confused as to how I should go about designing this. Here is how I would do it with hardware RAID, and I'd like to know how best to match that setup in Storage Spaces. I've been burned twice now by poorly designed storage spaces and I don't want to get burned again. I want to make sure that if a drive fails, I'm able to properly replace it without Storage Spaces tossing its cookies.
    In the hardware RAID world, I would divide these 45 x 4TB drives into three separate 15-disk RAID 6s (thus losing 6 drives to parity). Each RAID 6 would show up as a separate volume/drive to the parent OS. If any disk failed in any of the three arrays, I would simply pull it out, put a new disk in, and the array would rebuild itself.
    Here is my best guess for Storage Spaces: I would create 3 separate storage pools, each containing 15 disks. I would then create a separate dual-parity virtual disk for each pool (also losing 6 drives to parity). Each virtual disk would appear as a separate volume/disk to the parent OS. Did I miss anything?
    Additionally, is there any benefit to breaking up my 45 disks into 3 separate pools? Would it be better to create one giant pool with all 45 disks and then create 3 (or however many) virtual disks on top of that one pool?

    1) Try to avoid parity and especially double-parity RAID with a typical VM workload. It's dominated by small reads (OK) and small writes (not OK, as the whole parity stripe gets updated with every "read-modify-write" sequence). As a result, writes would be DOG slow.
    Another nasty parity RAID characteristic is very long rebuild times... It's pretty easy to get a second (third with double parity) drive failure during the rebuild process, and that would render the whole RAID set useless. The solution would be to use RAID10: much safer and faster to work with and rebuild compared to RAID5/6, but it wastes half of the raw capacity...
    2) Creating "islands" of storage is an extremely effective way of stealing IOPS away from your config. Typical modern RAID set would run out of IOPS long before running out of capacity so unless you're planning to have a file dump of an ice cold data or
    CCTV storage you'll absolutely need all IOPS from all spindles @ the same time. This again means One Big RAID10, OBR10.
    Hope this helped a bit :) Good luck!

  • How to copy very large number of files from one drive to another???

    I'm a fairly experienced Mac user of several years, but this problem really has me stumped.
    I'm trying to copy or move 152,000 files from one external drive to another drive. I can highlight (Cmd - A) all the files on the first drive and drag them to the second drive but Finder always shows 32,768 files being copied no matter what I try.
    Any and all suggestions on how to move/copy a large number of files from one external drive to another are gratefully appreciated.
    Thank you in advance,
    Mack

    I would use the command line tool rsync.
    For instance with: rsync -av source-dir destination-dir
    -a The files are transferred in "archive" mode, which ensures that symbolic links, devices, attributes, permissions, ownerships, etc. are preserved in the transfer.
    -v Verbose, so you see the progress.
    Rsync is fast and really, really powerful and many times used in shell scripts and the like to automatically backup and/or sync stuff. Google a bit for more info.
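    For example, with both externals mounted under /Volumes (the drive names here are illustrative):

        rsync -av /Volumes/OldDrive/ /Volumes/NewDrive/

    One detail worth knowing: the trailing slash on the source matters. With it, rsync copies the contents of OldDrive into NewDrive; without it, it creates an OldDrive folder inside NewDrive.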

  • Possible bug for emails with a large number of recipients

    When receiving an email that was also sent to a large number of people (more than can fit on two lines), it displays the text "and 10 more..." or however many more there are. Clicking on this text is supposed to display all the email addresses the email was sent to. However, clicking on it does nothing for some reason. Is this a bug, or am I the only one having this problem?


  • Reading a csv file with a large number of columns

    Hello
    I have been attempting to read data from large CSV files with 38 columns by reading a line using readline and scanning the line buffer using scan.
    The file size can be up to 100 MB.
    Scan does not seem to support that large a number of fields.
    Any suggestions on reading the 38 comma-separated fields? There is one header line in the file.
    Thanks

    See if strtok() is useful: http://www.elook.org/programming/c/strtok.html
