DeviceNet object limitations

Hello. We have a PXI with an 8461 DeviceNet card, as well as other cards running, communicated with by third-party software written in LabVIEW 8.2.1. We are experiencing an issue where only 25 or so DeviceNet objects can be loaded and communicated with. I know DeviceNet can have up to 63 MAC IDs, and we are not even close to loading that many, yet the application will just crash. Could it be a LabVIEW resource limitation, a bus issue...? I do notice the CPU spike when loading the configuration for the DeviceNet objects, but we have plenty of memory. Is anyone else experiencing a limitation when using LabVIEW to communicate with multiple devices on a single DeviceNet network?

There is definitely an object limitation of 50 per board, including the port handle: that leaves 49 objects on a one-port device, and 24 per port on a two-port device.
The only solution for that is to use a second board in parallel, but I don't know whether your application can handle that. Is there any kind of error message or number you are getting?
DirkW
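
If the application can drive two interfaces, the bookkeeping for that workaround is simple. Below is a minimal sketch, assuming a plain list of MAC IDs and the 49-objects-per-port limit described above (this is generic logic in Java, not the NI-DNET API; all names are invented):

    import java.util.ArrayList;
    import java.util.List;

    class DeviceNetSplit {
        static final int MAX_OBJECTS_PER_PORT = 49; // 50 minus the port handle

        // Partition MAC IDs between two boards so each stays under the limit.
        static List<List<Integer>> partition(List<Integer> macIds) {
            if (macIds.size() > 2 * MAX_OBJECTS_PER_PORT) {
                throw new IllegalArgumentException("more devices than two ports can handle");
            }
            int firstHalf = Math.min(macIds.size(), MAX_OBJECTS_PER_PORT);
            List<List<Integer>> split = new ArrayList<List<Integer>>();
            split.add(new ArrayList<Integer>(macIds.subList(0, firstHalf)));
            split.add(new ArrayList<Integer>(macIds.subList(firstHalf, macIds.size())));
            return split;
        }
    }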

Similar Messages

  • Custom Object Limitations

    Hello guys,
    I am trying to document all the limitations of custom objects and the new features added in new releases of COD.
    If anyone has any documentation or want to share some points, it will be greatly appreciated.
    Thank you,
    asmi

    The limits per custom object are as follows (you will find a table providing this information on pages 928-930 of the R17 Online Help document):
    - Custom currency fields in CO 1-6: 25
    - Custom picklists in CO 1-6: 100
    - Custom short text fields in CO 1-3: 60
    - Custom short text fields in CO 4-6: 90
    - Custom long text fields in CO 1-6: 30

  • Beta 2 Object Limits?

    When trying to import a file from AI, I got the following message:
    "The design file you have selected has too many objects to import into this beta version of Adobe Flash Catalyst.Please consider rasterizing complex groups or layers, then try importing it again."
    What is the limit amount? Will this increase with beta 3? Or full release?
    Thanks

    I am trying to open my first AI project in Flash Catalyst but it is not being friendly to me (or I to it, evidently).
    I am getting the following error message when trying to open the AI file from Catalyst:
    "The design file you have selected has too many objects to import to Adobe Flash Catalyst"
    It then tells me I should flatten some layers.
    I can't see this being the case, because I only have 25 total layers comprising:
    2 simple rectangle paths w/gradient
    2 arrow symbols
    2 Jpegs
    19 text layers
    I'm running Snow Leopard on a Mac Pro.
    -Daniel

  • BitmapData object limitations

    I'm developing a tile-based side-scrolling game (similar to
    Mario Brothers) using the BitmapData object to create the map. The
    problem is that the maximum width the BitmapData object will support is
    2880 pixels, which makes each level too short. I'd like it to be at
    least 50% bigger.
    Does anybody know a way to increase this? Please help.

    Use more than one bitmap; a sketch of the bookkeeping follows.
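
    A minimal sketch of the idea, shown in Java to match the code style elsewhere in this thread rather than ActionScript (names are illustrative): split the level into segments of at most 2880 px each and translate a world x-coordinate into a segment index plus a local offset.

      class TileMapAddressing {
          // BitmapData caps width at 2880 px, so address a wide tile map as
          // several side-by-side bitmaps and translate world coordinates.
          static final int MAX_BITMAP_WIDTH = 2880;

          static int segmentIndex(int worldX) { return worldX / MAX_BITMAP_WIDTH; }
          static int localX(int worldX)       { return worldX % MAX_BITMAP_WIDTH; }
          // e.g. worldX = 3000 falls in segment 1 at local x = 120,
          // so a 4320 px level needs two bitmaps instead of one.
      }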

  • Array size limitations... and prime number programs.

    What are they? I am an amateur programmer who wrote an extremely efficient prime-finding program, if I do say so myself. The program doesn't find high primes, just the primes from 1 to 1,000,000,000 so far. That's one version; the next version I use is faster, but because it uses an ArrayList, it can only do the primes from 1 to 10,000,000. I am trying to work the programs up to primes with 50 or more digits, but the ArrayList runs out of space, Integer is too small, I don't quite know how to switch all my code to be compatible with BigIntegers, and I cannot find a limit for arrays. I guess that I could find big primes if a) Integer were bigger, or b) I tried, but what I wanted to do, because the second program is far more efficient, is to use an array to store primes instead of an ArrayList. The only problem? Arrays cannot be appended, so I need to know the size limit for an array. So far, from my tests, I have found the limit to be somewhere around 15,380,277; the only problem with this is that every time I compile, I can use a different number. So I would like it if a) someone could tell me the limit on an array's size, b) you had ideas for programs that can find primes with 50 or more digits, and c) someone could tell me how to convert the code below to BigIntegers.
      private void primeFinder1root(int beg, int end){
        int tmp = 0;
        int counter = 0;
        int counter2 = 2;
        boolean flag;
        for(int n = 3; n < end; n += 2){
          if(n % 5 != 0){
            flag = true;
            for(int d = 3; d <= Math.sqrt(n) && flag; d += 2){
              counter++;
              if(n % d == 0)
                flag = false;
            }
            if(flag){
              System.out.println(n);
              counter2++;
              tmp = n;
              if(counter2 % 100000 == 0)
                System.out.println(n);
            }
          }
        }
        System.out.println();
        System.out.println("The program looped " + counter + " times. There were " + counter2 + " primes found. "
                             + tmp + " was the last prime found.");
      }
    That is the first program, which does not use an ArrayList but is still extremely efficient: it only looped 1,744,118,556 times to find the primes from 1 to 100,000,000, which seems like a lot, but it truly isn't. I realize that by using counters and printing I am slowing the program down immensely, but I am fine with that.
      public void primeFinder(){
        boolean flag;
        int tmp = 0;
        int tmp2 = 0;
        int counter = 0;
        primes.add(2);
        for(int n = 3; n < end; n += 2){
          if(n % 5 != 0){
            flag = true;
            for(int i = 0; i < primes.size()/2 && ((Integer)primes.get(i)).intValue() <= Math.sqrt(n) && flag; i++){
              tmp = ((Integer)primes.get(i)).intValue();
              if(n % tmp == 0)
                flag = false;
              counter++;
            }
            if(flag && n != 1){
              System.out.println(n);
              primes.add(n);
              tmp2 = n;
              if(primes.size() % 100000 == 0)
                System.out.println(n);
            }
          }
        }
        primes.add(0, 1);
        System.out.println(counter + " " + primes.size() + " " + tmp2);
      }
    This is the code that stores all the primes it finds in an ArrayList and then tests new numbers against that list of primes. It is far more efficient than the first, looping only 278,097,308 times to find the primes from 1 to 10,000,000 (the other looped 868,772,491 times). I used 10,000,000 as my example because this program cannot go to 100,000,000: there are 5,761,455 primes in that range and the ArrayList can only hold ~4,000,000 objects. Because of this ~4,000,000-object limitation on the ArrayList I have set my sights on the array. This is the reason I want to know the limitations of an array, if you could please tell me. If you could, I would also like help making my code compatible with BigIntegers.
    Well, sorry for the very long post, but thank you if you answer it and took the time to read it.
    Dumber_Child

    I too was on a quest to develop the most efficient prime number code a few years ago, when I was a student.
    Here is a more fine-tuned version of your code:
       public static long findPrimes(int max, ArrayList out){
          long count = 0;
          FastList primes = new FastList((int)Math.sqrt(max));
          primes.add(2);
          for (int i = 3; i <= max; i += 2){
             int al_size = primes.size();
             double sqrt = Math.sqrt(i);
             boolean prime_flag = true;
             boolean loop_flag = prime_flag;
             for (int j = 0; j < al_size && loop_flag; j++){
                int val = primes.get(j);
                if (i % val == 0)
                   loop_flag = prime_flag = false;
                else if (val > sqrt)
                   loop_flag = false;
                count++;
             }
             if (prime_flag)
                primes.add(i);
          }
          primes.addToArrayList(out);
          return count;
       }
    Following is a data structure to store the prime numbers while processing.
    Since this holds the first portion of the primes in an int array instead of in an ArrayList,
    get() works much faster for those elements, and no casting is required.
       static class FastList {
          ArrayList list = new ArrayList();
          int cache[];
          int pointer = 0;
          int fastAreaSize;
          public FastList(int fastAreaSize){
             cache = new int[fastAreaSize];
             this.fastAreaSize = fastAreaSize;
          }
          public void add(int i){
             if (pointer < fastAreaSize)
                cache[pointer] = i;
             list.add(new Integer(i));
             pointer++;
          }
          public int size(){
             return pointer;
          }
          public int get(int i){
             if (i < fastAreaSize)
                return cache[i];
             else
                return ((Integer)list.get(i)).intValue();
          }
          public void addToArrayList(ArrayList al){
             for (int i = 0; i < pointer; i++)
                if (i < fastAreaSize)
                   al.add(new Integer(cache[i]));
                else
                   al.add(list.get(i));
          }
       }
    When run on my PC, the above code detected the primes in the range 0-10,000,000 in 281,809,517 iterations within 6.718 secs,
    while your original code did the same in 278,097,308 iterations within 13.687 secs.
    By the way, I removed the check for '5'; that's why my code does more iterations.
    Notice that I have hoisted code like Math.sqrt and ((Integer)...).intValue() out of the loop to avoid recalculating the same thing. That saved a lot of time.
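
    On point (c) of the original question, here is a hedged sketch of the same trial-division test rewritten with java.math.BigInteger. For primes of 50+ digits, trial division is hopeless in practice; the standard tool is BigInteger.isProbablePrime, shown at the end. (On point (a): a Java array can hold at most Integer.MAX_VALUE elements, slightly less in practice, and heap size is usually the real limit.)

       import java.math.BigInteger;

       class PrimeCheck {
          // Trial division with BigInteger: d*d <= n replaces d <= Math.sqrt(n).
          static boolean isPrime(BigInteger n) {
             BigInteger two = BigInteger.valueOf(2);
             if (n.compareTo(two) < 0) return false;
             if (n.mod(two).signum() == 0) return n.equals(two);
             for (BigInteger d = BigInteger.valueOf(3);
                  d.multiply(d).compareTo(n) <= 0;
                  d = d.add(two)) {
                if (n.mod(d).signum() == 0) return false;
             }
             return true;
          }

          // For 50-digit candidates: probabilistic test, error probability <= 2^-50.
          static boolean isProbablyPrime(BigInteger n) {
             return n.isProbablePrime(50);
          }
       }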

  • SQL Developer replacement for Query Builder?

    Our developers use Query Builder to quickly develop SQL code with conditions, joins, where clauses, etc. I just installed release 2 of SQL Developer and am trying to figure out whether it is a viable replacement for Query Builder. Is there a graphical "query builder" component as part of SQL Developer?

    Barry, I would agree that it should be clean and easy to use. I started using Data Browser 2.0 and migrated to Query Builder 6.0.7.1.0, and while Query Builder is no longer its own product (it's integrated into Reports Developer 10g), the basic layout and operation have been consistent through the life cycle, and I would think you'd keep the "look and feel" consistent with what's currently available (it might simplify integrating it, too). My biggest complaint with Query Builder 6.0.7 (we don't use the 10g version) is that it errors when opening a schema with more than 8192 objects. As an E-Business Suite 11.5.10 customer, this is an issue for our developers when logging in as the standard APPS user.
    So here would be my "wish list"
    1. consistent look and feel with "older" versions of Browser/Query Builder
    2. removal of 8192 object limitation
    3. ability to open older .brw format files
    Certainly more improvements would be possible by integrating with SQL Developer and would be welcomed - this wish list is coming from a user who has MANY .brw files in the older 6.0.7 format.

  • User roles and role mapping

    I've just started as an intern in a Change Management team that is helping to implement SD. My two tasks are to "develop SAP user roles specific to the new business processes" and "manage the role to position mapping for provision of security roles." None of the real employees in my team has ever done this, and my manager is now on three weeks' leave. I'm new to SAP and I don't really know where to start. Can anyone offer any advice, or point me to some references? Thanks.

    Intern,
    It's a pretty cold manager who will dump a task on an inexperienced subordinate without any guidance or mentoring, and then take three weeks off.
    Anyhow, you first need to get some insight into the client's expectations: What types of users will there be? What tasks will each user be responsible for carrying out?
    You will also want to collect a list of names of the actual users. Your Basis people will tell you which bits of data will have to be collected in order to create users on the system.
    Next, you need to talk to the SD expert on your team about the solutions that will be implemented.  Quotes? Consignment? Scheduling agreements? Pricing? Customer Service? Marketing?  Customer Master? Material Master? The SD expert should be able to tell you at a very minimum which transactions should be made available.
    There are standard roles delivered with the system. These are pretty much unusable as delivered, but they make a good starting point. Review http://help.sap.com/erp2005_ehp_04/helpdata/EN/b4/3f9c41919eae5fe10000000a1550b0/frameset.htm
    and
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/06/57683801b5c412e10000009b38f842/frameset.htm
    Once you have all the info needed from the client and your SD experts, you then design the supporting roles at a high level. I usually use an Excel Spreadsheet with two tabs:  One tab listing roles to be developed, with all the transactions and authorization object limitations for each one;  and another tab listing Users and the supporting data needed to create a user.  If you are a Basis expert, you already know the next steps.  If not, then you typically hand your designs to the Basis team for creation of the actual Roles.
    Good luck.  Remember not to treat your interns the same way you have been treated.
    DB49

  • Query Builder Component added to Raptor?

    Have briefly tried out Raptor and have liked what I've seen so far. I'm sure the features will continue to grow over time too.
    Our developers use Oracle Query Builder on a daily basis to develop queries as well as ad-hoc reporting. As an E-Business Suite Customer we've been running into issues with the 8192 object limitation in QB. I'm wondering if plans are in the works to add some sort of graphical Query Builder functionality to Raptor and to remove the 8192 object limitation.

    I would recommend that you look at the Paradox for Windows query-by-example as a way of implementing this feature in Raptor. While the interface may appear old (less sexy) compared to some of the newer graphical methods of performing this activity (i.e. MS Access or HTMLDB), I find that it is an efficient tool for minimizing the amount of SQL text that I have to write. Once I've developed the bulk of my query graphically, I then use the SQL converter and customize from there. I find the QBE metaphor much easier to manage than all of the drag-and-drop found in some of today's interfaces.
    Another recommendation is that you use the newer join syntax rather than the old (e.g. LEFT OUTER JOIN); a quick example follows.
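
    For reference, a hedged sketch of the two styles (table and column names are invented for illustration), written as SQL strings in this thread's Java style:

       class JoinExamples {
          // Old Oracle outer-join syntax using the (+) operator:
          static final String OLD_STYLE =
             "SELECT e.ename, d.dname FROM emp e, dept d WHERE e.deptno = d.deptno(+)";
          // Equivalent ANSI syntax, as recommended above:
          static final String ANSI_STYLE =
             "SELECT e.ename, d.dname FROM emp e LEFT OUTER JOIN dept d ON e.deptno = d.deptno";
       }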

  • Super() constructor must be first. Is it right?

    My previous discussion about this issue has been removed. I must reconstruct it to lay out the advantages, disadvantages, and workarounds for the restriction.
    Obviously, the restriction prevents us from writing some perfectly valid code:
    class Unsigned7BitOscillatorInputStream extends OscillatorInputStream {
         Unsigned7BitOscillatorInputStream(
              PeriodicSignal signal,
              float amplitude,
              float signalFrequency,
              AudioFormat f,
              long framesLength) {
              // call super, substituting the audio format
              super(
                   signal,
                   amplitude,
                   signalFrequency,
                   new AudioFormat(AudioFormat.Encoding.PCM_UNSIGNED,
                        f.getSampleRate(),
                        8, f.getChannels(),
                        f.getFrameSize(),
                        f.getFrameRate(),
                        false),
                   framesLength);
         }
    }
    It is semantically the same if one writes
    class Unsigned7BitOscillatorInputStream extends OscillatorInputStream {
         Unsigned7BitOscillatorInputStream(
              PeriodicSignal signal,
              float amplitude,
              float signalFrequency,
              AudioFormat f,
              long framesLength) {
              // create audio format for the super class, overriding bits per sample
              AudioFormat format = new AudioFormat(
                   AudioFormat.Encoding.PCM_UNSIGNED,
                   f.getSampleRate(),
                   8,
                   f.getChannels(),
                   f.getFrameSize(),
                   f.getFrameRate(),
                   false);
              super(signal, amplitude, signalFrequency, format, framesLength); // does not compile
         }
    }
    Fortunately, the new AudioFormat object can be created inline with the super call. Nevertheless, the instantiation of the local object actually precedes the super call anyway. So what was the reason to prevent Java programmers from writing straightforward code, making them deceive themselves and look for ugly workarounds? What if some computation must be done and some fields initialized before super can be called? What if I want to log the arguments passed before calling the super constructor? I wonder how the Java designers know in which order the fields must be initialized and accessed. IMO, that is exactly the constructor's competence.

    Java was specifically constructed with limitations in
    mind based on experience with common problem areas.
    Notice that I said "specifically".
    Does it mean all these OOP practices are the best? Should I use them as a reference? Which kinds of errors does this protect me from? I would like to see a document stating that calling the super constructor late is evil, really. Perhaps the reasons lie not in best OOP practice but in some sad technical or historical fact. It would be amusing if you were defending someone's mistake. I'm sure that if Java had initially lacked a real (floating-point) type because of its implementation complexity, the Java bigots today would prove that real numbers are evil and that needing this type in your program points to a flaw in your design. The paramount evidence on best programming practices will, of course, be the JLS bible. Like the way they prove there must not be a Set.get(key) in the Set class/interface: they tell me there should never be a need to request an object from the set that is equivalent (in the mathematical sense of "class") to one you have put into it. They tell me that if I need the method, my design is flawed. Actually, however, the lack of the method in java.util collections is nothing more than its designers' mistake. The most obvious illustration is relational tables in a DB: the keys are parts (fields, members) of records (entities, objects), while tables are nothing more than sets of objects. Using maps in this case is inappropriate and introduces redundancy. But that's another story... The super-first requirement likewise adds extra complexity and must be thoroughly justified; any artificial overcomplication or redundancy must have solid evidence behind it. IMO, the limitation in question looks like a fifth wheel on the Java cart.
    I programmed with gotos many years ago. My experience confirms the thesis of Dijkstra (one of the fathers of structured programming) that gotos are evil: they break the flow of control, making it hard to follow. Therefore, I readily accept it. I have never seen the thesis itself, but I'm sure I can find and refer it to you if you like. At the same time, I'm sure there is nothing similar in the OOP domain stating that "deferred" super constructors are evil; otherwise, you would be more specific with references. Suppose someone confines you to a prison, arguing that this is a "specific tool/limitation" to prevent "some" problems. Will you agree, or will you insist on more serious charges? Be happy, they tell you: you are allowed to argue against the confinement by appealing to our community (the community being the people who believe the authority blindly)! No: it is accepted in justice that any limitation of freedom must be justified, not the contrary. This is called the presumption of innocence. I suffer, I break my mind, when I know that the following invocations are the same, yet the second does not work for some queer reason:
         B() {
              super(computePrereq());   // compiles
         }
         B() {
              int r = computePrereq();  // why is this worse than the above? Where is the justice?
              super(r);                 // does not compile
         }
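    One well-known workaround, sketched below under the assumption that the superclass takes an int (all names are illustrative): since arbitrary expressions are legal as super() arguments, pre-computation and even logging can be hoisted into static helpers.

    class A {
         A(int x) { /* superclass work */ }
    }

    class B extends A {
         B() {
              super(log(computePrereq())); // compiles: the helpers run before super()
         }
         private static int computePrereq() {
              return 42;                   // any computation that needs no 'this'
         }
         private static int log(int arg) {
              System.out.println("super arg = " + arg);
              return arg;                  // pass the value through to super()
         }
    }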
    I think you can find more than a couple of C++
    best practices books and articles that will tell
    you how C++ constructors should be built.
    Wow, you refer me to C++ as a citadel of OOP best practices! OK: the C++ STL has a get-by-key method (find) in its set class template. Will this fact change the minds of the Java bigots? Obviously not; they will refuse your arguments the moment they hear "C++". I can look, but please be more specific: I might carefully explore a ton of books without finding any sign that "super second" is evil.
    My favorite language on the Wintel platform is Delphi. Many thoughtful people consider it a Wintel-native Java. However, there is not even a recommendation to call super constructors first, and broad experience suggests that there is nothing harmful in deferring the super construction; I do not feel any nasty frustration similar to that of exploiting gotos. Super destructors must indeed be called last, since they ultimately call the root superclass's (TObject) destructor, which deallocates the memory, so the object ceases to exist after the call. But constructors do not allocate any memory for the object fields; they just fill the fields in. The order of initialization is absolutely free; the initialization of all the fields could even be done in parallel for performance.
    Delphi also has an interface-resolution mechanism. Two interfaces conflict when they specify two aliasing methods. Will you say it is bad design if two interfaces happen to conflict? The divide-and-conquer design rule says the contrary: it is bad design when you must worry about other blocks while developing one. It is Java that has the design flaw here: you must care about the names of methods in other interfaces. This is the experience, collected from vast SW/HW methodologies, that I can share with the Java community.
    Show me why I cannot construct a human without
    first producing an ape?
    False analogy. I doubt that anyone would ever
    construct an inheritance hierarchy like that.
    I do not believe in God either ;) Evolution took care of itself.
    Or construct a tree without first producing a plant?
    And that would be logically inconsistent. A tree is
    a plant. If the plant does not exist then a tree cannot
    either.
    OK, which plant should I create before proceeding to construct a tree? Do you understand that Plant and Tree are classes, while an instance of Tree merely belongs to those classes? The classes are loaded prior to construction. Now, I'm creating a tree. It is not your concern how the tree is created. You just get a tree, and if it is a valid tree, that is, if it implements all the methods of the interface the tree class commits to implement, then it is also a plant. The construction technology is not important. The constructor knows better when to create a stem and leaves.
    Your analogies are a bit absurd. There are other
    more relevant ones though...
    Employee (parent) -> supervisor (child).
    Based on your argument it is stupid to have a
    supervisor who is an employee. Whereas in the world
    I live in, the vast majority of the time supervisors
    are employees. And in the cases where they aren't
    (rare contractual situations) then building a
    hierarchy that reflects that relationship is a design
    flaw which has nothing to do with construction.
    Where did I argue against this relationship? Nevertheless, if it bothers you, I can tell you my opinion. In the manager-workers pattern, the employee (also called a manager) is a job whose duty consists of finding the workers, handing out work to them, and accepting (supervising) the work done. It may also stimulate or motivate the workers, but that is not important. What's more important is that supervision is one of a manager's duties; any manager is a supervisor by definition. Thus, I can explain why Ape and Human are two different classes in an ancestor-descendant relationship, but I cannot explain manager-supervisor subclassing... I would establish an Employee class implementing its listed duty interfaces. At the very least, since multiple inheritance is not supported in Java, you'll have a problem deriving the Employee from both task initiator and inspector. If you like.
    The plant appears
    together with the tree. Talking in scientific
    language, being a plant is nothing more than an
    attribute of a tree (which cannot appear ahead of the
    object). They appear simultaneously and are
    indistinguishable as a single whole.
    You said it, not me. If you are using attributes and
    nothing else to construct inheritance then your
    design is flawed.
    Excuse me for not understanding or appreciating your humor. The "Tree" objects are attributed to the "Plants" class; don't you agree, or what?
    To put it bluntly, it is you who seem to use inappropriate terminology. In programming, the parent-children relationship is used for tree structures (has-a): a parent may have (or own) a set of children. However, no parent construction precedes the construction of every "child" object; otherwise, we would get too many parents, while what we need are descendants. The descendants have the attributes of their ancestors, but those are not separate objects, like key fields in DB records: all the fields are equal (if we do not touch the pragmatic level). The ancestor-descendant terminology is adopted in OOP when talking about inheritance; in Java, the super/sub-class notation is customary. This sharpens my confusion about which "parents" I should create before initializing any fields specific to the new object.
    Looks like a demagogic sentence. Let's think of a
    human as a brain-enlarged ape without a tail. Should I produce the ape first (and then modify it
    again?) or proceed directly?
    Again you are using a very bad example for
    inheritance.
    I fail to see why natural examples of inheritance are somehow "bad". OK, not all apes are small-brained, since humans break this rule.
    No, but I can make a loudspeaker without bringing its
    "parent" (presumably, a sound system) into "a valid
    state". I can write a Java program without bringing
    some "abstract" program into a valid state.
    Even worse example. A "sound system" is a
    collection of other objects.
    There is no way that a speaker is ever a child or a
    parent of an amplifier.
    Therefore, an abstract sound system may not have any concrete descendants? A sound system consumes a stream of data bits or a continuous electric signal and produces sound waves. The loudspeaker near me is full of controls: volume, bass, etc. What is it, if not an incarnation of a sound system?
    OK, here is another example. A floppy disk is a disk: should one produce a disk before making it floppy? Or a microwave oven is an oven: do you think I should create an oven prior to making it a microwave? Or an electric field: how do I create a field prior to adding the electric parameters? Or how do I construct a hammer, if you require me to precede its construction with the construction of an abstract instrument? How do manufacturers produce "parent" computers before attaching the "personal" properties? I'm sure there are millions of ways to construct a PC, but none of them requires producing some "parent" first.
    OK, which chapter of CS says that "parents" must be
    created prior to "children"? Should Intel start
    manufacturing their Pentium chips by producing an 8088
    core first?
    I am beginning to think that you have a substantially
    different view than most people about what
    inheritance means.
    I find it hard to believe that anyone, in any
    situation, would ever attempt to create an
    inheritance tree using the 8080 as the parent and a
    Pentium as the child. This is a historical
    relationship, not a child-parent one.
    The Pentium extends the functionality of the 8088 (both HW and SW). Engineers added more and more features to the core; finally, they got a huge monster. It still starts up as an 8088. A program can call on the extended capabilities if it knows about them; old programs run on a Pentium using its inherited 8088 interface.
    What do you mean by "parent in a valid
    state"? In construction, only ONE object is produced,
    which complies with its ancestor's interface.
    Constructors just aid the process. Maybe you wanted
    to say there is no analogy between OOP and
    nature?
    A tree is a plant. It is nonsense to suggest that a
    tree can exist before a plant exists.
    As I have said, I never suggested such nonsense. The Plant class exists long before you create an instance of a tree.
    That has nothing to do with the relationship between
    an acorn and an oak, however. Nor does the fact that
    an oak produces acorns mean that there is a child-
    parent relationship in terms of OO there.
    I never said that seeds are plants. Moreover, I do not see any relation here to your prior argument that the creation of an instance of an oak must be preceded by the creation of an instance of some abstract plant (belonging to the Plant class), whose basic structure the tree's constructor must then decorate.
    Please refer me to a book stating that the fields of a superclass must be initialized prior to (not after, and not concurrently with) the new fields exposed by a subclass.

  • Condition-when 2 movie clips touch

    hi everyone
    I want to make a condition which happens when one movie clip
    touches another one.
    I mean specific movie clips:
    if (a touches b)
    I thought of checking when a passes the _x of b, and the same for the
    _y, but that would work only if it's a square or a rectangle
    where the _y and _x are the object's limits; otherwise, it
    won't work.
    don't hurry to answer, and if you have something more fun or
    important to do, do it.
    i can wait

    that will work for mine too! mine are only circles that I
    wanted to make touch.
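
    For circles, a plain distance check works without any engine support. A minimal sketch of the test, shown in Java to match the code style elsewhere in this thread (in Flash the same arithmetic applies to the clips' _x/_y positions and radii):

      class CircleHit {
          // Two circles touch when the distance between their centers is at most
          // the sum of their radii; comparing squared values avoids a sqrt call.
          static boolean circlesTouch(double x1, double y1, double r1,
                                      double x2, double y2, double r2) {
              double dx = x2 - x1;
              double dy = y2 - y1;
              double rSum = r1 + r2;
              return dx * dx + dy * dy <= rSum * rSum;
          }
      }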

  • SSM 7 - How to create more than 100 standard KPIs on admin interface?

    Good day all,
    Has anybody successfully increased the number of standard KPIs that can be created in the Admin interface to more than 100?
    I have followed the instructions on page 39 of the 'Server configuration guide for SAP Strategy Management' (SP 2 or higher) PDF to modify the
    KPI limits, but I still get a popup in the Admin interface telling me that only 100 standard KPIs can be created when I try to add the 101st KPI.
    By the way, the procedure in the PDF mentioned above refers to changing the scorecard.pro file, but in this file the wording for a KPI, i.e.
    '&$KP99PARAM0STD', is not found; the wording "P" is used instead of "PARAM". In the file jscorecard.pro the wording "PARAM" is used.
    I have added 100 extra parameters to these files, but I still get the restriction of only creating 100 KPIs, as mentioned above.
    Please assist in this matter.
    Thanks in advance,
    T

    Hi Taariq,
    In the same server config guide, please check the section about Modifying the Objective Limits. The parameter you have to change there will influence both the Objective and KPI limits. Unfortunately this is not stated in that document yet...
    So in addition to the steps you have already implemented, try changing this property!
    Best regards,
    Ricardo Vieira

  • [REQUEST] Digital Paint: Paintball 2

    Hi to everyone:
    Could anyone package this?: Digital Paint: Paintball 2
    Paintball2 is a fast-paced first-person game with capture the flag, elimination, siege, and deathmatch (free-for-all) styles of gameplay. This project focuses on enhancing the Quake2-based engine it uses.
    With its high-value team objectives, limited-range projectiles, and intense maneuvering, Paintball 2 offers a unique experience that will appeal to both hardcore and novice players. It only takes one shot to eliminate somebody, but the fast-paced, arcade-style movement from Quake 2 allows for quick dodging, insane jumps, and breakneck speeds. Best of all, parts of the game have been released as free software.
    More info here
    A lot of thanks.

    Good morning Carlos,
    Does this happen in 100% of cases, or is it an individual occurrence?
    Did you upgrade recently? Did you install SLL-NFE-JWS SP 11?
    After the installation, did you do the cache refreshes (SXI_CACHE and, on the web, a full CPA cache refresh)?
    Regards, Fernando Da Ró

  • Custom Functionality / New Functionality / Utilities question

    Hypothetical question for everyone :
    If you could change anything about HFM or FDM, what would it be and why?
    If you could have any new features / utilities / tools what would they be and why?

    Tony,
    The 2 GB limit you are referring to is most likely the user-mode virtual address space limit, which is not .NET specific and affects any 32-bit process in Windows.
    User-mode virtual address space for each 32-bit process:
    - Limit on x86: 2 GB (3 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE and 4GT)
    - Limit in 64-bit Windows: 2 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE cleared (the default); 4 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE set
    To get the 3 GB on x86, you would have to alter the executable's PE header to enable the flag referenced above. This is also referred to as 4-Gigabyte Tuning ( http://msdn.microsoft.com/en-us/library/bb613473(v=vs.85).aspx)
    For a complete memory breakdown (and a brutal headache from trying to memorize all of these requirements), please see : http://msdn.microsoft.com/en-us/library/aa366778(v=vs.85).aspx#memory_limits
    .NET does, however, have some memory limitations that can cause headaches for developers and for users of their programs.
    The .NET limitation is not a 'per thread' or even 'per program' limitation; rather, it's a 'per object' limitation: a heap object cannot exceed 2 GB of contiguous memory (no longer an issue with 64-bit versions). An example would be creating an array object to hold all records from an FDM file. If you had a data load file that was > 2 GB AND the software were to load all of it into one object (say an array or file object), then you would have a problem.
    With that said, a couple of comments:
    - Situations where more than 2 GB per object is needed are VERY RARE.
    * In 32-bit Windows environments, .NET would almost never be the limitation, as the process is limited to 2 GB; after overhead, there wouldn't even be 2 GB of memory left for the .NET limitation to become the issue.
    * In 64-bit Windows, it's not an issue anymore.
    - Are you REALLY hitting the 2 GB limit?
    * As pointed out earlier, the limit is up to 2 GB of contiguous space. Perhaps, due to a lot of instantiating/releasing of objects, the address space is fragmented and the most you can allocate is much less than 2 GB?
    * You may also not be releasing other objects properly, which would reduce the amount of available memory. It could be that the current .NET code has poor memory management and was just trusting garbage collection to eventually clean everything up. For basic programs, that works just fine; for something that could potentially push the upper limits, it wouldn't work so well.
    * To look at both of these scenarios, you'd probably want to make use of the CLR Profiler tool, as it could answer some questions.
    - There are workarounds for the 2 GB limit in Win32 that are not that difficult to implement! If the FDM code base is relatively clean and this is "the" issue, I'd be curious to understand why they are not looking at workarounds. They could create a few 'memory manager' classes, plug everything into those, and keep on trucking; see the sketch below.
    Anyway, I find it implausible they are going to rewrite FDM simply because of a 2 GB .NET limitation. It could be a factor (especially if the base code is ugly), though.
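
    To illustrate the 'memory manager' idea mentioned above, here is a hedged sketch (in Java rather than .NET, purely to show the pattern): storage is split into fixed-size segments so that no single object ever needs one giant contiguous allocation.

      class ChunkedByteStore {
          private static final int CHUNK = 64 * 1024 * 1024; // 64 MB segments (an assumption)
          private final byte[][] chunks;

          ChunkedByteStore(long totalBytes) {
              int n = (int) ((totalBytes + CHUNK - 1) / CHUNK);
              chunks = new byte[n][];
              for (int i = 0; i < n; i++) {
                  long remaining = totalBytes - (long) i * CHUNK;
                  chunks[i] = new byte[(int) Math.min(CHUNK, remaining)];
              }
          }
          // Index into the right segment; no single array exceeds CHUNK bytes.
          byte get(long index) { return chunks[(int) (index / CHUNK)][(int) (index % CHUNK)]; }
          void set(long index, byte b) { chunks[(int) (index / CHUNK)][(int) (index % CHUNK)] = b; }
      }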

  • Limitation on number of objects in distributed cache

    Hi,
    Is there a limitation on the number (or total size) of objects in a distributed cache? I am seeing a big increase in response time when the number of objects exceeds 16,000. Normally, the ServiceMBean.RequestAverageDuration value is in the 6-8ms range as long as the number of objects in the cache is less than 16K - I've run our application for weeks at a time without seeing any problems. However, once the number of objects exceeds the magic number of 16K the average request duration almost immediately jumps to over 100ms and continues to climb as more objects are added.
    I'm fairly confident that the cache is indexed properly (as Dimitri helped us with that). Are there any configuration changes that could possibly help out here? We are using Coherence 3.3.
    Any suggestions would be greatly appreciated.
    Thanks,
    Jim

    Hi Jim,
    The results from the load test look quite normal: the system fairly quickly stabilizes at a particular performance level and remains there for the duration of the test. In terms of latency, we see that the cache.putAll operations take ~45 ms per bulk operation, where each operation puts 100 1K items; for cache.getAll operations we see about ~15 ms per bulk operation. Additionally, note that the test runs over 256,000 items, so it is well beyond the 16,000 limit you've encountered.
    So it looks like your application is exhibiting different behavior than this test. You may wish to configure this test to behave as similarly to yours as possible. For instance, you can set the size of the cache to just over/under 16,000 using the -entries parameter, set the size of the entries to 900 bytes using the -size parameter, and set the total number of threads per worker using the -threads parameter.
    What is quite interesting is that at 256,000 1K objects, the latency measured with this test is apparently less than half the latency you are seeing with a much smaller cache size. This would seem to suggest that the issue is related to, or rooted in, your test. Would you be able to provide a more detailed description of how you are using the cache and the types of operations you are performing?
    thanks,
    mark
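
    If it helps reproduce the comparison, below is a minimal sketch of such a bulk-put timing loop against the standard Coherence NamedCache API (the cache name, batch size, and entry count are assumptions based on the numbers quoted above):

      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.NamedCache;
      import java.util.HashMap;
      import java.util.Map;

      public class PutAllTimer {
          public static void main(String[] args) {
              NamedCache cache = CacheFactory.getCache("test"); // cache name is an assumption
              byte[] payload = new byte[1024];                  // ~1 KB values, as in the test
              for (int batch = 0; batch < 170; batch++) {       // 17,000 entries: crosses the 16K mark
                  Map bulk = new HashMap();
                  for (int i = 0; i < 100; i++) {
                      bulk.put(Integer.valueOf(batch * 100 + i), payload);
                  }
                  long start = System.currentTimeMillis();
                  cache.putAll(bulk);                           // 100 entries per bulk operation
                  System.out.println("putAll(100) took " + (System.currentTimeMillis() - start) + " ms");
              }
          }
      }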

  • Memory limitation for session object!

    what is the memory limitation for using session objects?
    venu

    As already mentioned, there is no actual memory limitation within the specification; it only depends on the JVM's settings.
    How different app servers handle memory management of session objects is another part of the puzzle, but in general you should not have problems writing any object to the session.
    We once had the requirement to keep big objects in the session; we decided to build a ResourceFactory that returns the objects, and to store only unique IDs in the session.
    We could later build on this and perform special serialization of big objects in the distributed environment.
    Dietmar
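
    A minimal sketch of the pattern Dietmar describes, assuming the servlet API; ResourceFactory is the (hypothetical) name from the post, and the registry shown here is a naive in-memory stand-in:

      import java.util.Map;
      import java.util.UUID;
      import java.util.concurrent.ConcurrentHashMap;
      import javax.servlet.http.HttpSession;

      // Keep only a small unique id in the session; resolve the heavyweight
      // object through a factory on demand.
      class ResourceFactory {
          private static final Map<String, Object> STORE = new ConcurrentHashMap<String, Object>();

          static String register(Object big) {
              String id = UUID.randomUUID().toString();
              STORE.put(id, big);
              return id;
          }
          static Object lookup(String id) {
              return STORE.get(id);
          }
      }

      class SessionHelper {
          static void save(HttpSession session, Object bigObject) {
              session.setAttribute("bigRefId", ResourceFactory.register(bigObject));
          }
          static Object load(HttpSession session) {
              return ResourceFactory.lookup((String) session.getAttribute("bigRefId"));
          }
      }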
