Algorithm practice

So I am playing around with some different search algorithms; one of them is:
public static int bubbleUp (String[] list, String partNumber) {
    // comparisons counts the elements inspected; start at 1 so the final,
    // matching comparison is included in the total
    int comparisons = 1;
    // moves counts the assignments made when the item is bubbled up
    int moves = 0;
    // search for partNumber in list
    for (int i = 0; i < list.length; i++) {
        if (!partNumber.equals (list [i])) {
            // not found yet: add one to comparisons
            comparisons++;
        } else {
            // found: if it is not already at the front, swap it one slot
            // towards the front ("bubble up") and count the two moves
            if (i > 0) {
                list [i] = list [i - 1];
                list [i - 1] = partNumber;
                moves += 2;
            }
            break;
        }
    }
    // calculate and return the sum of comparisons and moves
    int sumNumbers = comparisons + moves;
    return sumNumbers;
}
Now I am trying to use a two-dimensional array rather than a one-dimensional one to search for different objects.
My question is: how would you change the code I wrote above to work for 2D arrays?

There's a saying, "code to the interface, not the implementation", and that saying can be applied here; all your algorithm needs to know is this:

public interface BubbleUpable {
   public Object get(int index);
   public void set(int index, Object obj);
   public int length();
}

One-dimensional arrays can be wrapped in a simple wrapper class like this:

public class ArrayWrapper implements BubbleUpable {
   private String[] a;
   public ArrayWrapper(String[] a) { this.a = a; }
   // interface implementation
   public Object get(int index) { return a[index]; }
   public void set(int index, Object obj) { a[index] = (String)obj; }
   public int length() { return a.length; }
}

I'm sure a bit of Java 1.5 can take away some of the explicit casting by applying generics here.
If you rewrite your algorithm so that it uses a BubbleUpable instead of a String[] array, you can implement another wrapper (see above) that wraps around a two-dimensional array. Your algorithm wouldn't care about the implementation at all.
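For example, such a wrapper for a two-dimensional array could look roughly like this (just a sketch; it assumes a rectangular array and flattens row/column into one running index):

public class TwoDimWrapper implements BubbleUpable {
   private String[][] a;
   private int cols;
   public TwoDimWrapper(String[][] a) { this.a = a; this.cols = a[0].length; }
   // treat the 2D array as one long list: index = row * cols + column
   public Object get(int index) { return a[index / cols][index % cols]; }
   public void set(int index, Object obj) { a[index / cols][index % cols] = (String) obj; }
   public int length() { return a.length * cols; }
}

Your bubbleUp method would then take a BubbleUpable parameter and call get/set/length instead of indexing the array directly.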
kind regards,
Jos

Similar Messages

  • LRCC Face recognition - best practices?

    Ok so we are all new to the wonderful world of face recognition in LR.  I'm trying to work out what would be the best practices for using this.
    A little bit of background - I have a catalog of over 200,000 images. In addition to portrait and wedding clients, a significant part of my work is with models and another significant part is theatre photography. I have been wanting some sort of face recognition to help with both for some time.
    What are your naming conventions for people? Here's mine:
    Ideally I would label people as "surname, firstname" so that I can keep members of a family together in the "named people" display, but commas are not allowed in names. Also, the professional names of many models don't fit that pattern, e.g. "Strawberry Venom" or "Cute as Sin" are two models I have worked with.
    I am trying to come up with a sensible naming convention; at the moment it is "Surname/ Firstname" for clients, theatre folk and friends/family. Models are still a problem; at present I am thinking of "Surname/ Firstname (model name(s))". While I may not be able to remember the real names of models, I do usually know the names from model releases. This naming will still permit me to filter/find them in the Keyword List panel by just entering the model name.
    One final addition I am making to this naming convention is the use of a hashtag suffix to the name: #F for friends and family, #C for clients, #T for theatre/actors and #M for models. This enables me to filter on just models, or just actors, or just friends and family. Where people fall into multiple categories I add multiple hashtags. So photos of me would be keyworded with "Butterfield/ Ian #F #T".
    Unknown / unidentified people.
    What I am not yet certain about is how to handle unknown / unidentified people.  Unidentified people fall into a number of different categories.
    People I don't know and I am never likely to know (Eg random strangers on the street, local tour guides on holiday, random people in the background etc)
    This group is relatively easy to deal with - simply delete the face recognition. End of story.
    People I don't know the names of yet but I am likely to find out (Eg actors in a production for which I don't have a programme)
    For these people I am making up a unique name using the format "date/ Context-Gendernn". Eg an unknown male actor at Stockport Garrick Theatre would be named "20150313/ SGT-M01". Although this may appear a complex solution it has a number of advantages. If/when I do learn the name of the individual (Eg I photograph them in a different production) it is simply a case of renaming the people keyword. Creating a unique name, and not simply assigning all unknowns to a bucket name, will help the face recognition algorithms find this person without being confused by different faces assigned to the same name. I am also using the hashtag #U to make it easier to filter the unknown faces when I need to.
    People I don't know the names of and there is only a slim possibility of meeting/photographing again (Eg guests at a client wedding)
    It feels as though I ought to just delete the face recognition and have done with it, and this is what I would do except for one thing. Other than manually drawing face regions, I have not yet found a way to get Lightroom to rescan a folder for faces if you have previously deleted the face recognition. This means that deleting face regions from a large number of people is something that cannot be easily reversed. I might just leave these people in the "Unnamed People" category... at least until such time as there is a way to rescan a folder or collection.
    Summary
    My practices are still evolving, but I hope these thoughts and ideas will help others think through the issues and come up with solutions that work for their situation. I am interested in hearing how other people are using the face recognition system, especially if anyone is aware of any 'best practices' that Adobe or anyone else has recommended.

    Glad it helped.
    Yes and no. You can still put the people keywords into hierarchies within the keyword list - you can arrange them just like any other keywords. So you just create a "smith family" keyword and store "john smith" under it. What you can't do is apply BOTH smith family and john smith to the same face.
    My use of the hashtags came about because I initially had a top-level keyword for models, one for clients, one for theatre people and one for family and friends. Then I discovered that some of the theatre folk were also clients (headshots), and wondered what to do when a friend is also a client. So the hashtag system means a person can be a friend, a model and an actor as well as being a client! (#T #C #M #F).

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set up an OracleVM farm, and I have a question about "best practice" for
    configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA capable VMs using OCFS2 (on top of NFS.)
    I'm trying to decide between 2 possible configurations. The first would keep physical separation
    between the mgt/storage networks and the DomU networks. The second would just trunk
    everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit
    from dedicated bandwidth and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding

  • Best practice for Video over IP using ISDN WAN

    I am looking for the best practice to ensure that the WAN has sufficient active ISDN channels to support the video conference connection.
    Reliance on a load threshold either
    - takes too long for the ISDN calls to establish, causing problems for the video setup, or
    - is too fast to place additional ISDN calls when only data is using the line.
    What I need is for the ISDN calls to be pre-established just prior to the video call. I have done this in the past with the "ppp multilink links minimum" command, but this manual intervention isn't the preferred option in this case.
    thanks

    This method is as secure as the password: an attacker can see
    the hashed value, and you must assume that they know what has been
    hashed, with what algorithm. Therefore, the challenge in attacking
    this system is simply to hash lots of passwords until you get one
    that gives the same value. Rainbow tables may make this easier than
    you assume.
    Why not use SSL to send the login request? That encrypts the
    entire conversation, making snooping pointless.
    You should still MD5 the password so you don't have to store
    it unencrypted on the server, but that's a side issue.
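    As a rough illustration of the hashing step (a minimal sketch, not the poster's code; the class name is made up and salting is deliberately left out), java.security.MessageDigest can produce the MD5 digest like this:

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.nio.charset.StandardCharsets;

    public class PasswordHash {
        // Returns the MD5 digest of the password as a lowercase hex string.
        public static String md5Hex(String password) throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        }
    }

    In practice a per-user salt and a slower algorithm would be preferable, for exactly the rainbow-table reason mentioned above.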

  • Bandwidth Utilization Avg or Max for capacity Planning best practice

    Hello All - This is a conceptual, non-Cisco-product question. I hope you can help me find the best industry practice.
    I am doing capacity planning for the WAN link bandwidth. Studying the last month's bandwidth utilization in the MRTG graph, I see two values:
    Average
    Maximum
    To measure how much bandwidth my remote location is using, which value should I use: Average or Max?
    Average is always low, e.g. 20% to 30%.
    Maximum is a continuous 100% for 3 hours in 3 different intervals in a day and becomes 60% for the rest of the day.
    What is the best practice followed in the networking industry to derive the required bandwidth upgrade from the utilization graph?
    regards,
    SAIRAM

    Hello.
    It makes no sense to use the average over a whole day (or a month), as you do capacity management to avoid business impact due to link utilization, and the average does not help you catch whether the end users experience any performance issues.
    Typically your capacity management algorithm/thresholds depend on traffic patterns, as these are really different cases if you run SAP+VoIP vs. YouTube+Outlook. If you have any business-critical traffic, you need to deploy QoS (unless you are allowed to increase link bandwidth infinitely).
    So, I would recommend using the 95th percentile of the maximum values on a 5-15 minute interval (your algorithm/thresholds will be really sensitive to the polling interval, so choose it carefully); a rough sketch of the percentile calculation follows below. After collecting a baseline (for a month or so), go and ask users about their experience and try to correlate poor experience with traffic bursts. This will help you define thresholds for link upgrade triggers.
    PS: proactive capacity management includes link planning for new sites and their impact on existing links (in the HQ and other spokes).
    PS2: I would also recommend separately tracking utilization during business hours (business traffic) and non-business hours (service or backup traffic).
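    As a rough sketch of the percentile idea (nearest-rank method; the class name and sample values are illustrative, not from this thread), the 95th percentile of a set of utilization samples could be computed like this:

    import java.util.Arrays;

    public class Percentile {
        // Returns the 95th percentile of the samples using the nearest-rank method.
        public static double p95(double[] samples) {
            double[] sorted = samples.clone();
            Arrays.sort(sorted);
            int rank = (int) Math.ceil(0.95 * sorted.length); // 1-based nearest rank
            return sorted[Math.max(0, rank - 1)];
        }
    }

    For example, p95 over a month of 5-minute maximum-utilization samples gives the value that is exceeded only 5% of the time, which is a more useful upgrade trigger than the plain average.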

  • 2K8 - Best practice for setting the DNS server list on a DC/DNS server for an interface

    We have been referencing the article 
    "DNS: DNS servers on <adapter name> should include their own IP addresses on their interface lists of DNS servers"
    http://technet.microsoft.com/en-us/library/dd378900%28WS.10%29.aspx but there are some parts that are a bit confusing. In particular, there is this statement:
    "The inclusion of its own IP address in the list of DNS servers improves performance and increases availability of DNS servers. However, if the DNS server is also a domain
    controller and it points only to itself for name resolution, it can become an island and fail to replicate with other domain controllers. For this reason, use caution when configuring the loopback address on an adapter if the server is also a domain controller.
    The loopback address should be configured only as a secondary or tertiary DNS server on a domain controller."
    The paragraph switches from using the term "its own IP address" to "loopback" address. This is confusing because technically they are not the same. Loopback addresses are 127.0.0.1 through 127.255.255.255. The resolution section then
    goes on and adds the "loopback address" 127.0.0.1 to the list of DNS servers for each interface.
    In the past we always set up DCs to use their own IP address as the primary DNS server, not 127.0.0.1. Based on my experience and reading the article I am under the impression we could use the following setup.
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  127.0.0.1
    I guess the secondary and tertiary addresses could be swapped based on the article. Is there a document that provides clearer guidance on how to set up the DNS server list properly on Windows 2008 R2 DC/DNS servers? I have seen some other discussions
    that talk about the pros and cons of using another DC/DNS as the Primary.  MS should have clear guidance on this somewhere.

    Actually, my suggestion, which seems to be the mostly agreed method, is:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    The tertiary more than likely won't be hit (besides being superfluous, the list will reset back to the first one) due to the client-side resolver algorithm's time-out process, as I mentioned earlier. Here's a full explanation of how
    it works and why:
    This article discusses:
    WINS NetBIOS, Browser Service, Disabling NetBIOS, & Direct Hosted SMB (DirectSMB).
    The DNS Client Side Resolver algorithm.
    If one DC or DNS goes down, does a client logon to another DC?
    DNS Forwarders Algorithm and multiple DNS addresses (if you've configured more than one forwarders)
    Client side resolution process chart
    http://msmvps.com/blogs/acefekay/archive/2009/11/29/dns-wins-netbios-amp-the-client-side-resolver-browser-service-disabling-netbios-direct-hosted-smb-directsmb-if-one-dc-is-down-does-a-client-logon-to-another-dc-and-dns-forwarders-algorithm.aspx
    DNS
    Client side resolver service
    http://technet.microsoft.com/en-us/library/cc779517.aspx 
    The DNS Client Service Does Not Revert to Using the First Server in the List in Windows XP
    http://support.microsoft.com/kb/320760
    Ace Fekay
    MVP, MCT, MCITP EA, MCTS Windows 2008 & Exchange 2007 & Exchange 2010, Exchange 2010 Enterprise Administrator, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
    This posting is provided AS-IS with no warranties or guarantees and confers no rights.
    I agree with this proposed solution as well:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    One thing to note, in this configuration the Best Practice Analyzer will throw the error:
    The network adapter Local Area Connection 2 does not list the loopback IP address as a DNS server, or it is configured as the first entry.
    Even if you add the loopback address as a Tertiary DNS address the error will still appear. The only way I've seen this error eliminated is to add the loopback address as the second entry in DNS, so:
    Primary DNS:  The assigned IP of another DC (i.e. 192.168.1.6)
    Secondary DNS: 127.0.0.1
    Tertiary DNS:  empty
    I'm not comfortable not having the local DC/DNS address listed so I'm going with the solution Ace offers.
    Opinion?

  • Best practice for maximum performance

    Hi all
    OS : Linux AS 4.2
    Oracle 9.2.0.8
    My database size is around 4TB (increasing by 5GB every day) and we are following a method of analyzing only the main tables' latest partitions, because collecting the entire statistics would take more time. Is that a good practice?
    Is there any difference between the analyze table command and using the dbms_stats package to collect statistics?
    Will collecting entire statistics, or estimating 33% or 15%, make any difference (performance level)?
    Indexes are one of the main factors which affect database performance. As a DBA, what are all the things I should do on a daily basis to maintain indexes in a proper state?
    I just wanted to know the best way of managing the statistics and indexes properly.
    Many thanks in advance
    Nishant Santhan

    Nishant Santhan wrote:
    Hi all
    OS : Linux AS 4.2
    Oracle 9.2.0.8
    My database size is around 4TB (increasing by 5GB every day) and we are following a method of analyzing only the main tables' latest partitions, because collecting the entire statistics would take more time. Is that a good practice?
    Nishant,
    it depends, as always. If you have (important or large) queries that rely on global-level statistics then the statistics generated by "aggregation" (provided you've complete statistics on partition-level) might be insufficient, or they might become more and more outdated if you've generating them once and don't update them any longer (provided you're using DBMS_STATS, ANALYZE is not capable of generating genuine global-level statistics).
    Note that 11g offers here a significant improvement with the "incremental" global statistics. See Greg Rahn's blog note about this interesting enhancement:
    http://structureddata.org/2008/07/16/oracle-11g-incremental-global-statistics-on-partitioned-tables/
    >
    Is there any difference between the analyze table command and using the dbms_stats package to collect statistics?
    Definitely there are, more or less subtle ones. Regarding partitioning, as already mentioned above, DBMS_STATS is capable of generating "real" top-level statistics at the price of "duplicate" work (analyzing the table as a whole potentially reads the same data again that has been used to generate the partition-level statistics), but see the comment about 11g above.
    There are more subtle differences, e.g. regarding the calculated average row length, etc.
    >
    Will collecting entire statistics, or estimating 33% or 15%, make any difference (performance level)?
    It all depends on your data. 1% might be sufficient to get the plans right, but sometimes even 33% might not be good enough. 11g adds here another significant improvement using a cunning "AUTO_SAMPLE_SIZE" algorithm that seems to get quite close to "computed" statistics while taking far less time:
    http://structureddata.org/2007/09/17/oracle-11g-enhancements-to-dbms_stats/
    Indexes are one of the main factors which affect database performance. As a DBA, what are all the things I should do on a daily basis to maintain indexes in a proper state?
    B*tree indexes should in general be in a good condition without manual interference; there are only a few circumstances where you should consider performing manual maintenance, and most of these can be covered by the "COALESCE" operation. An actual "REBUILD" should not be required in most cases.
    You might want to read Richard Foote's blog about indexes:
    http://richardfoote.wordpress.com
    and Jonathan Lewis' notes about index rebuilding:
    http://www.jlcomp.demon.co.uk/indexes_i.html
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Finished taking Sprinkler CLD Practice Exam

    I am planning on taking my CLD this coming week, and just finished taking this practice exam. Since I studied the car wash and ATM solutions I decided to go for the Sprinkler practice exam. The "Sprinkler CLD.zip" file is the result of 4 dedicated hours of my Saturday.
    I ran the VI analyzer on all VIs and CTLs and I'm not impressed with myself. Could somebody tell me how they think I would score?
    I looked at the solution for the Sprinkler.vi and it's clear that my approach is nothing like the solution from NI. This could be a good or a very bad thing. 
    It appears quick comments could mean a lot if the graders depend heavily on the VI Analyzer. It appears that I should have at least two comments in each VI, and should not only have the documentation section filled in for the VI but also for the controls.
    It's clear that I missed some wires when I resized my case select boxes.
    After finishing the exam and then looking back I see there is a possible lock-out condition on initialization that would prevent the VI from reading the CSV file. I shouldn't have created a "READ CSV" state. If I had placed the "READ CSV FILE" inside the "Power Up Configuration" state there would be no issues. I should have restarted LabVIEW in my last hour. If the VI starts up with the Water Pressure above 50% and No Rain then the CSV file is read and there is no problem. This would have been an obvious mistake had I restarted LabVIEW.
    I realize that I missed some of the specifications. For example, if it starts raining during a sequence it is supposed to restart the sequence, not pause it.
    There are few comments in the code. I usually add many comments to my code, but this is my first time using a simple state engine.
    At work I have a large infrastructure already in place complete with error handling and task management. I am also used to working on multiple monitors; during the test I only used one. Even if I didn't pass this practice exam, at least having a dry run outside my normal work conditions was very good practice.
    I spent time practicing earlier and can build the Timer.VI in about 8 minutes. A functional global timer seems to be a common theme in the practice exams.
    Does anybody have any ideas or suggestions?
    Do you think I would have passed the CLD exam with this test?
    Comments?
    Regards,
    Attachments:
    VI Analyzer Results.zip ‏4 KB
    Sprinkler CLD.zip ‏377 KB

    There are a lot of good things in your code, you are nearly there. I haven't run your code, so this is more style and documentation comments.
    If I were you, I would concentrate on the following:
    Wire the error through all your subVIs and put your subVI code in an error/no-error case structure. If you had done that, you wouldn't have needed the flat sequence structure in your code.
    You haven't even wired the error to the subVIs that have error terminals; this will cost you points.
    Label any constants on the block diagram.
    Brief description of Algorithm on each VI block diagram.
    You could have avoided using local variables, for example Run Selector, as this control is available in the cluster. So just an Unbundle By Name would have given you the value of that control. If you do use them, then make sure you state why (for example efficiency etc.) in a block diagram comment.
    Some subVIs are missing VI documentation; this won't be taken lightly.
    Using default value when unwired (for your while loop stop) is not recommended. This was specifically discussed during a CLD preparation class I attended not so long ago.
    While icons are pretty, I wouldn't waste time trying to find glyphs for your subVIs; a consistent text-based icon scheme is perfectly acceptable. You can do this if you do have extra time, but it won't fetch you extra points.
    LabVIEW 2012 has subdiagram labels; you can enable them by default in Tools>>Options. Adding comments in each of the cases is recommended.
    The main thing is time management, and make sure you read other posts/blogs on the CLD. I would also recommend Quick Drop; if you haven't started using it, it may not be a good idea to do so now for your exam next week, but in general it is very useful and saves time.
    Hope this helps.
    Beginner? Try LabVIEW Basics
    Sharing bits of code? Try Snippets or LAVA Code Capture Tool
    Have you tried Quick Drop?, Visit QD Community.

  • Color Picker scripting or Levels algorithm help

    First question is: does anyone know of a good explanation of the Levels algorithm, as in how each setting affects a pixel in an image? If I change the midpoint of the levels, how would a specific pixel change in relation to that? I've been experimenting for hours and can't figure out a common factor, other than it seems to be a binary-type relationship. The reason I ask this is because I'm trying to script something that will balance colors.
    If that method isn't practical, I can go to the old fashioned trial and error method but this way also presents a roadblock to me. I set a color picker point and the script can obtain the values from that point exactly as it is in the Info panel. If I put a levels adjustment layer over top and adjust it, I now see the original color value and the adjusted color value in the Info panel, but I can't figure out how to obtain the adjusted value with a script. It still returns the original value. Does anyone know a way to obtain the adjusted value?
    I hope I explained this right.
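    (For reference, the Levels operation is commonly described as: normalize the value to the input black/white range, apply the midpoint as a gamma exponent, then rescale to the output range. The sketch below follows that commonly documented model; it is not guaranteed to match Photoshop's exact internal math, and the method and parameter names are illustrative.)

    public static int levels(int value, int inBlack, int inWhite,
                             double gamma, int outBlack, int outWhite) {
        // 1. normalize the input range and clamp to [0, 1]
        double v = (value - inBlack) / (double) (inWhite - inBlack);
        v = Math.max(0.0, Math.min(1.0, v));
        // 2. apply the midpoint (gamma) correction
        v = Math.pow(v, 1.0 / gamma);
        // 3. map into the output range
        return (int) Math.round(outBlack + v * (outWhite - outBlack));
    }

    With gamma above 1.0 the midtones are lifted while pure black and white stay fixed, which is why the change per pixel looks non-linear.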

    Thanks, Michael.
    I'll have to look through that post on ps-scripts.com in more detail. That might be what I need.
    This little snippet you wrote:
    Michael L Hale wrote:
    This thread may help with the levels part. http://ps-scripts.com/bb/viewtopic.php?t=2498
    As for the adjustment layer you need to get the color twice. Once with the adjustment layer visible then again with it not visible.
    var csColor = activeDocument.colorSamplers[0].color;
    activeDocument.layers.getByName('Levels').visible = false;
    var csColor2 = activeDocument.colorSamplers[0].color;
    alert( csColor2.rgb.red + " : " + csColor.rgb.red );
    doesn't get me the before and after values. Example: The point I selected has a red value of 226. I added a Levels adj layer on top and moved the midpoint so the red value at that point (adjusted) was 234. I ran your code and it came back with 225.591439688716 : 225.591439688716. It isn't showing the adjusted value of that point.

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We’re trying to determine the proper balance of heap size vs num of cache instances and have ended up with the following configuration. 7 cache instances per node running with 2G heap using a high-units value of 1.5G. Our testing has shown that using the Concurrent Mark Sweep GC algorithm warrants no substantial GC pauses and we have also done testing with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java) which also shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context switching overhead as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern here is that we’re straying from the best practices recommendations and I’m wondering what others thoughts are about the configuration outlined above.
    Thanks,
    - Allen Bettilyon


  • ESXi 4.1 NIC Teaming's Load-Balancing Algorithm,Nexus 7000 and UCS

    Hi, Cisco Gurus:
    Please help me in answering the following questions (UCSM 1.4(xx), 2 UCS 6140XP, 2 Nexus 7000, M81KR in B200-M2, no Nexus 1000V, using VMware Distributed Switch):
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned?
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct?
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what are the BEST PRACTICES?
    I would really appreciate if someone can help me clear these lingering doubts of mine.
    God Bless.
    SiM

    Sim,
    Here are my thoughts without a 1000v in place,
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?   //Yes, for vPC to UCS the best practice is to bowtie uplink to (2) 7K or 5Ks.
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned? //The port channel will be configured on both the UCSM and the 7K. The pro of a port channel would be both bandwidth and redundancy. vPC would be prefered.
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct? //Without the 1000v, I always tend to leave the dvSwitch load balancing behavior at the default of "route by portID".
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what are the BEST PRACTICES? //UCS can perform L2, but northbound should be performing L3.
    Cheers,
    David Jarzynka

  • ACE Load Balancing algorithm

    Team,
    I was reading Designing Content Switching Solutions last night. I came across a page that suggested Round Robin for HTTP connections, Least Conns for FTP connections, dst Hash for caching connections and so on.
    Could someone please provide information or a link on which load balancing algorithm to use based on the application, is there some form of best practice for this?
    Thank you,
    John...

    John,
    there are no best practices.
    It depends on your applications and needs.
    For example, for caching, some people prefer to optimize the disk space, and other the response time.
    So, if you do destination hash, you guarantee that all traffic for one site is always handled by the same cache.
    Therefore you optimize the disk space since you will not find the same object on all caches.
    BUT if one site attracts a lot of connections, the cache device that handles that site will be overloaded (for example youtube.com)
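    (As a rough sketch of what destination hashing amounts to: the same destination always maps to the same cache index. The helper below is purely illustrative Java, not ACE configuration.)

    public static int pickCache(String destinationIp, int numberOfCaches) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(destinationIp.hashCode(), numberOfCaches);
    }

    So every request for a given site lands on the same cache, which is good for disk usage but concentrates the load.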
    Leastconn is a good option in theory.
    The device that has less connections should receive the next one.
    The problem is if you have flapping links or servers crashing or if you do a lot of maintenance and add/remove servers frequently.
    This confuses the algorithm and is the source of a lot of bugs.
    My recommendation is to go with roundrobin unless you have identified that you really need another algorithm.
    And you can always start with roundrobin and see what happens...
    Gilles.

  • What's the best practice to manage the page file?

    We have one Hyper-V server running Windows 2012 R2 with 128 GB RAM and 2 drives (C and D). It is set up with "Automatically manage paging file size for all drives". What's the best practice to manage the page file?
    Bob Lin, MCSE & CNE Networking, Internet, Routing, VPN Networking, Internet, Routing, VPN Troubleshooting on http://www.ChicagoTech.net How to Install and Configure Windows, VMware, Virtualization and Cisco on http://www.HowToNetworking.com

    For Hyper-V systems, my general recommendation is to set the page file to 1-4 GB. This allows for a mini-dump should something happen. 99.99% of the time, Microsoft will be able to figure out the cause of the problem from the mini-dump. It does not make
    sense on a Hyper-V system to set aside enough space to capture all the memory on the system because only a very small portion of that memory is used by the parent partition. Most of the memory is under control of the individual VMs.
    Yes, I had one of the Hyper-V product group tell me that I should let Windows manage it.  A couple of times I saw space on my system disk disappear because the algorithm decided it wanted all the space for the page file.  Made it so I couldn't
    patch my systems.  Went back in and set the page file to 1-4 GB and have not had any issues since.
    . : | : . : | : . tim

  • Best practice: managing FRA

    I was not sure this was the most appropriate forum for this question; if not, please feel free to make an alternative suggestion.
    For those of us who run multiple databases on a box with shared disk for FRA, I am finding the extra layer of ASM and db_recovery_file_dest_size to be a minor inconvenience. The Best Practice white papers I have found so far say that you should use db_recovery_file_dest_size, but they do not specify how you should set it. Currently, we have been setting db_recovery_file_dest_size rather small, as the databases so far are small and even at 3x the database size, the parameter is still significantly smaller than the total disk available in that diskgroup.
    So, my question; is there any downside to setting db_recovery_file_dest_size equal to the total size of the FRA diskgroup for all databases? Obviously, this means that the amount of free space in the diskgroup may be consumed even if db_recovery_file_dest_size is not yet full (as reflected in the instance V$RECOVERY_FILE_DEST). But is that really a big deal at all? Can we not simply monitor the FRA diskgroup, which we have to do anyway? This eliminates the need to worry about an additional level of disk management. I like to keep things simple.
    The question is relevant to folks using other forms of volume management (yes, I know, ASM is "not a volume manager"), but seems germane to the ASM forum because most articles and DBAs that I have talked to are using ASM for FRA.
    Most importantly, what ramifications does "over-sizing" db_recovery_file_dest_size have? Aside from the scenario above.
    TIA

    As a general rule, the larger the flash recovery area (db_recovery_file_dest_size), the more useful it becomes. Ideally, the flash recovery area should be large enough to hold a copy of all of your datafiles and control files, the online redo logs, and the archived redo log files needed to recover your database using the datafile backups kept under your retention policy.
    Setting the size of DB_RECOVERY_FILE_DEST_SIZE must be based on the following factors:
    1) your flashback retention target,
    2) which files you are storing in the flash recovery area, and
    3) if that includes backups, then the retention policy for them or how often you move them to tape.
    The bigger the flash recovery area, the more useful it becomes. Setting it much larger than, or equal to, your FRA disk group does not cause any overhead that is not known to Oracle.
    But there are reasons why Oracle lets you define a disk limit, which is the amount of space that Oracle can use in the flash recovery area out of your FRA disk group.
    1) A disk limit lets you use the remaining disk space for other purposes and not to dedicate a complete disk for the flash recovery area.
    2) Oracle does not delete eligible files from the Flash Recovery Area until the space must be reclaimed for some other purpose. So even though your database size is 5GB and your retention target is very small, if your recovery_dest_size is much larger it will just keep filling.
    3) Say in my case I have one FRA disk group of 150GB shared by 3 different databases. Based on the nature and criticality of the databases I have different size requirements for the flash recovery area of each. So I use varying db_recovery_file_dest_size values (30GB, 50GB, 70GB) respectively to meet my retention target or the kind of files and backups I want to store in the FRA for these databases.
    Oracle's internal space management mechanism for the flash recovery area itself is designed in such a way that if you define your db_recovery_file_dest_size and DB_FLASHBACK_RETENTION_TARGET at optimal values, you won't need any further administration or management. If a Flash Recovery Area is configured, then the database uses an internal algorithm to delete files from the Flash Recovery Area that are no longer needed because they are redundant, orphaned, and so forth. The backups with status OBSOLETE form a subset of the files deemed eligible for deletion by the disk quota rules. When space is required in the Flash Recovery Area, then the following files are deleted:
    a) Any backups which have become obsolete as per the retention policy.
    b) Any files in the Flash Recovery Area which have already been backed up to a tertiary device such as tape.
    c) Flashback logs may be deleted from the Flash Recovery Area to make space available for other required files.
    NOTE: If your FRA is 100GB and 3 databases have their DB_RECOVERY_FILE_DEST set to the FRA, then logically the total of db_recovery_file_dest_size for these 3 databases should not exceed 100GB, even though practically it allows you to cross this limit.
    Hope this helps.

  • What means "best practice"?

    Hello everyone,
    I have always seen some methods described as "best practice". I am wondering what "best practice" means. Is it an algorithm?
    Thanks in advance,
    George

    Thanks RXon,
    It means they are usually faster, more memory efficient, and just generally better than the [old/other] one.
    So, it is a general concept (meaning a best method) and not specific to any topic or any technologies. Am I correct?
    regards,
    George
