WLC 5508 - config best practices - large warehouse

Hello everyone, I recently added a Cisco 5508 WLC to control many LAPs in a very large warehouse.
We have indoor and outdoor LAPs in the environment, and many handhelds using 802.11b.
I was wondering if anyone can tell me if there is a best practice for environments with:
- 802.11b client devices.
- 802.11g client devices.
- indoor/outdoor LAPs.
- high-density stock of mineral water/soda in the warehouse (lots of RF-absorbing liquid).
- LAPs are 1240 (indoor) and 1310 (outdoor).
- Antennas are 1728 and 2506 respectively.
On top of that, in the log I see continuous reassociation and reauthentication entries for the same devices, but not a matching number of deauthentication entries.
Thanks in advance.
Regards

For instance, here is a recent log:
0 Mon Dec 10 17:39:50 2012 Client Authenticated: MAC Address:00:a0:f8:b9:58:73 base Radio MAC:00:24:c4:e0:3d:00 Slot: 0 User Name:unknown IP Addr:181.142.126.150 SSID:HH
1 Mon Dec 10 17:39:50 2012 Client Association: Client MAC:00:a0:f8:b9:58:73 Base Radio MAC :00:24:c4:e0:3d:00 Slot: 0 User Name:unknown IP Addr: 181.142.126.150
2 Mon Dec 10 17:39:27 2012 Client Authenticated: MAC Address:00:a0:f8:b9:58:73 base Radio MAC:00:24:c4:e0:3e:10 Slot: 0 User Name:unknown IP Addr:181.142.126.150 SSID:HH
3 Mon Dec 10 17:39:27 2012 Client Association: Client MAC:00:a0:f8:b9:58:73 Base Radio MAC :00:24:c4:e0:3e:10 Slot: 0 User Name:unknown IP Addr: 181.142.126.150
4 Mon Dec 10 17:39:26 2012 Client Deauthenticated: MACAddress:00:a0:f8:b9:58:73 Base Radio MAC:00:3a:9a:7d:20:80 Slot: 0 User Name: unknown Ip Address: 181.142.126.150 Reason:Unspecified ReasonCode: 1 
5 Mon Dec 10 17:39:23 2012 Client Authenticated: MAC Address:00:a0:f8:b9:58:73 base Radio MAC:00:3a:9a:7d:20:80 Slot: 0 User Name:unknown IP Addr:181.142.126.150 SSID:HH
6 Mon Dec 10 17:39:22 2012 Client Association: Client MAC:00:a0:f8:b9:58:73 Base Radio MAC :00:3a:9a:7d:20:80 Slot: 0 User Name:unknown IP Addr: 181.142.126.150
Could this be because of excessive roaming? If so, how can I prevent it?
Two of the 3 LAPs reported are very close to each other, but the 3rd LAP is 60 meters or more from the other two.
The 3 LAPs are using 5.2 dBi antennas.

Similar Messages

  • Command Authorization Config best practice using ACS

    Hi
Are there any best practices for configuring command authorization (for router/switch/ASA) in CS-ACS? To be specific, are there any best practices for configuring authorization for the sets of commands allowed for L1, L2, and L3 support levels?
    Regards
    V Vinodh.

    Vinodh,
The main thing here is to ensure that you have a backup/fallback method configured for command authorization, in order to avoid a lockout situation, and to do a wr mem only once you are sure the configs are working fine.
    Please check this link,
    http://www.cisco.com/en/US/products/sw/secursw/ps2086/products_configuration_example09186a00808d9138.shtml
    Regards,
    ~JG
    Do rate helpful posts
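
    To make the fallback concrete, here is a minimal hedged sketch of IOS-style command authorization with a local fallback (server IP, key, and username are placeholders, not from the thread; verify the exact syntax against your platform and release):

    ```
    ! Hypothetical example: TACACS+ command authorization with local fallback
    aaa new-model
    tacacs-server host 192.0.2.10 key MySharedSecret
    aaa authentication login default group tacacs+ local
    aaa authorization exec default group tacacs+ local
    ! Level-15 commands are authorized by ACS; fall back to local if ACS is unreachable
    aaa authorization commands 15 default group tacacs+ local
    ! Keep a local privileged account so you can still get in if TACACS+ is down
    username admin privilege 15 secret StrongLocalPassword
    ```

    Only save the config once you have verified, from a second session, that you can still log in and run commands.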

  • Services-config best practices for ADEP

    I need to establish a services-config.xml to connect to LCDS from my ADEP project.  Are there any best practices I should be aware of with respect to ADEP / LCDS interaction in this regard?
    Thanks in advance!

    Hi Mike. You can use the CRXDE Lite editor to modify the services-config.xml file for a Data Services application deployed to the Experience Server.
    When you are in CRXDE Lite, navigate to:
    /etc/aep/config/dataservices/services-config.xml
    Open the services-config.xml, make your changes, and click the Save All button.
    After that, when you create a new "Flex for ADEP Experience Services" project in Flash Builder, the channelset-config information will be copied from the services-config.xml file into a channelset-config.xml file that gets added to the dataservices directory of your Flex project and is also added to the mxmlc compiler options for your project.

  • 5508 design - best practice

    So thanks to Nicolas, I have our 5508 wireless controller up and running in the lab
    with 2 AP's. Right now I have 2 WLANS defined, 1 for internal users and 1 for guests. They
    are on separate VLANs, which lets me use an ACL to restrict the guest connections to the Internet only.
    The next step is to have our internal users connect to internal WLAN using their domain credentials.
    Most of the time our users will undock their laptops and head off to a meeting room, which I would like
    for them to connect to the WLAN with domain credentials.
    My first question: is this realistic? Does the process work seamlessly, or am I better off just
    pushing the internal WLAN encrypted profile to each of their laptops via our in-house config-management software?
    That way, when the WLAN is in range and they are undocked, they will connect.
    Secondly, if it is worth the effort to use domain credentials, it appears that I will need
    to stand up a RADIUS server on one of our DCs, which will then query AD.
    I have the design guide but can't seem to find any info on setting up the RADIUS connection.
    Any help would be appreciated.
    Cheers
    Dave

    Yes, you need a RADIUS server for best operation. This allows easy PEAP-MSCHAPv2 with the Windows native supplicant on the clients.
    A simple AD GPO can configure their clients to connect to your SSID using PEAP and their domain credentials.
    It will be seamless in that they won't have to type anything, but their IP address will change if they undock, and they will disconnect/reconnect quickly.
    Nicolas

  • Best practices for network design on WLC 2504 and 5508

    Dear all:
    I'm looking for some recommendations on the WLC 2504 and 5508 about the following:
    Maximum number of APs per port
    The scenarios in which to use all ports on both WLCs
    Maximum number of clients (users) per port
    Bandwidth consumption of management vs. data traffic, in order to assign one port to management
    I've just found this:
    Cisco 5508 controllers have eight Gigabit Ethernet distribution system ports, through which the controller can manage multiple access points. The 5508-12, 5508-25, 5508-50, 5508-100, and 5508-250 models allow a total of 12, 25, 50, 100, or 250 access points to join the controller. Cisco 5508 controllers have no restrictions on the number of access points per port. However, Cisco recommends using link aggregation (LAG) or configuring dynamic AP-manager interfaces on each Gigabit Ethernet port to automatically balance the load. If more than 100 access points are connected to the 5500 series controller, make sure that more than one gigabit Ethernet interface is connected to the upstream switch.
    http://www.cisco.com/c/en/us/td/docs/wireless/controller/6-0/configuration/guide/Controller60CG/c60mint.html
    Thanks for your help.

    The 5508-12, 5508-25, 5508-50, 5508-100, and 5508-250 models allow a total of 12, 25, 50, 100, or 250 access points to join the controller.
    This is an old document.  The 5508 can now support up to 500 APs if you run firmware 7.x.  The 2504 can support up to 75 APs if you run firmware 7.4.x.
    I'm looking for some recommendations on WLC 2504 and 5508 about the following:
    Best practice and recommendation is to LAG all ports so you get link redundancy.  If one link goes down, you have another link to push traffic over. 
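
    A hedged sketch of the corresponding AireOS CLI (my own illustration; verify against your release notes, as a reboot is required for LAG mode to take effect):

    ```
    (Cisco Controller) >config lag enable
    (Cisco Controller) >save config
    (Cisco Controller) >reset system
    (Cisco Controller) >show lag summary
    ```

    Note that on the upstream switch, the matching EtherChannel for an AireOS WLC is typically configured with channel-group mode "on" (no LACP/PAgP negotiation).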

  • Best practice for loading config params for web services in BEA

    Hello all.
    I have deployed a web service using a java class as back end.
    I want to read in config values (like init-params for servlets in web.xml). What
    is the best practice for doing this in the BEA framework? I am not sure how to use
    the web.xml file in the WAR file, since I do not know the name of the underlying
    servlet.
    Any useful pointers will be very much appreciated.
    Thank you.

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation, then it should be secured.
    The idea of SOA / designing services is to have the services available so that it can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service that uses one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled, and addresses scalability.
    Thanks
    Ram

  • Best Practice Regarding Large Mobility Groups

    I was reading the WLC Best Practices and was wondering if anyone could put a number to this statement regarding the largest number of APs, end users, and controllers that can be contained in a Mobility Group.
    We would be deploying WiSMs in two geographically dispersed data centers. No voice is being used or is planned.
    "Do not create unnecessarily large mobility groups. A mobility group should only have all controllers that have access points in the area where a client can physically roam, for example all controllers with access points in a building. If you have a scenario where several buildings are separated, they should be broken into several mobility groups. This saves memory and CPU, as controllers do not need to keep large lists of valid clients, rogues and access points inside the group, which would not interact anyway.
    Keep in mind that WLC redundancy is achieved through the mobility groups. So it might be necessary in some situations to increase the mobility group size, including additional controllers for
    redundancy (N+1 topology for example)."
    I would be interested in hearing about scenarios where a Catalyst 6509 with 5 WiSM blades is deployed in data centers which back each other up for cases of disaster recovery.
    Can I have one large Mobility Group? This would be easier to manage.
    Or would it be better to back up each blade with a blade in the second data center? This would call for smaller Mobility Groups.
    Be glad to elaborate further if anyone has a similar experience and needs more information.
    All responses will be rated.
    Thanks in advance.
    Paul

    Well, that is a large group indeed, and I would say most organizations use nested groups instead of adding these behemoths to the directory, as they are quite difficult to work with.  If it's a one-time thing, you could create it manually in bite-sized chunks with LDIF or the like, so that FIM only has to do small delta changes afterwards.
    The 5,000 member limit mostly applies to groups prior to the change to linked value storage.  What is your forest functional level, and have you verified that this group is using linked values?
    Steve Kradel, Zetetic LLC

  • What is the best practice of deleting large amount of records?

    hi,
    I need your suggestions on the best practice for regularly deleting large amounts of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To keep the database size from growing too fast, I need a way to remove all records older than 3 days, every day.
    For on-premise SQL Server I could use SQL Server Agent jobs, but since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete all old records.
    To prevent table locking when deleting a very large number of records, in my automation/web job code I limit the number of records deleted per run to 5000, with a batch delete count of 1000, each time I call the record-deleting stored procedure:
    1. Get the total count of old records (older than 3 days)
    2. Get the total number of iterations: iterations = total count / 5000
    3. Call the SP in a loop:
    for (int i = 0; i < iterations; i++)
        Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
     BEGIN
      DECLARE @table TABLE ([RecordId] INT PRIMARY KEY)
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
      DECLARE @RowsDeleted INTEGER
      SET @RowsDeleted = 1
      WHILE(@RowsDeleted > 0)
      BEGIN
       WAITFOR DELAY '00:00:01'
       DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
       SET @RowsDeleted = @@ROWCOUNT
      END
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long.
    Following is the web job log for deleting those roughly 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time is around 11 hours.
    Any suggestions to improve the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. Peak-time inserts (avgN) are N times the average (avg); e.g. if the average per hour is 10,000 and peak time is 5 times that, peak is 50,000 per hour. This doesn't have to be precise.
    3. The desirable maximum number of records deleted per batch is 5,000; this doesn't have to be exact either.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not, and peak inserts can be 5 times the average, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes per slice.
    4. Create a delete statement and a loop that, on iteration i (i from 1 to 1,000), deletes records with createtime < (3 days ago) + 4.32 * i minutes.
    In this way the number of records deleted per batch is uneven and not known in advance, but should mostly stay within 5,000; and even though you run many more batches, each batch will be very fast.
    Frank
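
    Frank's time-slice idea might be sketched in T-SQL as follows. This is my own illustration, not the poster's code: table and column names follow the original post, the slice length is rounded to whole seconds, and the cutoff walks from the oldest record up to the 3-day boundary so each DELETE hits the CreateTime index directly with no staging table:

    ```sql
    -- Delete in ~1,000 small time slices instead of TOP(N) batches.
    DECLARE @Batches INT = 1000;   -- (TotalN / 5000) * peak factor, per the steps above
    DECLARE @End DATETIME = DATEADD(DAY, -3, GETDATE());            -- 3-day boundary
    DECLARE @Start DATETIME = (SELECT MIN([CreateTime]) FROM [MyTable]); -- oldest row
    DECLARE @SliceSeconds INT = DATEDIFF(SECOND, @Start, @End) / @Batches;
    DECLARE @i INT = 1;
    WHILE (@i <= @Batches)
    BEGIN
        DELETE FROM [MyTable]
        WHERE [CreateTime] < DATEADD(SECOND, @SliceSeconds * @i, @Start);
        SET @i = @i + 1;
    END;
    -- Integer division can leave a sliver; sweep up anything left before the boundary.
    DELETE FROM [MyTable] WHERE [CreateTime] < @End;
    ```

    Since each slice covers only a few minutes of inserts, each DELETE should stay near the 5,000-row target even at peak insert rates.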

  • WLC 5508 running 7.4.110.0 unable to tftp upload config from controller

    Hi,
    Two WLC 5508s running the identical code version. One is the 50-license Primary; the second is the HA unit. Identical config on both. The HA WLC can upload its config to the TFTP or FTP server, but the Primary cannot. The operation fails from both the CLI and the GUI, and with different protocols (TFTP and FTP).
    #### Primary Controller
    (Cisco Controller) >show sysinfo
    Manufacturer's Name.............................. Cisco Systems Inc.
    Product Name..................................... Cisco Controller
    Product Version.................................. 7.4.110.0
    Bootloader Version............................... 1.0.20
    Field Recovery Image Version..................... 7.6.95.16
    Firmware Version................................. FPGA 1.7, Env 1.8, USB console 2.2
    Build Type....................................... DATA + WPS
    System Name...................................... PRODWC7309
    System Location..................................
    System Contact...................................
    System ObjectID.................................. 1.3.6.1.4.1.9.1.1069
    Redundancy Mode.................................. Disabled
    IP Address....................................... 10.1.30.210
    Last Reset....................................... Power on reset
    System Up Time................................... 18 days 18 hrs 51 mins 35 secs
    System Timezone Location......................... (GMT+10:00) Sydney, Melbourne, Canberra
    System Stats Realtime Interval................... 5
    System Stats Normal Interval..................... 180
    Configured Country............................... AU - Australia
    Operating Environment............................ Commercial (0 to 40 C)
    --More-- or (q)uit
    Internal Temp Alarm Limits....................... 0 to 65 C
    Internal Temperature............................. +34 C
    External Temperature............................. +17 C
    Fan Status....................................... OK
    State of 802.11b Network......................... Enabled
    State of 802.11a Network......................... Enabled
    Number of WLANs.................................. 8
    Number of Active Clients......................... 138
    Memory Current Usage............................. Unknown
    Memory Average Usage............................. Unknown
    CPU Current Usage................................ Unknown
    CPU Average Usage................................ Unknown
    Burned-in MAC Address............................ 3C:08:F6:CA:52:20
    Power Supply 1................................... Present, OK
    Power Supply 2................................... Present, OK
    Maximum number of APs supported.................. 50
    (Cisco Controller) >debug transfer trace enable
    (Cisco Controller) >transfer upload start
    Mode............................................. TFTP
    TFTP Server IP................................... 10.1.22.2
    TFTP Path........................................ /
    TFTP Filename.................................... PRODWC7309-tmp.cfg
    Data Type........................................ Config File
    Encryption....................................... Disabled
    *** WARNING: Config File Encryption Disabled ***
    Are you sure you want to start? (y/N) Y
    *TransferTask: Jun 02 10:41:15.183: Memory overcommit policy changed from 0 to 1
    *TransferTask: Jun 02 10:41:15.183: RESULT_STRING: TFTP Config transfer starting.
    TFTP Config transfer starting.
    *TransferTask: Jun 02 10:41:15.183: RESULT_CODE:1
    *TransferTask: Jun 02 10:41:24.309: Locking tftp semaphore, pHost=10.1.22.2 pFilename=/PRODWC7309-tmp.cfg
    *TransferTask: Jun 02 10:41:24.393: Semaphore locked, now unlocking, pHost=10.1.22.2 pFilename=/PRODWC7309-tmp.cfg
    *TransferTask: Jun 02 10:41:24.393: Semaphore successfully unlocked, pHost=10.1.22.2 pFilename=/PRODWC7309-tmp.cfg
    *TransferTask: Jun 02 10:41:24.394: tftp rc=-1, pHost=10.1.22.2 pFilename=/PRODWC7309-tmp.cfg
    pLocalFilename=/mnt/application/xml/clis/clifile
    *TransferTask: Jun 02 10:41:24.394: RESULT_STRING: % Error: Config file transfer failed - Unknown error - refer to log
    *TransferTask: Jun 02 10:41:24.394: RESULT_CODE:12
    *TransferTask: Jun 02 10:41:24.394: Memory overcommit policy restored from 1 to 0
    % Error: Config file transfer failed - Unknown error - refer to log
    (Cisco Controller) >show logging
    *TransferTask: Jun 02 10:41:24.393: #UPDATE-3-FILE_OPEN_FAIL: updcode.c:4579 Failed to open file /mnt/application/xml/clis/clifile.
    *sshpmReceiveTask: Jun 02 10:41:24.315: #OSAPI-3-MUTEX_FREE_INFO: osapi_sem.c:1087 Sema 0x2b32def8 time=142 ulk=1621944 lk=1621802 Locker(sshpmReceiveTask sshpmrecv.c:1662 pc=0x10b07938) unLocker(sshpmReceiveTask sshpmReceiveTaskEntry:1647 pc=0x10b07938)
    -Traceback: 0x10af9500 0x1072517c 0x10b07938 0x12020250 0x12080bfc
    *TransferTask: Jun 02 10:39:01.789: #UPDATE-3-FILE_OPEN_FAIL: updcode.c:4579 Failed to open file /mnt/application/xml/clis/clifile.
    *sshpmReceiveTask: Jun 02 10:39:01.713: #OSAPI-3-MUTEX_FREE_INFO: osapi_sem.c:1087 Sema 0x2b32def8 time=5598 ulk=1621801 lk=1616203 Locker(sshpmReceiveTask sshpmrecv.c:1662 pc=0x10b07938) unLocker(sshpmReceiveTask sshpmReceiveTaskEntry:1647 pc=0x10b07938)
    -Traceback: 0x10af9500 0x1072517c 0x10b07938 0x12020250 0x12080bfc
    #### HA Controller
    (Cisco Controller) >show sysinfo
    Manufacturer's Name.............................. Cisco Systems Inc.
    Product Name..................................... Cisco Controller
    Product Version.................................. 7.4.110.0
    Bootloader Version............................... 1.0.20
    Field Recovery Image Version..................... 7.6.95.16
    Firmware Version................................. FPGA 1.7, Env 1.8, USB console 2.2
    Build Type....................................... DATA + WPS
    System Name...................................... PRODWC7310
    System Location..................................
    System Contact...................................
    System ObjectID.................................. 1.3.6.1.4.1.9.1.1069
    Redundancy Mode.................................. Disabled
    IP Address....................................... 10.1.31.210
    Last Reset....................................... Software reset
    System Up Time................................... 18 days 19 hrs 1 mins 27 secs
    System Timezone Location......................... (GMT+10:00) Sydney, Melbourne, Canberra
    System Stats Realtime Interval................... 5
    System Stats Normal Interval..................... 180
    Configured Country............................... AU - Australia
    Operating Environment............................ Commercial (0 to 40 C)
    --More-- or (q)uit
    Internal Temp Alarm Limits....................... 0 to 65 C
    Internal Temperature............................. +34 C
    External Temperature............................. +17 C
    Fan Status....................................... OK
    State of 802.11b Network......................... Enabled
    State of 802.11a Network......................... Enabled
    Number of WLANs.................................. 4
    Number of Active Clients......................... 0
    Memory Current Usage............................. Unknown
    Memory Average Usage............................. Unknown
    CPU Current Usage................................ Unknown
    CPU Average Usage................................ Unknown
    Burned-in MAC Address............................ 3C:08:F6:CA:53:C0
    Power Supply 1................................... Present, OK
    Power Supply 2................................... Present, OK
    Maximum number of APs supported.................. 500
    (Cisco Controller) >debug transfer trace enable
    (Cisco Controller) >transfer upload start
    Mode............................................. FTP
    FTP Server IP.................................... 10.1.22.2
    FTP Server Port.................................. 21
    FTP Path......................................... /
    FTP Filename..................................... 10_1_31_210_140602_1050.cfg
    FTP Username..................................... ftpuser
    FTP Password..................................... *********
    Data Type........................................ Config File
    Encryption....................................... Disabled
    *** WARNING: Config File Encryption Disabled ***
    Are you sure you want to start? (y/N) y
    *TransferTask: Jun 02 10:51:31.278: Memory overcommit policy changed from 0 to 1
    *TransferTask: Jun 02 10:51:31.278: RESULT_STRING: FTP Config transfer starting.
    FTP Config transfer starting.
    *TransferTask: Jun 02 10:51:31.278: RESULT_CODE:1
    *TransferTask: Jun 02 10:52:05.468: ftp operation returns 0
    *TransferTask: Jun 02 10:52:05.477: RESULT_STRING: File transfer operation completed successfully.
    *TransferTask: Jun 02 10:52:05.477: RESULT_CODE:11
    File transfer operation completed successfully.
    Not upgrading to 7.4.121.0 because of bug CSCuo63103. Have not restarted the controller yet.
    Anyone else had this issue? Is there a workaround?
    Thanks,
    Rick.

    Thanks Stephen. In my deployments of version 7.4.110.0 I have not seen this issue, so maybe a controller reboot will fix it (we do have HA to minimize the impact). I will keep the thread updated with my findings, and may request the special release 7.4.121.0 from TAC if I'm still not happy with 7.4.110.0.
    Rick.

  • Best practice for using messaging in medium to large cluster

    What is the best practice for using messaging in a medium to large cluster, in a system where all the clients need to receive all the messages and some of the messages can be really big (a few megabytes, maybe more)?
    I will be glad to hear any suggestion or to learn from others experience.
    Shimi

    publish/subscribe, right?
    lots of subscribers, big messages == lots of network traffic.
    it's a wide open question, no?
    %

  • Best practice for frequently needed config settings?

    I have a command-line tool I wrote to keep track of (primarily) everything I eat and drink in the course of the day.  Obviously, therefore, I run this program many times every day.
    The program reads a keyfile and parses the options defined therein.  It strikes me that this may be awfully inefficient to open the file every time, read it, parse options, etc., before even doing anything with command-line input.  My computer is pretty powerful so it's not actually a problem, per se, but I do always want to become a better programmer, so I'm wondering whether there's a "better" way to do this, for example some way of keeping settings available without having to read them every single time.  A daemon, maybe?  I suspect that simply defining a whole bunch of environment variables would not be a best practice.
    The program is written in Perl, but I have no objection to porting it to something else; Perl just happens to be very easy to use for handling a lot of text, as the program relies heavily on regexes.  I don't think the actual code of the thing is important to my question, but if you're curious, it's at my github.  (Keep in mind I'm strictly an amateur, so there are probably some pretty silly things in my code.)
    Thanks for any input and ideas.

    There are some ways around this, but it really depends on the type of data.
    Options I can think of are the following:
    1) read a file at every startup as you are already doing.  This is extremely common - look around at the tools you have installed, many of them have an rc file.  You can always strive to make this reading more efficient, but under most circumstances reading a file at startup is perfectly legitimate.
    2) run in the background or as a daemon which you also note.
    3) similar to #1, save the data in a file, but instead of parsing it each time, save it as a binary.  If your data can all be stored in some nice data structure in the code, in most languages you can just write the block of memory occupied by that data structure to a file; then on startup you just transfer the contents of the file to a block of allocated memory.  This is quite doable - but for the vast majority of situations it would be a bad approach (IMHO).  The data have to be structured in such a way that they occupy one continuous memory block, and depending on the size of the data block this may itself be impractical or impossible.  Also, you'd need a good amount of error checking, or you'd simply have to "trust" that nothing could ever go wrong in your binary file.
    So, all in all, I'd say go with #1, but spend time tuning your file read/write procedures to be efficient.  Sometimes a lexer (gnu flex) is good for this, but often it is overkill and a well-written series of if(strncmp(...)) statements will be better*.
    Bear in mind though, this is from another amateur.  I code for fun - and some of my code has found use - but it is far from my day job.
    edit: *note - that is a C example, and the flex library is easily used from C.  I'd be surprised if there are no perl bindings for flex, but I very rarely use perl. As an afterthought, I'd be surprised if flex is even all that useful in perl, given perl's built-in regex abilities.  After-after-thought, I would not be surprised if perl itself were built on some version of flex.
    edit2: also, I doubt environment variables would be a good way to go.  That seems to require more system calls and more overhead than just reading from a config file.  Environment variables are a handy way for several programs to be able to access/change the same setting - but for a single program they don't make much sense to me.
    Last edited by Trilby (2012-07-01 15:34:43)
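
    For what it's worth, a middle ground between #1 and #3 can be illustrated like this (Python rather than the poster's Perl, and the key=value format, file paths, and function name are mine): cache the parsed options in a pickle keyed on the config file's mtime, and re-parse only when the file actually changes:

    ```python
    import os
    import pickle

    def load_options(config_path, cache_path):
        """Parse key=value options, caching the parsed dict in a pickle.
        Re-parse only when the config file's mtime changes."""
        mtime = os.path.getmtime(config_path)
        # Reuse the cache if it matches the config file's current mtime
        if os.path.exists(cache_path):
            with open(cache_path, "rb") as f:
                cached_mtime, options = pickle.load(f)
            if cached_mtime == mtime:
                return options
        # Cache miss: parse the text file the slow way
        options = {}
        with open(config_path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    options[key.strip()] = value.strip()
        with open(cache_path, "wb") as f:
            pickle.dump((mtime, options), f)
        return options
    ```

    Unpickling a small dict is cheap, so this mostly matters as the config grows; for 30-odd options, plain re-parsing (option #1) is honestly fine.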

  • Best practice for storing/loading medium to large amounts of data

    I just have a quick question regarding the best medium for storing a certain amount of data. Currently in my application I have a Dictionary<char,int> that I've created and subsequently populate with hard-coded static values.
    There are about 30 items in this Dictionary, so it hasn't presented much of a problem, even though it does make the code slightly more difficult to read, and I will be adding more data structures with a similar number of items in the future.
    I'm not sure whether it's best practice to hard-code these values, so my question is: is there a better way to store this information, then retrieve and load it at run-time?

    You could use one of the following methods:
    Use the application.config file. The upside is that it is easy to maintain. The downside is that a user could edit it manually, as it's just an XML file.
    You could use a settings file. You can specify where the settings file is persisted, including under the user's profile or the application. You could serialize/deserialize your settings to a section in the settings file. See this MSDN help section for details about settings.
    Create a .txt, .json, or .xml file (depending on the format you will be deserializing your data from) in your project and have it copied to the output path with each build. The upside is that you could push out new versions of the file in the future without having to re-compile your application. The downside is that it could be altered if the user has O/S permissions to that directory.
    If you really do not want anyone to access it, and are prepared to push out a new application version every time something changes, you can create a .txt, .json, or .xml file just like in the previous step, but this time mark it as an embedded resource in your project (you can do this in the properties of the file in Visual Studio). It will essentially get compiled into your application. Content retrieval is outlined in this how-to from Microsoft, and then you just deserialize the retrieved content the same as in the previous step.
    As far as formats for your data: I recommend either XML or JSON, or a plain text file if it's just a flat list of items (i.e. a list of strings). Personally I find JSON much easier to read and change than XML, and there are plenty of supported serializers out there. XML is great too if you need to be strict about what the schema is.
    Mark as answer or vote as helpful if you find it useful | Igor

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We’re trying to determine the proper balance of heap size vs. number of cache instances and have ended up with the following configuration: seven cache instances per node, each running with a 2G heap and a high-units value of 1.5G. Our testing has shown that the Concurrent Mark Sweep GC algorithm produces no substantial GC pauses, and we have also tested with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java), which likewise shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context-switching overhead, as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern is that we’re straying from the best-practices recommendations, and I’m wondering what others' thoughts are about the configuration outlined above.
    Thanks,
    - Allen Bettilyon
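    As a sanity check, the memory layout described in the question works out as follows. This is a minimal sketch using only the numbers from the post; note that the 1.5G high-units limit on a 2G heap leaves roughly 25% headroom per JVM:

    ```java
    public class HeapBudget {
        public static void main(String[] args) {
            double totalGb  = 16.0;  // physical RAM per node
            double systemGb = 1.0;   // reserved for the system
            double proxyGb  = 0.5;   // proxy instance (localstorage=false)
            double availableGb = totalGb - systemGb - proxyGb;

            int    instances   = 7;    // cache JVMs per node
            double heapGb      = 2.0;  // heap per instance
            double highUnitsGb = 1.5;  // cache data limit per instance

            double committedGb = instances * heapGb;       // heap actually reserved
            double dataGb      = instances * highUnitsGb;  // configured cache capacity
            System.out.println(availableGb + " " + committedGb + " " + dataGb);
        }
    }
    ```

    So the seven 2G instances commit 14.0G of the 14.5G available, with up to 10.5G of that budget addressable as cache data per the high-units setting.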


  • Best Practice for config upgrade

    Hi
    Is there a best practice guide to follow when performing a network upgrade? I am replacing a stack of five 3750 switches with two stacks of two 3750X switches, and also introducing a 4500X VSS pair as the new core. Is there an easy migration process to migrate the config, or do I have to understand how the current network is configured and build a new config for the new topology?
    I'm new to this process, so I'm just wondering if there is an easier way. Thanks

    Is there an easy migration process to migrate the config or do I have to understand what the current network is configured as and build a new config for the new topology.
    You really need to understand how the current network is configured before you can do the upgrade; otherwise, if something doesn't work, you will have no idea where to begin troubleshooting.
    That doesn't mean you need to build an entirely new configuration, because you may well be able to apply most of the configuration from your existing switches to the new ones, although it's not clear what functions the new switches will be performing, i.e.:
    - currently you have a stack of 3750s doing what? e.g. access for users, routing for VLANs, etc.
    - you are replacing them with 3750X switches and 4500X switches, so are you splitting the functions of the 3750s between them?
    If you are, then obviously you cannot take the existing configuration and copy it to both the 3750X and the 4500X switches.
    When I did upgrades like this, I would download the existing configurations to my laptop and make any necessary edits. If I couldn't use an existing configuration, I just created a new one. By the time I came to do the upgrade, I had all the configurations ready to load onto the switches.
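    For example, pulling a configuration off one of the existing switches for offline editing can be done with the standard copy commands (the TFTP server address and filenames here are hypothetical):

    ```
    Switch# copy running-config tftp://10.1.1.100/3750-stack.cfg
    Switch# copy running-config flash:3750-stack-backup.cfg
    ```

    Having a local copy also gives you a known-good rollback config if the upgrade goes wrong.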
    Bear in mind that some configuration just won't copy across, i.e. if you have QoS on your current switches and you try to copy that to the 4500X, it probably won't work, as the platforms use different commands for many of the same functions.
    But the key thing is that you must understand what you are trying to do; simply copying configurations across without that understanding could create a lot of problems.
    Jon

  • Best practice when doing large cascading updates

    Hello all
    I am looking for some help with tackling a fairly large cascading update.
    I have an object tree that needs to be merged using JPA and Toplink.
    Each update consists of 5-10000 objects with a decent depth as well.
    Can anyone give me some pointers/hints towards a best practice for doing this? Looping through each object with JPA's merge takes minutes to complete, so I would rather not do that.
    I have never actually used TopLink's own API before, so I am especially interested in whether TopLink has an effective way of handling this, preferably with a link to some related reading material.
    Note that I have posted a somewhat duplicate question on Stack Overflow (noting it here for good forum practice):
    http://stackoverflow.com/questions/14235577/how-to-execute-a-cascading-jpa-toplink-batch-update
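    One common pattern for this (a general JPA sketch, not TopLink-specific) is to process the objects in chunks and clear the persistence context between chunks, so the cost of each merge doesn't grow with the number of already-managed objects. In JPA terms, each chunk is a series of em.merge() calls followed by em.flush() and em.clear(). The helper below demonstrates only the chunking; the batch size of 500 is an assumption you would tune to match your JDBC batch size:

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    public class BatchedMerge {
        static final int BATCH_SIZE = 500; // assumption: align with your JDBC batch size

        // Splits the objects into chunks; in a real JPA loop each chunk maps to:
        //   for (T o : chunk) em.merge(o);
        //   em.flush();   // push the chunk to the database
        //   em.clear();   // detach it so the persistence context stays small
        static <T> int mergeInBatches(List<T> objects, Consumer<List<T>> mergeChunk) {
            int batches = 0;
            for (int i = 0; i < objects.size(); i += BATCH_SIZE) {
                List<T> chunk = objects.subList(i, Math.min(i + BATCH_SIZE, objects.size()));
                mergeChunk.accept(chunk);
                batches++;
            }
            return batches;
        }

        public static void main(String[] args) {
            List<Integer> objects = new ArrayList<>();
            for (int i = 0; i < 5000; i++) objects.add(i);
            int n = mergeInBatches(objects, chunk -> { /* em.merge each entity here */ });
            System.out.println(n); // 5000 objects / 500 per batch = 10 batches
        }
    }
    ```

    Whether this helps also depends on enabling JDBC statement batching in the provider's configuration, since without it each flush still issues one statement per object.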

