Best Practice - Long Daily Record List

Hi,
I'm new to doing this type of thing in Numbers and I'm struggling with how best to manage a growing journal list. As in:
- I have a table where I enter a new row every morning. In the row I store values of things like calories, sleep, performance, time (it's mainly to track my cycling) from the day before.
However, I'm starting to have problems:
- Now that the table is three months old, it's starting to get really long. What's the best practice here? Sort descending and add a new row at the top every morning? Or is there a way to create a "view" that, for example, only shows the last 14 days?
- Also, averages that I calculate from a column have become all but meaningless because a single day has so little weight. So what's the best practice here? Is there a way to calculate averages based on, say, the last 14 days only? If I sort descending as above, I guess I could just average the first 14 rows of each column. But I imagine there's a more elegant way to do this that involves date ranges up to the current date (at least, that's how I'd tackle it in FileMaker Pro).
I suppose this raises the question: is Numbers the right tool? I want to keep this record of my daily cycling stats going forward, hopefully for many years. It would be great to stay with Numbers, as it's a pleasure to use and the charts are beautiful. But maybe a database tool is the way to go? Any thoughts? All I really need are a few charts and a few averages; a database does seem like overkill to me.
Many thanks in advance,
Pat

Hi Badunit,
Thanks for the pointer on extracting the last x values into a new table! It took me a while to reason it through.
Slightly more convenient than:
  =OFFSET(Table 1::A$1,ROWS(Table 1::A)−16+ROW(),0)
may be:
  =INDEX(Data::$A:$B,COUNT(Data::$A)−14+ROW())
The latter doesn't require remembering how to adjust the 14 when only 14 items are wanted.
It seems COUNT($A) counts only the body rows, while ROWS() includes header and footer rows too.
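For the 14-day average question itself, a date condition may avoid the row arithmetic entirely. A sketch, assuming the dates are in column A and the values to average are in column B of a table named Data (adjust the names to your own tables):

```
=AVERAGEIFS(Data::B, Data::A, ">="&(TODAY()-14))
```

This keeps working as rows are added, since the window is anchored to the current date rather than to row positions.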
SG

Similar Messages

  • _msdcs subdomain best practice with NS records?

    I have the _msdcs subfolder under my domain (the grey folder); example below.
    It has only one DC inside of it as an NS server. This DC is old and no longer exists. I checked my test environment and it has the same scenario (an old DC that no longer exists); example below.
    I'm just wondering:
    1) Is this normal? Should this folder update itself with other servers?
    2) Should I be adding one of my other DCs and removing the original?
    I have a single-forest, single-domain setup at 2008 functional level. My normal _msdcs zone does behave as expected and removes and adds the appropriate records. Thanks.

    I apologize for the late response. I see you've gone further than what I recommended.
    No, you shouldn't have deleted the _msdcs.parent.local zone! I'm not sure why you did that. Are you working with someone else who recommended that? If not, you're over-thinking it. I provided specifics to fix it by simply updating the NS records, that's it. If you only found that the _msdcs folder had the wrong record, then that's all you had to change.
    In cases where DCs are removed, replaced, or upgraded, it's also best practice to check a few things to make sure everything is in order, and one of them is to check the NS records on all zones and delegations. A delegation's NS records won't update automatically with changes, but zone NS records will if DCs are properly demoted.
    The _msdcs delegated zone is required by Active Directory. And yes, per your thread subject, it's best practice. When Windows 2000 came out, and IF you had created the initial domain with it, it did not have it this way, but all domains initially created with Windows 2003 and newer are designed this way. If you upgraded from 2000 to 2003, one of the steps you must perform is to create the _msdcs delegation.
    Please re-create it in this order:
    1. In the DNS console, right-click Forward Lookup Zones, and then click New Zone. Click Next.
    2. On the Zone Type page of the New Zone Wizard, click Primary zone, and select the "Store the zone in Active Directory" check box. Click Next.
    3. On the Active Directory Zone Replication Scope page, click "To all DNS servers in the Active Directory forest parent.local".
    4. On the Zone Name page, in the Zone Name box, type _msdcs.parent.local.
    5. Complete the wizard, accepting all the default options.
    After you've done that:
    6. Delete the _msdcs subfolder under parent.local.
    7. Right-click parent.local and choose New Delegation.
    8. Type in _msdcs.
    9. On the Name Servers page, type in the name of your server and its IP address.
    10. Complete the wizard. You should now see a grayed-out _msdcs folder under parent.local.
    11. Go to the c:\windows\system32\config\ folder.
    12. Rename netlogon.dns to netlogon.dns.old, and netlogon.dnb to netlogon.dnb.old.
    13. Open a command prompt and run:
        ipconfig /registerdns
        net stop netlogon
        net start netlogon
    14. Wait a few minutes, then click the _msdcs.parent.local zone and press F5 to refresh it.
    You should see the data populate.
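    For reference, the zone and delegation steps above can also be done from the command line. A sketch using dnscmd on the DC (dc1.parent.local is a placeholder for your own server name; verify the names against your environment before running):

    ```
    :: Create the AD-integrated _msdcs zone, replicated to all DNS servers in the forest
    dnscmd /ZoneAdd _msdcs.parent.local /DsPrimary /DP /forest

    :: Delegate _msdcs from the parent zone by adding an NS record for it
    dnscmd /RecordAdd parent.local _msdcs NS dc1.parent.local

    :: Re-register the DC's records, as in the steps above
    ipconfig /registerdns
    net stop netlogon
    net start netlogon
    ```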
    Ace Fekay
    MVP, MCT, MCITP/EA, MCTS Windows 2008/R2 & Exchange 2007, Exchange 2010 EA, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Technical Blogs & Videos: http://www.delawarecountycomputerconsulting.com/
    This post is provided AS-IS with no warranties or guarantees and confers no rights.

  • Best Practice for caching global list of objects

    Here's my situation, (I'm guessing this is mostly a question about cache synchronization):
    I have a database with several tables that contain between 10-50 rows of information. The values in these tables CAN be added/edited/deleted, but this happens VERY RARELY. I have to retrieve a list of these objects VERY FREQUENTLY (sometimes all, sometimes with a simple filter) throughout the application.
    What I would like to do is to load these up at startup time and then only query the cache from then on out, managing the cache manually when necessary.
    My questions are:
    What's the best way to guarantee that I can load a list of objects into the cache and always have them there?
    In the above scenario, would I only need to synchronize the cache on add and delete? Would edits be handled automatically?
    Is it better to ditch this approach and to just cache them myself (this doesn't sound great for deploying in a cluster)?
    Ideas?

    The cache synch feature as it exists today is kind of an "all or nothing" thing. You either synch everything in your app, or nothing in your app. There isn't really any mechanism within TopLink cache synch you can exploit for more app specific cache synch.
    Keeping in mind that I haven't spent much time looking at your app and use cases, I still think that the helper class is the way to go, because it sounds like your need for refreshing is rather infrequent and very specific. I would just make use of JMS and have your app send updates.
    I.e., in some node in the cluster:
    Vector changed = new Vector();
    UnitOfWork uow = session.acquireUnitOfWork();
    MyObject mo = (MyObject) uow.registerObject(someObject);
    // user updates mo in a GUI
    changed.addElement(mo);
    uow.commit();
    MoHelper.broadcastChange(changed);
    Then in MoHelper:
    public void broadcastChange(Vector changed) {
        // Group the ids of the changed objects by class name
        Hashtable classnameAndIds = new Hashtable();
        for (Iterator it = changed.iterator(); it.hasNext();) {
            MyObject mo = (MyObject) it.next();
            Vector ids = (Vector) classnameAndIds.get(mo.getClassname());
            if (ids == null) {
                ids = new Vector();
                classnameAndIds.put(mo.getClassname(), ids);
            }
            ids.add(mo.getId());
        }
        jmsTopic.send(classnameAndIds);
    }
    Then in each node in the cluster you have a listener on the topic/queue:
    public void processJMSMessage(Hashtable classnameAndIds) throws Exception {
        // For each class, re-read the changed instances by id,
        // refreshing the identity map (the TopLink cache)
        for (Iterator it = classnameAndIds.keySet().iterator(); it.hasNext();) {
            String classname = (String) it.next();
            Vector idsVector = (Vector) classnameAndIds.get(classname);
            Class c = Class.forName(classname);
            ReadAllQuery raq = new ReadAllQuery(c);
            raq.refreshIdentityMapResult();
            ExpressionBuilder b = new ExpressionBuilder();
            raq.setSelectionCriteria(b.get("id").in(idsVector));
            session.executeQuery(raq);
        }
    }
    - Don

  • Best practice for Book recording vocals...

    My wife has written a children's book that we want to record and offer as a downloadable MP3 set. She will do the recording, and we have the mic, studio, and GarageBand all working just fine. My question is: what is the best way to make sure that all the tracks for each chapter are equalized the same?
    I know that we are trying to record it all the same, but I just wanted to see if there was any way with GB or some other app to take all the separate vocal tracks and make them all the same in volume, other than looking at the input settings in GB, which we don't change...
    Hope that is clear and makes sense... I'd just like to know what I should do to make it as good as it can be with what we have, and then stop worrying about it.
    Cheers,
    Cory

    Hmmm, thanks AppleGuy, that is interesting. I knew that I could create my own presets, but didn't think about this in this way...
    So, if I took the first chapter that we did, adjusted it to my liking (i.e., reverb to just fill it out a bit), then saved it as a preset, I could then apply this preset to all my other recordings so that they would all have the same properties? I know this wouldn't actually fix the possible increase or decrease in volume between recording sessions, but it would let me make sure they all have the same effects settings, yes?
    How do you do this to tracks that are already recorded?
    Thanks,

  • What is Best Practice: Array or typed List?

    Hello,
    I am just starting with BlazeDS and Flex. I want to exchange a strongly typed collection between client and server. I was wondering if it is better to use Java arrays (MyClass[]) or typed lists (List<MyClass>).
    What is the best approach? Are there differences?
    Thanks,
    Tobias

    > What is the best approach? Are there differences?
    Hi Tobias,
    This probably falls under personal preference, but the List class is more
    flexible (sizing, inserting in to the middle, etc) so I would prefer to use
    it in Java. Actually, I might even go with ArrayList, that would split the
    difference.
    In any case, they will all translate to an ArrayCollection of typed objects on the ActionScript side, when your ActionScript classes have the right 'alias=' property set.
    Tom Jordahl
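    To make the flexibility point concrete, a minimal sketch (class and element values are made up): growing an array means allocating and copying, while a typed List grows and accepts mid-list inserts directly.

    ```java
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class ListVsArray {
        // An array is fixed-size: growing it means allocating a new one and copying.
        static String[] addToArray(String[] arr, String value) {
            String[] bigger = Arrays.copyOf(arr, arr.length + 1);
            bigger[arr.length] = value;
            return bigger;
        }

        // A typed List grows on demand and supports inserting into the middle.
        static List<String> demoList() {
            List<String> names = new ArrayList<>(Arrays.asList("Ann", "Cho"));
            names.add(1, "Ben");   // insert into the middle
            names.add("Dee");      // grow on demand
            return names;
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(addToArray(new String[] {"Ann"}, "Ben")));
            System.out.println(demoList());
        }
    }
    ```

    On the wire it makes no difference, as noted above: either shape serializes to the same ArrayCollection in Flex.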

  • Best practice calling only records belonging to a category

    This is one of those questions that should be simple and straightforward. I am just not sure how to implement.
    Here is what is working currently:
    I have an app that adds, updates, deletes, and plays video clips from a MySQL database, specifically using the "videoclips" table. Each video has a unique ID (videoID). Each video belongs to one of two categories (videocategory): "Youtube Video" or "Yahoo Video". I also have a table named "video_category" with a primary ID of "videocategoryID". I use the "video_category" table for a cfselect form input to populate the "videocategory" field of the "videoclips" table.
    This all works nicely, but here is one of my hang-ups and the first part of my question: I am not sure if I should create and populate a videocategoryID field in the "videoclips" table, and I don't know the best way to set up and match the videocategoryID in the "video_category" table. My database is currently MyISAM. Should I be using InnoDB and foreign keys to match up the tables, and will this solve the problem of populating the videocategoryID in two separate tables? Yes, I know this seems more SQL-related, but please read on.
    There are two different directions I am considering and need help implementing:
         1. I would like to know how to select and pass multiple videos belonging to a specific category and display them on a page named for that category: "youtubvideo.cfm" or "yahoovideo.cfm".
    or this option:
         2. I would like to add a formfield on the same page as the videos table where I can select the category and post back to the same page with the results, i.e. the specific category videos.
    I am generating bean, DAO, and Gateway cfc's from the wizard. Using the gateway method, I am able to display all of the videos, and select a specific video to play.  Here are snippets of the page:
    videopage.cfm
    <!---player defaults to first clip in the database--->
    <cfparam name="url.videoid" default="1">
    <!---gets all videoclips--->
    <cfscript>
        qVideoclips = CreateObject("component", "myvideoapp.Components.VideoclipsGateway").getAllAsQuery();
    </cfscript>
    <!---gets all videocategories--->
    <cfscript>
        qVideoCategory=CreateObject("component", "myvideoapp.Components.video_categoryGateway").getAllAsQuery();
    </cfscript>
    <!---gets the specific video selected from clicking the Details link--->
    <cfset videoComp=CreateObject("component", "myvideoapp.Components.videoclipsGateway")>
    <cfif isdefined("url.videoID")>
    <cfset video=videoComp.getById(url.videoID)>
    <cfelse> No data matches this query
    </cfif>
    <body>
    <!---table that displays the videoclips--->
    <!---When details link is selected, the page refreshes with the selected videoID ready to play--->
    <cfoutput query="qVideoclips">
                  <tr>
                    <td>#qVideoclips.currentrow#</td>
                    <td>#qVideoclips.videocategory#</td>
                    <td>#qVideoclips.videosubject#</td>
                    <td>#qVideoclips.videotitle#</td>
                    <td>#qVideoclips.videocomments#</td>
                    <td><a href="videopage.cfm?videoID=#qVideoclips.videoID#">Details</a></td>
                  </tr>
                </cfoutput>
    <!---below is the category selection form - I would like this to post back to this page with the above table showing videos from the selected category--->
    <fieldset id="cflayoutleft">
            <legend id="cflayoutleftLegend">Categories</legend>
            <cfform>
            <p>
              <label class="top" for="videocategory">Video Category</label>
              <cfselect name="videocategory" id="videocategory" query="qVideoCategory" value="videocategoryname" display="videocategoryname" selected="#video.getvideocategory()#"></cfselect>
            </p>
            <p>
              <cfinput type="hidden" name="videocategoryname" value="#video.getvideocategory()#" validateat="onSubmit">
            </p>
        <div class="submit">
        <p>
          <cfinput type="submit" name="Submit" class="submit" id="Submit" value="Submit">
        </p>
        </div>
        </cfform>
        </fieldset>
    <!---video player--->
    <cfoutput>#video.getvideoembed()#</cfoutput>
    Keep in mind, I don't have a category ID yet.
    So in summary,
    I want to display only the videos from a selected category.
    I need to know if I can pass a string, which means I could use one table and pass the categoryname; I just don't know how.
    If I can only pass "videocategoryID" I would need to create and populate a field in the "videoclips" table called videocategoryID.
    I already have a table called video_category table. How can I incorporate it to get the field populated in the "videoclips" table?
    I hope this is clear. Help would be greatly appreciated!
    Thanks,
    Marty P
    MP e-commerce

    There are a couple of options here if you need to get proxy disabled:
    1) pinhole with an ACL that allows DHCP to pass to your internal servers
    2) run DHCP on a switch, router, or firewall in the DMZ
    3) if you are using a cable modem or DSL for the guest users, you can let that do the DHCP
    In general I've seen most of these in play, but I like option 2 myself.
    Sent from Cisco Technical Support iPad App

  • Best Practice to save record in a table with appropriate trigger

    Hi,
    At the same time, five users are inserting client information into a client table through an Oracle Form. The client table has the following fields:
    CLIENT ID
    CLIENT NAME
    CLIENT ADDRESS
    CLIENT ID is generated automatically by calling a procedure. In this procedure I use the MAX function to get the maximum value of CLIENT ID. After that, I store the newly generated CLIENT ID in a data block item, say :MASTER.CLIENT ID.
    The problem is that all five users get the same MAX value (say, 40) of CLIENT ID at the same time, and the Oracle Form will surely throw an exception when inserting the record into the client table. CLIENT ID is the PK and a member of the MASTER data block.
    I hope the above clearly illustrates the problem. Please advise: can a PRE-INSERT trigger handle this problem efficiently? If so, how?
    Thanks,

    Hello,
    Welcome to the forum!
    > CLIENT ID is generating automatically by calling a procedure.
    So, in which trigger are you calling that procedure?
    > After that, I store newly generated CLIENT ID in a DATA BLOCK item say :MASTER.CLIENT ID.
    I would guess that you are calling the code-generation procedure in a block-level WHEN-CREATE-RECORD trigger, because that trigger normally initializes values for a new record. If not, please specify.
    > further please guide, can PRE-INSERT trigger may handle this problem efficiently ?
    Yes, PRE-INSERT will work without any problem.
    > if so, then how ? ...
    Because PRE-INSERT fetches the MAX number from the table at save time, not at record-creation time. So suppose five users enter records at the same time: when they save, PRE-INSERT gets the max number then. Since the five users' inserts will have some time difference between them, the max number can be fetched without any problem.
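    A minimal sketch of that block-level PRE-INSERT trigger (table and item names follow the example in the question):

    ```sql
    -- PRE-INSERT trigger on the MASTER block
    SELECT NVL(MAX(client_id), 0) + 1
      INTO :MASTER.client_id
      FROM client;
    ```

    Note that two sessions committing in the very same instant can still collide on the MAX value; an Oracle sequence is the usual way to rule that out entirely.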
    -Ammad

  • Best practice for updating a list that is data bound

    Hi All,
    I have a List component and the data is coming in from a bindable ArrayCollection. When I make changes to the data in the bindable ArrayCollection, the change is not being reflected in the list. I notice that if I resize the browser the component redraws I suppose and then the list updates. But how can I show the update when I change the data in the bindable ArrayCollection instantly?
    Thanks,
    Ryan

    OK, thanks for that. I have it sorted out now and found where the problem was. I got a hint from your statement: "truly [Bindable]".
    Yes, the List is using a bindable ArrayCollection, but I'm also using a custom item renderer, and this item renderer takes the data and sets the label fields, which are not bound. I didn't know that I had to carry the binding all the way through. I'm overriding the "set data" function and setting the label fields similar to: myLabel.text = _data.nameHere inside that function. That's where the problem was.
    It works great now that I bind the data directly to the Label fields in my custom item renderer. I'm also using functions to parse certain pieces of data. Is this taxing on the application? I notice that the List updates every time I scroll, resetting / calling all the functions in my Labels in the custom item renderer (for example: myDate.text = "{parseDate(_data.date)}").
    Thanks!

  • ACE access-list best practice

    Hi,
    I was wondering what was the best practice for the access-list's on the Cisco ACE.
    Should we permit Any in the access-list, and classify the traffic in the class-maps as seen in a brief example:
    access-list ANY line 10 extended permit ip any any
    access-list EXCH-DMZ-INTERNET-OUT line 10 extended permit tcp 10.134.10.0 255.255.254.0 any eq www
    access-list EXCH-DMZ-INTERNET-OUT line 15 extended permit tcp 10.134.10.0 255.255.254.0 any eq https
    class-map match-all EXCH-DMZ-INTERNET-OUT
      2 match access-list EXCH-DMZ-INTERNET-OUT
    policy-map multi-match EXCH-DMZ-OUT
    class EXCH-DMZ-INTERNET-OUT
        nat dynamic 1 vlan 1001
    interface vlan 756
      description VLAN 744 EXCH DMZ BE
      ip address 10.134.11.253 255.255.255.0
      alias 10.134.11.254 255.255.255.0
      peer ip address 10.134.11.252 255.255.255.0
    access-group input ANY
      service-policy input EXCH-DMZ-OUT
    Or should we also use the access-list for the access-group on the interface, as seen below:
    access-list EXCH-DMZ-INTERNET-OUT line 10 extended permit tcp 10.134.10.0 255.255.254.0 any eq www
    access-list EXCH-DMZ-INTERNET-OUT line 15 extended permit tcp 10.134.10.0 255.255.254.0 any eq https
    class-map match-all EXCH-DMZ-INTERNET-OUT
      2 match access-list EXCH-DMZ-INTERNET-OUT
    policy-map multi-match EXCH-DMZ-OUT
    class EXCH-DMZ-INTERNET-OUT
        nat dynamic 1 vlan 1001
    interface vlan 756
      description VLAN 744 EXCH DMZ BE
      ip address 10.134.11.253 255.255.255.0
      alias 10.134.11.254 255.255.255.0
      peer ip address 10.134.11.252 255.255.255.0
      access-group input EXCH-DMZ-INTERNET-OUT
      service-policy input EXCH-DMZ-OUT
    Regards,

    Hello,
    I don't think you'll find a "best practice" for this scenario. It really just comes down to meeting your needs. The first example you have is far and away the more commonly seen configuration: you'll only NAT the traffic matching EXCH-DMZ-INTERNET-OUT, but all other traffic will still be forwarded by the ACE, whether it is load balanced or not. The second way will only allow the NAT'd traffic and deny all others.
    Hope this helps,
    Sean

  • OS X Server 3.0 new setup -- best practices?

    Alright, here's what I'm after.
    I'm setting up a completely new OS X Server 3.0 environment.  It's on a fairly new (1.5-year-old) Mac mini with plenty of RAM and disk space.  This server will ONLY be used internally.  It will have a private IP address such as 192.168.1.205, which is outside my DHCP server's range (192.168.1.10 to .199) to prevent any IP conflicts.
    I am using Apple's Thunderbolt-to-Ethernet dongle for the primary network connection.  The built-in NIC will be used strictly for a direct iSCSI connection to a brand-new Drobo B800i storage device.
    This machine will provide the following services, roughly in order of importance:
    1.  A Time Machine backup server for about 50 Macs running Mavericks.
    1a.  Those networked Macs will authenticate individually to this computer for the Time Machine service.
    1b.  This server will get its directory information from my primary server via LDAP/Open Directory.
    2.  Caching server for the same network of computers
    3.  Serve a NetInstall image which is used to set up new computers when a new employee arrives
    4.  Maybe calendaring and contacts service, still considering that as a possibility
    Can anyone tell me the recommended "best practices" for setting this up from scratch?  I've done it twice so far and have faced problems each time.  My most frequent problem, once it's set up and running, is with Time Machine Server.  With nearly 100 percent consistency, when I get Time Machine Server set up and running, I can't administer it.  After a few days, I'll try to look at it via the Server app.  About half the time, there'll be the expected green dot by "Time Machine" indicating it is running and other times it won't be there.  Regardless, when I click on Time Machine, I almost always get a blank screen simply saying "Loading."  On rare occasion I'll get this:
    Error Reading Settings
    Service functionality and administration may be affected.
    Click Continue to administer this service.
    Code: 0
    Either way, sometimes if I wait long enough, I'll be able to see the Time Machine server setup, but not every time.  When I am able to see it, I'll have usability for a few minutes and then it kicks back to "Loading."
    I do see this apparently relevant entry in the logs as seen by Console.app (happens every time I see the Loading screen):
    servermgrd:  [71811] error in getAndLockContext: flock(servermgr_timemachine) FATAL time out
    servermgrd:  [71811] process will force-quit to avoid deadlock
    com.apple.launchd: (com.apple.servermgrd[72081]) Exited with code: 1
    If I fire up Terminal and run "sudo serveradmin fullstatus timemachine" it'll take as long as a minute or more and finally come back with:
    timemachine:command = "getState"
    timemachine:state = "RUNNING"
    I've tried to do some digging on these issues and have been greeted with almost nothing to go on.  I've seen some rumblings about DNS settings, and here's what that looks like:
    sudo changeip -checkhostname
    Primary address = 192.168.1.205
    Current HostName = Time-Machine-Server.local
    The DNS hostname is not available, please repair DNS and re-run this tool.
    dirserv:success = "success"
    If DNS is a problem, I'm at a loss how to fix it.  I'm not going to have a hostname because this isn't on a public network.
    I have similar issues with Caching, NetInstall, etc.
    So clearly I'm doing something wrong.  I'm not upgrading; again, this is an entirely clean install.  I'm about ready to blow it away and start fresh, but before I do, I'd greatly appreciate any insight from others on some "best practices" or an ordered list of the best way to get this thing up and running smoothly and reliably.

    Everything in OS X is dependent on proper DNS.  You should probably start there.  It is the first service you should configure and the most important to keep right.  Don't configure any other services until you have DNS straight.  In OS X, DNS really stands for Do Not Skip.
    This may be your toughest decision.  Decide what name you want the machine to be.  You have two choices.
    1: Buy a valid domain name and use it on your LAN devices.  You may not have a need now for use externally, but in the future when you use VPN, Profile Manager, or Web Services, at least you are prepared.  This method is called split horizon DNS.  Example would be apple.com.  Internally you may name the server tm.apple.com.  Then you may alias to it vpn.apple.com.  Externally, users can access the service via vpn.apple.com but tm.apple.com remains a private address only.
    2: Create an invalid private domain name.  This will never route on the web, so if you decide to host content for internal/external use you may run into trouble, especially with services that require SSL certificates.  Examples might be ringsmuth.int or andy.priv.  These types of domains are non-routable and can result in trust issues when communicating with other servers, but it is possible.
    Once you have the name sorted out, you need to configure DNS.  If you are on a network with other servers, just have the DNS admin create an A and PTR record for you.  If this is your only server, then you need to configure and start the DNS service on Mavericks.  The DNS service is the best Apple has ever created: a ton of power in a compact tool.  For your needs, you likely just need to hit the + button and fill out the New Device record.  Use a fully qualified host name in the first field and the IP address of your server (LAN address).  You did use a fixed IP address and disable the wireless card, right?
    Once you have DNS working, then you can start configuring your other services.  Time Machine should be pretty simple.  A share point will be created automatically for you.  But before you get here, I would encourage starting Open Directory.  Don't do that until DNS is right and you pass the sudo changeip -checkhostname test.
    R-
    Apple Consultants Network
    Apple Professional Services
    Author, "Mavericks Server – Foundation Services" :: Exclusively in the iBooks Store

  • Best practice for getting all Activities for a Contact

    Hello,
    I am new to the Eloqua API and am wondering what is the best practice for downloading a list of all activities for a given contact.
    Thanks

    Hi Mike,
    For activities in general, Bulk 2.0 Activity Exports will be the best way to go. Docs are here: http://docs.oracle.com/cloud/latest/marketingcs_gs/OMCBB/index.html
    But it can be a complex process to wrap your head around if you're new to the Eloqua API. So if you're in a pinch and don't care about the association of those activities to campaigns, and only need to pull activities for a few contacts, you can resort to using REST API calls.
    The activity calls are visible (from Firebug or Chrome console) if you open any contact record and navigate to the "Activity Log" tab. If you have it set to all activities, it will fire off a dozen or more calls or you can choose an individual one from the picklist to inspect that call in more detail.
    Best regards,
    Bojan

  • Best practice on dynamically changing drop down in page fragment

    I have a search form, which is a page fragment that is shared across many pages. This form allows users to select a field from the drop-down list and search for a particular value, as seen in the screenshot here:
    http://photos1.blogger.com/blogger2/1291/174258525936074/1600/expanded.5.gif
    Please note that the search options are part of a page fragment embedded within a page, so that I can re-use it across many pages.
    The drop-down list changes based on the page the fragment is embedded in. For the users page, it will be last name, first name, etc. For the parts page, it will be part number, part description, etc.
    Here is my code:
    Iterator it = getTableRowGroup1().getTableColumnChildren();
    Option options[] = new Option[getTableRowGroup1().getColumnCount()];
    int i = 0;
    while (it.hasNext()) {
        TableColumn column = (TableColumn) it.next();
        if (column.getSort() != null) {
            options[i] = new Option(column.getSort().toString(), column.getHeaderText());
        } else {
            options[i] = new Option(column.getHeaderText(), column.getHeaderText());
        }
        i++;
    }
    search search = (search) getBean("search");
    search.getSearchDDFieldsDefaultOptions().setOptions(options);
    This code works, but it gives me all the fields of the table in the drop-down. I want to be able to pick and choose. Maybe have an external properties file associated with each page, where I can list the fields available for the search drop-down?
    What is the best practice for loading the list of options available for the drop-down on each page (i.e., last name, first name, etc.)? I can have the code embedded in the prerender of each page, use a sort of resource bundle for each page, or maybe use a bean?

    I have to agree with Pixlor and here's why:
    http://www.losingfight.com/blog/2006/08/11/the-sordid-tale-of-mm_menufw_menujs/
    and another:
    http://apptools.com/rants/menus.php
    Don't waste your time on them, you'll only end up pulling your hair out  :-)
    Nadia
    Adobe® Community Expert : Dreamweaver
    Unique CSS Templates |Tutorials |SEO Articles
    http://www.DreamweaverResources.com
    Book: Ultimate CSS Reference
    http://www.sitepoint.com/launch/005dfd4/3/133
    http://twitter.com/nadiap

  • Best Practice to save the contacts in the Database

    Hi everybody,
    I'm looking for some tips about the data in the database. I would like to know the best practices for saving a record. For example, I want to save the First Name, Second Name, and Last Name in the DB. How should I save them: with the first letter in uppercase, or the whole record in lowercase?
    E.g., "Austin Martin" or "austin martin"
    What is the best way or best practice to do that?
    Can someone tell me? I would appreciate it, thx.
    Fabián Carrillo
    Siebel CRM Consultant

    Hi!
    Not quite sure what you're after here. Generally, store the data in the way it's going to be presented - sentence case in the examples you've given.
    If you're thinking along the lines of case sensitivity in querying within Siebel, then take a look at the Case Insensitivity Wizard in Siebel Bookshelf:
    http://download.oracle.com/docs/cd/E14004_01/books/UPG/UPG_DB_Upg_Util10.html
    Regards,
    mroshaw
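    Following the advice above to store names the way they will be presented, a minimal title-casing sketch (the class name, method, and word-boundary rules are illustrative assumptions, not Siebel functionality) might look like:

```java
public class NameNormalizer {

    // Normalizes a name to title case before it is written to the DB:
    // lowercases everything, then uppercases the first letter of each word.
    static String toTitleCase(String name) {
        if (name == null || name.isEmpty()) {
            return name;
        }
        StringBuilder out = new StringBuilder(name.length());
        boolean startOfWord = true;
        for (char c : name.toLowerCase().toCharArray()) {
            if (Character.isLetter(c)) {
                out.append(startOfWord ? Character.toUpperCase(c) : c);
                startOfWord = false;
            } else {
                out.append(c);
                // A space, hyphen, or apostrophe starts a new word
                // (e.g. "jean-paul o'neill").
                startOfWord = (c == ' ' || c == '-' || c == '\'');
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(toTitleCase("austin martin"));    // Austin Martin
        System.out.println(toTitleCase("jean-paul o'neill")); // Jean-Paul O'Neill
    }
}
```

    The normalized value is what gets stored; case-insensitive *querying* is a separate concern, handled e.g. by the Case Insensitivity Wizard mentioned above.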

  • Layout Best Practices

    <b>Problem Description</b>
    A developer needs to be able to create a rich client interface and avoid basic layout issues that tend to be pervasive across screens and applications. There are a number of low-level layout issues within ADFv that:
    <ol>
    <li>Tend to repeat themselves across screens and across applications.</li>
    <li>Impact perception of usability, quality, and fit ‘n’ finish.</li>
    <li>Are annoying to users in the aggregate: no single item is terrible, but a collection of six of them, repeating across multiple screens, gets frustrating and detracts from the message.</li>
    </ol>
    In most cases, there’s a simple approach and best practice that will assist developers in avoiding these pitfalls.
    <b>Technical Best Practice Description</b>
    This Layout Best Practices document provides a list of known layout issues that are encountered when developing a Rich Client Interface and how to avoid them in your application development.
    Various ADF Components are described and demonstrated in this document including document, showDetailItem, decorativeBox, panelSplitter, panelStretchLayout, panelBorderLayout, and so on. It takes many of these components in combination to achieve the desired layout for a page and / or an application.
    Click here to see the document that describes these best practices.
    Edited by: Richard Wright on Nov 17, 2009 5:26 PM

    Hi,
    Try this link: [Layout Best Practices|http://www.oracle.com/technology/products/adf/patterns/11/layoutBestPractices.html]
    Regards,
    Edited by: Richard Wright on Nov 17, 2009 5:27 PM

  • What is the best practice of deleting large amount of records?

    Hi,
    I need your suggestions on the best practice for regularly deleting a large number of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To keep the database from growing too fast, I need a way to remove, every day, all records older than 3 days.
    For an on-premise SQL Server I could use a SQL Server Agent job, but since SQL Azure does not support SQL Agent jobs yet, I use a WebJob scheduled to run every day to delete the old records.
    To prevent table locking when deleting a very large number of records, my WebJob code limits each run to 5,000 deleted records per iteration, removed in batches of 1,000, when calling the delete stored procedure:
    1. Get the total count of old records (older than 3 days).
    2. Compute the number of iterations: iterations = totalCount / 5000.
    3. Call the stored procedure in a loop:
    for (int i = 0; i < iterations; i++)
        Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this (the @table variable declaration was missing from my snippet):
     DECLARE @table TABLE ([RecordId] INT PRIMARY KEY)
     BEGIN
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
     END
     DECLARE @RowsDeleted INTEGER
     SET @RowsDeleted = 1
     WHILE (@RowsDeleted > 0)
     BEGIN
      WAITFOR DELAY '00:00:01'
      DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
      SET @RowsDeleted = @@ROWCOUNT
     END
    It basically works, but the performance is poor: it took around 11 hours to delete around 1.7 million records, which is far too long.
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5,000 records in each iteration, so the total time comes to around 11 hours.
    Any suggestion to improve the deleting records performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. The peak-time insert rate is N times the average. E.g., if the average per hour is 10,000 and peak time is 5 times that, peak is 50,000 per hour. This doesn't have to be precise.
    3. The desired maximum number of records deleted per batch is 5,000; this doesn't have to be exact either.
    Steps:
    1. Find count of records more than 3 days old (TotalN), say 1,000,000.
    2. Divide TotalN (1,000,000) by 5,000 to get the number of delete batches (200) if inserts were perfectly even. Since they are not, and peak inserts can be 5 times the average, set the number of delete batches to 200 * 5 = 1,000.
    3. Divide 3 days (4,320 minutes) by 1,000 batches, giving time slices of 4.32 minutes.
    4. Create a delete statement and a loop that, on iteration I (I from 1 to 1,000), deletes records with creation time < (today - 6 days) + 4.32 * I minutes, so the cutoff advances one time slice per iteration and reaches the 3-day boundary on the last iteration.
    This way the number of records deleted per batch is not fixed and not known in advance, but it should mostly stay within 5,000; even though you run many more batches, each batch is very fast.
    Frank
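    Frank's steps 2-4 above can be sketched as a cutoff computation (the class and method names here are hypothetical; each iteration's actual delete would then be a parameterized DELETE FROM [MyTable] WHERE [CreateTime] < @cutoff):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class PurgeSlices {

    // Splits the 3-day-wide sweep window ending at (now - 3 days) into
    // `batches` time slices and returns the per-iteration cutoff timestamps.
    static List<Instant> cutoffs(Instant now, int batches) {
        Duration span = Duration.ofDays(3);             // window being swept
        Instant start = now.minus(Duration.ofDays(6));  // oldest cutoff
        long sliceMillis = span.toMillis() / batches;   // 4.32 min for 1,000
        List<Instant> result = new ArrayList<>(batches);
        for (int i = 1; i <= batches; i++) {
            result.add(start.plusMillis(sliceMillis * i));
        }
        return result;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2015-01-12T16:00:00Z");
        List<Instant> c = cutoffs(now, 1000);
        System.out.println(c.get(0));   // first slice, ~4.32 min past now - 6 days
        System.out.println(c.get(999)); // prints 2015-01-09T16:00:00Z (now - 3 days)
    }
}
```

    Because each delete covers only one narrow time slice on an indexed [CreateTime] column, batch size stays small without a staging table variable, and each statement runs quickly.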
