Date Logic Question

I need to create an online retail sales report that compares sales from one year against sales from the prior year. The start and end dates are selected by the user. The report supports two comparison types:
1. Date range against date range (e.g. 6/1/05 - 6/5/05 compared to 6/1/04 - 6/5/04).
2. Same day-of-week range against day-of-week range (e.g. Wed 6/1/05 - Sun 6/5/05 compared to Wed 6/2/04 - Sun 6/6/04).
I figure that for the first type I can use the GregorianCalendar add method. I'm sure I'll need to take leap years into account in some way (e.g. do I compare 2/29 to 2/28 or to 3/1?).
I'm stumped on how to handle the second type - day for day.
I'm still rather new to Java programming and would appreciate any suggestions.
Thanks,
bfrmbama

You could subtract 52 weeks: cal.add( Calendar.WEEK_OF_YEAR, -52 )?
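
To make that concrete, here is a minimal sketch of both comparison types using GregorianCalendar (the class and method names are made up for illustration). Note that add(Calendar.YEAR, -1) resolves 2/29 to 2/28; if the business rule is to compare against 3/1 instead, that case would need special handling:

import java.util.Calendar;
import java.util.GregorianCalendar;

public class PriorYearComparison {

    // Type 1: same calendar dates, one year earlier.
    // GregorianCalendar resolves 2/29 minus one year to 2/28.
    static Calendar sameDateLastYear(Calendar date) {
        Calendar prior = (Calendar) date.clone();
        prior.add(Calendar.YEAR, -1);
        return prior;
    }

    // Type 2: same day-of-week range, one year earlier.
    // Subtracting 52 weeks (364 days) preserves the day of week.
    static Calendar sameWeekdayLastYear(Calendar date) {
        Calendar prior = (Calendar) date.clone();
        prior.add(Calendar.WEEK_OF_YEAR, -52);
        return prior;
    }

    public static void main(String[] args) {
        Calendar start = new GregorianCalendar(2005, Calendar.JUNE, 1); // Wed 6/1/05
        Calendar end = new GregorianCalendar(2005, Calendar.JUNE, 5);   // Sun 6/5/05

        // Type 1: 6/1/04 - 6/5/04
        System.out.println(sameDateLastYear(start).getTime() + " to " + sameDateLastYear(end).getTime());

        // Type 2: Wed 6/2/04 - Sun 6/6/04
        System.out.println(sameWeekdayLastYear(start).getTime() + " to " + sameWeekdayLastYear(end).getTime());
    }
}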

Similar Messages

  • MDX calculation based on date logic for the Jan 1 of current year through the 15th of the previous month

    Hello, 
    We need some help with an SSAS MDX query based on date logic. One of the problems is that I don't have access to the cube, but I have been given a query example with the logic needed for the calculation. Here's the scenario:
    The ETL process will run on the first Tuesday after the 15th of a given month. The Analysis Cube data queried should include the current year up to the end of the previous month. For example, on May 19th (the first Tuesday on or after the 15th) the query should include data from January 1st through April 30th.
    The 15th of the month is not part of the query; it is only a factor in when the query is run. The query will always be in terms of complete months.
    SELECT
        NON EMPTY { [Measures].[Revenue Amount],
                    [Measures].[Utilization],
                    [Measures].[AVG Revenue Rate],
                    [Measures].[Actual Hours] } ON COLUMNS,
        NON EMPTY { ( [dimConsultant].[User Id TT].[User Id TT].ALLMEMBERS *
                      [dimConsultant].[Full Name].[Full Name].ALLMEMBERS *
                      [dimConsultant].[Employee Type].[Employee Type].ALLMEMBERS ) }
            DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
    FROM
        ( SELECT
            ( { [dimDate].[Week Date].[1/4/2015], [dimDate].[Week Date].[1/11/2015],
                [dimDate].[Week Date].[1/18/2015], [dimDate].[Week Date].[1/25/2015],
                [dimDate].[Week Date].[2/1/2015] } ) ON COLUMNS
          FROM
            ( SELECT
                ( { [dimIsBillable].[Is Billable].&[True] } ) ON COLUMNS
              FROM [SSASRBA] ) )
    WHERE
        ( [dimIsBillable].[Is Billable].&[True], [dimDate].[Week Date].CurrentMember )
    CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING,
                    FONT_NAME, FONT_SIZE, FONT_FLAGS
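
    As a side note, the calendar arithmetic described above (run on the first Tuesday on or after the 15th, query complete months from January 1 through the end of the previous month) can be sketched outside of MDX. A minimal Java illustration, assuming java.time is available; the class name is made up:

    import java.time.DayOfWeek;
    import java.time.LocalDate;
    import java.time.temporal.TemporalAdjusters;

    public class ReportingWindow {

        // First Tuesday on or after the 15th of the given month (the ETL run date).
        static LocalDate etlRunDate(int year, int month) {
            return LocalDate.of(year, month, 15)
                    .with(TemporalAdjusters.nextOrSame(DayOfWeek.TUESDAY));
        }

        // Query window: January 1 of the run year through the last day of the previous month.
        static LocalDate windowStart(LocalDate runDate) {
            return runDate.withDayOfYear(1);
        }

        static LocalDate windowEnd(LocalDate runDate) {
            return runDate.withDayOfMonth(1).minusDays(1);
        }

        public static void main(String[] args) {
            LocalDate run = etlRunDate(2015, 5);                             // 2015-05-19
            System.out.println(windowStart(run) + " .. " + windowEnd(run));  // 2015-01-01 .. 2015-04-30
        }
    }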

    Hi Hans,
    Thank you for your question.  
    I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay is to be expected while it is handed over. Your patience is greatly appreciated.
    Thank you for your understanding and support. 
    Regards,
    Simon Hou
    TechNet Community Support

  • A few simple Logic questions...please help.

    I have a few probably simple Logic questions that are nonetheless frustrating me; I'm wondering if someone could help me out.
    1. I run Logic 8, and all of the sounds that came with Logic seem to work except organ sounds. I can't trigger any organ sounds (MIDI) in Logic; they won't play. I have a Yamaha Motif as my MIDI controller.
    Any idea why?
    2. I've started running into a situation where I record a MIDI track and the notes are recorded but they won't play back. The only track affected is the one that was just recorded. All other MIDI tracks play back.
    I have to cut the track, usually go out of Logic and back in, and re-record for it to play back properly. Any idea why this may be happening?
    3. How important is it to update to Logic 9? Are there any disadvantages down the road if I don't upgrade? If I purchase the $200 upgrade, do I get a package of discs and material, or is it just a web download?
    Any help is appreciated!
    Colin

    seeren wrote:
    Data Stream Studio wrote:
    3) You get a full set of disks and manuals.
    They're including manuals now?
    I think he's referring to the booklets ...on how to install etc.
    It would be great to see printed manuals though ...I love books especially Logic/Audio related !!
    A

  • Why doesn't Aperture use iPhoto's date logic?

    This is how image date logic works in iPhoto. This image date value is used in queries and sorts.
    1. Image date is stored in database.
    2. Initial value taken from image metadata if available.
    3. If image metadata not available, use file date.
    4. User can edit database date (does not change file header).
    This is how it seems to work in Aperture 1.5
    1. Image date is stored in database.
    2. Initial value taken from iPhoto database if import from iPhoto.
    3. If not import from iPhoto, initial value taken from image metadata if available.
    4. If image metadata not available, use file date.
    5. User CANNOT edit database date.
    I was shocked (really, I'm still stunned) that Aperture 1.5 still does not equal iPhoto's date management capabilities. Since Aperture is a Pro application it's reasonable to have several date fields, but one of them should behave like iPhoto. That metadata field should be useable in queries/filters, sorts, display, book printing, etc. It should be the default date metadata element for all date related operations.
    Please note, it's fine that Aperture does not muck with image file (exif) metadata -- that's dangerous. EXIF is a quasi-standard badly implemented.
    So now, my question. Why?
    Aperture 1.5 was a huge update. They fixed much more than I thought was possible. It could have been labeled Aperture 3.0. Offering it free to users was a genuine apology, gratefully accepted I'm sure. It took all my strength of will not to order it immediately, waiting instead for the initial reviews to come in.
    So there had to be a reason Aperture's Product Manager, Joe Schorr, chose not to implement iPhoto's image date behavior. Something he'd thought carefully about.
    What was it?

    Perhaps it's just that they don't have infinite resources ?
    The big deal (for me) was that they can "import" files but leave them on the filesystem. In true Apple fashion though, they took the extra steps of allowing management of the images within the application (consolidate and relocate). Perhaps the knock-on effects of all the things they did implement prevented them from doing other things...
    We see Apple as a corporate entity, master of its domain, all-seeing, all-knowing, all-powerful within the confines of an Infinite Loop. In reality it's a bunch of people who only have a fixed number of hours in the day to work. Someone takes the decision that X is more important than Y and Y is dropped for this release - it's that simple IMHO.
    In any event, I'd like to know how many people worked on Aperture 1.5 - the difference between the Aperture update and the Lightroom update was huge! Do they have lots more engineers/QA, or did they just get more done in the same time-frame ?
    -=C=-
    Mac Pro   Mac OS X (10.4.7)  

  • Data Warehouse Question

    Couldn't find the right topic to post this under but if anyone could answer - I'd be really grateful.
    We are working on a new project that is a data warehousing type system for transactional information. I spent several days poring over the Data Warehousing guide and some of the other documentation, but I couldn't find the answers...
    The project will use Sun Servers and an EMC SAN.
    We are considering using partitioning to segregate the data from month to month, but we'd like the data, once it passes a year of age, to be migrated to a different storage facility, possibly a NAS solution. This facility will have a 7-10 year retention, after which the data can be legally destroyed. I'm calling this facility near-line storage.
    Can partitioning support this type of migration (just copy the files and switch the pointers in the database)?
    Can the partitions be offlined and onlined on request? (I'm thinking of a scenario similar to offline read-only tablespaces.)
    Are there any whitepapers or solutions that describe how to handle schema changes and database upgrades through this 10-year retention without loss or impact to the data? Basically, what I mean is: if we add a new column to a table, do we have to add it to all partitions or can we just leave them be? I was looking at the Workspace (schema versioning) option to help with this, but I'm not sure if it's a good fit.
    Should we use a media management product (e.g., Legato) to help with this? I've seen it used with RMAN and tape backups, but is it used for the above application?

    There are two parts that you need to manage in order to handle the above.
    A partition is how you separate the data logically.
    A tablespace is how you handle the data storage physically.
    Partitions are built on tablespaces.
    If a table is partitioned, altering the table will change all partitions.
    In principle, when you query a partitioned table, the query conditions determine which partitions are accessed, so users will not know that you store the first 12 months on the EMC SAN and the rest on NAS, other than through query performance, since I believe it takes longer to retrieve data from NAS.
    You can move a partition from TableSpace1 (SAN) to TableSpace2 (NAS) online using the following command (I/O intensive):
    ALTER TABLE TableName MOVE PARTITION PartitionName TABLESPACE TableSpaceName NOLOGGING;
    A transportable tablespace, as the name states, means taking the tablespace offline, copying it from SAN to NAS, and re-attaching it.

  • Last Consumption & Last receipt date logic

    Hi
    I am facing an issue while developing a BI stock report where the user wants the last consumption date and the last receipt date in the report.
    The issue is that when I develop the BI query, I can't see how to differentiate the date for consumption versus receipt, because consumption and receipt are distinguished only by movement type (say movement type 101 is receipt and 261 is consumption). If I use exception aggregation with "last value" and movement type as the reference characteristic, data for both movement types 101 and 261 comes back together.
    Please guide me on developing the last consumption and last receipt date logic.
    Regards,
    Gaurav

    Hi Anshu
    Thanks for your helpful reply.
    What you suggest is my last option. But let me share my concern: I have developed the report on 0IC_C03, and in our business scenario, if I change the ETL structure we need to delete the target data. We have millions of records, and if I reconstruct the data it will take 4-5 days. Also, sometimes our marker update is interrupted and we have to repeat the same activity, so I want to avoid changes on the Workbench side and do all required changes on the BI query side.
    Can we use a customer variable for the time characteristic for this issue?
    If you have any other idea in mind, please share.

  • The DATA CAP MEGA THREAD....post data cap questions or comments here.

    In the interest of keeping things orderly....This is the DATA CAP MEGA THREAD....post data cap questions or comments here.
    Please keep it civil.
    Comcast is testing usage plans (AKA "data caps") in certain markets.
    The markets that are currently testing usage plans are:
    Nashville, Tennessee market: 300 GB per month and additional gigabytes in increments/blocks ( e.g., $10.00 per 50 GB ). 
    Tucson, Arizona market: Economy Plus through Performance tiers receive 300 GB. Those customers subscribed to the Blast! Internet tier receive 350 GB; Extreme 50 customers receive 450 GB; Extreme 105 customers receive 600 GB. Additional gigabytes in increments/blocks of 50 GB for $10.00 each in the event the customer exceeds their included data amount. 
    Huntsville and Mobile, Alabama; Atlanta, Augusta and Savannah, Georgia; Central Kentucky; Maine; Jackson, Mississippi; Knoxville and Memphis, Tennessee and Charleston, South Carolina: 300 GB per month and additional gigabytes in increments/blocks ( e.g., $10.00 per 50 GB ) Economy Plus customers have the option of enrolling in the Flexible-Data plan.
    Fresno, California, Economy Plus customers also have the option of enrolling in the Flexible-Data plan.
    - If you live outside of these markets you ARE NOT currently subject to a data plan.
    - Comcast DOES NOT THROTTLE your speed if you exceed your usage limits.
    - You can check out the Data Usage Plan FAQ for more information.
     

    I just got a call today that I reached my 300 GB limit for the month. I called and got a pretty rude response from the security and data usage department. The guy told me in so many words that if I do not like or agree with the policy, I should feel free to find another service provider!!! I tried to explain that we watch Netflix and Xfinity On Demand a lot, and I was told that that could not be anywhere close to the data usage. I checked my router, and watching a "Super HD, Dolby 5.1" TV show on Netflix averages about 5-6 GB per hour (1.6 MB/s), so this means I can watch no more than 1-2 Super HD TV shows a day via Netflix before I run out of my data allowance. This seems a bit ridiculous, doesn't it? Maybe the TV ads about higher speed than the competition should be accompanied by "as long as you don't use it too often." Not a good experience ...
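
    As a rough sanity check of the numbers quoted above (using the poster's observed rate, not an official figure), a quick back-of-the-envelope calculation:

    public class DataCapMath {
        public static void main(String[] args) {
            double mbPerSecond = 1.6;                     // observed Netflix Super HD rate
            double gbPerHour = mbPerSecond * 3600 / 1000; // ~5.8 GB per hour
            double capGb = 300;                           // monthly cap
            double hoursPerMonth = capGb / gbPerHour;     // ~52 hours of streaming
            System.out.printf("%.1f GB/hour, %.0f hours/month, %.1f hours/day%n",
                    gbPerHour, hoursPerMonth, hoursPerMonth / 30);
        }
    }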

  • Data archiving questionnaire required

    Dear All,
    We have been approached by one of our clients for DATA ARCHIVING from an R/3 system.
    Management has put my name forward for this.
    The requirement is to prepare a DATA ARCHIVING QUESTIONNAIRE template.
    Can anybody please help me out in this regard from an MM point of view?
    Thanking you in advance.
    Regards
    Nasir Chapparband.

    Hi,
    Refer to the following link:
    [http://itmanagement.earthweb.com/datbus/article.php/3109221]
    SAP Data Archiving
    1.0 Introduction to Enterprise Data Archiving
    Currently, a large number of enterprises use SAP R/3 as a platform for the integration of business processes. The continuous usage of SAP results in huge amounts of enterprise data, which is stored in SAP R/3. With the passage of time, new and updated data is entered into the system while the old data still resides in the SAP enterprise system.
    Since some of the old data is critical, it cannot be deleted. The difficulty is keeping the data you want and deleting the data you do not want. Hence, an SAP database keeps expanding rapidly, and enterprise systems, which have limited data retention abilities for a few years, suffer from problems such as data overflow, longer transaction processing times, and performance degradation.
    The solution to this problem has led to the concept of Data Archiving in SAP. Data Archiving removes out-of-date data from the SAP database that the R/3 system does not need online but that can be retrieved at a later date, if required. This data is known as archived data and is stored at an offline location. Data Archiving not only consistently removes data from the database but also ensures data availability for future business requirements.
    One rule of thumb is that in a typical SAP enterprise system, the ratio of data required to be online and instantly accessible to old data, which could be archived and stored offline, is 1:6. For example, if an enterprise has 2100 GB of SAP database, the online data that is frequently used by SAP users will be 300 GB and the rest (1800 GB) will be scarcely used and hence can be archived.
    1.1 Data Archiving – Features
    It provides a protection layer for the SAP database and resolves underperformance problems caused by huge volumes of data. It is important that SAP users keep only minimal data to work efficiently with the database and servers. Data archiving ensures that the SAP database contains only relevant and up-to-date data that meets your requirements.
    Data archiving uses hardware components such as hard disks and memory. For efficient data archiving, a minimum number of disks and minimal disk space should be used.
    It also reduces the system maintenance costs associated with the SAP database. In the SAP database there are various procedures such as data backup, data recovery, and data upgrade.
    SAP data archiving complies with statutory data retention rules using common and well-proven techniques.
    SAP data archiving can be implemented in two ways. In the next section both options will be discussed in detail.
    Also refer to the following link:
    [SAP Data Archiving Tutorial|http://www.thespot4sap.com/articles/SAP_Data_Archiving_Overview.asp]

  • Difference between data logic operation and business operation

    Hi, the DI Core, the main component of the DI API, performs all data logic operations, whereas OBServer.dll performs business logic operations at the database level...
    What is the difference between these operations?

    Hi Nirdesh,
    The DI API offers a number of objects/methods that allow you to access the information stored in the database (tables) by using objects.
    The DI API is represented by SboBobCOM.dll; in your code you only add a reference to this COM dll. This dll offers the objects/methods you can access.
    SboBobCOM.dll uses OBServer.dll, the dll that contains all the business logic rules (for example: you cannot delete a BusinessPartner if it already has some documents created) that control all actions you try to perform with the DI API methods/objects.
    There is a schema representing both dlls in the DI API help file -> Introduction -> DI API components section.
    Regards
    Trinidad.

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer that is constantly growing its base of centers.  This growth has turned a workload that used to be manageable with just two people into a never-ending sprint with five.  Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources.  There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well.  As an example, we have centers in many cities, and each center has its own list of specific stores.  Data Merge cannot handle a single-column, or even multiple-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges.  That is to say:  I cannot tell Data merge to start at Cell1 in one column, and in another column select say... Cell 42 as the starting point.
    3) Data merge only accepts data organized in a very specific, and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly; the current format looks like this:
    <BRANDS>
         <BRAND>
              • BrandID = xxxx
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY>
                        • xmlns = URL
                        • WebMoniker = category_type
              <STORES>
                   <STORE>
                        • StoreID = ID#
                        • CenterID = ID#
    I don't think this is currently usable because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <Store> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used in multiple documents with different layout styles, and since our store list is ever-changing as leases end or begin, over 40 centers this becomes a big hairy monster.  We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward.  I have a high tolerance for trudging through code and creating workarounds, but my co-workers do not.  This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably.  Even so, I'm used to steep learning curves and the associated frustrations -- but I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically what you do is to create an XML based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame.  Assuming that everything is cascaded correctly using auto-flow will cause new pages to be automatically generated with the tags correctly placed in a similar fashion to datamerge -- but far more powerful and flexible. 
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.

  • In which package would I find the recalculate payment due date logic

    In the R12 Invoice Workbench screen, when you change certain fields the payment due date is recalculated.  I want to look at the logic behind that functionality.  I am looking at the code for the form but struggling, as I haven't looked at Oracle Forms in years and never anything this complex.  Can someone point me to the PL/SQL package that contains the logic?  Thanks!

    The information comes from multiple tables.
    But you can get most of the information from mtl_material_transactions. It will give you transaction date, type, subinventory, quantity, UOM, and transaction value.
    You will have to join the records with the PO tables to get the PO#.
    Hope this answers your question,
    Sandeep Gandhi

  • Data Modeler: Relational data model questions

    1. Can a different notation be specified for relational data models' constraints? Specifically, I'd like crow's feet. BTW, the docs show crow's feet and parent pointer (with the arrowhead), but there's no such thing in the actual modeler.
    2. Is there any way to manually route FK constraints lines?
    3. When forward engineering from logical, is there any way to indicate the preferred name for keys and indexes (primary, unique, foreign)?
    4. Mandatory/optional indicator on tables: what exactly does 'N' or 'A' stand for? I can understand 'N' meaning "Not optional", but 'A'? Wouldn't it be simpler to use '*' and 'o' like in the logical?
    Man, do I ever miss Designer!
    Thanks,
    Patrick

    Here's one more question:
    I've transformed several super/sub entities to relational, and some of the tables do not allow me to open Properties (on the table). I can use the navigator to open column properties, but cannot open table properties (neither from diagrammer nor from navigator). Some of the tables are two or three subtype levels deep, and I haven't figured out why some open and some don't.

  • Data Integrator Questions

    Hello,
    I was hoping for some guidance on how I can access SAP data without using a BW system. Data Integrator and RapidMarts seemed to be the main option. I just wondered if this is a suitable approach for my scenario.
    I'm in the UK; our client's SAP infrastructure is based in Germany. We do all our development on that system and it holds all our custom tables, which I need to report from.
    I initially installed the SAP Integration Tools for Crystal. I can build a report and directly see tables and function modules, but the speed is really bad and prone to crashing.
    My new plan is to install Data Integrator on our network, then use it to pull all the tables and data we need into a RapidMart on our network. Then I can use our BO XI 3.1 Edge installation to host all the reports / dashboards built off the mart.
    Is this a reasonable approach? Am I missing anything, or is this all theoretically possible?
    Any advice is great appreciated.
    Carston.

    One at a time....
    A RapidMart is nothing more than a jump-start. Imagine this: I have been asked to build a Data Warehouse for Cost Center Accounting. So I asked:
    "What kind of reports?" "Costs per CostCenter, CostElement, Period".
    "Where does the data come from?" "This and that SAP table"
    "How can I implement the delta logic" "Using this approach...."
    "Are you fine with this Data Model to support your reports?"
    "Are you okay with the Naming Conventions I invented?"
    At the end I had a very well-working Data Warehouse project for my customer. Then I got another customer engagement where I again had to build a Cost Center Accounting Data Warehouse. Same questions, very similar answers. But instead of developing the data model, the ETL flow, .... all from scratch, I could reuse the previous code.
    In other words, a RapidMart is us running through a very professional Data Warehouse project with a couple of customers simultaneously and then providing the code to you. So you are trading investment costs against consulting costs, time, and uncertainty. If you don't like the Data Model, feel free to modify it. If you need more, it's just a first version of your Data Warehouse. If you can't use 90% of what we did - don't buy the RapidMart, but take your time to implement the Data Warehouse yourself.

  • Model Object / Business Logic question

    Hello,
    Question about how to architect the model objects and services in our system. We are defining our own Model Object / Transfer Objects without the use of an ORM tool (long story). Some of these model objects have to maintain Association objects such as:
    public class Teacher {
         private List<StudentAssociation> kids;
    }

    public class StudentAssociation {
         private Teacher teacher;
         private Student student;
         private Date fromDate;
         private Date toDate;
         // Getters / Setters ....
    }

    My question is, where should the logic to add and remove student associations live? Originally we had it defined in the Teacher model object as:
    public void addStudentAssociation(Student student, Date fromDate, Date toDate);
    public void removeStudentAssociation(Student student, Date fromDate, Date toDate);
    But I am hesitant to put such logic in the model object. I always thought those should be pretty empty of any sort of business logic. Instead I want to have a TeacherService that does that, and just a getter/setter on the Teacher object for the association list. However, in doing that, I find I have to create a new List of associations and then call the setAssociations method on the Teacher object, which seems kind of strange.
    Is it bad to put the add/remove method in the model object itself? The remove logic has a bit of business logic in it, so it seems weird being there.
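
    For illustration, a minimal sketch of the service-based alternative described above; TeacherService itself and the accessor names (getKids(), setTeacher(), a default StudentAssociation constructor, etc.) are assumptions, not part of our actual code:

    import java.util.Date;
    import java.util.Iterator;

    // Hypothetical service that owns the association logic instead of the Teacher model object.
    public class TeacherService {

        public void addStudentAssociation(Teacher teacher, Student student, Date fromDate, Date toDate) {
            StudentAssociation assoc = new StudentAssociation();
            assoc.setTeacher(teacher);
            assoc.setStudent(student);
            assoc.setFromDate(fromDate);
            assoc.setToDate(toDate);

            // Mutate the existing list rather than building a new one and calling setAssociations().
            teacher.getKids().add(assoc);
        }

        public void removeStudentAssociation(Teacher teacher, Student student) {
            Iterator<StudentAssociation> it = teacher.getKids().iterator();
            while (it.hasNext()) {
                if (it.next().getStudent().equals(student)) {
                    it.remove();
                }
            }
        }
    }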

    Not sure I can be of much help, but here's my two cents worth:
    You have a teacher object and a student object. A teacher doesn't really have a direct association with a student. I think you need another object, such as a class object. A class has a teacher and the students (an association can be a business association, social association, etc.). That same teacher may be assigned to other classes (provided they don't occur at the same time, since the teacher can't be in two classes at once). Likewise for a student. Complicating things is the fact that a teacher of one class may be a student in another class. Also, a class may exist that has students but no assigned teacher yet (your original idea wouldn't be able to handle this, since you would need a teacher before you add students). In another case, you have a class without students. In still another case, you have people (either future students, or teaching staff who haven't been assigned as teachers yet), but no classes yet. I think it would be best to figure out a database schema first (you can use Oracle Lite, MySQL, etc.).
    here is an example:
    Assuming you are putting this in a database I would create tables and fields something like this:
    Person ((a person is either a student or teacher, or just someone that is no longer a student or teacher but may be one day))
    personId not null
    firstname not null
    middleName nullable
    lastname not null
    ssn not null
    Class ( a class is where a teacher teaches students))
    classId not null
    nameOfCourse not null
    building nullable (may not be assigned yet)
    room nullable (may not be assigned yet)
    startDate ((when the class starts)) nullable
    endDate ((when the class ends)) nullable
    teacherId ((this is the personId from the person table)) nullable
    Students
    personId ((this is the personId from the person table))
    classId
    associations:
    a class has 0 or 1 teacher,
    0 or many students
    a teacherId must exist in the person table as a personId
    a studentId must exist in the person table as a personId
    (if you delete a personId in the person table, it cascades deletes any teacherId or studentId of the same value)
    Your class has a collection of students. Therefore:
    private List<Student> students
    Your class will also have something like:
    addStudent(Student student) (throws an exception if the student already exists) (note you don't pass in the start and end date, since the start and end date are the class's responsibility, not the student's)
    isStudent(Student student) (returns true if the student is already in the class, false if not)
    deleteStudent(Student student) (returns true if the student was found and deleted, false if the student was not found)
    Your business logic can check things such as whether the start date occurs after the end date (an error), or whether you have a class with no teacher or no students, etc. The first case you can do in the database with constraints if you want, but the second you can't, since you want to store info for a class even if a teacher isn't assigned to it yet.
    One way to do this is to have a validate() function in your class object. The validate() function checks such things as a class with no teacher and returns a collection of warnings if it finds anything wrong. All the above is just something off the top of my head. So there may be some issues with my approach that you will have to work out.
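
    A rough sketch of the class described above, following the suggested method behavior (the name SchoolClass and other details are just one possible reading, not a definitive design; Student and Person are the types discussed above):

    import java.util.ArrayList;
    import java.util.Date;
    import java.util.List;

    // A course entity as suggested above: it owns the start/end dates and the student list.
    public class SchoolClass {

        private Person teacher;              // may be null until a teacher is assigned
        private Date startDate;
        private Date endDate;
        private final List<Student> students = new ArrayList<Student>();

        public void addStudent(Student student) {
            if (isStudent(student)) {
                throw new IllegalArgumentException("Student already enrolled in this class");
            }
            students.add(student);
        }

        public boolean isStudent(Student student) {
            return students.contains(student);
        }

        public boolean deleteStudent(Student student) {
            return students.remove(student);   // true if found and removed
        }

        // Returns warnings rather than throwing, as suggested above.
        public List<String> validate() {
            List<String> warnings = new ArrayList<String>();
            if (teacher == null) {
                warnings.add("Class has no assigned teacher");
            }
            if (students.isEmpty()) {
                warnings.add("Class has no students");
            }
            if (startDate != null && endDate != null && startDate.after(endDate)) {
                warnings.add("Start date is after end date");
            }
            return warnings;
        }
    }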
    Good Luck!

  • Logical question: it's not enough to know LabVIEW, intelligence is necessary...

    Suppose you have a while loop that generates random numbers between 0 and 9. Do you have an INTELLIGENT way to build an array in which you save the data, but with FIFO LOGIC: first in, first out? For example, you generate this sequence:
    1 3 9 2 8 4 8 2 9 1 2 0 1 2 3 2 0
    SUPPOSE the array that saves the data has 5 elements:
    0 0 0 0 0
    2 0 0 0 0
    3 2 0 0 0
    1 2 3 2 0
    0 1 2 3 2
    2 0 1 2 3
    etc. etc. When does the loop finish? The loop must finish when a particular number arrives, for example 9, and in that situation it is necessary to save the FIFO array to an EXCEL file: when 9 is generated the loop must finish and the data must be saved. If you have an INTELLIGENT SOLUTION, post a VI for LabVIEW 8.
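
    In LabVIEW this kind of FIFO is usually held in a shift register containing a fixed-size array that is shifted each iteration. Since a VI can't be pasted as text, here is the same FIFO-and-stop-on-9 logic as a minimal Java sketch, with the Excel export reduced to printing the final buffer:

    import java.util.Random;

    public class FifoBuffer {
        public static void main(String[] args) {
            int[] fifo = new int[5];          // fixed-size buffer, initially 0 0 0 0 0
            Random rng = new Random();

            while (true) {
                int value = rng.nextInt(10);  // random number between 0 and 9

                // FIFO shift: drop the oldest element, insert the newest at the front.
                for (int i = fifo.length - 1; i > 0; i--) {
                    fifo[i] = fifo[i - 1];
                }
                fifo[0] = value;

                if (value == 9) {             // stop condition: a 9 was generated
                    break;
                }
            }

            // Here the buffer would be written to an Excel/CSV file; just print it instead.
            for (int v : fifo) {
                System.out.print(v + " ");
            }
            System.out.println();
        }
    }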

    Michelle,
    I am simply trying to offer some advice. I, like most people here, have a limited amount of time to volunteer to help people. In almost every thread you have started you ask people to post their code solving the problem you are asking about. I have yet to see a single piece of code, or even a screenshot of any code, that you have written. It is clear that English is not your primary language and it may be difficult to clearly express what you are trying to ask. Regardless, samples of what you have tried will go a long way toward showing that you are indeed putting effort into these problems and not simply asking for someone to do the work for you. Your original question in this post is not very clear about what you are even trying to accomplish. Therefore it is very difficult to even begin to offer any suggestions. All I am trying to say is that you will get more help when you show that you are trying different things and need some assistance with a particular issue. If you are posting this as a thought problem and a challenge to see what types of solutions are possible, then say that. I got the impression that you are trying to implement this and are seeking someone to post the solution for you. If this is incorrect then I apologize.
    Mark
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
